diff --git a/datasets/event2Mind/README.md b/datasets/event2Mind/README.md
index 9607a21e64c..7a70d3d185d 100644
--- a/datasets/event2Mind/README.md
+++ b/datasets/event2Mind/README.md
@@ -2,6 +2,7 @@
languages:
- en
paperswithcode_id: event2mind
+pretty_name: Event2Mind
---
# Dataset Card for "event2Mind"
@@ -164,4 +165,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun) for adding this dataset.
\ No newline at end of file
+Thanks to [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun) for adding this dataset.
diff --git a/datasets/factckbr/README.md b/datasets/factckbr/README.md
index 828bd8dee52..07bbf9cfd6d 100644
--- a/datasets/factckbr/README.md
+++ b/datasets/factckbr/README.md
@@ -18,9 +18,10 @@ task_categories:
task_ids:
- fact-checking
paperswithcode_id: null
+pretty_name: FACTCK BR
---
-# Dataset Card for [Dataset Name]
+# Dataset Card for FACTCK BR
## Table of Contents
- [Dataset Description](#dataset-description)
@@ -142,4 +143,4 @@ The FACTCK.BR dataset contains 1309 claims with its corresponding label.
### Contributions
-Thanks to [@hugoabonizio](https://github.com/hugoabonizio) for adding this dataset.
\ No newline at end of file
+Thanks to [@hugoabonizio](https://github.com/hugoabonizio) for adding this dataset.
diff --git a/datasets/fake_news_english/README.md b/datasets/fake_news_english/README.md
index 8423596cf94..918103aa251 100644
--- a/datasets/fake_news_english/README.md
+++ b/datasets/fake_news_english/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- multi-label-classification
paperswithcode_id: null
+pretty_name: Fake News English
---
# Dataset Card for Fake News English
@@ -165,4 +166,4 @@ doi = {10.1145/3201064.3201100}
### Contributions
-Thanks to [@MisbahKhan789](https://github.com/MisbahKhan789), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
\ No newline at end of file
+Thanks to [@MisbahKhan789](https://github.com/MisbahKhan789), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
diff --git a/datasets/fake_news_filipino/README.md b/datasets/fake_news_filipino/README.md
index ba76aaf4507..f912e832e86 100644
--- a/datasets/fake_news_filipino/README.md
+++ b/datasets/fake_news_filipino/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- fact-checking
paperswithcode_id: fake-news-filipino-dataset
+pretty_name: Fake News Filipino
---
# Dataset Card for Fake News Filipino
@@ -156,4 +157,4 @@ Jan Christian Blaise Cruz, Julianne Agatha Tan, and Charibeth Cheng
### Contributions
-Thanks to [@anaerobeth](https://github.com/anaerobeth) for adding this dataset.
\ No newline at end of file
+Thanks to [@anaerobeth](https://github.com/anaerobeth) for adding this dataset.
diff --git a/datasets/farsi_news/README.md b/datasets/farsi_news/README.md
index 4e75af177b7..6306f7ef535 100644
--- a/datasets/farsi_news/README.md
+++ b/datasets/farsi_news/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- language-modeling
paperswithcode_id: null
+pretty_name: FarsiNews
---
# Dataset Card Creation Guide
@@ -153,4 +154,4 @@ https://github.com/sci2lab/Farsi-datasets
### Contributions
-Thanks to [@Narsil](https://github.com/Narsil) for adding this dataset.
\ No newline at end of file
+Thanks to [@Narsil](https://github.com/Narsil) for adding this dataset.
diff --git a/datasets/fashion_mnist/README.md b/datasets/fashion_mnist/README.md
index 7ce704f56fb..27e2a2f9375 100644
--- a/datasets/fashion_mnist/README.md
+++ b/datasets/fashion_mnist/README.md
@@ -15,6 +15,7 @@ task_categories:
task_ids:
- other-other-image-classification
paperswithcode_id: fashion-mnist
+pretty_name: FashionMNIST
---
# Dataset Card for FashionMNIST
diff --git a/datasets/few_rel/README.md b/datasets/few_rel/README.md
index 79c03000d5d..3700387e6e2 100644
--- a/datasets/few_rel/README.md
+++ b/datasets/few_rel/README.md
@@ -22,6 +22,7 @@ task_categories:
task_ids:
- other-other-relation-extraction
paperswithcode_id: fewrel
+pretty_name: Few-Shot Relation Classification Dataset
---
# Dataset Card for few_rel
diff --git a/datasets/financial_phrasebank/README.md b/datasets/financial_phrasebank/README.md
index 0727e64e244..65690bbd95a 100644
--- a/datasets/financial_phrasebank/README.md
+++ b/datasets/financial_phrasebank/README.md
@@ -19,6 +19,7 @@ task_ids:
- multi-class-classification
- sentiment-classification
paperswithcode_id: null
+pretty_name: FinancialPhrasebank
---
# Dataset Card for financial_phrasebank
diff --git a/datasets/finer/README.md b/datasets/finer/README.md
index e462b505c72..d84d9a682e3 100644
--- a/datasets/finer/README.md
+++ b/datasets/finer/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- named-entity-recognition
paperswithcode_id: finer
+pretty_name: Finnish News Corpus for Named Entity Recognition
---
# Dataset Card for [Dataset Name]
@@ -155,4 +156,4 @@ IOB2 labeling scheme is used.
### Contributions
-Thanks to [@stefan-it](https://github.com/stefan-it) for adding this dataset.
\ No newline at end of file
+Thanks to [@stefan-it](https://github.com/stefan-it) for adding this dataset.
diff --git a/datasets/fquad/README.md b/datasets/fquad/README.md
index 06ca1a0ea30..e7ba5ed785f 100644
--- a/datasets/fquad/README.md
+++ b/datasets/fquad/README.md
@@ -23,9 +23,10 @@ task_ids:
- extractive-qa
- closed-domain-qa
paperswithcode_id: fquad
+pretty_name: "FQuAD: French Question Answering Dataset"
---
-# Dataset Card for "fquad"
+# Dataset Card for FQuAD
## Table of Contents
- [Dataset Description](#dataset-description)
@@ -63,10 +64,10 @@ paperswithcode_id: fquad
### Dataset Summary
FQuAD: French Question Answering Dataset
-We introduce FQuAD, a native French Question Answering Dataset.
+We introduce FQuAD, a native French Question Answering Dataset.
FQuAD contains 25,000+ question and answer pairs.
-Finetuning CamemBERT on FQuAD yields a F1 score of 88% and an exact match of 77.9%.
+Fine-tuning CamemBERT on FQuAD yields an F1 score of 88% and an exact match of 77.9%.
Developped to provide a SQuAD equivalent in the French language. Questions are original and based on high quality Wikipedia articles.
### Supported Tasks and Leaderboards
@@ -116,7 +117,7 @@ The data fields are the same among all splits.
### Data Splits
-The FQuAD dataset has 3 splits: _train_, _validation_, and _test_. The _test_ split is however not released publicly at the moment. The splits contain disjoint sets of articles. The following table contains stats about each split.
+The FQuAD dataset has 3 splits: _train_, _validation_, and _test_. The _test_ split is, however, not released publicly at the moment. The splits contain disjoint sets of articles. The following table contains stats about each split.
Dataset Split | Number of Articles in Split | Number of paragraphs in split | Number of questions in split
--------------|------------------------------|--------------------------|-------------------------
@@ -134,7 +135,7 @@ Test | 10 | 532 | 2189
The text used for the contexts are from the curated list of French High-Quality Wikipedia [articles](https://fr.wikipedia.org/wiki/Cat%C3%A9gorie:Article_de_qualit%C3%A9).
### Annotations
-Annotations (spans and questions) are written by students of the CentraleSupélec school of engineering.
+Annotations (spans and questions) are written by students of the CentraleSupélec school of engineering.
Wikipedia articles were scraped and Illuin used an internally-developped tool to help annotators ask questions and indicate the answer spans.
Annotators were given paragraph sized contexts and asked to generate 4/5 non-trivial questions about information in the context.
diff --git a/datasets/freebase_qa/README.md b/datasets/freebase_qa/README.md
index f3cae80ae21..08bce88e33a 100644
--- a/datasets/freebase_qa/README.md
+++ b/datasets/freebase_qa/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- open-domain-qa
paperswithcode_id: freebaseqa
+pretty_name: FreebaseQA
---
# Dataset Card for FreebaseQA
diff --git a/datasets/gap/README.md b/datasets/gap/README.md
index e7501d65c91..1e31f3fb1fe 100644
--- a/datasets/gap/README.md
+++ b/datasets/gap/README.md
@@ -2,6 +2,7 @@
languages:
- en
paperswithcode_id: gap
+pretty_name: GAP Benchmark Suite
---
# Dataset Card for "gap"
@@ -187,4 +188,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@otakumesi](https://github.com/otakumesi), [@lewtun](https://github.com/lewtun) for adding this dataset.
\ No newline at end of file
+Thanks to [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@otakumesi](https://github.com/otakumesi), [@lewtun](https://github.com/lewtun) for adding this dataset.
diff --git a/datasets/generics_kb/README.md b/datasets/generics_kb/README.md
index a6359750e3f..8a3c7aedd85 100644
--- a/datasets/generics_kb/README.md
+++ b/datasets/generics_kb/README.md
@@ -25,6 +25,7 @@ task_categories:
task_ids:
- other-other-knowledge-base
paperswithcode_id: genericskb
+pretty_name: GenericsKB
---
# Dataset Card for Generics KB
@@ -205,4 +206,4 @@ publisher = {Allen Institute for AI},
### Contributions
-Thanks to [@bpatidar](https://github.com/bpatidar) for adding this dataset.
\ No newline at end of file
+Thanks to [@bpatidar](https://github.com/bpatidar) for adding this dataset.
diff --git a/datasets/german_legal_entity_recognition/README.md b/datasets/german_legal_entity_recognition/README.md
index 734ba96f9c1..096037a525d 100644
--- a/datasets/german_legal_entity_recognition/README.md
+++ b/datasets/german_legal_entity_recognition/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- named-entity-recognition
paperswithcode_id: legal-documents-entity-recognition
+pretty_name: Legal Documents Entity Recognition
---
# Dataset Card Creation Guide
@@ -142,4 +143,4 @@ paperswithcode_id: legal-documents-entity-recognition
[More Information Needed]
### Contributions
-Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
\ No newline at end of file
+Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
diff --git a/datasets/germaner/README.md b/datasets/germaner/README.md
index bf0b72955dd..1a679fbb6e8 100644
--- a/datasets/germaner/README.md
+++ b/datasets/germaner/README.md
@@ -1,7 +1,7 @@
---
-annotations_creators:
+annotations_creators:
- crowdsourced
-language_creators:
+language_creators:
- found
languages:
- de
@@ -18,9 +18,10 @@ task_categories:
task_ids:
- named-entity-recognition
paperswithcode_id: null
+pretty_name: GermaNER
---
-# Dataset Card Creation Guide
+# Dataset Card for GermaNER
## Table of Contents
- [Dataset Description](#dataset-description)
@@ -72,8 +73,8 @@ An example instance looks as follows:
```
{
- 'id': '3',
- 'ner_tags': [1, 5, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8],
+ 'id': '3',
+ 'ner_tags': [1, 5, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8],
'tokens': ['Bayern', 'München', 'ist', 'wieder', 'alleiniger', 'Top-', 'Favorit', 'auf', 'den', 'Gewinn', 'der', 'deutschen', 'Fußball-Meisterschaft', '.']
}
```
@@ -190,7 +191,7 @@ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
You must give any other recipients of the Work or Derivative Works a copy of this License; and
You must cause any modified files to carry prominent notices stating that You changed the files; and
You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and
-If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.
+If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.
You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.
diff --git a/datasets/germeval_14/README.md b/datasets/germeval_14/README.md
index f30ef3746dd..118b6bd5640 100644
--- a/datasets/germeval_14/README.md
+++ b/datasets/germeval_14/README.md
@@ -1,5 +1,6 @@
---
paperswithcode_id: null
+pretty_name: GermEval14
---
# Dataset Card for "germeval_14"
@@ -168,4 +169,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@thomwolf](https://github.com/thomwolf), [@jplu](https://github.com/jplu), [@lewtun](https://github.com/lewtun), [@lhoestq](https://github.com/lhoestq), [@stefan-it](https://github.com/stefan-it), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset.
\ No newline at end of file
+Thanks to [@thomwolf](https://github.com/thomwolf), [@jplu](https://github.com/jplu), [@lewtun](https://github.com/lewtun), [@lhoestq](https://github.com/lhoestq), [@stefan-it](https://github.com/stefan-it), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset.
diff --git a/datasets/giga_fren/README.md b/datasets/giga_fren/README.md
index f27f902b63a..8ac0c6a10cf 100644
--- a/datasets/giga_fren/README.md
+++ b/datasets/giga_fren/README.md
@@ -19,6 +19,7 @@ task_categories:
task_ids:
- machine-translation
paperswithcode_id: null
+pretty_name: GigaFren
---
# Dataset Card Creation Guide
@@ -145,4 +146,4 @@ Here are some examples of questions and facts:
[More Information Needed]
### Contributions
-Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
\ No newline at end of file
+Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
diff --git a/datasets/gigaword/README.md b/datasets/gigaword/README.md
index 05b8e280227..547a2813887 100644
--- a/datasets/gigaword/README.md
+++ b/datasets/gigaword/README.md
@@ -2,6 +2,7 @@
languages:
- en
paperswithcode_id: null
+pretty_name: Gigaword
---
# Dataset Card for "gigaword"
@@ -176,4 +177,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@lewtun](https://github.com/lewtun), [@lhoestq](https://github.com/lhoestq), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
\ No newline at end of file
+Thanks to [@lewtun](https://github.com/lewtun), [@lhoestq](https://github.com/lhoestq), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
diff --git a/datasets/glucose/README.md b/datasets/glucose/README.md
index 2234df965ec..30949e226b9 100644
--- a/datasets/glucose/README.md
+++ b/datasets/glucose/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- sequence-modeling-other-common-sense-inference
paperswithcode_id: glucose
+pretty_name: GLUCOSE
---
# Dataset Card for [Dataset Name]
@@ -232,4 +233,4 @@ Creative Commons Attribution-NonCommercial 4.0 International Public License
```
### Contributions
-Thanks to [@TevenLeScao](https://github.com/TevenLeScao) for adding this dataset.
\ No newline at end of file
+Thanks to [@TevenLeScao](https://github.com/TevenLeScao) for adding this dataset.
diff --git a/datasets/glue/README.md b/datasets/glue/README.md
index d187a33a7a4..92b54d2d2bf 100644
--- a/datasets/glue/README.md
+++ b/datasets/glue/README.md
@@ -64,9 +64,10 @@ task_ids:
wnli:
- text-classification-other-coreference-nli
paperswithcode_id: glue
+pretty_name: GLUE (General Language Understanding Evaluation benchmark)
---
-# Dataset Card for "glue"
+# Dataset Card for GLUE
## Table of Contents
- [Dataset Description](#dataset-description)
diff --git a/datasets/gnad10/README.md b/datasets/gnad10/README.md
index 23adaf2cf52..6eae695614d 100644
--- a/datasets/gnad10/README.md
+++ b/datasets/gnad10/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- topic-classification
paperswithcode_id: null
+pretty_name: 10k German News Articles Datasets
---
# Dataset Card for 10k German News Articles Datasets
@@ -153,4 +154,4 @@ This dataset is licensed under the Creative Commons Attribution-NonCommercial-Sh
### Contributions
-Thanks to [@stevhliu](https://github.com/stevhliu) for adding this dataset.
\ No newline at end of file
+Thanks to [@stevhliu](https://github.com/stevhliu) for adding this dataset.
diff --git a/datasets/go_emotions/README.md b/datasets/go_emotions/README.md
index 68f5f71ab34..41762822639 100644
--- a/datasets/go_emotions/README.md
+++ b/datasets/go_emotions/README.md
@@ -23,6 +23,7 @@ task_ids:
- multi-label-classification
- text-classification-other-emotion
paperswithcode_id: goemotions
+pretty_name: GoEmotions
---
# Dataset Card for GoEmotions
@@ -181,4 +182,4 @@ The GitHub repository which houses this dataset has an
### Contributions
-Thanks to [@joeddav](https://github.com/joeddav) for adding this dataset.
\ No newline at end of file
+Thanks to [@joeddav](https://github.com/joeddav) for adding this dataset.
diff --git a/datasets/google_wellformed_query/README.md b/datasets/google_wellformed_query/README.md
index 1504355e201..46e885552c8 100644
--- a/datasets/google_wellformed_query/README.md
+++ b/datasets/google_wellformed_query/README.md
@@ -16,6 +16,7 @@ size_categories:
licenses:
- CC-BY-SA-4.0
paperswithcode_id: null
+pretty_name: GoogleWellformedQuery
---
# Dataset Card Creation Guide
@@ -154,4 +155,4 @@ Query-wellformedness dataset is licensed under CC BY-SA 4.0. Any third party con
### Contributions
-Thanks to [@vasudevgupta7](https://github.com/vasudevgupta7) for adding this dataset.
\ No newline at end of file
+Thanks to [@vasudevgupta7](https://github.com/vasudevgupta7) for adding this dataset.
diff --git a/datasets/grail_qa/README.md b/datasets/grail_qa/README.md
index 5499dccfff0..a4b6223515a 100644
--- a/datasets/grail_qa/README.md
+++ b/datasets/grail_qa/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- question-answering-other-knowledge-base-qa
paperswithcode_id: null
+pretty_name: Grail QA
---
# Dataset Card for Grail QA
@@ -178,4 +179,4 @@ Test | 13,231
### Contributions
-Thanks to [@mattbui](https://github.com/mattbui) for adding this dataset.
\ No newline at end of file
+Thanks to [@mattbui](https://github.com/mattbui) for adding this dataset.
diff --git a/datasets/guardian_authorship/README.md b/datasets/guardian_authorship/README.md
index 13a6e5869c7..141fe8da6cd 100644
--- a/datasets/guardian_authorship/README.md
+++ b/datasets/guardian_authorship/README.md
@@ -2,6 +2,7 @@
languages:
- en
paperswithcode_id: null
+pretty_name: GuardianAuthorship
---
# Dataset Card for "guardian_authorship"
@@ -266,4 +267,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@thomwolf](https://github.com/thomwolf), [@eltoto1219](https://github.com/eltoto1219), [@malikaltakrori](https://github.com/malikaltakrori) for adding this dataset.
\ No newline at end of file
+Thanks to [@thomwolf](https://github.com/thomwolf), [@eltoto1219](https://github.com/eltoto1219), [@malikaltakrori](https://github.com/malikaltakrori) for adding this dataset.
diff --git a/datasets/gutenberg_time/README.md b/datasets/gutenberg_time/README.md
index 5f53cfa3e9e..e5751b9024f 100644
--- a/datasets/gutenberg_time/README.md
+++ b/datasets/gutenberg_time/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- multi-class-classification
paperswithcode_id: gutenberg-time-dataset
+pretty_name: Gutenberg Time Dataset
---
# Dataset Card for the Gutenberg Time dataset
@@ -163,4 +164,4 @@ Allen Kim, Charuta Pethe and Steven Skiena, Stony Brook University
```
### Contributions
-Thanks to [@TevenLeScao](https://github.com/TevenLeScao) for adding this dataset.
\ No newline at end of file
+Thanks to [@TevenLeScao](https://github.com/TevenLeScao) for adding this dataset.
diff --git a/datasets/hans/README.md b/datasets/hans/README.md
index d925e387f04..8bc67971de7 100644
--- a/datasets/hans/README.md
+++ b/datasets/hans/README.md
@@ -2,6 +2,7 @@
languages:
- en
paperswithcode_id: hans
+pretty_name: Heuristic Analysis for NLI Systems
---
# Dataset Card for "hans"
@@ -170,4 +171,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@TevenLeScao](https://github.com/TevenLeScao), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
\ No newline at end of file
+Thanks to [@TevenLeScao](https://github.com/TevenLeScao), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
diff --git a/datasets/hansards/README.md b/datasets/hansards/README.md
index 98547159094..bf3c10a5c39 100644
--- a/datasets/hansards/README.md
+++ b/datasets/hansards/README.md
@@ -1,5 +1,6 @@
---
paperswithcode_id: null
+pretty_name: Hansards
---
# Dataset Card for "hansards"
@@ -186,4 +187,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf), [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
\ No newline at end of file
+Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf), [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
diff --git a/datasets/hard/README.md b/datasets/hard/README.md
index 1b6ed6c0e13..84a998806e8 100644
--- a/datasets/hard/README.md
+++ b/datasets/hard/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- multi-class-classification
paperswithcode_id: hard
+pretty_name: Hotel Arabic-Reviews Dataset
---
# Dataset Card for Hard
@@ -137,4 +138,4 @@ The dataset is not split.
### Contributions
-Thanks to [@zaidalyafeai](https://github.com/zaidalyafeai) for adding this dataset.
\ No newline at end of file
+Thanks to [@zaidalyafeai](https://github.com/zaidalyafeai) for adding this dataset.
diff --git a/datasets/harem/README.md b/datasets/harem/README.md
index db97ae26060..07c73364048 100644
--- a/datasets/harem/README.md
+++ b/datasets/harem/README.md
@@ -18,9 +18,10 @@ task_categories:
task_ids:
- named-entity-recognition
paperswithcode_id: null
+pretty_name: HAREM
---
-# Dataset Card for [Dataset Name]
+# Dataset Card for HAREM
## Table of Contents
- [Dataset Description](#dataset-description)
@@ -58,7 +59,7 @@ paperswithcode_id: null
The HAREM is a Portuguese language corpus commonly used for Named Entity Recognition tasks. It includes about 93k words, from 129 different texts,
from several genres, and language varieties. The split of this dataset version follows the division made by [1], where 7% HAREM
documents are the validation set and the miniHAREM corpus (with about 65k words) is the test set. There are two versions of the dataset set,
-a version that has a total of 10 different named entity classes (Person, Organization, Location, Value, Date, Title, Thing, Event,
+a version that has a total of 10 different named entity classes (Person, Organization, Location, Value, Date, Title, Thing, Event,
Abstraction, and Other) and a "selective" version with only 5 classes (Person, Organization, Location, Value, and Date).
It's important to note that the original version of the HAREM dataset has 2 levels of NER details, namely "Category" and "Sub-type".
@@ -177,4 +178,4 @@ The data is split into train, validation and test set for each of the two versio
### Contributions
-Thanks to [@jonatasgrosman](https://github.com/jonatasgrosman) for adding this dataset.
\ No newline at end of file
+Thanks to [@jonatasgrosman](https://github.com/jonatasgrosman) for adding this dataset.
diff --git a/datasets/has_part/README.md b/datasets/has_part/README.md
index 60fd2bebf0f..6ec23875b4a 100644
--- a/datasets/has_part/README.md
+++ b/datasets/has_part/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- text-scoring-other-Meronym-Prediction
paperswithcode_id: haspart-kb
+pretty_name: hasPart KB
---
# Dataset Card for [HasPart]
@@ -162,4 +163,4 @@ model and further fine-tune the model parameters by training on our labeled data
### Contributions
-Thanks to [@jeromeku](https://github.com/jeromeku) for adding this dataset.
\ No newline at end of file
+Thanks to [@jeromeku](https://github.com/jeromeku) for adding this dataset.
diff --git a/datasets/hate_offensive/README.md b/datasets/hate_offensive/README.md
index 6e3fda2f5b7..676e4dacb77 100644
--- a/datasets/hate_offensive/README.md
+++ b/datasets/hate_offensive/README.md
@@ -19,6 +19,7 @@ task_ids:
- multi-class-classification
- text-classification-other-hate-speech-detection
paperswithcode_id: hate-speech-and-offensive-language
+pretty_name: HateOffensive
---
# Dataset Card for HateOffensive
@@ -152,4 +153,4 @@ MIT License
### Contributions
-Thanks to [@MisbahKhan789](https://github.com/MisbahKhan789) for adding this dataset.
\ No newline at end of file
+Thanks to [@MisbahKhan789](https://github.com/MisbahKhan789) for adding this dataset.
diff --git a/datasets/hate_speech18/README.md b/datasets/hate_speech18/README.md
index c9029bc7849..abd8b003a45 100644
--- a/datasets/hate_speech18/README.md
+++ b/datasets/hate_speech18/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- intent-classification
paperswithcode_id: hate-speech
+pretty_name: Hate Speech
---
# Dataset Card for [Dataset Name]
@@ -163,4 +164,4 @@ English
### Contributions
-Thanks to [@czabo](https://github.com/czabo) for adding this dataset.
\ No newline at end of file
+Thanks to [@czabo](https://github.com/czabo) for adding this dataset.
diff --git a/datasets/hate_speech_filipino/README.md b/datasets/hate_speech_filipino/README.md
index abfdd833277..c04a0c9d886 100644
--- a/datasets/hate_speech_filipino/README.md
+++ b/datasets/hate_speech_filipino/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- sentiment-analysis
paperswithcode_id: null
+pretty_name: Hate Speech in Filipino
---
# Dataset Card for Hate Speech in Filipino
@@ -157,4 +158,4 @@ Data preprocessing was performed to prepare the tweets for feature extraction an
### Contributions
-Thanks to [@anaerobeth](https://github.com/anaerobeth) for adding this dataset.
\ No newline at end of file
+Thanks to [@anaerobeth](https://github.com/anaerobeth) for adding this dataset.
diff --git a/datasets/hate_speech_offensive/README.md b/datasets/hate_speech_offensive/README.md
index dab5e3e9be9..2391b2db0a4 100644
--- a/datasets/hate_speech_offensive/README.md
+++ b/datasets/hate_speech_offensive/README.md
@@ -19,6 +19,7 @@ task_categories:
task_ids:
- text-classification-other-hate-speech-detection
paperswithcode_id: hate-speech-and-offensive-language
+pretty_name: Hate Speech and Offensive Language
---
# Dataset Card for [Dataset Name]
@@ -159,4 +160,4 @@ MIT License
### Contributions
-Thanks to [@hugoabonizio](https://github.com/hugoabonizio) for adding this dataset.
\ No newline at end of file
+Thanks to [@hugoabonizio](https://github.com/hugoabonizio) for adding this dataset.
diff --git a/datasets/hate_speech_pl/README.md b/datasets/hate_speech_pl/README.md
index c4224ac0305..426dfbf509f 100644
--- a/datasets/hate_speech_pl/README.md
+++ b/datasets/hate_speech_pl/README.md
@@ -23,6 +23,7 @@ task_ids:
- sentiment-scoring
- topic-classification
paperswithcode_id: null
+pretty_name: HateSpeechPl
---
@@ -196,4 +197,4 @@ According to [Metashare](http://metashare.nlp.ipipan.waw.pl/metashare/repository
### Contributions
-Thanks to [@kacperlukawski](https://github.com/kacperlukawski) for adding this dataset.
\ No newline at end of file
+Thanks to [@kacperlukawski](https://github.com/kacperlukawski) for adding this dataset.
diff --git a/datasets/hate_speech_portuguese/README.md b/datasets/hate_speech_portuguese/README.md
index 3aac0223e03..cde2414b3ce 100644
--- a/datasets/hate_speech_portuguese/README.md
+++ b/datasets/hate_speech_portuguese/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- text-classification-other-hate-speech-detection
paperswithcode_id: null
+pretty_name: HateSpeechPortuguese
---
# Dataset Card for [Dataset Name]
@@ -140,4 +141,4 @@ Portuguese dataset for hate speech detection composed of 5,668 tweets with binar
### Contributions
-Thanks to [@hugoabonizio](https://github.com/hugoabonizio) for adding this dataset.
\ No newline at end of file
+Thanks to [@hugoabonizio](https://github.com/hugoabonizio) for adding this dataset.
diff --git a/datasets/hatexplain/README.md b/datasets/hatexplain/README.md
index c8468c6b29a..4a575f826df 100644
--- a/datasets/hatexplain/README.md
+++ b/datasets/hatexplain/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- text-classification-other-hate-speech-detection
paperswithcode_id: hatexplain
+pretty_name: HateXplain
---
# Dataset Card for hatexplain
diff --git a/datasets/hausa_voa_ner/README.md b/datasets/hausa_voa_ner/README.md
index 4ca23235459..eda07ef1c13 100644
--- a/datasets/hausa_voa_ner/README.md
+++ b/datasets/hausa_voa_ner/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- named-entity-recognition
paperswithcode_id: null
+pretty_name: Hausa VOA NER Corpus
---
# Dataset Card for Hausa VOA NER Corpus
@@ -177,4 +178,4 @@ The data is under the [Creative Commons Attribution 4.0 ](https://creativecommon
```
### Contributions
-Thanks to [@dadelani](https://github.com/dadelani) for adding this dataset.
\ No newline at end of file
+Thanks to [@dadelani](https://github.com/dadelani) for adding this dataset.
diff --git a/datasets/hausa_voa_topics/README.md b/datasets/hausa_voa_topics/README.md
index d56c749f371..5b39847933f 100644
--- a/datasets/hausa_voa_topics/README.md
+++ b/datasets/hausa_voa_topics/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- topic-classification
paperswithcode_id: null
+pretty_name: Hausa VOA News Topic Classification Dataset
---
# Dataset Card for Hausa VOA News Topic Classification dataset (hausa_voa_topics)
@@ -142,4 +143,4 @@ An instance consists of a news title sentence and the corresponding topic label.
### Contributions
-Thanks to [@michael-aloys](https://github.com/michael-aloys) for adding this dataset.
\ No newline at end of file
+Thanks to [@michael-aloys](https://github.com/michael-aloys) for adding this dataset.
diff --git a/datasets/hda_nli_hindi/README.md b/datasets/hda_nli_hindi/README.md
index a7fd4646dab..ad3743f8b4d 100644
--- a/datasets/hda_nli_hindi/README.md
+++ b/datasets/hda_nli_hindi/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- natural-language-inference
paperswithcode_id: null
+pretty_name: Hindi Discourse Analysis Dataset
---
# Dataset Card for Hindi Discourse Analysis Dataset
diff --git a/datasets/head_qa/README.md b/datasets/head_qa/README.md
index 2ecfa201261..75ed2d23604 100644
--- a/datasets/head_qa/README.md
+++ b/datasets/head_qa/README.md
@@ -21,6 +21,7 @@ task_categories:
task_ids:
- multiple-choice-qa
paperswithcode_id: headqa
+pretty_name: HEAD-QA
---
# Dataset Card for HEAD-QA
@@ -251,4 +252,4 @@ The dataset is licensed under the [MIT License](https://mit-license.org/).
```
### Contributions
-Thanks to [@mariagrandury](https://github.com/mariagrandury) for adding this dataset.
\ No newline at end of file
+Thanks to [@mariagrandury](https://github.com/mariagrandury) for adding this dataset.
diff --git a/datasets/health_fact/README.md b/datasets/health_fact/README.md
index 2949e6eb258..afc99e12303 100644
--- a/datasets/health_fact/README.md
+++ b/datasets/health_fact/README.md
@@ -19,6 +19,7 @@ task_ids:
- fact-checking
- multi-class-classification
paperswithcode_id: pubhealth
+pretty_name: PUBHEALTH
---
# Dataset Card for PUBHEALTH
@@ -180,4 +181,4 @@ MIT License
```
### Contributions
-Thanks to [@bhavitvyamalik](https://github.com/bhavitvyamalik) for adding this dataset.
\ No newline at end of file
+Thanks to [@bhavitvyamalik](https://github.com/bhavitvyamalik) for adding this dataset.
diff --git a/datasets/hebrew_projectbenyehuda/README.md b/datasets/hebrew_projectbenyehuda/README.md
index eeb694d2cab..1072be96850 100644
--- a/datasets/hebrew_projectbenyehuda/README.md
+++ b/datasets/hebrew_projectbenyehuda/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- language-modeling
paperswithcode_id: null
+pretty_name: Hebrew Projectbenyehuda
---
# Dataset Card for Hebrew Projectbenyehuda
@@ -193,4 +194,4 @@ SOFTWARE.
}
### Contributions
-Thanks to [@imvladikon](https://github.com/imvladikon) for adding this dataset.
\ No newline at end of file
+Thanks to [@imvladikon](https://github.com/imvladikon) for adding this dataset.
diff --git a/datasets/hebrew_sentiment/README.md b/datasets/hebrew_sentiment/README.md
index 14337887558..22de875d8db 100644
--- a/datasets/hebrew_sentiment/README.md
+++ b/datasets/hebrew_sentiment/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- sentiment-classification
paperswithcode_id: modern-hebrew-sentiment-dataset
+pretty_name: HebrewSentiment
---
# Dataset Card for HebrewSentiment
@@ -200,4 +201,4 @@ SOFTWARE.
}
### Contributions
-Thanks to [@elronbandel](https://github.com/elronbandel) for adding this dataset.
\ No newline at end of file
+Thanks to [@elronbandel](https://github.com/elronbandel) for adding this dataset.
diff --git a/datasets/hebrew_this_world/README.md b/datasets/hebrew_this_world/README.md
index 172cae309c2..234c88919e3 100644
--- a/datasets/hebrew_this_world/README.md
+++ b/datasets/hebrew_this_world/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- language-modeling
paperswithcode_id: null
+pretty_name: HebrewThisWorld
---
# Dataset Card for HebrewSentiment
@@ -194,4 +195,4 @@ along with this program. If not, see .
### Contributions
-Thanks to [@lhoestq](https://github.com/lhoestq), [@imvladikon](https://github.com/imvladikon) for adding this dataset.
\ No newline at end of file
+Thanks to [@lhoestq](https://github.com/lhoestq), [@imvladikon](https://github.com/imvladikon) for adding this dataset.
diff --git a/datasets/hellaswag/README.md b/datasets/hellaswag/README.md
index 1ddb25186e5..0958a14739f 100644
--- a/datasets/hellaswag/README.md
+++ b/datasets/hellaswag/README.md
@@ -2,6 +2,7 @@
languages:
- en
paperswithcode_id: hellaswag
+pretty_name: HellaSwag
---
# Dataset Card for "hellaswag"
@@ -171,4 +172,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@albertvillanova](https://github.com/albertvillanova), [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun) for adding this dataset.
\ No newline at end of file
+Thanks to [@albertvillanova](https://github.com/albertvillanova), [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun) for adding this dataset.
diff --git a/datasets/hendrycks_test/README.md b/datasets/hendrycks_test/README.md
index 250577e36bc..df474d7df02 100644
--- a/datasets/hendrycks_test/README.md
+++ b/datasets/hendrycks_test/README.md
@@ -17,6 +17,7 @@ task_categories:
- question-answering
task_ids:
- multiple-choice-qa
+pretty_name: HendrycksTest
---
# Dataset Card for HendrycksTest
diff --git a/datasets/hind_encorp/README.md b/datasets/hind_encorp/README.md
index 2f554faa08c..74ed953078e 100644
--- a/datasets/hind_encorp/README.md
+++ b/datasets/hind_encorp/README.md
@@ -20,6 +20,7 @@ task_categories:
task_ids:
- machine-translation
paperswithcode_id: hindencorp
+pretty_name: HindEnCorp
---
# Dataset Card for HindEnCorp
@@ -194,4 +195,4 @@ CC BY-NC-SA 3.0
### Contributions
-Thanks to [@rahul-art](https://github.com/rahul-art) for adding this dataset.
\ No newline at end of file
+Thanks to [@rahul-art](https://github.com/rahul-art) for adding this dataset.
diff --git a/datasets/hindi_discourse/README.md b/datasets/hindi_discourse/README.md
index 1b251a111fe..eea3a3308a8 100644
--- a/datasets/hindi_discourse/README.md
+++ b/datasets/hindi_discourse/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- sequence-modeling-other-discourse-analysis
paperswithcode_id: null
+pretty_name: Discourse Analysis dataset
---
# Dataset Card for Discourse Analysis dataset
@@ -208,4 +209,4 @@ Please cite the following publication if you make use of the dataset: https://ww
### Contributions
-Thanks to [@duttahritwik](https://github.com/duttahritwik) for adding this dataset.
\ No newline at end of file
+Thanks to [@duttahritwik](https://github.com/duttahritwik) for adding this dataset.
diff --git a/datasets/hippocorpus/README.md b/datasets/hippocorpus/README.md
index 71ab97d341d..d33059a9cb5 100644
--- a/datasets/hippocorpus/README.md
+++ b/datasets/hippocorpus/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- text-scoring-other-narrative-flow
paperswithcode_id: null
+pretty_name: Hippocorpus
---
# Dataset Card for [Dataset Name]
@@ -195,4 +196,4 @@ Hippocorpus is distributed under the [Open Use of Data Agreement v1.0](https://m
### Contributions
-Thanks to [@manandey](https://github.com/manandey) for adding this dataset.
\ No newline at end of file
+Thanks to [@manandey](https://github.com/manandey) for adding this dataset.
diff --git a/datasets/hkcancor/README.md b/datasets/hkcancor/README.md
index fcd0b487f56..7c1b177eb65 100644
--- a/datasets/hkcancor/README.md
+++ b/datasets/hkcancor/README.md
@@ -20,6 +20,7 @@ task_ids:
- dialogue-modeling
- machine-translation
paperswithcode_id: hong-kong-cantonese-corpus
+pretty_name: The Hong Kong Cantonese Corpus (HKCanCor)
---
# Dataset Card for The Hong Kong Cantonese Corpus (HKCanCor)
@@ -191,4 +192,4 @@ The POS tagset to Universal Dependency tagset mapping is provided by Jackson Lee
```
### Contributions
-Thanks to [@j-chim](https://github.com/j-chim) for adding this dataset.
\ No newline at end of file
+Thanks to [@j-chim](https://github.com/j-chim) for adding this dataset.
diff --git a/datasets/hope_edi/README.md b/datasets/hope_edi/README.md
index db39a5821d8..7564c6b22cf 100644
--- a/datasets/hope_edi/README.md
+++ b/datasets/hope_edi/README.md
@@ -35,6 +35,8 @@ task_categories:
task_ids:
- text-classification-other-hope-speech-classification
paperswithcode_id: hopeedi
+pretty_name: 'HopeEDI: A Multilingual Hope Speech Detection Dataset for Equality,
+ Diversity, and Inclusion'
---
# Dataset Card for [Dataset Name]
@@ -210,4 +212,4 @@ abstract = "Over the past few years, systems have been developed to control onli
```
### Contributions
-Thanks to [@jamespaultg](https://github.com/jamespaultg) for adding this dataset.
\ No newline at end of file
+Thanks to [@jamespaultg](https://github.com/jamespaultg) for adding this dataset.
diff --git a/datasets/hotpot_qa/README.md b/datasets/hotpot_qa/README.md
index 42cd012f3ac..a3fb14b6b21 100644
--- a/datasets/hotpot_qa/README.md
+++ b/datasets/hotpot_qa/README.md
@@ -2,6 +2,7 @@
languages:
- en
paperswithcode_id: hotpotqa
+pretty_name: HotpotQA
---
# Dataset Card for "hotpot_qa"
@@ -222,4 +223,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@albertvillanova](https://github.com/albertvillanova), [@ghomasHudson](https://github.com/ghomasHudson) for adding this dataset.
\ No newline at end of file
+Thanks to [@albertvillanova](https://github.com/albertvillanova), [@ghomasHudson](https://github.com/ghomasHudson) for adding this dataset.
diff --git a/datasets/hover/README.md b/datasets/hover/README.md
index 1965a9d5a43..fd11e8f4f66 100644
--- a/datasets/hover/README.md
+++ b/datasets/hover/README.md
@@ -19,6 +19,7 @@ task_categories:
task_ids:
- fact-checking-retrieval
paperswithcode_id: hover
+pretty_name: HoVer
---
# Dataset Card Creation Guide
@@ -151,4 +152,4 @@ Please note that in test set sentence only id, uid and claim are available. Labe
[More Information Needed]
### Contributions
-Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
\ No newline at end of file
+Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
diff --git a/datasets/hrenwac_para/README.md b/datasets/hrenwac_para/README.md
index fc40aaae059..2a46234cf14 100644
--- a/datasets/hrenwac_para/README.md
+++ b/datasets/hrenwac_para/README.md
@@ -19,6 +19,7 @@ task_categories:
task_ids:
- machine-translation
paperswithcode_id: null
+pretty_name: hrenWaC
---
# Dataset Card for hrenwac_para
@@ -149,4 +150,4 @@ Dataset is under the [CC-BY-SA 3.0](http://creativecommons.org/licenses/by-sa/3.
### Contributions
-Thanks to [@IvanZidov](https://github.com/IvanZidov) for adding this dataset.
\ No newline at end of file
+Thanks to [@IvanZidov](https://github.com/IvanZidov) for adding this dataset.
diff --git a/datasets/hrwac/README.md b/datasets/hrwac/README.md
index a8a57bdbeb0..dc0f58ede08 100644
--- a/datasets/hrwac/README.md
+++ b/datasets/hrwac/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- language-modeling
paperswithcode_id: null
+pretty_name: hrWaC
---
# Dataset Card for HrWac
@@ -148,4 +149,4 @@ Dataset is under the [CC-BY-SA 3.0](http://creativecommons.org/licenses/by-sa/3.
### Contributions
-Thanks to [@IvanZidov](https://github.com/IvanZidov) for adding this dataset.
\ No newline at end of file
+Thanks to [@IvanZidov](https://github.com/IvanZidov) for adding this dataset.
diff --git a/datasets/humicroedit/README.md b/datasets/humicroedit/README.md
index 4489d217d8c..cacb03a565d 100644
--- a/datasets/humicroedit/README.md
+++ b/datasets/humicroedit/README.md
@@ -25,6 +25,7 @@ task_ids:
subtask-2:
- text-classification-other-funnier-headline-identification
paperswithcode_id: humicroedit
+pretty_name: Humicroedit
---
# Dataset Card for [Dataset Name]
@@ -201,4 +202,4 @@ are ranked on the game’s leaderboard page.
### Contributions
-Thanks to [@saradhix](https://github.com/saradhix) for adding this dataset.
\ No newline at end of file
+Thanks to [@saradhix](https://github.com/saradhix) for adding this dataset.
diff --git a/datasets/hybrid_qa/README.md b/datasets/hybrid_qa/README.md
index ea4cc137ff8..c93a88bf85a 100644
--- a/datasets/hybrid_qa/README.md
+++ b/datasets/hybrid_qa/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- question-answering-other-multihop-tabular-text-qa
paperswithcode_id: hybridqa
+pretty_name: HybridQA
---
# Dataset Card Creation Guide
@@ -200,4 +201,4 @@ The dataset is split into `train`, `dev` and `test` splits.
```
### Contributions
-Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
\ No newline at end of file
+Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
diff --git a/datasets/hyperpartisan_news_detection/README.md b/datasets/hyperpartisan_news_detection/README.md
index 6a325dd3f33..09e340cc0ff 100644
--- a/datasets/hyperpartisan_news_detection/README.md
+++ b/datasets/hyperpartisan_news_detection/README.md
@@ -2,6 +2,7 @@
languages:
- en
paperswithcode_id: null
+pretty_name: HyperpartisanNewsDetection
---
# Dataset Card for "hyperpartisan_news_detection"
@@ -203,4 +204,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@thomwolf](https://github.com/thomwolf), [@ghomasHudson](https://github.com/ghomasHudson) for adding this dataset.
\ No newline at end of file
+Thanks to [@thomwolf](https://github.com/thomwolf), [@ghomasHudson](https://github.com/ghomasHudson) for adding this dataset.
diff --git a/datasets/iapp_wiki_qa_squad/README.md b/datasets/iapp_wiki_qa_squad/README.md
index 50e82b6b395..19184032f9c 100644
--- a/datasets/iapp_wiki_qa_squad/README.md
+++ b/datasets/iapp_wiki_qa_squad/README.md
@@ -19,6 +19,7 @@ task_ids:
- extractive-qa
- open-domain-qa
paperswithcode_id: null
+pretty_name: IappWikiQaSquad
---
# Dataset Card for `iapp_wiki_qa_squad`
diff --git a/datasets/id_clickbait/README.md b/datasets/id_clickbait/README.md
index a05c152069a..28721521041 100644
--- a/datasets/id_clickbait/README.md
+++ b/datasets/id_clickbait/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- fact-checking
paperswithcode_id: null
+pretty_name: Indonesian Clickbait Headlines
---
# Dataset Card for Indonesian Clickbait Headlines
@@ -175,4 +176,4 @@ abstract = "News analysis is a popular task in Natural Language Processing (NLP)
### Contributions
-Thanks to [@cahya-wirawan](https://github.com/cahya-wirawan) for adding this dataset.
\ No newline at end of file
+Thanks to [@cahya-wirawan](https://github.com/cahya-wirawan) for adding this dataset.
diff --git a/datasets/id_liputan6/README.md b/datasets/id_liputan6/README.md
index 783a10062c1..e19b28ed60d 100644
--- a/datasets/id_liputan6/README.md
+++ b/datasets/id_liputan6/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- summarization
paperswithcode_id: null
+pretty_name: Large-scale Indonesian Summarization
---
# Dataset Card for Large-scale Indonesian Summarization
@@ -178,4 +179,4 @@ The dataset is splitted in to train, validation and test sets.
```
### Contributions
-Thanks to [@cahya-wirawan](https://github.com/cahya-wirawan) for adding this dataset.
\ No newline at end of file
+Thanks to [@cahya-wirawan](https://github.com/cahya-wirawan) for adding this dataset.
diff --git a/datasets/id_panl_bppt/README.md b/datasets/id_panl_bppt/README.md
index 8991c28dc7e..611737fa34d 100644
--- a/datasets/id_panl_bppt/README.md
+++ b/datasets/id_panl_bppt/README.md
@@ -19,6 +19,7 @@ task_categories:
task_ids:
- machine-translation
paperswithcode_id: null
+pretty_name: IdPanlBppt
---
# Dataset Card for [Dataset Name]
@@ -165,4 +166,4 @@ The dataset is splitted in to train, validation and test sets.
### Contributions
-Thanks to [@cahya-wirawan](https://github.com/cahya-wirawan) for adding this dataset.
\ No newline at end of file
+Thanks to [@cahya-wirawan](https://github.com/cahya-wirawan) for adding this dataset.
diff --git a/datasets/igbo_english_machine_translation/README.md b/datasets/igbo_english_machine_translation/README.md
index 62923d23b16..bc95c5c8dde 100644
--- a/datasets/igbo_english_machine_translation/README.md
+++ b/datasets/igbo_english_machine_translation/README.md
@@ -19,6 +19,7 @@ task_categories:
task_ids:
- machine-translation
paperswithcode_id: igbonlp-datasets
+pretty_name: IgboNLP Datasets
---
# Dataset Card Creation Guide
@@ -145,4 +146,4 @@ paperswithcode_id: igbonlp-datasets
### Contributions
-Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
\ No newline at end of file
+Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
diff --git a/datasets/igbo_monolingual/README.md b/datasets/igbo_monolingual/README.md
index c8b32c51b8d..af1da8932c2 100644
--- a/datasets/igbo_monolingual/README.md
+++ b/datasets/igbo_monolingual/README.md
@@ -35,6 +35,7 @@ task_categories:
task_ids:
- language-modeling
paperswithcode_id: null
+pretty_name: Igbo Monolingual Dataset
---
# Dataset Card for Igbo Monolingual Dataset
@@ -198,4 +199,4 @@ primaryClass={cs.CL}
### Contributions
-Thanks to [@purvimisal](https://github.com/purvimisal) for adding this dataset.
\ No newline at end of file
+Thanks to [@purvimisal](https://github.com/purvimisal) for adding this dataset.
diff --git a/datasets/igbo_ner/README.md b/datasets/igbo_ner/README.md
index 4d979e8bf75..b55e9a59315 100644
--- a/datasets/igbo_ner/README.md
+++ b/datasets/igbo_ner/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- named-entity-recognition
paperswithcode_id: null
+pretty_name: Igbo NER dataset
---
# Dataset Card for Igbo NER dataset
@@ -150,4 +151,4 @@ Here is an example from the dataset:
### Contributions
-Thanks to [@purvimisal](https://github.com/purvimisal) for adding this dataset.
\ No newline at end of file
+Thanks to [@purvimisal](https://github.com/purvimisal) for adding this dataset.
diff --git a/datasets/ilist/README.md b/datasets/ilist/README.md
index 6cfc03c4170..c8d6f2f2e50 100644
--- a/datasets/ilist/README.md
+++ b/datasets/ilist/README.md
@@ -20,6 +20,7 @@ size_categories:
licenses:
- unknown
paperswithcode_id: null
+pretty_name: ilist
---
# Dataset Card Creation Guide
@@ -168,4 +169,4 @@ The language ids correspond to the following languages: "AWA", "BRA", "MAG", "BH
### Contributions
-Thanks to [@vasudevgupta7](https://github.com/vasudevgupta7) for adding this dataset.
\ No newline at end of file
+Thanks to [@vasudevgupta7](https://github.com/vasudevgupta7) for adding this dataset.
diff --git a/datasets/imdb_urdu_reviews/README.md b/datasets/imdb_urdu_reviews/README.md
index 20fb97f8532..d1cadbd7e03 100644
--- a/datasets/imdb_urdu_reviews/README.md
+++ b/datasets/imdb_urdu_reviews/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- sentiment-classification
paperswithcode_id: null
+pretty_name: ImDB Urdu Reviews
---
# Dataset Card for ImDB Urdu Reviews
@@ -141,4 +142,4 @@ paperswithcode_id: null
### Contributions
-Thanks to [@chaitnayabasava](https://github.com/chaitnayabasava) for adding this dataset.
\ No newline at end of file
+Thanks to [@chaitnayabasava](https://github.com/chaitnayabasava) for adding this dataset.
diff --git a/datasets/imppres/README.md b/datasets/imppres/README.md
index c746dcaaa9c..bc2ef587768 100644
--- a/datasets/imppres/README.md
+++ b/datasets/imppres/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- natural-language-inference
paperswithcode_id: imppres
+pretty_name: IMPPRES
---
# Dataset Card for IMPPRES
@@ -289,4 +290,4 @@ IMPPRES is available under a Creative Commons Attribution-NonCommercial 4.0 Inte
### Contributions
-Thanks to [@aclifton314](https://github.com/aclifton314) for adding this dataset.
\ No newline at end of file
+Thanks to [@aclifton314](https://github.com/aclifton314) for adding this dataset.
diff --git a/datasets/indonlu/README.md b/datasets/indonlu/README.md
index 056f3e25b56..2d3cc6cad46 100644
--- a/datasets/indonlu/README.md
+++ b/datasets/indonlu/README.md
@@ -87,6 +87,7 @@ task_ids:
wrete:
- semantic-similarity-classification
paperswithcode_id: indonlu-benchmark
+pretty_name: IndoNLU
---
@@ -613,4 +614,4 @@ IndoNLU citation
### Contributions
-Thanks to [@yasirabd](https://github.com/yasirabd) for adding this dataset.
\ No newline at end of file
+Thanks to [@yasirabd](https://github.com/yasirabd) for adding this dataset.
diff --git a/datasets/interpress_news_category_tr/README.md b/datasets/interpress_news_category_tr/README.md
index d42ad0c86b5..47bf5424f9e 100644
--- a/datasets/interpress_news_category_tr/README.md
+++ b/datasets/interpress_news_category_tr/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- text-classification-other-news-category-classification
paperswithcode_id: null
+pretty_name: Interpress Turkish News Category Dataset (270K)
---
# Dataset Card for Interpress Turkish News Category Dataset (270K)
@@ -138,4 +139,4 @@ The dataset does not contain any additional annotations.
### Contributions
-Thanks to [@basakbuluz](https://github.com/basakbuluz) for adding this dataset.
\ No newline at end of file
+Thanks to [@basakbuluz](https://github.com/basakbuluz) for adding this dataset.
diff --git a/datasets/interpress_news_category_tr_lite/README.md b/datasets/interpress_news_category_tr_lite/README.md
index 4382d2cef28..3806240dd2e 100644
--- a/datasets/interpress_news_category_tr_lite/README.md
+++ b/datasets/interpress_news_category_tr_lite/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- text-classification-other-news-category-classification
paperswithcode_id: null
+pretty_name: Interpress Turkish News Category Dataset (270K - Lite Version)
---
# Dataset Card for Interpress Turkish News Category Dataset (270K - Lite Version)
@@ -149,4 +150,4 @@ The dataset does not contain any additional annotations.
### Contributions
-Thanks to [@basakbuluz](https://github.com/basakbuluz) & [@yavuzkomecoglu](https://github.com/yavuzkomecoglu) & [@serdarakyol](https://github.com/serdarakyol/) for adding this dataset.
\ No newline at end of file
+Thanks to [@basakbuluz](https://github.com/basakbuluz) & [@yavuzkomecoglu](https://github.com/yavuzkomecoglu) & [@serdarakyol](https://github.com/serdarakyol/) for adding this dataset.
diff --git a/datasets/isixhosa_ner_corpus/README.md b/datasets/isixhosa_ner_corpus/README.md
index aba59032d35..1edb6440c1a 100644
--- a/datasets/isixhosa_ner_corpus/README.md
+++ b/datasets/isixhosa_ner_corpus/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- named-entity-recognition
paperswithcode_id: null
+pretty_name: Isixhosa Ner Corpus
---
-# Dataset Card for [Dataset Name]
+# Dataset Card for Isixhosa Ner Corpus
@@ -169,4 +170,4 @@ The data is under the [Creative Commons Attribution 2.5 South Africa License](ht
### Contributions
-Thanks to [@yvonnegitau](https://github.com/yvonnegitau) for adding this dataset.
\ No newline at end of file
+Thanks to [@yvonnegitau](https://github.com/yvonnegitau) for adding this dataset.
diff --git a/datasets/isizulu_ner_corpus/README.md b/datasets/isizulu_ner_corpus/README.md
index 652a8c7173f..95f19440626 100644
--- a/datasets/isizulu_ner_corpus/README.md
+++ b/datasets/isizulu_ner_corpus/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- named-entity-recognition
paperswithcode_id: null
+pretty_name: Isizulu Ner Corpus
---
# Dataset Card for Isizulu Ner Corpus
@@ -164,4 +165,4 @@ The data is under the [Creative Commons Attribution 2.5 South Africa License](ht
### Contributions
-Thanks to [@yvonnegitau](https://github.com/yvonnegitau) for adding this dataset.
\ No newline at end of file
+Thanks to [@yvonnegitau](https://github.com/yvonnegitau) for adding this dataset.
diff --git a/datasets/iwslt2017/README.md b/datasets/iwslt2017/README.md
index 225e90531a5..68253512cae 100644
--- a/datasets/iwslt2017/README.md
+++ b/datasets/iwslt2017/README.md
@@ -1,8 +1,9 @@
---
paperswithcode_id: null
+pretty_name: IWSLT 2017
---
-# Dataset Card for "iwslt2017"
+# Dataset Card for IWSLT 2017
## Table of Contents
- [Dataset Description](#dataset-description)
@@ -239,4 +240,4 @@ Year = {2012}}
### Contributions
-Thanks to [@thomwolf](https://github.com/thomwolf), [@Narsil](https://github.com/Narsil) for adding this dataset.
\ No newline at end of file
+Thanks to [@thomwolf](https://github.com/thomwolf), [@Narsil](https://github.com/Narsil) for adding this dataset.
diff --git a/datasets/jeopardy/README.md b/datasets/jeopardy/README.md
index 95becc85bf4..2274be0454b 100644
--- a/datasets/jeopardy/README.md
+++ b/datasets/jeopardy/README.md
@@ -2,6 +2,7 @@
languages:
- en
paperswithcode_id: null
+pretty_name: Jeopardy
---
# Dataset Card for "jeopardy"
@@ -171,4 +172,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
\ No newline at end of file
+Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
diff --git a/datasets/jfleg/README.md b/datasets/jfleg/README.md
index 1d9575094a8..9ed9d8d5901 100644
--- a/datasets/jfleg/README.md
+++ b/datasets/jfleg/README.md
@@ -19,6 +19,7 @@ task_categories:
task_ids:
- conditional-text-generation-other-grammatical-error-correction
paperswithcode_id: jfleg
+pretty_name: JHU FLuency-Extended GUG corpus
---
# Dataset Card for JFLEG
@@ -173,4 +174,4 @@ This benchmark was proposed by [Napoles et al., 2020](https://www.aclweb.org/ant
### Contributions
-Thanks to [@j-chim](https://github.com/j-chim) for adding this dataset.
\ No newline at end of file
+Thanks to [@j-chim](https://github.com/j-chim) for adding this dataset.
diff --git a/datasets/jigsaw_toxicity_pred/README.md b/datasets/jigsaw_toxicity_pred/README.md
index c217c513794..660fd8db66c 100644
--- a/datasets/jigsaw_toxicity_pred/README.md
+++ b/datasets/jigsaw_toxicity_pred/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- multi-label-classification
paperswithcode_id: null
+pretty_name: JigsawToxicityPred
---
-# Dataset Card for [Dataset Name]
+# Dataset Card for JigsawToxicityPred
@@ -156,4 +157,4 @@ The "Toxic Comment Classification" dataset is released under [CC0], with the und
### Contributions
-Thanks to [@Tigrex161](https://github.com/Tigrex161) for adding this dataset.
\ No newline at end of file
+Thanks to [@Tigrex161](https://github.com/Tigrex161) for adding this dataset.
diff --git a/datasets/journalists_questions/README.md b/datasets/journalists_questions/README.md
index 4e7e13572ad..4ced9cd25c2 100644
--- a/datasets/journalists_questions/README.md
+++ b/datasets/journalists_questions/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- text-classification-other-question-identification
paperswithcode_id: null
+pretty_name: JournalistsQuestions
---
# Dataset Card for journalists_questions
@@ -151,4 +152,4 @@ To construct our dataset of question tweets posted by journalists, we first acqu
### Contributions
-Thanks to [@MaramHasanain](https://github.com/MaramHasanain) for adding this dataset.
\ No newline at end of file
+Thanks to [@MaramHasanain](https://github.com/MaramHasanain) for adding this dataset.
diff --git a/datasets/kannada_news/README.md b/datasets/kannada_news/README.md
index 7cd5dd119c6..6710002d925 100644
--- a/datasets/kannada_news/README.md
+++ b/datasets/kannada_news/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- topic-classification
paperswithcode_id: null
+pretty_name: KannadaNews Dataset
---
# Dataset Card for kannada_news dataset
@@ -160,4 +161,4 @@ cc-by-sa-4.0
### Contributions
-Thanks to [@vrindaprabhu](https://github.com/vrindaprabhu) for adding this dataset.
\ No newline at end of file
+Thanks to [@vrindaprabhu](https://github.com/vrindaprabhu) for adding this dataset.
diff --git a/datasets/kd_conv/README.md b/datasets/kd_conv/README.md
index 1bc4f0597b6..caf7ae30681 100644
--- a/datasets/kd_conv/README.md
+++ b/datasets/kd_conv/README.md
@@ -20,6 +20,7 @@ task_ids:
- dialogue-modeling
- other-multi-turn
paperswithcode_id: kdconv
+pretty_name: Knowledge-driven Conversation
---
# Dataset Card for KdConv
@@ -245,4 +246,4 @@ Apache License 2.0
```
### Contributions
-Thanks to [@pacman100](https://github.com/pacman100) for adding this dataset.
\ No newline at end of file
+Thanks to [@pacman100](https://github.com/pacman100) for adding this dataset.
diff --git a/datasets/kde4/README.md b/datasets/kde4/README.md
index de215db9e29..4acc8a71314 100644
--- a/datasets/kde4/README.md
+++ b/datasets/kde4/README.md
@@ -109,8 +109,9 @@ task_categories:
task_ids:
- machine-translation
paperswithcode_id: null
+pretty_name: KDE4
---
-# Dataset Card Creation Guide
+# Dataset Card for KDE4
## Table of Contents
- [Dataset Description](#dataset-description)
@@ -238,4 +239,4 @@ E.g.
### Contributions
-Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
\ No newline at end of file
+Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
diff --git a/datasets/kelm/README.md b/datasets/kelm/README.md
index e175121f2b6..235937bda14 100644
--- a/datasets/kelm/README.md
+++ b/datasets/kelm/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- other-other-data-to-text-generation
paperswithcode_id: kelm
+pretty_name: Corpus for Knowledge-Enhanced Language Model Pre-training (KELM)
---
# Dataset Card for Corpus for Knowledge-Enhanced Language Model Pre-training (KELM)
@@ -160,4 +161,4 @@ This dataset has been released under the [CC BY-SA 2.0 license](https://creative
### Contributions
-Thanks to [@joeddav](https://github.com/joeddav) for adding this dataset.
\ No newline at end of file
+Thanks to [@joeddav](https://github.com/joeddav) for adding this dataset.
diff --git a/datasets/kilt_wikipedia/README.md b/datasets/kilt_wikipedia/README.md
index 9328327e59a..8bccdf051a2 100644
--- a/datasets/kilt_wikipedia/README.md
+++ b/datasets/kilt_wikipedia/README.md
@@ -1,5 +1,6 @@
---
paperswithcode_id: null
+pretty_name: KiltWikipedia
---
# Dataset Card for "kilt_wikipedia"
@@ -220,4 +221,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@thomwolf](https://github.com/thomwolf), [@yjernite](https://github.com/yjernite) for adding this dataset.
\ No newline at end of file
+Thanks to [@thomwolf](https://github.com/thomwolf), [@yjernite](https://github.com/yjernite) for adding this dataset.
diff --git a/datasets/kinnews_kirnews/README.md b/datasets/kinnews_kirnews/README.md
index 8e186f7c0ac..1d5b4fd4fbf 100644
--- a/datasets/kinnews_kirnews/README.md
+++ b/datasets/kinnews_kirnews/README.md
@@ -27,6 +27,7 @@ task_ids:
- multi-class-classification
- topic-classification
paperswithcode_id: kinnews-and-kirnews
+pretty_name: KINNEWS and KIRNEWS
---
# Dataset Card for kinnews_kirnews
@@ -187,4 +188,4 @@ Lang| Train | Test |
### Contributions
-Thanks to [@saradhix](https://github.com/saradhix) for adding this dataset.
\ No newline at end of file
+Thanks to [@saradhix](https://github.com/saradhix) for adding this dataset.
diff --git a/datasets/kor_3i4k/README.md b/datasets/kor_3i4k/README.md
index 56b020e8a0a..dd6b3eca173 100644
--- a/datasets/kor_3i4k/README.md
+++ b/datasets/kor_3i4k/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- intent-classification
paperswithcode_id: null
+pretty_name: 3i4K
---
# Dataset Card for 3i4K
@@ -154,4 +155,4 @@ The dataset is licensed under the CC BY-SA-4.0.
### Contributions
-Thanks to [@stevhliu](https://github.com/stevhliu) for adding this dataset.
\ No newline at end of file
+Thanks to [@stevhliu](https://github.com/stevhliu) for adding this dataset.
diff --git a/datasets/kor_hate/README.md b/datasets/kor_hate/README.md
index b6a4cb87ce0..f28ec50795c 100644
--- a/datasets/kor_hate/README.md
+++ b/datasets/kor_hate/README.md
@@ -19,6 +19,7 @@ task_categories:
task_ids:
- multi-label-classification
paperswithcode_id: korean-hatespeech-dataset
+pretty_name: Korean HateSpeech Dataset
---
-# Dataset Card for [Dataset Name]
+# Dataset Card for Korean HateSpeech Dataset
@@ -173,4 +174,4 @@ This dataset is curated by Jihyung Moon, Won Ik Cho and Junbum Lee.
### Contributions
-Thanks to [@stevhliu](https://github.com/stevhliu) for adding this dataset.
\ No newline at end of file
+Thanks to [@stevhliu](https://github.com/stevhliu) for adding this dataset.
diff --git a/datasets/kor_ner/README.md b/datasets/kor_ner/README.md
index ddbd7e68e1c..df2487c0de0 100644
--- a/datasets/kor_ner/README.md
+++ b/datasets/kor_ner/README.md
@@ -18,9 +18,10 @@ task_categories:
task_ids:
- named-entity-recognition
paperswithcode_id: null
+pretty_name: KorNER
---
-# Dataset Card for [Dataset Name]
+# Dataset Card for KorNER
## Table of Contents
- [Dataset Description](#dataset-description)
@@ -162,4 +163,4 @@ The prefix `B` denotes the first item of a phrase, and an `I` denotes any non-in
### Contributions
-Thanks to [@jaketae](https://github.com/jaketae) for adding this dataset.
\ No newline at end of file
+Thanks to [@jaketae](https://github.com/jaketae) for adding this dataset.
diff --git a/datasets/kor_nli/README.md b/datasets/kor_nli/README.md
index 60a389a5bac..f71eaaf7eeb 100644
--- a/datasets/kor_nli/README.md
+++ b/datasets/kor_nli/README.md
@@ -1,5 +1,6 @@
---
paperswithcode_id: kornli
+pretty_name: KorNLI
---
# Dataset Card for "kor_nli"
@@ -197,4 +198,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
\ No newline at end of file
+Thanks to [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
diff --git a/datasets/kor_nlu/README.md b/datasets/kor_nlu/README.md
index f90401e5734..15f40449741 100644
--- a/datasets/kor_nlu/README.md
+++ b/datasets/kor_nlu/README.md
@@ -22,6 +22,7 @@ task_ids:
- natural-language-inference
- semantic-similarity-scoring
paperswithcode_id: null
+pretty_name: KorNLU
---
-# Dataset Card for [Dataset Name]
+# Dataset Card for KorNLU
@@ -143,4 +144,4 @@ paperswithcode_id: null
[More Information Needed]
### Contributions
-Thanks to [@sumanthd17](https://github.com/sumanthd17) for adding this dataset.
\ No newline at end of file
+Thanks to [@sumanthd17](https://github.com/sumanthd17) for adding this dataset.
diff --git a/datasets/kor_qpair/README.md b/datasets/kor_qpair/README.md
index 8ab1dfc70aa..20b2af384d0 100644
--- a/datasets/kor_qpair/README.md
+++ b/datasets/kor_qpair/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- semantic-similarity-classification
paperswithcode_id: null
+pretty_name: KorQpair
---
-# Dataset Card for [Dataset Name]
+# Dataset Card for KorQpair
@@ -144,4 +145,4 @@ Each row in the dataset contains two questions and a `is_duplicate` label.
### Contributions
-Thanks to [@jaketae](https://github.com/jaketae) for adding this dataset.
\ No newline at end of file
+Thanks to [@jaketae](https://github.com/jaketae) for adding this dataset.
diff --git a/datasets/kor_sae/README.md b/datasets/kor_sae/README.md
index 7d85348b381..78d3be3bc6a 100644
--- a/datasets/kor_sae/README.md
+++ b/datasets/kor_sae/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- intent-classification
paperswithcode_id: null
+pretty_name: Structured Argument Extraction for Korean
---
# Dataset Card for Structured Argument Extraction for Korean
@@ -156,4 +157,4 @@ The dataset is licensed under the CC BY-SA-4.0.
### Contributions
-Thanks to [@stevhliu](https://github.com/stevhliu) for adding this dataset.
\ No newline at end of file
+Thanks to [@stevhliu](https://github.com/stevhliu) for adding this dataset.
diff --git a/datasets/kor_sarcasm/README.md b/datasets/kor_sarcasm/README.md
index 171409a440e..aa0f1a29042 100644
--- a/datasets/kor_sarcasm/README.md
+++ b/datasets/kor_sarcasm/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- text-classification-other-sarcasm-detection
paperswithcode_id: null
+pretty_name: Korean Sarcasm Detection
---
# Dataset Card for Korean Sarcasm Detection
@@ -146,4 +147,4 @@ This dataset is licensed under the MIT License.
### Contributions
-Thanks to [@stevhliu](https://github.com/stevhliu) for adding this dataset.
\ No newline at end of file
+Thanks to [@stevhliu](https://github.com/stevhliu) for adding this dataset.
diff --git a/datasets/lambada/README.md b/datasets/lambada/README.md
index cd2700caa8e..e711a53ace2 100644
--- a/datasets/lambada/README.md
+++ b/datasets/lambada/README.md
@@ -18,6 +18,7 @@ size_categories:
licenses:
- cc-by-4.0
paperswithcode_id: lambada
+pretty_name: LAMBADA
---
# Dataset Card for LAMBADA
@@ -184,4 +185,4 @@ Computational Linguistics (Volume 1: Long Papers)},
### Contributions
-Thanks to [@VictorSanh](https://github.com/VictorSanh) for adding this dataset.
\ No newline at end of file
+Thanks to [@VictorSanh](https://github.com/VictorSanh) for adding this dataset.
diff --git a/datasets/large_spanish_corpus/README.md b/datasets/large_spanish_corpus/README.md
index 19455b756a7..70ddcc87581 100644
--- a/datasets/large_spanish_corpus/README.md
+++ b/datasets/large_spanish_corpus/README.md
@@ -80,9 +80,10 @@ task_categories:
task_ids:
- other-other-pretraining-language-models
paperswithcode_id: null
+pretty_name: The Large Spanish Corpus
---
-# Dataset Card for [Dataset Name]
+# Dataset Card for The Large Spanish Corpus
## Table of Contents
- [Dataset Description](#dataset-description)
@@ -112,7 +113,7 @@ paperswithcode_id: null
- **Homepage:** [https://github.com/josecannete/spanish-corpora](https://github.com/josecannete/spanish-corpora)
- **Repository:** [https://github.com/josecannete/spanish-corpora](https://github.com/josecannete/spanish-corpora)
-- **Paper:**
+- **Paper:**
- **Leaderboard:**
- **Point of Contact:** [José Cañete](mailto:jose.canete@ug.uchile.cl) (corpus creator) or [Lewis Tunstall](mailto:lewis.c.tunstall@gmail.com) (corpus submitter)
-The following is taken from the corpus' source repsository:
+The following is taken from the corpus' source repository:
### Contributions
-Thanks to [@lewtun](https://github.com/lewtun) for adding this dataset.
\ No newline at end of file
+Thanks to [@lewtun](https://github.com/lewtun) for adding this dataset.
diff --git a/datasets/laroseda/README.md b/datasets/laroseda/README.md
index 114a29a6794..a1503ca2c39 100644
--- a/datasets/laroseda/README.md
+++ b/datasets/laroseda/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- sentiment-classification
paperswithcode_id: null
+pretty_name: LaRoSeDa
---
# Dataset Card for LaRoSeDa
diff --git a/datasets/lc_quad/README.md b/datasets/lc_quad/README.md
index 6155b723802..d1cccba93ac 100644
--- a/datasets/lc_quad/README.md
+++ b/datasets/lc_quad/README.md
@@ -2,9 +2,10 @@
languages:
- en
paperswithcode_id: lc-quad-2-0
+pretty_name: "LC-QuAD 2.0: Large-scale Complex Question Answering Dataset"
---
-# Dataset Card for "lc_quad"
+# Dataset Card for LC-QuAD 2.0
## Table of Contents
- [Dataset Description](#dataset-description)
@@ -173,4 +174,4 @@ organization={Springer}
### Contributions
-Thanks to [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
\ No newline at end of file
+Thanks to [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
diff --git a/datasets/lener_br/README.md b/datasets/lener_br/README.md
index d3b2b0967b9..5e3c8fb185f 100644
--- a/datasets/lener_br/README.md
+++ b/datasets/lener_br/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- named-entity-recognition
paperswithcode_id: lener-br
+pretty_name: leNER-br
---
# Dataset Card for leNER-br
@@ -183,4 +184,4 @@ The data is split into train, validation and test set. The split sizes are as fo
### Contributions
-Thanks to [@jonatasgrosman](https://github.com/jonatasgrosman) for adding this dataset.
\ No newline at end of file
+Thanks to [@jonatasgrosman](https://github.com/jonatasgrosman) for adding this dataset.
diff --git a/datasets/liar/README.md b/datasets/liar/README.md
index 7e3e052dfae..84439b62704 100644
--- a/datasets/liar/README.md
+++ b/datasets/liar/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- text-classification-other-fake-news-detection
paperswithcode_id: liar
+pretty_name: LIAR
---
-# Dataset Card for [Dataset Name]
+# Dataset Card for LIAR
@@ -140,4 +141,4 @@ English.
### Contributions
-Thanks to [@hugoabonizio](https://github.com/hugoabonizio) for adding this dataset.
\ No newline at end of file
+Thanks to [@hugoabonizio](https://github.com/hugoabonizio) for adding this dataset.
diff --git a/datasets/librispeech_lm/README.md b/datasets/librispeech_lm/README.md
index 24dd1a43e1a..4f777bb224b 100644
--- a/datasets/librispeech_lm/README.md
+++ b/datasets/librispeech_lm/README.md
@@ -1,5 +1,6 @@
---
paperswithcode_id: null
+pretty_name: LibrispeechLm
---
# Dataset Card for "librispeech_lm"
@@ -153,4 +154,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@lewtun](https://github.com/lewtun), [@jplu](https://github.com/jplu), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
\ No newline at end of file
+Thanks to [@lewtun](https://github.com/lewtun), [@jplu](https://github.com/jplu), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
diff --git a/datasets/limit/README.md b/datasets/limit/README.md
index 4e2638df4d0..97dcf5ec7aa 100644
--- a/datasets/limit/README.md
+++ b/datasets/limit/README.md
@@ -21,6 +21,7 @@ task_ids:
- multi-class-classification
- named-entity-recognition
paperswithcode_id: limit
+pretty_name: Literal Motion in Text Dataset
---
-# Dataset Card Creation Guide
+# Dataset Card for Literal Motion in Text Dataset
@@ -192,4 +193,4 @@ The dataset is split into `train` and `test` splits with the following sizes:
### Contributions
-Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
\ No newline at end of file
+Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
diff --git a/datasets/lince/README.md b/datasets/lince/README.md
index c73d6b9bd93..9b9c6bd4e3a 100644
--- a/datasets/lince/README.md
+++ b/datasets/lince/README.md
@@ -1,5 +1,6 @@
---
paperswithcode_id: lince
+pretty_name: Linguistic Code-switching Evaluation Dataset
---
# Dataset Card for "lince"
diff --git a/datasets/liveqa/README.md b/datasets/liveqa/README.md
index 26cc4932d8c..d5198b70474 100644
--- a/datasets/liveqa/README.md
+++ b/datasets/liveqa/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- extractive-qa
paperswithcode_id: liveqa
+pretty_name: LiveQA
---
# Dataset Card for LiveQA
@@ -191,4 +192,4 @@ This resource is developed by [Liu et al., 2020](https://www.aclweb.org/antholog
### Contributions
-Thanks to [@j-chim](https://github.com/j-chim) for adding this dataset.
\ No newline at end of file
+Thanks to [@j-chim](https://github.com/j-chim) for adding this dataset.
diff --git a/datasets/m_lama/README.md b/datasets/m_lama/README.md
index 834ab98a482..cb129051e09 100644
--- a/datasets/m_lama/README.md
+++ b/datasets/m_lama/README.md
@@ -76,6 +76,7 @@ task_ids:
- open-domain-qa
- text-scoring-other-probing
paperswithcode_id: null
+pretty_name: mLAMA
---
-# Dataset Card for [Dataset Name]
+# Dataset Card for mLAMA
diff --git a/datasets/makhzan/README.md b/datasets/makhzan/README.md
index 01732afc127..b2bea6518b8 100644
--- a/datasets/makhzan/README.md
+++ b/datasets/makhzan/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- language-modeling
paperswithcode_id: null
+pretty_name: makhzan
---
# Dataset Card for makhzan
@@ -223,4 +224,4 @@ Zeerak Ahmed
### Contributions
-Thanks to [@arkhalid](https://github.com/arkhalid) for adding this dataset.
\ No newline at end of file
+Thanks to [@arkhalid](https://github.com/arkhalid) for adding this dataset.
diff --git a/datasets/math_qa/README.md b/datasets/math_qa/README.md
index 4ee8d941629..4aee564ecb7 100644
--- a/datasets/math_qa/README.md
+++ b/datasets/math_qa/README.md
@@ -2,6 +2,7 @@
languages:
- en
paperswithcode_id: mathqa
+pretty_name: MathQA
---
# Dataset Card for "math_qa"
@@ -159,4 +160,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
\ No newline at end of file
+Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
diff --git a/datasets/matinf/README.md b/datasets/matinf/README.md
index 65d42cbc892..5bc80521f23 100644
--- a/datasets/matinf/README.md
+++ b/datasets/matinf/README.md
@@ -1,5 +1,6 @@
---
paperswithcode_id: matinf
+pretty_name: Maternal and Infant Dataset
---
# Dataset Card for "matinf"
@@ -242,4 +243,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@JetRunner](https://github.com/JetRunner) for adding this dataset.
\ No newline at end of file
+Thanks to [@JetRunner](https://github.com/JetRunner) for adding this dataset.
diff --git a/datasets/mc_taco/README.md b/datasets/mc_taco/README.md
index fb83646b5c3..b0a5296b168 100644
--- a/datasets/mc_taco/README.md
+++ b/datasets/mc_taco/README.md
@@ -20,6 +20,7 @@ task_categories:
task_ids:
- multiple-choice-qa
paperswithcode_id: mc-taco
+pretty_name: MC-TACO
---
-# Dataset Card Creation Guide
+# Dataset Card for MC-TACO
@@ -180,4 +181,4 @@ Unknwon
### Contributions
-Thanks to [@VictorSanh](https://github.com/VictorSanh) for adding this dataset.
\ No newline at end of file
+Thanks to [@VictorSanh](https://github.com/VictorSanh) for adding this dataset.
diff --git a/datasets/med_hop/README.md b/datasets/med_hop/README.md
index bcd5e78fa14..51bd067a426 100644
--- a/datasets/med_hop/README.md
+++ b/datasets/med_hop/README.md
@@ -19,6 +19,7 @@ task_ids:
- extractive-qa
- question-answering-other-multi-hop
paperswithcode_id: medhop
+pretty_name: MedHop
---
-# Dataset Card Creation Guide
+# Dataset Card for MedHop
@@ -143,4 +144,4 @@ paperswithcode_id: medhop
[More Information Needed]
### Contributions
-Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
\ No newline at end of file
+Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
diff --git a/datasets/medical_questions_pairs/README.md b/datasets/medical_questions_pairs/README.md
index 6c85907c83a..e282c6c34f6 100644
--- a/datasets/medical_questions_pairs/README.md
+++ b/datasets/medical_questions_pairs/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- semantic-similarity-classification
paperswithcode_id: null
+pretty_name: MedicalQuestionsPairs
---
-# Dataset Card for [medical_questions_pairs]
+# Dataset Card for medical_questions_pairs
@@ -164,4 +165,4 @@ The first instruction generates a positive question pair (similar) and the second generates a negative question pair (different).
### Contributions
-Thanks to [@tuner007](https://github.com/tuner007) for adding this dataset.
\ No newline at end of file
+Thanks to [@tuner007](https://github.com/tuner007) for adding this dataset.
diff --git a/datasets/meta_woz/README.md b/datasets/meta_woz/README.md
index 1e6777adb93..efdc55e1f2f 100644
--- a/datasets/meta_woz/README.md
+++ b/datasets/meta_woz/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- dialogue-modeling
paperswithcode_id: metalwoz
+pretty_name: Meta-Learning Wizard-of-Oz
---
# Dataset Card for MetaLWOz
@@ -228,4 +229,4 @@ url = {https://www.microsoft.com/en-us/research/publication/fast-domain-adaptati
### Contributions
-Thanks to [@pacman100](https://github.com/pacman100) for adding this dataset.
\ No newline at end of file
+Thanks to [@pacman100](https://github.com/pacman100) for adding this dataset.
diff --git a/datasets/metooma/README.md b/datasets/metooma/README.md
index 84f2d6a6dfc..0303ee3d3b4 100644
--- a/datasets/metooma/README.md
+++ b/datasets/metooma/README.md
@@ -18,6 +18,7 @@ task_ids:
- multi-class-classification
- multi-label-classification
paperswithcode_id: metooma
+pretty_name: '#MeTooMA dataset'
---
# Dataset Card for #MeTooMA dataset
@@ -228,4 +229,4 @@ Please cite the following publication if you make use of the dataset: https://oj
### Contributions
-Thanks to [@akash418](https://github.com/akash418) for adding this dataset.
\ No newline at end of file
+Thanks to [@akash418](https://github.com/akash418) for adding this dataset.
diff --git a/datasets/metrec/README.md b/datasets/metrec/README.md
index 342cce1b0f1..5cb325206cc 100644
--- a/datasets/metrec/README.md
+++ b/datasets/metrec/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- text-classification-other-poetry-classification
paperswithcode_id: metrec
+pretty_name: MetRec
---
# Dataset Card for MetRec
@@ -156,4 +157,4 @@ The dataset does not contain any additional annotations.
### Contributions
-Thanks to [@zaidalyafeai](https://github.com/zaidalyafeai) for adding this dataset.
\ No newline at end of file
+Thanks to [@zaidalyafeai](https://github.com/zaidalyafeai) for adding this dataset.
diff --git a/datasets/miam/README.md b/datasets/miam/README.md
index bdf89200dec..9e25845dcfd 100644
--- a/datasets/miam/README.md
+++ b/datasets/miam/README.md
@@ -47,6 +47,7 @@ task_ids:
- language-modeling
- text-classification-other-dialogue-act-classification
paperswithcode_id: null
+pretty_name: MIAM
---
# Dataset Card for MIAM
@@ -273,4 +274,4 @@ year={2021},
-url{https://openreview.net/forum?id=c1oDhu_hagR},
+url={https://openreview.net/forum?id=c1oDhu_hagR},
note={anonymous preprint under review}
}
-```
\ No newline at end of file
+```
diff --git a/datasets/mkqa/README.md b/datasets/mkqa/README.md
index cc4b37cdea0..1b2bf70766f 100644
--- a/datasets/mkqa/README.md
+++ b/datasets/mkqa/README.md
@@ -43,6 +43,7 @@ task_categories:
task_ids:
- open-domain-qa
paperswithcode_id: mkqa
+pretty_name: Multilingual Knowledge Questions and Answers
---
# Dataset Card for MKQA: Multilingual Knowledge Questions & Answers
diff --git a/datasets/mnist/README.md b/datasets/mnist/README.md
index 919f86d84dc..0075b53996e 100644
--- a/datasets/mnist/README.md
+++ b/datasets/mnist/README.md
@@ -16,6 +16,7 @@ task_categories:
task_ids:
- other-other-image-classification
paperswithcode_id: mnist
+pretty_name: MNIST
---
# Dataset Card for MNIST
@@ -156,4 +157,4 @@ MIT Licence
### Contributions
-Thanks to [@sgugger](https://github.com/sgugger) for adding this dataset.
\ No newline at end of file
+Thanks to [@sgugger](https://github.com/sgugger) for adding this dataset.
diff --git a/datasets/mrqa/README.md b/datasets/mrqa/README.md
index e5fb4d4f799..20848a26fa3 100644
--- a/datasets/mrqa/README.md
+++ b/datasets/mrqa/README.md
@@ -24,6 +24,7 @@ task_categories:
task_ids:
- extractive-qa
paperswithcode_id: mrqa-2019
+pretty_name: MRQA 2019
---
-# Dataset Card Creation Guide
+# Dataset Card for MRQA 2019
@@ -293,4 +294,4 @@ Unknown
### Contributions
-Thanks to [@jimmycode](https://github.com/jimmycode), [@VictorSanh](https://github.com/VictorSanh) for adding this dataset.
\ No newline at end of file
+Thanks to [@jimmycode](https://github.com/jimmycode), [@VictorSanh](https://github.com/VictorSanh) for adding this dataset.
diff --git a/datasets/ms_marco/README.md b/datasets/ms_marco/README.md
index d6e9277b009..337ba18a00b 100644
--- a/datasets/ms_marco/README.md
+++ b/datasets/ms_marco/README.md
@@ -2,6 +2,7 @@
languages:
- en
paperswithcode_id: ms-marco
+pretty_name: Microsoft Machine Reading Comprehension Dataset
---
# Dataset Card for "ms_marco"
@@ -214,4 +215,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun) for adding this dataset.
\ No newline at end of file
+Thanks to [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun) for adding this dataset.
diff --git a/datasets/ms_terms/README.md b/datasets/ms_terms/README.md
index 4392eacecd0..eae98037dcb 100644
--- a/datasets/ms_terms/README.md
+++ b/datasets/ms_terms/README.md
@@ -133,6 +133,7 @@ task_categories:
task_ids:
- machine-translation
paperswithcode_id: null
+pretty_name: MsTerms
---
-# Dataset Card for [ms_terms]
+# Dataset Card for ms_terms
@@ -258,4 +259,4 @@ Nearly 100 Languages.
### Contributions
-Thanks to [@leoxzhao](https://github.com/leoxzhao), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
\ No newline at end of file
+Thanks to [@leoxzhao](https://github.com/leoxzhao), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
diff --git a/datasets/msr_genomics_kbcomp/README.md b/datasets/msr_genomics_kbcomp/README.md
index d5569bb5a65..cc0dcc1b7e9 100644
--- a/datasets/msr_genomics_kbcomp/README.md
+++ b/datasets/msr_genomics_kbcomp/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- other-other-NCI-PID-PubMed Genomics Knowledge Base Completion Dataset
paperswithcode_id: null
+pretty_name: MsrGenomicsKbcomp
---
-# Dataset Card for [Dataset Name]
+# Dataset Card for MsrGenomicsKbcomp
@@ -191,4 +192,4 @@ The dataset was initially created by Kristina Toutanova, Victoria Lin, Wen-tau Y
### Contributions
-Thanks to [@manandey](https://github.com/manandey) for adding this dataset.
\ No newline at end of file
+Thanks to [@manandey](https://github.com/manandey) for adding this dataset.
diff --git a/datasets/msr_sqa/README.md b/datasets/msr_sqa/README.md
index f82357a11f2..8e12a43db05 100644
--- a/datasets/msr_sqa/README.md
+++ b/datasets/msr_sqa/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- extractive-qa
paperswithcode_id: null
+pretty_name: Microsoft Research Sequential Question Answering
---
# Dataset Card for Microsoft Research Sequential Question Answering
@@ -161,4 +162,4 @@ It is recommended to use a CSV parser like the Python CSV package to process the
### Contributions
-Thanks to [@mattbui](https://github.com/mattbui) for adding this dataset.
\ No newline at end of file
+Thanks to [@mattbui](https://github.com/mattbui) for adding this dataset.
diff --git a/datasets/msr_text_compression/README.md b/datasets/msr_text_compression/README.md
index 45014d645e3..6a038a34f70 100644
--- a/datasets/msr_text_compression/README.md
+++ b/datasets/msr_text_compression/README.md
@@ -17,6 +17,7 @@ task_categories:
- conditional-text-generation
task_ids:
- summarization
+pretty_name: MsrTextCompression
---
-# Dataset Card for [Dataset Name]
+# Dataset Card for MsrTextCompression
@@ -154,4 +155,4 @@ Microsoft Research Data License Agreement
### Contributions
-Thanks to [@jeromeku](https://github.com/jeromeku) for adding this dataset.
\ No newline at end of file
+Thanks to [@jeromeku](https://github.com/jeromeku) for adding this dataset.
diff --git a/datasets/msr_zhen_translation_parity/README.md b/datasets/msr_zhen_translation_parity/README.md
index 7ef945548c6..14c1cdcc95a 100644
--- a/datasets/msr_zhen_translation_parity/README.md
+++ b/datasets/msr_zhen_translation_parity/README.md
@@ -20,6 +20,7 @@ task_categories:
task_ids:
- machine-translation
paperswithcode_id: null
+pretty_name: MsrZhenTranslationParity
---
# Dataset Card for msr_zhen_translation_parity
@@ -181,4 +182,4 @@ Citation information is available at this link [Achieving Human Parity on Automa
### Contributions
-Thanks to [@leoxzhao](https://github.com/leoxzhao) for adding this dataset.
\ No newline at end of file
+Thanks to [@leoxzhao](https://github.com/leoxzhao) for adding this dataset.
diff --git a/datasets/msra_ner/README.md b/datasets/msra_ner/README.md
index ae475b01a6a..3bc4562c5ef 100644
--- a/datasets/msra_ner/README.md
+++ b/datasets/msra_ner/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- named-entity-recognition
paperswithcode_id: null
+pretty_name: MSRA NER
---
# Dataset Card for MSRA NER
@@ -140,4 +141,4 @@ paperswithcode_id: null
### Contributions
-Thanks to [@JetRunner](https://github.com/JetRunner) for adding this dataset.
\ No newline at end of file
+Thanks to [@JetRunner](https://github.com/JetRunner) for adding this dataset.
diff --git a/datasets/mt_eng_vietnamese/README.md b/datasets/mt_eng_vietnamese/README.md
index 0bb100a2b41..dd82fc579d4 100644
--- a/datasets/mt_eng_vietnamese/README.md
+++ b/datasets/mt_eng_vietnamese/README.md
@@ -19,6 +19,7 @@ task_categories:
task_ids:
- machine-translation
paperswithcode_id: null
+pretty_name: MtEngVietnamese
---
# Dataset Card for mt_eng_vietnamese
@@ -161,4 +162,4 @@ train: 133318, validation: 1269, test: 1269
### Contributions
-Thanks to [@Nilanshrajput](https://github.com/Nilanshrajput) for adding this dataset.
\ No newline at end of file
+Thanks to [@Nilanshrajput](https://github.com/Nilanshrajput) for adding this dataset.
diff --git a/datasets/muchocine/README.md b/datasets/muchocine/README.md
index e6f6903bab2..a039379f0e3 100644
--- a/datasets/muchocine/README.md
+++ b/datasets/muchocine/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- sentiment-classification
paperswithcode_id: null
+pretty_name: Muchocine
---
# Dataset Card for Muchocine
@@ -142,4 +143,4 @@ See http://www.lsi.us.es/~fermin/index.php/Datasets
### Contributions
-Thanks to [@mapmeld](https://github.com/mapmeld) for adding this dataset.
\ No newline at end of file
+Thanks to [@mapmeld](https://github.com/mapmeld) for adding this dataset.
diff --git a/datasets/multi_booked/README.md b/datasets/multi_booked/README.md
index 36ff8c5847f..055b8b73786 100644
--- a/datasets/multi_booked/README.md
+++ b/datasets/multi_booked/README.md
@@ -21,6 +21,7 @@ task_categories:
task_ids:
- sentiment-classification
paperswithcode_id: multibooked
+pretty_name: MultiBooked
---
# Dataset Card for MultiBooked
@@ -182,4 +183,4 @@ Dataset is under the [CC-BY 3.0](https://creativecommons.org/licenses/by/3.0/) l
### Contributions
-Thanks to [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
\ No newline at end of file
+Thanks to [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
diff --git a/datasets/multi_news/README.md b/datasets/multi_news/README.md
index d72de5f4bdd..8b9371b9539 100644
--- a/datasets/multi_news/README.md
+++ b/datasets/multi_news/README.md
@@ -2,6 +2,7 @@
languages:
- en
paperswithcode_id: multi-news
+pretty_name: Multi-News
---
# Dataset Card for "multi_news"
@@ -165,4 +166,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
\ No newline at end of file
+Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
diff --git a/datasets/multi_nli/README.md b/datasets/multi_nli/README.md
index 5477a1997d8..3f63c7ab3be 100644
--- a/datasets/multi_nli/README.md
+++ b/datasets/multi_nli/README.md
@@ -22,6 +22,7 @@ task_categories:
task_ids:
- semantic-similarity-scoring
paperswithcode_id: multinli
+pretty_name: Multi-Genre Natural Language Inference
---
# Dataset Card for Multi-Genre Natural Language Inference (MultiNLI)
diff --git a/datasets/multi_nli_mismatch/README.md b/datasets/multi_nli_mismatch/README.md
index 3926de9c6e9..7bfa78b724f 100644
--- a/datasets/multi_nli_mismatch/README.md
+++ b/datasets/multi_nli_mismatch/README.md
@@ -22,6 +22,7 @@ task_categories:
task_ids:
- semantic-similarity-scoring
paperswithcode_id: multinli
+pretty_name: Multi-Genre Natural Language Inference
---
# Dataset Card for Multi-Genre Natural Language Inference (Mismatched only)
@@ -193,4 +194,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset.
\ No newline at end of file
+Thanks to [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset.
diff --git a/datasets/multi_para_crawl/README.md b/datasets/multi_para_crawl/README.md
index 2b59fdd4090..48776c9d5ae 100644
--- a/datasets/multi_para_crawl/README.md
+++ b/datasets/multi_para_crawl/README.md
@@ -57,6 +57,7 @@ task_categories:
task_ids:
- machine-translation
paperswithcode_id: null
+pretty_name: MultiParaCrawl
---
-# Dataset Card Creation Guide
+# Dataset Card for MultiParaCrawl
@@ -187,4 +188,4 @@ E.g.
### Contributions
-Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
\ No newline at end of file
+Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
diff --git a/datasets/multi_re_qa/README.md b/datasets/multi_re_qa/README.md
index 90871ad3160..aa33f7f92ca 100644
--- a/datasets/multi_re_qa/README.md
+++ b/datasets/multi_re_qa/README.md
@@ -55,6 +55,7 @@ task_ids:
- extractive-qa
- open-domain-qa
paperswithcode_id: multireqa
+pretty_name: MultiReQA
---
# Dataset Card for MultiReQA
@@ -227,4 +228,4 @@ The annotators/curators of the dataset are [mandyguo-xyguo](https://github.com/m
### Contributions
-Thanks to [@Karthik-Bhaskar](https://github.com/Karthik-Bhaskar) for adding this dataset.
\ No newline at end of file
+Thanks to [@Karthik-Bhaskar](https://github.com/Karthik-Bhaskar) for adding this dataset.
diff --git a/datasets/multi_woz_v22/README.md b/datasets/multi_woz_v22/README.md
index 8f34e8aef2d..27dd3a9e2c4 100644
--- a/datasets/multi_woz_v22/README.md
+++ b/datasets/multi_woz_v22/README.md
@@ -23,6 +23,7 @@ task_ids:
- multi-class-classification
- parsing
paperswithcode_id: multiwoz
+pretty_name: Multi-domain Wizard-of-Oz
---
# Dataset Card for MultiWOZ
@@ -279,4 +280,4 @@ Version 2.2
### Contributions
-Thanks to [@yjernite](https://github.com/yjernite) for adding this dataset.
\ No newline at end of file
+Thanks to [@yjernite](https://github.com/yjernite) for adding this dataset.
diff --git a/datasets/multi_x_science_sum/README.md b/datasets/multi_x_science_sum/README.md
index dad32092432..5f13df93315 100644
--- a/datasets/multi_x_science_sum/README.md
+++ b/datasets/multi_x_science_sum/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- summarization
paperswithcode_id: multi-xscience
+pretty_name: Multi-XScience
---
# Dataset Card for Multi-XScience
@@ -159,4 +160,4 @@ The data is split into a training, validation and test.
### Contributions
-Thanks to [@moussaKam](https://github.com/moussaKam) for adding this dataset.
\ No newline at end of file
+Thanks to [@moussaKam](https://github.com/moussaKam) for adding this dataset.
diff --git a/datasets/mwsc/README.md b/datasets/mwsc/README.md
index b9cd30f45b7..2a71630174b 100644
--- a/datasets/mwsc/README.md
+++ b/datasets/mwsc/README.md
@@ -2,9 +2,10 @@
languages:
- en
paperswithcode_id: null
+pretty_name: The Modified Winograd Schema Challenge (MWSC)
---
-# Dataset Card for "mwsc"
+# Dataset Card for The Modified Winograd Schema Challenge (MWSC)
## Table of Contents
- [Dataset Description](#dataset-description)
@@ -160,4 +161,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@ghomasHudson](https://github.com/ghomasHudson), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
\ No newline at end of file
+Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@ghomasHudson](https://github.com/ghomasHudson), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
diff --git a/datasets/myanmar_news/README.md b/datasets/myanmar_news/README.md
index 8e75c5b2b74..b1180df87f1 100644
--- a/datasets/myanmar_news/README.md
+++ b/datasets/myanmar_news/README.md
@@ -16,6 +16,7 @@ task_categories:
task_ids:
- topic-classification
paperswithcode_id: null
+pretty_name: MyanmarNews
---
# Dataset Card for Myanmar_News
@@ -72,4 +73,4 @@ See https://github.com/ayehninnkhine/MyanmarNewsClassificationSystem
### Contributions
-Thanks to [@mapmeld](https://github.com/mapmeld) for adding this dataset.
\ No newline at end of file
+Thanks to [@mapmeld](https://github.com/mapmeld) for adding this dataset.
diff --git a/datasets/narrativeqa/README.md b/datasets/narrativeqa/README.md
index 004c9c0f709..29f8c9b4ae0 100644
--- a/datasets/narrativeqa/README.md
+++ b/datasets/narrativeqa/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- abstractive-qa
paperswithcode_id: narrativeqa
+pretty_name: NarrativeQA
---
# Dataset Card for Narrative QA
@@ -197,4 +198,4 @@ pages = {TBD},
### Contributions
-Thanks to [@ghomasHudson](https://github.com/ghomasHudson) for adding this dataset.
\ No newline at end of file
+Thanks to [@ghomasHudson](https://github.com/ghomasHudson) for adding this dataset.
diff --git a/datasets/narrativeqa_manual/README.md b/datasets/narrativeqa_manual/README.md
index 7a458a0da9c..ab06c73d197 100644
--- a/datasets/narrativeqa_manual/README.md
+++ b/datasets/narrativeqa_manual/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- abstractive-qa
paperswithcode_id: narrativeqa
+pretty_name: NarrativeQA
---
# Dataset Card for Narrative QA Manual
@@ -197,4 +198,4 @@ pages = {TBD},
### Contributions
-Thanks to [@rsanjaykamath](https://github.com/rsanjaykamath) for adding this dataset.
\ No newline at end of file
+Thanks to [@rsanjaykamath](https://github.com/rsanjaykamath) for adding this dataset.
diff --git a/datasets/natural_questions/README.md b/datasets/natural_questions/README.md
index 4b4238c49bd..ea4ca276741 100644
--- a/datasets/natural_questions/README.md
+++ b/datasets/natural_questions/README.md
@@ -2,6 +2,7 @@
languages:
- en
paperswithcode_id: natural-questions
+pretty_name: Natural Questions
---
# Dataset Card for "natural_questions"
@@ -177,4 +178,4 @@ journal = {Transactions of the Association of Computational Linguistics}
### Contributions
-Thanks to [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
\ No newline at end of file
+Thanks to [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
diff --git a/datasets/nchlt/README.md b/datasets/nchlt/README.md
index 87db42d7d73..6e930144686 100644
--- a/datasets/nchlt/README.md
+++ b/datasets/nchlt/README.md
@@ -26,8 +26,9 @@ task_categories:
task_ids:
- named-entity-recognition
paperswithcode_id: null
+pretty_name: NCHLT
---
-# Dataset Card Creation Guide
+# Dataset Card for NCHLT
## Table of Contents
- [Dataset Description](#dataset-description)
@@ -165,4 +166,4 @@ Martin.Puttkammer@nwu.ac.za
### Contributions
-Thanks to [@Narsil](https://github.com/Narsil) for adding this dataset.
\ No newline at end of file
+Thanks to [@Narsil](https://github.com/Narsil) for adding this dataset.
diff --git a/datasets/ncslgr/README.md b/datasets/ncslgr/README.md
index 332d59014d0..02af89f315a 100644
--- a/datasets/ncslgr/README.md
+++ b/datasets/ncslgr/README.md
@@ -19,6 +19,7 @@ task_categories:
task_ids:
- machine-translation
paperswithcode_id: null
+pretty_name: NCSLGR
---
# Dataset Card for NCSLGR
@@ -153,4 +154,4 @@ None
### Contributions
-Thanks to [@AmitMY](https://github.com/AmitMY) for adding this dataset.
\ No newline at end of file
+Thanks to [@AmitMY](https://github.com/AmitMY) for adding this dataset.
diff --git a/datasets/news_commentary/README.md b/datasets/news_commentary/README.md
index b1250ee1c73..b36b94bc08e 100644
--- a/datasets/news_commentary/README.md
+++ b/datasets/news_commentary/README.md
@@ -29,6 +29,7 @@ task_categories:
task_ids:
- machine-translation
paperswithcode_id: null
+pretty_name: NewsCommentary
---
-# Dataset Card Creation Guide
+# Dataset Card for NewsCommentary
@@ -155,4 +156,4 @@ paperswithcode_id: null
### Contributions
-Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
\ No newline at end of file
+Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
diff --git a/datasets/newsph_nli/README.md b/datasets/newsph_nli/README.md
index ae230c5b0ec..946f1eebca6 100644
--- a/datasets/newsph_nli/README.md
+++ b/datasets/newsph_nli/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- natural-language-inference
paperswithcode_id: newsph-nli
+pretty_name: NewsPH NLI
---
# Dataset Card for NewsPH NLI
@@ -151,4 +152,4 @@ Jan Christian Blaise Cruz, Jose Kristian Resabal, James Lin, Dan John Velasco an
### Contributions
-Thanks to [@anaerobeth](https://github.com/anaerobeth) for adding this dataset.
\ No newline at end of file
+Thanks to [@anaerobeth](https://github.com/anaerobeth) for adding this dataset.
diff --git a/datasets/newspop/README.md b/datasets/newspop/README.md
index 148ebeec763..a5c96d72b1e 100644
--- a/datasets/newspop/README.md
+++ b/datasets/newspop/README.md
@@ -18,9 +18,10 @@ task_categories:
task_ids:
- other-social-media-shares-prediction
paperswithcode_id: null
+pretty_name: News Popularity in Multiple Social Media Platforms
---
-# Dataset Card for newspop
+# Dataset Card for News Popularity in Multiple Social Media Platforms
## Table of Contents
- [Dataset Description](#dataset-description)
diff --git a/datasets/newsqa/README.md b/datasets/newsqa/README.md
index aeab9f91838..2142571db26 100644
--- a/datasets/newsqa/README.md
+++ b/datasets/newsqa/README.md
@@ -23,6 +23,7 @@ task_categories:
task_ids:
- extractive-qa
paperswithcode_id: newsqa
+pretty_name: NewsQA
---
# Dataset Card for NewsQA
@@ -197,4 +198,4 @@ THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLI
### Contributions
-Thanks to [@rsanjaykamath](https://github.com/rsanjaykamath) for adding this dataset.
\ No newline at end of file
+Thanks to [@rsanjaykamath](https://github.com/rsanjaykamath) for adding this dataset.
diff --git a/datasets/newsroom/README.md b/datasets/newsroom/README.md
index 66b89d40cdf..96eae4097c7 100644
--- a/datasets/newsroom/README.md
+++ b/datasets/newsroom/README.md
@@ -2,6 +2,7 @@
languages:
- en
paperswithcode_id: newsroom
+pretty_name: Cornell Newsroom
---
# Dataset Card for "newsroom"
@@ -196,4 +197,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten), [@yoavartzi](https://github.com/yoavartzi), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
\ No newline at end of file
+Thanks to [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten), [@yoavartzi](https://github.com/yoavartzi), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
diff --git a/datasets/nkjp-ner/README.md b/datasets/nkjp-ner/README.md
index c197087f080..19f75e0c95f 100644
--- a/datasets/nkjp-ner/README.md
+++ b/datasets/nkjp-ner/README.md
@@ -18,9 +18,10 @@ task_categories:
task_ids:
- named-entity-recognition
paperswithcode_id: null
+pretty_name: NKJP NER
---
-# Dataset Card for [Dataset Name]
+# Dataset Card for NKJP NER
## Table of Contents
- [Dataset Description](#dataset-description)
@@ -80,7 +81,7 @@ Polish
### Data Instances
-Two tsv files (train, dev) with two columns (sentence, target) and one (test) with just one (sentence).
+Two TSV files (train, dev) with two columns (sentence, target), and one file (test) with a single column (sentence).
### Data Fields
@@ -156,4 +157,4 @@ publisher={Naukowe PWN}
### Contributions
-Thanks to [@abecadel](https://github.com/abecadel) for adding this dataset.
\ No newline at end of file
+Thanks to [@abecadel](https://github.com/abecadel) for adding this dataset.
diff --git a/datasets/nli_tr/README.md b/datasets/nli_tr/README.md
index 778fcce77bd..64abaf0c555 100644
--- a/datasets/nli_tr/README.md
+++ b/datasets/nli_tr/README.md
@@ -1,5 +1,6 @@
---
paperswithcode_id: nli-tr
+pretty_name: Natural Language Inference in Turkish
---
# Dataset Card for "nli_tr"
@@ -195,4 +196,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@e-budur](https://github.com/e-budur) for adding this dataset.
\ No newline at end of file
+Thanks to [@e-budur](https://github.com/e-budur) for adding this dataset.
diff --git a/datasets/nlu_evaluation_data/README.md b/datasets/nlu_evaluation_data/README.md
index 1948b8f5887..2cf194bcba9 100644
--- a/datasets/nlu_evaluation_data/README.md
+++ b/datasets/nlu_evaluation_data/README.md
@@ -21,6 +21,7 @@ task_ids:
- intent-classification
- multi-class-classification
paperswithcode_id: null
+pretty_name: NLU Evaluation Data
---
# Dataset Card for NLU Evaluation Data
diff --git a/datasets/norne/README.md b/datasets/norne/README.md
index 32eed38cd8c..7044b742135 100644
--- a/datasets/norne/README.md
+++ b/datasets/norne/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- named-entity-recognition
paperswithcode_id: null
+pretty_name: 'NorNE: Norwegian Named Entities'
---
# Dataset Card for NorNE: Norwegian Named Entities
diff --git a/datasets/norwegian_ner/README.md b/datasets/norwegian_ner/README.md
index d52a9491a4c..e8db3942423 100644
--- a/datasets/norwegian_ner/README.md
+++ b/datasets/norwegian_ner/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- named-entity-recognition
paperswithcode_id: null
+pretty_name: Norwegian NER
---
# Dataset Card for Norwegian NER
@@ -140,4 +141,4 @@ paperswithcode_id: null
### Contributions
-Thanks to [@jplu](https://github.com/jplu) for adding this dataset.
\ No newline at end of file
+Thanks to [@jplu](https://github.com/jplu) for adding this dataset.
diff --git a/datasets/nsmc/README.md b/datasets/nsmc/README.md
index f38a953020d..d7ea6ff2198 100644
--- a/datasets/nsmc/README.md
+++ b/datasets/nsmc/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- sentiment-classification
paperswithcode_id: nsmc
+pretty_name: Naver Sentiment Movie Corpus
---
# Dataset Card for Naver sentiment movie corpus
@@ -151,4 +152,4 @@ Each instance is a movie review written by Korean internet users on Naver, the m
### Contributions
-Thanks to [@jaketae](https://github.com/jaketae) for adding this dataset.
\ No newline at end of file
+Thanks to [@jaketae](https://github.com/jaketae) for adding this dataset.
diff --git a/datasets/numer_sense/README.md b/datasets/numer_sense/README.md
index 42fc0b2f616..95b1c583ac2 100644
--- a/datasets/numer_sense/README.md
+++ b/datasets/numer_sense/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- slot-filling
paperswithcode_id: numersense
+pretty_name: NumerSense
---
-# Dataset Card for [Dataset Name]
+# Dataset Card for NumerSense
@@ -187,4 +188,4 @@ The data is hosted in a GitHub repository with the
### Contributions
-Thanks to [@joeddav](https://github.com/joeddav) for adding this dataset.
\ No newline at end of file
+Thanks to [@joeddav](https://github.com/joeddav) for adding this dataset.
diff --git a/datasets/numeric_fused_head/README.md b/datasets/numeric_fused_head/README.md
index 5cac584e4cd..7d0e650d4a3 100644
--- a/datasets/numeric_fused_head/README.md
+++ b/datasets/numeric_fused_head/README.md
@@ -25,6 +25,7 @@ task_categories:
task_ids:
- structure-prediction-other-fused-head-identification
paperswithcode_id: numeric-fused-head
+pretty_name: Numeric Fused Heads
---
# Dataset Card for Numeric Fused Heads
@@ -194,4 +195,4 @@ MIT License
### Contributions
-Thanks to [@ghomasHudson](https://github.com/ghomasHudson) for adding this dataset.
\ No newline at end of file
+Thanks to [@ghomasHudson](https://github.com/ghomasHudson) for adding this dataset.
diff --git a/datasets/offcombr/README.md b/datasets/offcombr/README.md
index c01a9556ec5..58977b69ef6 100644
--- a/datasets/offcombr/README.md
+++ b/datasets/offcombr/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- text-classification-other-hate-speech-detection
paperswithcode_id: offcombr
+pretty_name: Offensive Comments in the Brazilian Web
---
-# Dataset Card for [Dataset Name]
+# Dataset Card for Offensive Comments in the Brazilian Web
@@ -140,4 +141,4 @@ OffComBR: an annotated dataset for hate speech detection in Portuguese
### Contributions
-Thanks to [@hugoabonizio](https://github.com/hugoabonizio) for adding this dataset.
\ No newline at end of file
+Thanks to [@hugoabonizio](https://github.com/hugoabonizio) for adding this dataset.
diff --git a/datasets/offenseval2020_tr/README.md b/datasets/offenseval2020_tr/README.md
index 61dab08f61c..7615dc0b50f 100644
--- a/datasets/offenseval2020_tr/README.md
+++ b/datasets/offenseval2020_tr/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- text-classification-other-offensive-language-classification
paperswithcode_id: null
+pretty_name: OffensEval-TR 2020
---
# Dataset Card for OffensEval-TR 2020
@@ -189,4 +190,4 @@ The annotations are distributed under the terms of [Creative Commons Attribution
### Contributions
-Thanks to [@yavuzKomecoglu](https://github.com/yavuzKomecoglu) for adding this dataset.
\ No newline at end of file
+Thanks to [@yavuzKomecoglu](https://github.com/yavuzKomecoglu) for adding this dataset.
diff --git a/datasets/offenseval_dravidian/README.md b/datasets/offenseval_dravidian/README.md
index 044a98c1996..c4e108432d4 100644
--- a/datasets/offenseval_dravidian/README.md
+++ b/datasets/offenseval_dravidian/README.md
@@ -31,6 +31,7 @@ task_categories:
task_ids:
- text-classification-other-offensive-language
paperswithcode_id: null
+pretty_name: Offenseval Dravidian
---
# Dataset Card for Offenseval Dravidian
diff --git a/datasets/ofis_publik/README.md b/datasets/ofis_publik/README.md
index 0a81ee195ef..1b3553f3ae5 100644
--- a/datasets/ofis_publik/README.md
+++ b/datasets/ofis_publik/README.md
@@ -19,6 +19,7 @@ task_categories:
task_ids:
- machine-translation
paperswithcode_id: null
+pretty_name: OfisPublik
---
# Dataset Card Creation Guide
@@ -145,4 +146,4 @@ paperswithcode_id: null
### Contributions
-Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
\ No newline at end of file
+Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
diff --git a/datasets/onestop_english/README.md b/datasets/onestop_english/README.md
index 3eeabea4de8..2ac162080ec 100644
--- a/datasets/onestop_english/README.md
+++ b/datasets/onestop_english/README.md
@@ -20,6 +20,7 @@ task_ids:
- multi-class-classification
- text-simplification
paperswithcode_id: onestopenglish
+pretty_name: OneStopEnglish corpus
---
# Dataset Card for OneStopEnglish corpus
@@ -142,4 +143,4 @@ Creative Commons Attribution-ShareAlike 4.0 International License
### Contributions
-Thanks to [@purvimisal](https://github.com/purvimisal) for adding this dataset.
\ No newline at end of file
+Thanks to [@purvimisal](https://github.com/purvimisal) for adding this dataset.
diff --git a/datasets/open_subtitles/README.md b/datasets/open_subtitles/README.md
index 3ce6e80c263..e0598ec5704 100644
--- a/datasets/open_subtitles/README.md
+++ b/datasets/open_subtitles/README.md
@@ -88,6 +88,7 @@ task_categories:
task_ids:
- machine-translation
paperswithcode_id: opensubtitles
+pretty_name: OpenSubtitles
---
# Dataset Card Creation Guide
diff --git a/datasets/openbookqa/README.md b/datasets/openbookqa/README.md
index af1d67637e4..54f2896555c 100644
--- a/datasets/openbookqa/README.md
+++ b/datasets/openbookqa/README.md
@@ -2,6 +2,7 @@
languages:
- en
paperswithcode_id: openbookqa
+pretty_name: OpenBookQA
---
# Dataset Card for "openbookqa"
@@ -181,4 +182,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun) for adding this dataset.
\ No newline at end of file
+Thanks to [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun) for adding this dataset.
diff --git a/datasets/openwebtext/README.md b/datasets/openwebtext/README.md
index ffce0bf1bf7..d9316b76da0 100644
--- a/datasets/openwebtext/README.md
+++ b/datasets/openwebtext/README.md
@@ -2,6 +2,7 @@
languages:
- en
paperswithcode_id: openwebtext
+pretty_name: OpenWebText
---
# Dataset Card for "openwebtext"
@@ -155,4 +156,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@richarddwang](https://github.com/richarddwang) for adding this dataset.
\ No newline at end of file
+Thanks to [@richarddwang](https://github.com/richarddwang) for adding this dataset.
diff --git a/datasets/opinosis/README.md b/datasets/opinosis/README.md
index 802a9eb29b4..1e04ca3088e 100644
--- a/datasets/opinosis/README.md
+++ b/datasets/opinosis/README.md
@@ -2,6 +2,7 @@
languages:
- en
paperswithcode_id: opinosis
+pretty_name: Opinosis
---
# Dataset Card for "opinosis"
@@ -159,4 +160,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
\ No newline at end of file
+Thanks to [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
diff --git a/datasets/opus_books/README.md b/datasets/opus_books/README.md
index 56838ef2fe3..7d0cf334c25 100644
--- a/datasets/opus_books/README.md
+++ b/datasets/opus_books/README.md
@@ -33,6 +33,7 @@ task_categories:
task_ids:
- machine-translation
paperswithcode_id: null
+pretty_name: OpusBooks
---
# Dataset Card Creation Guide
@@ -160,4 +161,4 @@ Here are some examples of questions and facts:
### Contributions
-Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
\ No newline at end of file
+Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
diff --git a/datasets/opus_dgt/README.md b/datasets/opus_dgt/README.md
index dc19891c604..073a763e1cc 100644
--- a/datasets/opus_dgt/README.md
+++ b/datasets/opus_dgt/README.md
@@ -61,6 +61,7 @@ task_categories:
task_ids:
- machine-translation
paperswithcode_id: null
+pretty_name: OpusDgt
---
# Dataset Card Creation Guide
@@ -205,4 +206,4 @@ E.g.
### Contributions
-Thanks to [@rkc007](https://github.com/rkc007) for adding this dataset.
\ No newline at end of file
+Thanks to [@rkc007](https://github.com/rkc007) for adding this dataset.
diff --git a/datasets/opus_dogc/README.md b/datasets/opus_dogc/README.md
index a97a95c5e94..79b7be2d487 100644
--- a/datasets/opus_dogc/README.md
+++ b/datasets/opus_dogc/README.md
@@ -19,6 +19,7 @@ task_categories:
task_ids:
- machine-translation
paperswithcode_id: null
+pretty_name: OPUS DOGC
---
# Dataset Card for OPUS DOGC
@@ -158,4 +159,4 @@ Dataset is in the Public Domain under [CC0 1.0](https://creativecommons.org/publ
### Contributions
-Thanks to [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
\ No newline at end of file
+Thanks to [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
diff --git a/datasets/opus_elhuyar/README.md b/datasets/opus_elhuyar/README.md
index 79ccc03a82d..5f43b2a0459 100644
--- a/datasets/opus_elhuyar/README.md
+++ b/datasets/opus_elhuyar/README.md
@@ -19,6 +19,7 @@ task_categories:
task_ids:
- machine-translation
paperswithcode_id: null
+pretty_name: OpusElhuyar
---
-# Dataset Card for [opus_elhuyar]
+# Dataset Card for OpusElhuyar
@@ -141,4 +142,4 @@ J. Tiedemann, 2012, Parallel Data, Tools and Interfaces in OPUS. In Proceedings
### Contributions
-Thanks to [@spatil6](https://github.com/spatil6) for adding this dataset.
\ No newline at end of file
+Thanks to [@spatil6](https://github.com/spatil6) for adding this dataset.
diff --git a/datasets/opus_euconst/README.md b/datasets/opus_euconst/README.md
index 5908debb23a..2a39dbc256c 100644
--- a/datasets/opus_euconst/README.md
+++ b/datasets/opus_euconst/README.md
@@ -38,6 +38,7 @@ task_categories:
task_ids:
- machine-translation
paperswithcode_id: null
+pretty_name: OpusEuconst
---
-# Dataset Card for [Dataset Name]
+# Dataset Card for OpusEuconst
@@ -160,4 +161,4 @@ The underlying task is machine translation.
J. Tiedemann, 2012, Parallel Data, Tools and Interfaces in OPUS. In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012)
### Contributions
-Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
\ No newline at end of file
+Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
diff --git a/datasets/opus_finlex/README.md b/datasets/opus_finlex/README.md
index 5f2b633b336..91197b4de6f 100644
--- a/datasets/opus_finlex/README.md
+++ b/datasets/opus_finlex/README.md
@@ -19,6 +19,7 @@ task_categories:
task_ids:
- machine-translation
paperswithcode_id: null
+pretty_name: OpusFinlex
---
-# Dataset Card for [opus_finlex]
+# Dataset Card for OpusFinlex
@@ -141,4 +142,4 @@ J. Tiedemann, 2012, Parallel Data, Tools and Interfaces in OPUS. In Proceedings
### Contributions
-Thanks to [@spatil6](https://github.com/spatil6) for adding this dataset.
\ No newline at end of file
+Thanks to [@spatil6](https://github.com/spatil6) for adding this dataset.
diff --git a/datasets/opus_fiskmo/README.md b/datasets/opus_fiskmo/README.md
index ddf31d71d67..4c1f018727d 100644
--- a/datasets/opus_fiskmo/README.md
+++ b/datasets/opus_fiskmo/README.md
@@ -19,6 +19,7 @@ task_categories:
task_ids:
- machine-translation
paperswithcode_id: null
+pretty_name: OpusFiskmo
---
-# Dataset Card for [opus_fiskmo]
+# Dataset Card for OpusFiskmo
@@ -141,4 +142,4 @@ J. Tiedemann, 2012, Parallel Data, Tools and Interfaces in OPUS. In Proceedings
### Contributions
-Thanks to [@spatil6](https://github.com/spatil6) for adding this dataset.
\ No newline at end of file
+Thanks to [@spatil6](https://github.com/spatil6) for adding this dataset.
diff --git a/datasets/opus_gnome/README.md b/datasets/opus_gnome/README.md
index 97f5dc29c9e..d49c860cf5c 100644
--- a/datasets/opus_gnome/README.md
+++ b/datasets/opus_gnome/README.md
@@ -223,6 +223,7 @@ task_categories:
task_ids:
- machine-translation
paperswithcode_id: null
+pretty_name: OpusGnome
---
# Dataset Card Creation Guide
@@ -367,4 +368,4 @@ E.g.
### Contributions
-Thanks to [@rkc007](https://github.com/rkc007) for adding this dataset.
\ No newline at end of file
+Thanks to [@rkc007](https://github.com/rkc007) for adding this dataset.
diff --git a/datasets/opus_infopankki/README.md b/datasets/opus_infopankki/README.md
index 92baa7948e4..96f81db3a67 100644
--- a/datasets/opus_infopankki/README.md
+++ b/datasets/opus_infopankki/README.md
@@ -215,6 +215,7 @@ task_categories:
task_ids:
- machine-translation
paperswithcode_id: null
+pretty_name: OpusInfopankki
---
-# Dataset Card for [Dataset Name]
+# Dataset Card for OpusInfopankki
@@ -351,4 +352,4 @@ The underlying task is machine translation.
### Contributions
-Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
\ No newline at end of file
+Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
diff --git a/datasets/opus_memat/README.md b/datasets/opus_memat/README.md
index 2f41519314e..a55f70caccf 100644
--- a/datasets/opus_memat/README.md
+++ b/datasets/opus_memat/README.md
@@ -19,6 +19,7 @@ task_categories:
task_ids:
- machine-translation
paperswithcode_id: null
+pretty_name: OpusMemat
---
# Dataset Card for [opus_memat]
@@ -142,4 +143,4 @@ J. Tiedemann, 2012, Parallel Data, Tools and Interfaces in OPUS. In Proceedings
### Contributions
-Thanks to [@spatil6](https://github.com/spatil6) for adding this dataset.
\ No newline at end of file
+Thanks to [@spatil6](https://github.com/spatil6) for adding this dataset.
diff --git a/datasets/opus_montenegrinsubs/README.md b/datasets/opus_montenegrinsubs/README.md
index dfd7ddf1e34..57776cb3141 100644
--- a/datasets/opus_montenegrinsubs/README.md
+++ b/datasets/opus_montenegrinsubs/README.md
@@ -19,6 +19,7 @@ task_categories:
task_ids:
- machine-translation
paperswithcode_id: null
+pretty_name: OpusMontenegrinsubs
---
-# Dataset Card for [opus_montenegrinsubs]
+# Dataset Card for OpusMontenegrinsubs
@@ -141,4 +142,4 @@ The underlying task is machine translation from en to me
### Contributions
-Thanks to [@spatil6](https://github.com/spatil6) for adding this dataset.
\ No newline at end of file
+Thanks to [@spatil6](https://github.com/spatil6) for adding this dataset.
diff --git a/datasets/opus_openoffice/README.md b/datasets/opus_openoffice/README.md
index 28cd857ee4c..3ac2107c68a 100644
--- a/datasets/opus_openoffice/README.md
+++ b/datasets/opus_openoffice/README.md
@@ -101,6 +101,7 @@ task_categories:
task_ids:
- machine-translation
paperswithcode_id: null
+pretty_name: OpusOpenoffice
---
-# Dataset Card for [Dataset Name]
+# Dataset Card for OpusOpenoffice
@@ -237,4 +238,4 @@ The underlying task is machine translation.
```
### Contributions
-Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
\ No newline at end of file
+Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
diff --git a/datasets/opus_paracrawl/README.md b/datasets/opus_paracrawl/README.md
index 4d75631e5e1..804c14bddfd 100644
--- a/datasets/opus_paracrawl/README.md
+++ b/datasets/opus_paracrawl/README.md
@@ -76,6 +76,7 @@ task_categories:
task_ids:
- machine-translation
paperswithcode_id: null
+pretty_name: OpusParaCrawl
---
# Dataset Card Creation Guide
@@ -218,4 +219,4 @@ language = {english}
### Contributions
-Thanks to [@rkc007](https://github.com/rkc007) for adding this dataset.
\ No newline at end of file
+Thanks to [@rkc007](https://github.com/rkc007) for adding this dataset.
diff --git a/datasets/opus_rf/README.md b/datasets/opus_rf/README.md
index 42f96339c40..74f5216801b 100644
--- a/datasets/opus_rf/README.md
+++ b/datasets/opus_rf/README.md
@@ -47,6 +47,7 @@ task_categories:
task_ids:
- machine-translation
paperswithcode_id: null
+pretty_name: OpusRf
---
-# Dataset Card for [Dataset Name]
+# Dataset Card for OpusRf
@@ -183,4 +184,4 @@ English (en), Spanish (es), German (de), French (fr), Swedish (sv)
### Contributions
-Thanks to [@akshayb7](https://github.com/akshayb7) for adding this dataset.
\ No newline at end of file
+Thanks to [@akshayb7](https://github.com/akshayb7) for adding this dataset.
diff --git a/datasets/opus_tedtalks/README.md b/datasets/opus_tedtalks/README.md
index 8cbe1b98f20..52167105a72 100644
--- a/datasets/opus_tedtalks/README.md
+++ b/datasets/opus_tedtalks/README.md
@@ -19,6 +19,7 @@ task_categories:
task_ids:
- machine-translation
paperswithcode_id: null
+pretty_name: OpusTedtalks
---
# Dataset Card Creation Guide
@@ -158,4 +159,4 @@ This is a Croatian-English parallel corpus of transcribed and translated TED tal
### Contributions
-Thanks to [@rkc007](https://github.com/rkc007) for adding this dataset.
\ No newline at end of file
+Thanks to [@rkc007](https://github.com/rkc007) for adding this dataset.
diff --git a/datasets/opus_ubuntu/README.md b/datasets/opus_ubuntu/README.md
index d421bc0c1ec..83ad32844b7 100644
--- a/datasets/opus_ubuntu/README.md
+++ b/datasets/opus_ubuntu/README.md
@@ -280,6 +280,7 @@ task_categories:
task_ids:
- machine-translation
paperswithcode_id: null
+pretty_name: OpusUbuntu
---
# Dataset Card Creation Guide
@@ -424,4 +425,4 @@ E.g.
### Contributions
-Thanks to [@rkc007](https://github.com/rkc007) for adding this dataset.
\ No newline at end of file
+Thanks to [@rkc007](https://github.com/rkc007) for adding this dataset.
diff --git a/datasets/opus_wikipedia/README.md b/datasets/opus_wikipedia/README.md
index 352731f1931..e4b84456bab 100644
--- a/datasets/opus_wikipedia/README.md
+++ b/datasets/opus_wikipedia/README.md
@@ -46,6 +46,7 @@ task_categories:
task_ids:
- machine-translation
paperswithcode_id: null
+pretty_name: OpusWikipedia
---
# Dataset Card Creation Guide
@@ -190,4 +191,4 @@ E.g.
### Contributions
-Thanks to [@rkc007](https://github.com/rkc007) for adding this dataset.
\ No newline at end of file
+Thanks to [@rkc007](https://github.com/rkc007) for adding this dataset.
diff --git a/datasets/opus_xhosanavy/README.md b/datasets/opus_xhosanavy/README.md
index 43d2191636e..83451399699 100644
--- a/datasets/opus_xhosanavy/README.md
+++ b/datasets/opus_xhosanavy/README.md
@@ -19,6 +19,7 @@ task_categories:
task_ids:
- machine-translation
paperswithcode_id: null
+pretty_name: OpusXhosanavy
---
-# Dataset Card for [Dataset Name]
+# Dataset Card for OpusXhosanavy
@@ -143,4 +144,4 @@ J. Tiedemann, 2012, Parallel Data, Tools and Interfaces in OPUS. In Proceedings
### Contributions
-Thanks to [@lhoestq](https://github.com/lhoestq), [@spatil6](https://github.com/spatil6) for adding this dataset.
\ No newline at end of file
+Thanks to [@lhoestq](https://github.com/lhoestq), [@spatil6](https://github.com/spatil6) for adding this dataset.
diff --git a/datasets/para_crawl/README.md b/datasets/para_crawl/README.md
index b385361235d..675adb43938 100644
--- a/datasets/para_crawl/README.md
+++ b/datasets/para_crawl/README.md
@@ -1,5 +1,6 @@
---
paperswithcode_id: paracrawl
+pretty_name: ParaCrawl
---
# Dataset Card for "para_crawl"
@@ -228,4 +229,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset.
\ No newline at end of file
+Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset.
diff --git a/datasets/para_pat/README.md b/datasets/para_pat/README.md
index 2fa62d645ab..efe9d01410d 100644
--- a/datasets/para_pat/README.md
+++ b/datasets/para_pat/README.md
@@ -33,6 +33,7 @@ task_categories:
task_ids:
- language-modeling
paperswithcode_id: parapat
+pretty_name: Parallel Corpus of Patents Abstracts
---
# Dataset Card for ParaPat: The Multi-Million Sentences Parallel Corpus of Patents Abstracts
@@ -230,4 +231,4 @@ CC BY 4.0
[DOI](https://doi.org/10.6084/m9.figshare.12627632)
### Contributions
-Thanks to [@bhavitvyamalik](https://github.com/bhavitvyamalik) for adding this dataset.
\ No newline at end of file
+Thanks to [@bhavitvyamalik](https://github.com/bhavitvyamalik) for adding this dataset.
diff --git a/datasets/pec/README.md b/datasets/pec/README.md
index 3a327d46f8b..b2c6f1cbae6 100644
--- a/datasets/pec/README.md
+++ b/datasets/pec/README.md
@@ -27,6 +27,7 @@ task_ids:
- dialogue-modeling
- utterance-retrieval
paperswithcode_id: pec
+pretty_name: Persona-Based Empathetic Conversation
---
# Dataset Card for PEC
@@ -184,4 +185,4 @@ The licensing status of the dataset hinges on the legal status of the [Pushshift
```
### Contributions
-Thanks to [@zhongpeixiang](https://github.com/zhongpeixiang) for adding this dataset.
\ No newline at end of file
+Thanks to [@zhongpeixiang](https://github.com/zhongpeixiang) for adding this dataset.
diff --git a/datasets/peer_read/README.md b/datasets/peer_read/README.md
index b74864d4bfe..a72c086de3a 100644
--- a/datasets/peer_read/README.md
+++ b/datasets/peer_read/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- text-classification-other-acceptability-classification
paperswithcode_id: peerread
+pretty_name: PeerRead
---
# Dataset Card for peer_read
@@ -201,4 +202,4 @@ Dongyeop Kang, Waleed Ammar, Bhavana Dalvi Mishra, Madeleine van Zuylen, Sebasti
### Contributions
-Thanks to [@vinaykudari](https://github.com/vinaykudari) for adding this dataset.
\ No newline at end of file
+Thanks to [@vinaykudari](https://github.com/vinaykudari) for adding this dataset.
diff --git a/datasets/peoples_daily_ner/README.md b/datasets/peoples_daily_ner/README.md
index 491cad85c51..6583098bf91 100644
--- a/datasets/peoples_daily_ner/README.md
+++ b/datasets/peoples_daily_ner/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- named-entity-recognition
paperswithcode_id: null
+pretty_name: People's Daily NER
---
# Dataset Card for People's Daily NER
@@ -139,4 +140,4 @@ paperswithcode_id: null
No citation available for this dataset.
### Contributions
-Thanks to [@JetRunner](https://github.com/JetRunner) for adding this dataset.
\ No newline at end of file
+Thanks to [@JetRunner](https://github.com/JetRunner) for adding this dataset.
diff --git a/datasets/per_sent/README.md b/datasets/per_sent/README.md
index 194e7d4819d..11f9cd06f33 100644
--- a/datasets/per_sent/README.md
+++ b/datasets/per_sent/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- sentiment-classification
paperswithcode_id: persent
+pretty_name: PerSenT
---
# Dataset Card for PerSenT
@@ -192,4 +193,4 @@ Slightly Negative, Neutral, Slightly Positive, or Positive. We then combine the
### Contributions
-Thanks to [@jeromeku](https://github.com/jeromeku) for adding this dataset.
\ No newline at end of file
+Thanks to [@jeromeku](https://github.com/jeromeku) for adding this dataset.
diff --git a/datasets/persian_ner/README.md b/datasets/persian_ner/README.md
index eb1f20a8f34..5e10dff7e9c 100644
--- a/datasets/persian_ner/README.md
+++ b/datasets/persian_ner/README.md
@@ -17,6 +17,7 @@ task_categories:
- structure-prediction
task_ids:
- named-entity-recognition
+pretty_name: Persian NER
---
-# Dataset Card for [Persian NER]
+# Dataset Card for Persian NER
@@ -161,4 +162,4 @@ Creative Commons Attribution 4.0 International License.
### Contributions
-Thanks to [@KMFODA](https://github.com/KMFODA) for adding this dataset.
\ No newline at end of file
+Thanks to [@KMFODA](https://github.com/KMFODA) for adding this dataset.
diff --git a/datasets/pg19/README.md b/datasets/pg19/README.md
index dbc9cc43526..b0414df00b7 100644
--- a/datasets/pg19/README.md
+++ b/datasets/pg19/README.md
@@ -2,6 +2,7 @@
languages:
- en
paperswithcode_id: pg-19
+pretty_name: PG-19
---
# Dataset Card for "pg19"
@@ -172,4 +173,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@lucidrains](https://github.com/lucidrains), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
\ No newline at end of file
+Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@lucidrains](https://github.com/lucidrains), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
diff --git a/datasets/php/README.md b/datasets/php/README.md
index ad6f2863820..15321204ade 100644
--- a/datasets/php/README.md
+++ b/datasets/php/README.md
@@ -40,6 +40,7 @@ task_categories:
task_ids:
- machine-translation
paperswithcode_id: null
+pretty_name: php
---
# Dataset Card Creation Guide
@@ -173,4 +174,4 @@ Here are some examples of questions and facts:
### Contributions
-Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
\ No newline at end of file
+Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
diff --git a/datasets/piaf/README.md b/datasets/piaf/README.md
index fd190274811..5f6c8e2aac4 100644
--- a/datasets/piaf/README.md
+++ b/datasets/piaf/README.md
@@ -19,9 +19,10 @@ task_ids:
- extractive-qa
- open-domain-qa
paperswithcode_id: null
+pretty_name: Piaf
---
-# Dataset Card for "piaf"
+# Dataset Card for Piaf
## Table of Contents
- [Dataset Description](#dataset-description)
@@ -189,4 +190,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@lewtun](https://github.com/lewtun), [@lhoestq](https://github.com/lhoestq), [@thomwolf](https://github.com/thomwolf), [@albertvillanova](https://github.com/albertvillanova), [@RachelKer](https://github.com/RachelKer) for adding this dataset.
\ No newline at end of file
+Thanks to [@lewtun](https://github.com/lewtun), [@lhoestq](https://github.com/lhoestq), [@thomwolf](https://github.com/thomwolf), [@albertvillanova](https://github.com/albertvillanova), [@RachelKer](https://github.com/RachelKer) for adding this dataset.
diff --git a/datasets/piqa/README.md b/datasets/piqa/README.md
index 508d1c4313d..1a887a7e790 100644
--- a/datasets/piqa/README.md
+++ b/datasets/piqa/README.md
@@ -19,6 +19,7 @@ task_categories:
task_ids:
- multiple-choice-qa
paperswithcode_id: piqa
+pretty_name: 'Physical Interaction: Question Answering'
---
# Dataset Card Creation Guide
@@ -183,4 +184,4 @@ Unknown
### Contributions
-Thanks to [@VictorSanh](https://github.com/VictorSanh) for adding this dataset.
\ No newline at end of file
+Thanks to [@VictorSanh](https://github.com/VictorSanh) for adding this dataset.
diff --git a/datasets/pn_summary/README.md b/datasets/pn_summary/README.md
index cd6c9009aa4..c1ac026877e 100644
--- a/datasets/pn_summary/README.md
+++ b/datasets/pn_summary/README.md
@@ -21,6 +21,7 @@ task_ids:
- text-simplification
- topic-classification
paperswithcode_id: pn-summary
+pretty_name: Persian News Summary (PnSummary)
---
# Dataset Card for Persian News Summary (pn_summary)
@@ -186,4 +187,4 @@ This dataset is licensed under MIT License.
### Contributions
-Thanks to [@m3hrdadfi](https://github.com/m3hrdadfi) for adding this dataset.
\ No newline at end of file
+Thanks to [@m3hrdadfi](https://github.com/m3hrdadfi) for adding this dataset.
diff --git a/datasets/poem_sentiment/README.md b/datasets/poem_sentiment/README.md
index a6fd2e15884..7413079c7b0 100644
--- a/datasets/poem_sentiment/README.md
+++ b/datasets/poem_sentiment/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- sentiment-classification
paperswithcode_id: gutenberg-poem-dataset
+pretty_name: Gutenberg Poem Dataset
---
# Dataset Card Creation Guide
@@ -169,4 +170,4 @@ This work is licensed under a Creative Commons Attribution 4.0 International Lic
### Contributions
-Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
\ No newline at end of file
+Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
diff --git a/datasets/polemo2/README.md b/datasets/polemo2/README.md
index 13c3d480f84..8c3d0155ff9 100644
--- a/datasets/polemo2/README.md
+++ b/datasets/polemo2/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- sentiment-classification
paperswithcode_id: null
+pretty_name: polemo2
---
-# Dataset Card for [Dataset Name]
+# Dataset Card for polemo2
@@ -146,4 +147,4 @@ CC BY-NC-SA 4.0
### Contributions
-Thanks to [@abecadel](https://github.com/abecadel) for adding this dataset.
\ No newline at end of file
+Thanks to [@abecadel](https://github.com/abecadel) for adding this dataset.
diff --git a/datasets/poleval2019_cyberbullying/README.md b/datasets/poleval2019_cyberbullying/README.md
index da0938e29a5..df8a77e317d 100644
--- a/datasets/poleval2019_cyberbullying/README.md
+++ b/datasets/poleval2019_cyberbullying/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- intent-classification
paperswithcode_id: null
+pretty_name: Poleval 2019 cyberbullying
---
# Dataset Card for Poleval 2019 cyberbullying
@@ -164,4 +165,4 @@ Train and Test
### Contributions
-Thanks to [@czabo](https://github.com/czabo) for adding this dataset.
\ No newline at end of file
+Thanks to [@czabo](https://github.com/czabo) for adding this dataset.
diff --git a/datasets/poleval2019_mt/README.md b/datasets/poleval2019_mt/README.md
index ed8535386a7..f2efbceaa3a 100644
--- a/datasets/poleval2019_mt/README.md
+++ b/datasets/poleval2019_mt/README.md
@@ -21,6 +21,7 @@ task_categories:
task_ids:
- machine-translation
paperswithcode_id: null
+pretty_name: Poleval2019Mt
---
# Dataset Card for poleval2019_mt
@@ -165,4 +166,4 @@ The organization details of PolEval are present in this [link](http://2019.poleva
### Contributions
-Thanks to [@vrindaprabhu](https://github.com/vrindaprabhu) for adding this dataset.
\ No newline at end of file
+Thanks to [@vrindaprabhu](https://github.com/vrindaprabhu) for adding this dataset.
diff --git a/datasets/polsum/README.md b/datasets/polsum/README.md
index 12d1eb55da2..6a48848cb26 100644
--- a/datasets/polsum/README.md
+++ b/datasets/polsum/README.md
@@ -18,9 +18,10 @@ task_categories:
task_ids:
- summarization
paperswithcode_id: null
+pretty_name: Polish Summaries Corpus
---
-# Dataset Card for polsum
+# Dataset Card for Polish Summaries Corpus
## Table of Contents
- [Dataset Description](#dataset-description)
@@ -720,4 +721,4 @@ Single train split
```
### Contributions
-Thanks to [@kldarek](https://github.com/kldarek) for adding this dataset.
\ No newline at end of file
+Thanks to [@kldarek](https://github.com/kldarek) for adding this dataset.
diff --git a/datasets/prachathai67k/README.md b/datasets/prachathai67k/README.md
index ca0a25fe9db..1c92322990d 100644
--- a/datasets/prachathai67k/README.md
+++ b/datasets/prachathai67k/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- topic-classification
paperswithcode_id: prachathai-67k
+pretty_name: prachathai67k
---
# Dataset Card for `prachathai67k`
@@ -194,4 +195,4 @@ CC-BY-NC
### Contributions
-Thanks to [@cstorm125](https://github.com/cstorm125) for adding this dataset.
\ No newline at end of file
+Thanks to [@cstorm125](https://github.com/cstorm125) for adding this dataset.
diff --git a/datasets/pragmeval/README.md b/datasets/pragmeval/README.md
index 8ec63c5c91c..c194af322bd 100644
--- a/datasets/pragmeval/README.md
+++ b/datasets/pragmeval/README.md
@@ -53,8 +53,9 @@ task_categories:
task_ids:
- multi-class-classification
paperswithcode_id: null
+pretty_name: pragmeval
---
### Contributions
-Thanks to [@sileod](https://github.com/sileod) for adding this dataset.
\ No newline at end of file
+Thanks to [@sileod](https://github.com/sileod) for adding this dataset.
diff --git a/datasets/proto_qa/README.md b/datasets/proto_qa/README.md
index 1dd52c0bcba..cae563c8dc8 100644
--- a/datasets/proto_qa/README.md
+++ b/datasets/proto_qa/README.md
@@ -20,6 +20,7 @@ task_ids:
- multiple-choice-qa
- open-domain-qa
paperswithcode_id: protoqa
+pretty_name: ProtoQA
---
-# Dataset Card for [Dataset Name]
+# Dataset Card for ProtoQA
@@ -218,4 +219,4 @@ howpublished = {https://github.com/iesl/protoqa-data},
### Contributions
-Thanks to [@bpatidar](https://github.com/bpatidar) for adding this dataset.
\ No newline at end of file
+Thanks to [@bpatidar](https://github.com/bpatidar) for adding this dataset.
diff --git a/datasets/psc/README.md b/datasets/psc/README.md
index 6495930ea43..0a07be50547 100644
--- a/datasets/psc/README.md
+++ b/datasets/psc/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- summarization
paperswithcode_id: null
+pretty_name: psc
---
-# Dataset Card for [Dataset Name]
+# Dataset Card for psc
@@ -149,4 +150,4 @@ year = "2014",
### Contributions
-Thanks to [@abecadel](https://github.com/abecadel) for adding this dataset.
\ No newline at end of file
+Thanks to [@abecadel](https://github.com/abecadel) for adding this dataset.
diff --git a/datasets/ptb_text_only/README.md b/datasets/ptb_text_only/README.md
index a1bd3d99813..150185e04c3 100644
--- a/datasets/ptb_text_only/README.md
+++ b/datasets/ptb_text_only/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- language-modeling
paperswithcode_id: null
+pretty_name: Penn Treebank
---
# Dataset Card for Penn Treebank
@@ -154,4 +155,4 @@ The text in the dataset is in American English
}
### Contributions
-Thanks to [@harshalmittal4](https://github.com/harshalmittal4) for adding this dataset.
\ No newline at end of file
+Thanks to [@harshalmittal4](https://github.com/harshalmittal4) for adding this dataset.
diff --git a/datasets/pubmed_qa/README.md b/datasets/pubmed_qa/README.md
index adffeb828d8..831cdb35a52 100644
--- a/datasets/pubmed_qa/README.md
+++ b/datasets/pubmed_qa/README.md
@@ -24,6 +24,7 @@ task_categories:
task_ids:
- multiple-choice-qa
paperswithcode_id: pubmedqa
+pretty_name: PubMedQA
---
-# Dataset Card for [Dataset Name]
+# Dataset Card for PubMedQA
@@ -145,4 +146,4 @@ paperswithcode_id: pubmedqa
### Contributions
-Thanks to [@tuner007](https://github.com/tuner007) for adding this dataset.
\ No newline at end of file
+Thanks to [@tuner007](https://github.com/tuner007) for adding this dataset.
diff --git a/datasets/qa4mre/README.md b/datasets/qa4mre/README.md
index d39089415d8..9cf8ea37782 100644
--- a/datasets/qa4mre/README.md
+++ b/datasets/qa4mre/README.md
@@ -1,5 +1,5 @@
---
-{}
+pretty_name: qa4mre
---
# Dataset Card for "qa4mre"
@@ -286,4 +287,4 @@ isbn="978-3-642-40802-1"
### Contributions
-Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@albertvillanova](https://github.com/albertvillanova), [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
\ No newline at end of file
+Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@albertvillanova](https://github.com/albertvillanova), [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
diff --git a/datasets/qa_srl/README.md b/datasets/qa_srl/README.md
index 2e0573f9a7a..5063fb5c11a 100644
--- a/datasets/qa_srl/README.md
+++ b/datasets/qa_srl/README.md
@@ -19,6 +19,7 @@ task_ids:
- multiple-choice-qa
- open-domain-qa
paperswithcode_id: qa-srl
+pretty_name: QA-SRL
---
# Dataset Card for QA-SRL
@@ -180,4 +181,4 @@ howpublished={\\url{https://dada.cs.washington.edu/qasrl/#page-top}},
### Contributions
-Thanks to [@bpatidar](https://github.com/bpatidar) for adding this dataset.
\ No newline at end of file
+Thanks to [@bpatidar](https://github.com/bpatidar) for adding this dataset.
diff --git a/datasets/qa_zre/README.md b/datasets/qa_zre/README.md
index aae56df9924..cf2ab654710 100644
--- a/datasets/qa_zre/README.md
+++ b/datasets/qa_zre/README.md
@@ -2,6 +2,7 @@
languages:
- en
paperswithcode_id: null
+pretty_name: QaZre
---
# Dataset Card for "qa_zre"
@@ -170,4 +171,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq), [@ghomasHudson](https://github.com/ghomasHudson), [@lewtun](https://github.com/lewtun) for adding this dataset.
\ No newline at end of file
+Thanks to [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq), [@ghomasHudson](https://github.com/ghomasHudson), [@lewtun](https://github.com/lewtun) for adding this dataset.
diff --git a/datasets/qangaroo/README.md b/datasets/qangaroo/README.md
index 5153b65e6a2..1bc452d7a7a 100644
--- a/datasets/qangaroo/README.md
+++ b/datasets/qangaroo/README.md
@@ -2,6 +2,7 @@
languages:
- en
paperswithcode_id: null
+pretty_name: qangaroo
---
# Dataset Card for "qangaroo"
@@ -212,4 +213,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@thomwolf](https://github.com/thomwolf), [@jplu](https://github.com/jplu), [@lewtun](https://github.com/lewtun), [@lhoestq](https://github.com/lhoestq), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset.
\ No newline at end of file
+Thanks to [@thomwolf](https://github.com/thomwolf), [@jplu](https://github.com/jplu), [@lewtun](https://github.com/lewtun), [@lhoestq](https://github.com/lhoestq), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset.
diff --git a/datasets/qanta/README.md b/datasets/qanta/README.md
index 1d841110072..d5c6a011d48 100644
--- a/datasets/qanta/README.md
+++ b/datasets/qanta/README.md
@@ -2,6 +2,7 @@
languages:
- en
paperswithcode_id: quizbowl
+pretty_name: Quizbowl
---
# Dataset Card for "qanta"
@@ -198,4 +199,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun) for adding this dataset.
\ No newline at end of file
+Thanks to [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun) for adding this dataset.
diff --git a/datasets/qed/README.md b/datasets/qed/README.md
index 26aa4044d91..b302f413062 100644
--- a/datasets/qed/README.md
+++ b/datasets/qed/README.md
@@ -19,6 +19,7 @@ task_ids:
- extractive-qa
- question-answering-other-explanations-in-question-answering
paperswithcode_id: qed
+pretty_name: QED
---
# Dataset Card Creation Guide
@@ -143,4 +144,4 @@ paperswithcode_id: qed
[More Information Needed]
### Contributions
-Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
\ No newline at end of file
+Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
diff --git a/datasets/qed_amara/README.md b/datasets/qed_amara/README.md
index b1b2b05b6f1..a78ed3cb49b 100644
--- a/datasets/qed_amara/README.md
+++ b/datasets/qed_amara/README.md
@@ -242,6 +242,7 @@ task_categories:
task_ids:
- machine-translation
paperswithcode_id: null
+pretty_name: QedAmara
---
# Dataset Card Creation Guide
@@ -375,4 +376,4 @@ Here are some examples of questions and facts:
### Contributions
-Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
\ No newline at end of file
+Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
diff --git a/datasets/quac/README.md b/datasets/quac/README.md
index 3dc73b0c727..e5e757a7c63 100644
--- a/datasets/quac/README.md
+++ b/datasets/quac/README.md
@@ -21,6 +21,7 @@ task_ids:
- dialogue-modeling
- extractive-qa
paperswithcode_id: quac
+pretty_name: Question Answering in Context
---
# Dataset Card Creation Guide
@@ -236,4 +237,4 @@ Provide the [BibTex](http://www.bibtex.org/)-formatted reference for the dataset
### Contributions
-Thanks to [@VictorSanh](https://github.com/VictorSanh) for adding this dataset.
\ No newline at end of file
+Thanks to [@VictorSanh](https://github.com/VictorSanh) for adding this dataset.
diff --git a/datasets/quail/README.md b/datasets/quail/README.md
index 8afd5cb34d8..5a6fdd5a1c7 100644
--- a/datasets/quail/README.md
+++ b/datasets/quail/README.md
@@ -2,6 +2,7 @@
languages:
- en
paperswithcode_id: quail
+pretty_name: Question Answering for Artificial Intelligence
---
# Dataset Card for "quail"
@@ -193,4 +194,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@sai-prasanna](https://github.com/sai-prasanna), [@ngdodd](https://github.com/ngdodd) for adding this dataset.
\ No newline at end of file
+Thanks to [@sai-prasanna](https://github.com/sai-prasanna), [@ngdodd](https://github.com/ngdodd) for adding this dataset.
diff --git a/datasets/quarel/README.md b/datasets/quarel/README.md
index eb2ffefbae0..31be1a98dde 100644
--- a/datasets/quarel/README.md
+++ b/datasets/quarel/README.md
@@ -2,6 +2,7 @@
languages:
- en
paperswithcode_id: quarel
+pretty_name: QuaRel
---
# Dataset Card for "quarel"
@@ -168,4 +169,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@mariamabarham](https://github.com/mariamabarham), [@lhoestq](https://github.com/lhoestq), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
\ No newline at end of file
+Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@mariamabarham](https://github.com/mariamabarham), [@lhoestq](https://github.com/lhoestq), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
diff --git a/datasets/quartz/README.md b/datasets/quartz/README.md
index 59ce308d485..87cf399a6a6 100644
--- a/datasets/quartz/README.md
+++ b/datasets/quartz/README.md
@@ -2,6 +2,7 @@
languages:
- en
paperswithcode_id: quartz
+pretty_name: QuaRTz Dataset
---
# Dataset Card for "quartz"
@@ -202,4 +203,4 @@ Questions"},
### Contributions
-Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
\ No newline at end of file
+Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
diff --git a/datasets/quora/README.md b/datasets/quora/README.md
index e757944e539..7a644c82954 100644
--- a/datasets/quora/README.md
+++ b/datasets/quora/README.md
@@ -2,6 +2,7 @@
languages:
- en
paperswithcode_id: null
+pretty_name: quora
---
# Dataset Card for "quora"
@@ -154,4 +155,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@thomwolf](https://github.com/thomwolf), [@ghomasHudson](https://github.com/ghomasHudson), [@lewtun](https://github.com/lewtun) for adding this dataset.
\ No newline at end of file
+Thanks to [@thomwolf](https://github.com/thomwolf), [@ghomasHudson](https://github.com/ghomasHudson), [@lewtun](https://github.com/lewtun) for adding this dataset.
diff --git a/datasets/quoref/README.md b/datasets/quoref/README.md
index d41083eebd4..44dd26bf88f 100644
--- a/datasets/quoref/README.md
+++ b/datasets/quoref/README.md
@@ -2,6 +2,7 @@
languages:
- en
paperswithcode_id: quoref
+pretty_name: Quoref
---
# Dataset Card for "quoref"
@@ -172,4 +173,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
\ No newline at end of file
+Thanks to [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
diff --git a/datasets/re_dial/README.md b/datasets/re_dial/README.md
index 609b5aee775..13c0a45d558 100644
--- a/datasets/re_dial/README.md
+++ b/datasets/re_dial/README.md
@@ -20,6 +20,7 @@ task_ids:
- sentiment-classification
- text-classification-other-dialogue-sentiment-classification
paperswithcode_id: redial
+pretty_name: ReDial (Recommendation Dialogues)
---
# Dataset Card for ReDial (Recommendation Dialogues)
@@ -392,4 +393,4 @@ The data is published under the CC BY 4.0 License.
### Contributions
-Thanks to [@bhavitvyamalik](https://github.com/bhavitvyamalik) for adding this dataset.
\ No newline at end of file
+Thanks to [@bhavitvyamalik](https://github.com/bhavitvyamalik) for adding this dataset.
diff --git a/datasets/reasoning_bg/README.md b/datasets/reasoning_bg/README.md
index c7f63c58cc9..a38dade16bd 100644
--- a/datasets/reasoning_bg/README.md
+++ b/datasets/reasoning_bg/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- multiple-choice-qa
paperswithcode_id: null
+pretty_name: ReasoningBg
---
# Dataset Card for reasoning_bg
@@ -174,4 +175,4 @@ Data has been sourced from the matriculation exams and online quizzes.
### Contributions
-Thanks to [@saradhix](https://github.com/saradhix) for adding this dataset.
\ No newline at end of file
+Thanks to [@saradhix](https://github.com/saradhix) for adding this dataset.
diff --git a/datasets/recipe_nlg/README.md b/datasets/recipe_nlg/README.md
index d2ad1387ac2..3e58eadd8f2 100644
--- a/datasets/recipe_nlg/README.md
+++ b/datasets/recipe_nlg/README.md
@@ -24,6 +24,7 @@ task_ids:
- language-modeling
- summarization
paperswithcode_id: recipenlg
+pretty_name: RecipeNLG
---
@@ -151,4 +152,4 @@ paperswithcode_id: recipenlg
### Contributions
-Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
\ No newline at end of file
+Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
diff --git a/datasets/reclor/README.md b/datasets/reclor/README.md
index 5ea81308fe5..0b62223cfb3 100644
--- a/datasets/reclor/README.md
+++ b/datasets/reclor/README.md
@@ -1,7 +1,8 @@
---
paperswithcode_id: reclor
+pretty_name: ReClor
---
### Contributions
-Thanks to [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf), [@JetRunner](https://github.com/JetRunner), [@mariamabarham](https://github.com/mariamabarham), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
\ No newline at end of file
+Thanks to [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf), [@JetRunner](https://github.com/JetRunner), [@mariamabarham](https://github.com/mariamabarham), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
diff --git a/datasets/reddit/README.md b/datasets/reddit/README.md
index 8d33ff2a3d3..a5073120da4 100644
--- a/datasets/reddit/README.md
+++ b/datasets/reddit/README.md
@@ -2,6 +2,7 @@
languages:
- en
paperswithcode_id: reddit
+pretty_name: Reddit
---
# Dataset Card for "reddit"
@@ -183,4 +184,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@mariamabarham](https://github.com/mariamabarham), [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
\ No newline at end of file
+Thanks to [@mariamabarham](https://github.com/mariamabarham), [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
diff --git a/datasets/reddit_tifu/README.md b/datasets/reddit_tifu/README.md
index 47dc17f283c..9a27c69c5a7 100644
--- a/datasets/reddit_tifu/README.md
+++ b/datasets/reddit_tifu/README.md
@@ -2,6 +2,7 @@
languages:
- en
paperswithcode_id: reddit-tifu
+pretty_name: Reddit TIFU
---
# Dataset Card for "reddit_tifu"
@@ -192,4 +193,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
\ No newline at end of file
+Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
diff --git a/datasets/refresd/README.md b/datasets/refresd/README.md
index 3c2b96da273..909f0d9b9f9 100644
--- a/datasets/refresd/README.md
+++ b/datasets/refresd/README.md
@@ -22,6 +22,7 @@ task_ids:
- semantic-similarity-classification
- semantic-similarity-scoring
paperswithcode_id: refresd
+pretty_name: Rationalized English-French Semantic Divergences
---
# Dataset Card for REFreSD Dataset
diff --git a/datasets/ro_sent/README.md b/datasets/ro_sent/README.md
index 4c7924e794e..87e9093aa37 100644
--- a/datasets/ro_sent/README.md
+++ b/datasets/ro_sent/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- sentiment-classification
paperswithcode_id: null
+pretty_name: RoSent
---
# Dataset Card for RoSent
diff --git a/datasets/roman_urdu/README.md b/datasets/roman_urdu/README.md
index bf7c6194049..28f459999e1 100644
--- a/datasets/roman_urdu/README.md
+++ b/datasets/roman_urdu/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- sentiment-classification
paperswithcode_id: roman-urdu-data-set
+pretty_name: Roman Urdu Dataset
---
# Dataset Card for Roman Urdu Dataset
@@ -160,4 +161,4 @@ Each row consists of a short Urdu text, followed by a sentiment label. The label
### Contributions
-Thanks to [@jaketae](https://github.com/jaketae) for adding this dataset.
\ No newline at end of file
+Thanks to [@jaketae](https://github.com/jaketae) for adding this dataset.
diff --git a/datasets/s2orc/README.md b/datasets/s2orc/README.md
index beaa7347150..0343bd9c5e2 100644
--- a/datasets/s2orc/README.md
+++ b/datasets/s2orc/README.md
@@ -23,6 +23,7 @@ task_ids:
- multi-label-classification
- other-other-citation-recommendation
paperswithcode_id: s2orc
+pretty_name: S2ORC
---
# Dataset Card for S2ORC: The Semantic Scholar Open Research Corpus
diff --git a/datasets/samsum/README.md b/datasets/samsum/README.md
index 9e7070bf369..ea114633db8 100644
--- a/datasets/samsum/README.md
+++ b/datasets/samsum/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- summarization
paperswithcode_id: samsum-corpus
+pretty_name: SAMSum Corpus
---
# Dataset Card for SAMSum Corpus
diff --git a/datasets/sanskrit_classic/README.md b/datasets/sanskrit_classic/README.md
index 6d66ad4275d..0dad96d49c7 100644
--- a/datasets/sanskrit_classic/README.md
+++ b/datasets/sanskrit_classic/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- language-modeling
paperswithcode_id: null
+pretty_name: SanskritClassic
---
-# Dataset Card for [Dataset Name]
+# Dataset Card for SanskritClassic
@@ -142,4 +143,4 @@ Sanskrit
### Contributions
-Thanks to [@parmarsuraj99](https://github.com/parmarsuraj99) for adding this dataset.
\ No newline at end of file
+Thanks to [@parmarsuraj99](https://github.com/parmarsuraj99) for adding this dataset.
diff --git a/datasets/saudinewsnet/README.md b/datasets/saudinewsnet/README.md
index fc010866123..37c8b09dd5d 100644
--- a/datasets/saudinewsnet/README.md
+++ b/datasets/saudinewsnet/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- language-modeling
paperswithcode_id: null
+pretty_name: saudinewsnet
---
# Dataset Card for "saudinewsnet"
@@ -206,4 +207,4 @@ url = "http://github.com/ParallelMazen/SaudiNewsNet"
### Contributions
-Thanks to [@abdulelahsm](https://github.com/abdulelahsm) for adding this dataset.
\ No newline at end of file
+Thanks to [@abdulelahsm](https://github.com/abdulelahsm) for adding this dataset.
diff --git a/datasets/scan/README.md b/datasets/scan/README.md
index 9e39f554d8b..eecee8149c5 100644
--- a/datasets/scan/README.md
+++ b/datasets/scan/README.md
@@ -2,6 +2,7 @@
languages:
- en
paperswithcode_id: scan
+pretty_name: Simplified versions of the CommAI Navigation tasks
---
# Dataset Card for "scan"
@@ -227,4 +228,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
\ No newline at end of file
+Thanks to [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
diff --git a/datasets/scb_mt_enth_2020/README.md b/datasets/scb_mt_enth_2020/README.md
index 7c9141b413b..d8cd272e76f 100644
--- a/datasets/scb_mt_enth_2020/README.md
+++ b/datasets/scb_mt_enth_2020/README.md
@@ -24,6 +24,7 @@ task_categories:
task_ids:
- machine-translation
paperswithcode_id: scb-mt-en-th-2020
+pretty_name: ScbMtEnth2020
---
# Dataset Card for `scb_mt_enth_2020`
@@ -241,4 +242,4 @@ CC-BY-SA 4.0
### Contributions
-Thanks to [@cstorm125](https://github.com/cstorm125) for adding this dataset.
\ No newline at end of file
+Thanks to [@cstorm125](https://github.com/cstorm125) for adding this dataset.
diff --git a/datasets/schema_guided_dstc8/README.md b/datasets/schema_guided_dstc8/README.md
index e881764245c..7f78a7fa647 100644
--- a/datasets/schema_guided_dstc8/README.md
+++ b/datasets/schema_guided_dstc8/README.md
@@ -23,6 +23,7 @@ task_ids:
- multi-class-classification
- parsing
paperswithcode_id: sgd
+pretty_name: Schema-Guided Dialogue
---
# Dataset Card for The Schema-Guided Dialogue Dataset
@@ -222,4 +223,4 @@ For the initial release paper please cite:
### Contributions
-Thanks to [@yjernite](https://github.com/yjernite) for adding this dataset.
\ No newline at end of file
+Thanks to [@yjernite](https://github.com/yjernite) for adding this dataset.
diff --git a/datasets/scielo/README.md b/datasets/scielo/README.md
index aadee06ec33..4afe0c5e917 100644
--- a/datasets/scielo/README.md
+++ b/datasets/scielo/README.md
@@ -27,9 +27,10 @@ task_categories:
task_ids:
- machine-translation
paperswithcode_id: null
+pretty_name: SciELO
---
-# Dataset Card for [Dataset Name]
+# Dataset Card for SciELO
## Table of Contents
- [Dataset Description](#dataset-description)
@@ -157,4 +158,4 @@ The underlying task is machine translation.
```
### Contributions
-Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
\ No newline at end of file
+Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
diff --git a/datasets/scientific_papers/README.md b/datasets/scientific_papers/README.md
index dbba429edf9..6dab52f6ee3 100644
--- a/datasets/scientific_papers/README.md
+++ b/datasets/scientific_papers/README.md
@@ -2,6 +2,7 @@
languages:
- en
paperswithcode_id: null
+pretty_name: ScientificPapers
---
# Dataset Card for "scientific_papers"
@@ -195,4 +196,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@thomwolf](https://github.com/thomwolf), [@jplu](https://github.com/jplu), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
\ No newline at end of file
+Thanks to [@thomwolf](https://github.com/thomwolf), [@jplu](https://github.com/jplu), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
diff --git a/datasets/sciq/README.md b/datasets/sciq/README.md
index 2039f59fd0c..73e94f77cc9 100644
--- a/datasets/sciq/README.md
+++ b/datasets/sciq/README.md
@@ -2,6 +2,7 @@
languages:
- en
paperswithcode_id: sciq
+pretty_name: SciQ
---
# Dataset Card for "sciq"
@@ -165,4 +166,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
\ No newline at end of file
+Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
diff --git a/datasets/scitail/README.md b/datasets/scitail/README.md
index fc548e1e02c..538914e1cdf 100644
--- a/datasets/scitail/README.md
+++ b/datasets/scitail/README.md
@@ -2,6 +2,7 @@
languages:
- en
paperswithcode_id: scitail
+pretty_name: SciTail
---
# Dataset Card for "scitail"
@@ -217,4 +218,4 @@ inproceedings{scitail,
### Contributions
-Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham), [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
\ No newline at end of file
+Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham), [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
diff --git a/datasets/search_qa/README.md b/datasets/search_qa/README.md
index d37e1ddf5e7..db39edb77c6 100644
--- a/datasets/search_qa/README.md
+++ b/datasets/search_qa/README.md
@@ -2,6 +2,7 @@
languages:
- en
paperswithcode_id: searchqa
+pretty_name: SearchQA
---
# Dataset Card for "search_qa"
@@ -216,4 +217,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@lewtun](https://github.com/lewtun), [@mariamabarham](https://github.com/mariamabarham), [@lhoestq](https://github.com/lhoestq), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
\ No newline at end of file
+Thanks to [@lewtun](https://github.com/lewtun), [@mariamabarham](https://github.com/mariamabarham), [@lhoestq](https://github.com/lhoestq), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
diff --git a/datasets/selqa/README.md b/datasets/selqa/README.md
index d4ec638c228..a2ea9f3f804 100644
--- a/datasets/selqa/README.md
+++ b/datasets/selqa/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- open-domain-qa
paperswithcode_id: selqa
+pretty_name: SelQA
---
# Dataset Card for SelQA
@@ -312,4 +313,4 @@ Apache License 2.0
### Contributions
-Thanks to [@Bharat123rox](https://github.com/Bharat123rox) for adding this dataset.
\ No newline at end of file
+Thanks to [@Bharat123rox](https://github.com/Bharat123rox) for adding this dataset.
diff --git a/datasets/sem_eval_2010_task_8/README.md b/datasets/sem_eval_2010_task_8/README.md
index 007d3847ab5..cdf2982aa7d 100644
--- a/datasets/sem_eval_2010_task_8/README.md
+++ b/datasets/sem_eval_2010_task_8/README.md
@@ -2,6 +2,7 @@
languages:
- en
paperswithcode_id: semeval-2010-task-8
+pretty_name: SemEval-2010 Task 8
---
# Dataset Card for "sem_eval_2010_task_8"
@@ -170,4 +171,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@JoelNiklaus](https://github.com/JoelNiklaus) for adding this dataset.
\ No newline at end of file
+Thanks to [@JoelNiklaus](https://github.com/JoelNiklaus) for adding this dataset.
diff --git a/datasets/sem_eval_2014_task_1/README.md b/datasets/sem_eval_2014_task_1/README.md
index 9964a7a220b..e1a90cd1c16 100644
--- a/datasets/sem_eval_2014_task_1/README.md
+++ b/datasets/sem_eval_2014_task_1/README.md
@@ -20,6 +20,7 @@ task_ids:
- natural-language-inference
- semantic-similarity-scoring
paperswithcode_id: null
+pretty_name: SemEval 2014 - Task 1
---
# Dataset Card for SemEval 2014 - Task 1
@@ -142,4 +143,4 @@ paperswithcode_id: null
### Contributions
-Thanks to [@ashmeet13](https://github.com/ashmeet13) for adding this dataset.
\ No newline at end of file
+Thanks to [@ashmeet13](https://github.com/ashmeet13) for adding this dataset.
diff --git a/datasets/sent_comp/README.md b/datasets/sent_comp/README.md
index 827ea159057..f2602eb9808 100644
--- a/datasets/sent_comp/README.md
+++ b/datasets/sent_comp/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- other-other-sentence-compression
paperswithcode_id: sentence-compression
+pretty_name: Google Sentence Compression
---
# Dataset Card for Google Sentence Compression
@@ -182,4 +183,4 @@ Dependency tree features:
### Contributions
-Thanks to [@mattbui](https://github.com/mattbui) for adding this dataset.
\ No newline at end of file
+Thanks to [@mattbui](https://github.com/mattbui) for adding this dataset.
diff --git a/datasets/senti_lex/README.md b/datasets/senti_lex/README.md
index eb9e23016a5..3e05567ae1b 100644
--- a/datasets/senti_lex/README.md
+++ b/datasets/senti_lex/README.md
@@ -265,6 +265,7 @@ task_categories:
task_ids:
- sentiment-classification
paperswithcode_id: null
+pretty_name: SentiLex
---
# Dataset Card for SentiWS
diff --git a/datasets/senti_ws/README.md b/datasets/senti_ws/README.md
index b31c798813e..b5d2e37b498 100644
--- a/datasets/senti_ws/README.md
+++ b/datasets/senti_ws/README.md
@@ -21,6 +21,7 @@ task_ids:
- sentiment-scoring
- structure-prediction-other-pos-tagging
paperswithcode_id: null
+pretty_name: SentiWS
---
# Dataset Card for SentiWS
@@ -166,4 +167,4 @@ year = {2010}
}
### Contributions
-Thanks to [@harshalmittal4](https://github.com/harshalmittal4) for adding this dataset.
\ No newline at end of file
+Thanks to [@harshalmittal4](https://github.com/harshalmittal4) for adding this dataset.
diff --git a/datasets/sentiment140/README.md b/datasets/sentiment140/README.md
index 4439230635a..4614a4e7018 100644
--- a/datasets/sentiment140/README.md
+++ b/datasets/sentiment140/README.md
@@ -2,6 +2,7 @@
languages:
- en
paperswithcode_id: sentiment140
+pretty_name: Sentiment140
---
# Dataset Card for "sentiment140"
@@ -165,4 +166,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
\ No newline at end of file
+Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
diff --git a/datasets/sepedi_ner/README.md b/datasets/sepedi_ner/README.md
index 98327255cdb..4917e4475c5 100644
--- a/datasets/sepedi_ner/README.md
+++ b/datasets/sepedi_ner/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- named-entity-recognition
paperswithcode_id: null
+pretty_name: Sepedi NER Corpus
---
# Dataset Card for Sepedi NER Corpus
@@ -169,4 +170,4 @@ The data is under the [Creative Commons Attribution 2.5 South Africa License](ht
```
### Contributions
-Thanks to [@yvonnegitau](https://github.com/yvonnegitau) for adding this dataset.
\ No newline at end of file
+Thanks to [@yvonnegitau](https://github.com/yvonnegitau) for adding this dataset.
diff --git a/datasets/sesotho_ner_corpus/README.md b/datasets/sesotho_ner_corpus/README.md
index 94a30751f22..d51243f7800 100644
--- a/datasets/sesotho_ner_corpus/README.md
+++ b/datasets/sesotho_ner_corpus/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- named-entity-recognition
paperswithcode_id: null
+pretty_name: Sesotho NER Corpus
---
# Dataset Card for Sesotho NER Corpus
@@ -171,4 +172,4 @@ The data is under the [Creative Commons Attribution 2.5 South Africa License](ht
### Contributions
-Thanks to [@yvonnegitau](https://github.com/yvonnegitau) for adding this dataset.
\ No newline at end of file
+Thanks to [@yvonnegitau](https://github.com/yvonnegitau) for adding this dataset.
diff --git a/datasets/setswana_ner_corpus/README.md b/datasets/setswana_ner_corpus/README.md
index 987ebff937e..9cd0646cfb2 100644
--- a/datasets/setswana_ner_corpus/README.md
+++ b/datasets/setswana_ner_corpus/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- named-entity-recognition
paperswithcode_id: null
+pretty_name: Setswana NER Corpus
---
# Dataset Card for Setswana NER Corpus
@@ -172,4 +173,4 @@ The data is under the [Creative Commons Attribution 2.5 South Africa License](ht
### Contributions
-Thanks to [@yvonnegitau](https://github.com/yvonnegitau) for adding this dataset.
\ No newline at end of file
+Thanks to [@yvonnegitau](https://github.com/yvonnegitau) for adding this dataset.
diff --git a/datasets/sharc/README.md b/datasets/sharc/README.md
index eb6a1a5c0d5..7497f55e46a 100644
--- a/datasets/sharc/README.md
+++ b/datasets/sharc/README.md
@@ -20,6 +20,7 @@ task_ids:
- extractive-qa
- question-answering-other-conversational-qa
paperswithcode_id: sharc
+pretty_name: Shaping Answers with Rules through Conversation
---
# Dataset Card Creation Guide
@@ -145,4 +146,4 @@ paperswithcode_id: sharc
### Contributions
-Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
\ No newline at end of file
+Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
diff --git a/datasets/sharc_modified/README.md b/datasets/sharc_modified/README.md
index 970ebb21b8f..43c5c3f4d57 100644
--- a/datasets/sharc_modified/README.md
+++ b/datasets/sharc_modified/README.md
@@ -20,6 +20,7 @@ task_ids:
- extractive-qa
- question-answering-other-conversational-qa
paperswithcode_id: null
+pretty_name: SharcModified
---
# Dataset Card Creation Guide
@@ -229,4 +230,4 @@ The dataset is split into training and validation splits.
```
### Contributions
-Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
\ No newline at end of file
+Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
diff --git a/datasets/sick/README.md b/datasets/sick/README.md
index 11c6ed9855b..2a0d91426d4 100644
--- a/datasets/sick/README.md
+++ b/datasets/sick/README.md
@@ -19,6 +19,7 @@ task_categories:
task_ids:
- natural-language-inference
paperswithcode_id: sick
+pretty_name: Sentences Involving Compositional Knowledge
---
# Dataset Card for sick
diff --git a/datasets/siswati_ner_corpus/README.md b/datasets/siswati_ner_corpus/README.md
index ab216a37b1e..d3921d29960 100644
--- a/datasets/siswati_ner_corpus/README.md
+++ b/datasets/siswati_ner_corpus/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- named-entity-recognition
paperswithcode_id: null
+pretty_name: Siswati NER Corpus
---
# Dataset Card for Siswati NER Corpus
@@ -174,4 +175,4 @@ The data is under the [Creative Commons Attribution 2.5 South Africa License](ht
### Contributions
-Thanks to [@yvonnegitau](https://github.com/yvonnegitau) for adding this dataset.
\ No newline at end of file
+Thanks to [@yvonnegitau](https://github.com/yvonnegitau) for adding this dataset.
diff --git a/datasets/smartdata/README.md b/datasets/smartdata/README.md
index ce1236eff48..c889c00e059 100644
--- a/datasets/smartdata/README.md
+++ b/datasets/smartdata/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- named-entity-recognition
paperswithcode_id: null
+pretty_name: SmartData
---
# Dataset Card for SmartData
@@ -164,4 +165,4 @@ CC-BY 4.0
### Contributions
-Thanks to [@aseifert](https://github.com/aseifert) for adding this dataset.
\ No newline at end of file
+Thanks to [@aseifert](https://github.com/aseifert) for adding this dataset.
diff --git a/datasets/sms_spam/README.md b/datasets/sms_spam/README.md
index 248b78f667d..4c471240909 100644
--- a/datasets/sms_spam/README.md
+++ b/datasets/sms_spam/README.md
@@ -20,6 +20,7 @@ task_categories:
task_ids:
- intent-classification
paperswithcode_id: sms-spam-collection-data-set
+pretty_name: SMS Spam Collection Data Set
---
# Dataset Card for [Dataset Name]
@@ -149,4 +150,4 @@ English
### Contributions
-Thanks to [@czabo](https://github.com/czabo) for adding this dataset.
\ No newline at end of file
+Thanks to [@czabo](https://github.com/czabo) for adding this dataset.
diff --git a/datasets/snips_built_in_intents/README.md b/datasets/snips_built_in_intents/README.md
index 74380961969..77e615a3142 100644
--- a/datasets/snips_built_in_intents/README.md
+++ b/datasets/snips_built_in_intents/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- intent-classification
paperswithcode_id: snips
+pretty_name: SNIPS Natural Language Understanding benchmark
---
# Dataset Card for Snips Built In Intents
@@ -151,4 +152,4 @@ https://arxiv.org/abs/1805.10190
### Contributions
-Thanks to [@bduvenhage](https://github.com/bduvenhage) for adding this dataset.
\ No newline at end of file
+Thanks to [@bduvenhage](https://github.com/bduvenhage) for adding this dataset.
diff --git a/datasets/snli/README.md b/datasets/snli/README.md
index 5537b968bc2..1fc5179a2cc 100644
--- a/datasets/snli/README.md
+++ b/datasets/snli/README.md
@@ -19,6 +19,7 @@ task_categories:
task_ids:
- natural-language-inference
paperswithcode_id: snli
+pretty_name: Stanford Natural Language Inference
---
# Dataset Card for SNLI
diff --git a/datasets/snow_simplified_japanese_corpus/README.md b/datasets/snow_simplified_japanese_corpus/README.md
index d2dee62841b..946d5bf99ba 100644
--- a/datasets/snow_simplified_japanese_corpus/README.md
+++ b/datasets/snow_simplified_japanese_corpus/README.md
@@ -20,6 +20,7 @@ task_categories:
task_ids:
- machine-translation
paperswithcode_id: null
+pretty_name: SNOW T15 and T23 (simplified Japanese corpus)
---
# Dataset Card for SNOW T15 and T23 (simplified Japanese corpus)
@@ -209,4 +210,4 @@ CC BY 4.0
### Contributions
-Thanks to [@forest1988](https://github.com/forest1988), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
\ No newline at end of file
+Thanks to [@forest1988](https://github.com/forest1988), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
diff --git a/datasets/social_i_qa/README.md b/datasets/social_i_qa/README.md
index f11a2c95209..d126bd08b5b 100644
--- a/datasets/social_i_qa/README.md
+++ b/datasets/social_i_qa/README.md
@@ -2,6 +2,7 @@
languages:
- en
paperswithcode_id: social-iqa
+pretty_name: Social Interaction QA
---
# Dataset Card for "social_i_qa"
@@ -157,4 +158,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@bhavitvyamalik](https://github.com/bhavitvyamalik), [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun) for adding this dataset.
\ No newline at end of file
+Thanks to [@bhavitvyamalik](https://github.com/bhavitvyamalik), [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun) for adding this dataset.
diff --git a/datasets/sofc_materials_articles/README.md b/datasets/sofc_materials_articles/README.md
index 38de0e31830..ec314e5a7a0 100644
--- a/datasets/sofc_materials_articles/README.md
+++ b/datasets/sofc_materials_articles/README.md
@@ -22,6 +22,7 @@ task_ids:
- slot-filling
- topic-classification
paperswithcode_id: null
+pretty_name: SofcMaterialsArticles
---
@@ -208,4 +209,4 @@ The manual annotations created for the SOFC-Exp corpus are licensed under a [Cre
### Contributions
-Thanks to [@ZacharySBrown](https://github.com/ZacharySBrown) for adding this dataset.
\ No newline at end of file
+Thanks to [@ZacharySBrown](https://github.com/ZacharySBrown) for adding this dataset.
diff --git a/datasets/spanish_billion_words/README.md b/datasets/spanish_billion_words/README.md
index 813edf02730..2eaac990c4a 100644
--- a/datasets/spanish_billion_words/README.md
+++ b/datasets/spanish_billion_words/README.md
@@ -20,6 +20,7 @@ task_ids:
- language-modeling
- other-other-pretraining-language-models
paperswithcode_id: sbwce
+pretty_name: Spanish Billion Word Corpus and Embeddings
---
# Dataset Card for Spanish Billion Words
@@ -174,4 +175,4 @@ The dataset is licensed under a Creative Commons Attribution-ShareAlike 4.0 Inte
```
### Contributions
-Thanks to [@mariagrandury](https://github.com/mariagrandury) for adding this dataset.
\ No newline at end of file
+Thanks to [@mariagrandury](https://github.com/mariagrandury) for adding this dataset.
diff --git a/datasets/spc/README.md b/datasets/spc/README.md
index a28f0b7822d..9b2dc8d2a97 100644
--- a/datasets/spc/README.md
+++ b/datasets/spc/README.md
@@ -26,6 +26,7 @@ task_categories:
task_ids:
- machine-translation
paperswithcode_id: null
+pretty_name: spc
---
# Dataset Card Creation Guide
@@ -152,4 +153,4 @@ paperswithcode_id: null
### Contributions
-Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
\ No newline at end of file
+Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
diff --git a/datasets/species_800/README.md b/datasets/species_800/README.md
index 2d81acff0da..828842f811c 100644
--- a/datasets/species_800/README.md
+++ b/datasets/species_800/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- named-entity-recognition
paperswithcode_id: null
+pretty_name: species800
---
# Dataset Card for [Dataset Name]
@@ -141,4 +142,4 @@ paperswithcode_id: null
[More Information Needed]
### Contributions
-Thanks to [@edugp](https://github.com/edugp) for adding this dataset.
\ No newline at end of file
+Thanks to [@edugp](https://github.com/edugp) for adding this dataset.
diff --git a/datasets/squad_adversarial/README.md b/datasets/squad_adversarial/README.md
index d32e3420239..1d73525c23a 100644
--- a/datasets/squad_adversarial/README.md
+++ b/datasets/squad_adversarial/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- extractive-qa
paperswithcode_id: null
+pretty_name: Adversarial Examples for SQuAD
---
# Dataset Card for 'Adversarial Examples for SQuAD'
@@ -176,4 +177,4 @@ SQuAD dev set (+with adversarial sentences added)
### Contributions
-Thanks to [@cceyda](https://github.com/cceyda) for adding this dataset.
\ No newline at end of file
+Thanks to [@cceyda](https://github.com/cceyda) for adding this dataset.
diff --git a/datasets/squad_es/README.md b/datasets/squad_es/README.md
index ca4b6a826df..60bc109113a 100644
--- a/datasets/squad_es/README.md
+++ b/datasets/squad_es/README.md
@@ -1,5 +1,6 @@
---
paperswithcode_id: squad-es
+pretty_name: SQuAD-es
---
# Dataset Card for "squad_es"
@@ -171,4 +172,4 @@ archivePrefix = {arXiv},
### Contributions
-Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf), [@albertvillanova](https://github.com/albertvillanova), [@lewtun](https://github.com/lewtun) for adding this dataset.
\ No newline at end of file
+Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf), [@albertvillanova](https://github.com/albertvillanova), [@lewtun](https://github.com/lewtun) for adding this dataset.
diff --git a/datasets/squad_it/README.md b/datasets/squad_it/README.md
index 8d83dc68fc4..9266c40a575 100644
--- a/datasets/squad_it/README.md
+++ b/datasets/squad_it/README.md
@@ -19,6 +19,7 @@ task_ids:
- open-domain-qa
- extractive-qa
paperswithcode_id: squad-it
+pretty_name: SQuAD-it
---
# Dataset Card for "squad_it"
@@ -188,4 +189,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@albertvillanova](https://github.com/albertvillanova), [@mariamabarham](https://github.com/mariamabarham), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
\ No newline at end of file
+Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@albertvillanova](https://github.com/albertvillanova), [@mariamabarham](https://github.com/mariamabarham), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
diff --git a/datasets/squad_kor_v1/README.md b/datasets/squad_kor_v1/README.md
index 5d3348f5b7d..42310a96bfc 100644
--- a/datasets/squad_kor_v1/README.md
+++ b/datasets/squad_kor_v1/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- extractive-qa
paperswithcode_id: korquad
+pretty_name: KorQuAD v1.0 (The Korean Question Answering Dataset)
---
# Dataset Card for KorQuAD v1.0
@@ -160,4 +161,4 @@ Wikipedia
### Contributions
-Thanks to [@cceyda](https://github.com/cceyda) for adding this dataset.
\ No newline at end of file
+Thanks to [@cceyda](https://github.com/cceyda) for adding this dataset.
diff --git a/datasets/squad_kor_v2/README.md b/datasets/squad_kor_v2/README.md
index 06d988e9bfa..500f523d63d 100644
--- a/datasets/squad_kor_v2/README.md
+++ b/datasets/squad_kor_v2/README.md
@@ -19,6 +19,7 @@ task_categories:
task_ids:
- extractive-qa
paperswithcode_id: null
+pretty_name: KorQuAD v2.1
---
# Dataset Card for KorQuAD v2.1
@@ -175,4 +176,4 @@ Wikipedia
### Contributions
-Thanks to [@cceyda](https://github.com/cceyda) for adding this dataset.
\ No newline at end of file
+Thanks to [@cceyda](https://github.com/cceyda) for adding this dataset.
diff --git a/datasets/squad_v1_pt/README.md b/datasets/squad_v1_pt/README.md
index 5260f332ec3..c346c319a87 100644
--- a/datasets/squad_v1_pt/README.md
+++ b/datasets/squad_v1_pt/README.md
@@ -19,6 +19,7 @@ task_ids:
- extractive-qa
- open-domain-qa
paperswithcode_id: null
+pretty_name: SquadV1Pt
---
# Dataset Card for "squad_v1_pt"
@@ -190,4 +191,4 @@ archivePrefix = {arXiv},
### Contributions
-Thanks to [@thomwolf](https://github.com/thomwolf), [@albertvillanova](https://github.com/albertvillanova), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
\ No newline at end of file
+Thanks to [@thomwolf](https://github.com/thomwolf), [@albertvillanova](https://github.com/albertvillanova), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
diff --git a/datasets/squadshifts/README.md b/datasets/squadshifts/README.md
index e6b8754ec93..fad27b9eddf 100644
--- a/datasets/squadshifts/README.md
+++ b/datasets/squadshifts/README.md
@@ -2,6 +2,7 @@
languages:
- en
paperswithcode_id: squad-shifts
+pretty_name: SQuAD-shifts
---
# Dataset Card for "squadshifts"
@@ -257,4 +258,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@millerjohnp](https://github.com/millerjohnp), [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
\ No newline at end of file
+Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@millerjohnp](https://github.com/millerjohnp), [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
diff --git a/datasets/srwac/README.md b/datasets/srwac/README.md
index 9f1f080f0ab..a4da3f5aca4 100644
--- a/datasets/srwac/README.md
+++ b/datasets/srwac/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- language-modeling
paperswithcode_id: null
+pretty_name: SrWac
---
# Dataset Card for SrWac
@@ -148,4 +149,4 @@ Dataset is under the [CC-BY-SA 3.0](http://creativecommons.org/licenses/by-sa/3.
### Contributions
-Thanks to [@IvanZidov](https://github.com/IvanZidov) for adding this dataset.
\ No newline at end of file
+Thanks to [@IvanZidov](https://github.com/IvanZidov) for adding this dataset.
diff --git a/datasets/sst/README.md b/datasets/sst/README.md
index 24735175435..58366a8e8a8 100644
--- a/datasets/sst/README.md
+++ b/datasets/sst/README.md
@@ -23,6 +23,7 @@ task_ids:
- sentiment-classification
- sentiment-scoring
paperswithcode_id: sst
+pretty_name: Stanford Sentiment Treebank
---
# Dataset Card for sst
@@ -186,4 +187,4 @@ Rotten Tomatoes reviewers.
### Contributions
-Thanks to [@patpizio](https://github.com/patpizio) for adding this dataset.
\ No newline at end of file
+Thanks to [@patpizio](https://github.com/patpizio) for adding this dataset.
diff --git a/datasets/stereoset/README.md b/datasets/stereoset/README.md
index 8e9c66dd41c..1135e7e9fe2 100644
--- a/datasets/stereoset/README.md
+++ b/datasets/stereoset/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- text-classification-other-stereotype-detection
paperswithcode_id: stereoset
+pretty_name: StereoSet
---
# Dataset Card for StereoSet
@@ -175,4 +176,4 @@ CC-BY-SA 4.0
### Contributions
-Thanks to [@cstorm125](https://github.com/cstorm125) for adding this dataset.
\ No newline at end of file
+Thanks to [@cstorm125](https://github.com/cstorm125) for adding this dataset.
diff --git a/datasets/stsb_mt_sv/README.md b/datasets/stsb_mt_sv/README.md
index f42f2ddc7f6..c6af47e0488 100644
--- a/datasets/stsb_mt_sv/README.md
+++ b/datasets/stsb_mt_sv/README.md
@@ -19,6 +19,7 @@ task_categories:
task_ids:
- semantic-similarity-scoring
paperswithcode_id: null
+pretty_name: Swedish Machine Translated STS-B
---
# Dataset Card for Swedish Machine Translated STS-B
@@ -160,4 +161,4 @@ The machine translated version were put together by @timpal0l
### Contributions
-Thanks to [@timpal0l](https://github.com/timpal0l) for adding this dataset.
\ No newline at end of file
+Thanks to [@timpal0l](https://github.com/timpal0l) for adding this dataset.
diff --git a/datasets/stsb_multi_mt/README.md b/datasets/stsb_multi_mt/README.md
index 29353eba4d0..0bc2212dd49 100644
--- a/datasets/stsb_multi_mt/README.md
+++ b/datasets/stsb_multi_mt/README.md
@@ -29,6 +29,7 @@ task_categories:
task_ids:
- semantic-similarity-scoring
paperswithcode_id: null
+pretty_name: STSb Multi MT
---
# Dataset Card for STSb Multi MT
diff --git a/datasets/style_change_detection/README.md b/datasets/style_change_detection/README.md
index 232612ce702..c690700a49c 100644
--- a/datasets/style_change_detection/README.md
+++ b/datasets/style_change_detection/README.md
@@ -1,5 +1,6 @@
---
paperswithcode_id: null
+pretty_name: StyleChangeDetection
---
# Dataset Card for "style_change_detection"
@@ -196,4 +197,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@lewtun](https://github.com/lewtun), [@ghomasHudson](https://github.com/ghomasHudson), [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
\ No newline at end of file
+Thanks to [@lewtun](https://github.com/lewtun), [@ghomasHudson](https://github.com/ghomasHudson), [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
diff --git a/datasets/subjqa/README.md b/datasets/subjqa/README.md
index 9585068b8ab..6581fd2c02e 100644
--- a/datasets/subjqa/README.md
+++ b/datasets/subjqa/README.md
@@ -21,6 +21,7 @@ task_categories:
task_ids:
- extractive-qa
paperswithcode_id: subjqa
+pretty_name: subjqa
---
# Dataset Card for subjqa
diff --git a/datasets/super_glue/README.md b/datasets/super_glue/README.md
index 2a561cecd67..aeeb0f00dca 100644
--- a/datasets/super_glue/README.md
+++ b/datasets/super_glue/README.md
@@ -2,6 +2,7 @@
languages:
- en
paperswithcode_id: superglue
+pretty_name: SuperGLUE
---
# Dataset Card for "super_glue"
@@ -266,4 +267,4 @@ get the correct citation for each contained dataset.
### Contributions
-Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
\ No newline at end of file
+Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
diff --git a/datasets/swag/README.md b/datasets/swag/README.md
index 662ea91c3fe..32adfdea11b 100644
--- a/datasets/swag/README.md
+++ b/datasets/swag/README.md
@@ -19,6 +19,7 @@ task_categories:
task_ids:
- natural-language-inference
paperswithcode_id: swag
+pretty_name: Situations With Adversarial Generations
---
# Dataset Card Creation Guide
@@ -195,4 +196,4 @@ Unknown
### Contributions
-Thanks to [@VictorSanh](https://github.com/VictorSanh) for adding this dataset.
\ No newline at end of file
+Thanks to [@VictorSanh](https://github.com/VictorSanh) for adding this dataset.
diff --git a/datasets/swahili/README.md b/datasets/swahili/README.md
index 581dd4625bd..67e7bbbd022 100644
--- a/datasets/swahili/README.md
+++ b/datasets/swahili/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- language-modeling
paperswithcode_id: null
+pretty_name: swahili
---
# Dataset Card for [Dataset Name]
@@ -153,4 +154,4 @@ link = http://doi.org/10.5281/zenodo.3553423
### Contributions
-Thanks to [@akshayb7](https://github.com/akshayb7) for adding this dataset.
\ No newline at end of file
+Thanks to [@akshayb7](https://github.com/akshayb7) for adding this dataset.
diff --git a/datasets/swda/README.md b/datasets/swda/README.md
index 99ae8c092c1..a7a84686eb6 100644
--- a/datasets/swda/README.md
+++ b/datasets/swda/README.md
@@ -18,9 +18,10 @@ task_categories:
task_ids:
- multi-label-classification
paperswithcode_id: null
+pretty_name: The Switchboard Dialog Act Corpus (SwDA)
---
-# Dataset Card for swda
+# Dataset Card for SwDA
## Table of Contents
- [Dataset Description](#dataset-description)
@@ -56,11 +57,11 @@ paperswithcode_id: null
### Dataset Summary
-The Switchboard Dialog Act Corpus (SwDA) extends the Switchboard-1 Telephone Speech Corpus, Release 2 with
-turn/utterance-level dialog-act tags. The tags summarize syntactic, semantic, and pragmatic information about the
+The Switchboard Dialog Act Corpus (SwDA) extends the Switchboard-1 Telephone Speech Corpus, Release 2 with
+turn/utterance-level dialog-act tags. The tags summarize syntactic, semantic, and pragmatic information about the
associated turn. The SwDA project was undertaken at UC Boulder in the late 1990s.
-The SwDA is not inherently linked to the Penn Treebank 3 parses of Switchboard, and it is far from straightforward to
-align the two resources. In addition, the SwDA is not distributed with the Switchboard's tables of metadata about the
+The SwDA is not inherently linked to the Penn Treebank 3 parses of Switchboard, and it is far from straightforward to
+align the two resources. In addition, the SwDA is not distributed with the Switchboard's tables of metadata about the
conversations and their participants.
@@ -72,7 +73,7 @@ conversations and their participants.
| SGNN (Ravi et al., 2018) | 83.1 | [Self-Governing Neural Networks for On-Device Short Text Classification](https://www.aclweb.org/anthology/D18-1105.pdf)
| CASA (Raheja et al., 2019) | 82.9 | [Dialogue Act Classification with Context-Aware Self-Attention](https://www.aclweb.org/anthology/N19-1373.pdf)
| DAH-CRF (Li et al., 2019) | 82.3 | [A Dual-Attention Hierarchical Recurrent Neural Network for Dialogue Act Classification](https://www.aclweb.org/anthology/K19-1036.pdf)
-| ALDMN (Wan et al., 2018) | 81.5 | [Improved Dynamic Memory Network for Dialogue Act Classification with Adversarial Training](https://arxiv.org/pdf/1811.05021.pdf)
+| ALDMN (Wan et al., 2018) | 81.5 | [Improved Dynamic Memory Network for Dialogue Act Classification with Adversarial Training](https://arxiv.org/pdf/1811.05021.pdf)
| CRF-ASN (Chen et al., 2018) | 81.3 | [Dialogue Act Recognition via CRF-Attentive Structured Network](https://arxiv.org/abs/1711.05568)
| Pretrained H-Transformer (Chapuis et al., 2020) | 79.3 | [Hierarchical Pre-training for Sequence Labelling in Spoken Dialog] (https://www.aclweb.org/anthology/2020.findings-emnlp.239)
| Bi-LSTM-CRF (Kumar et al., 2017) | 79.2 | [Dialogue Act Sequence Labeling using Hierarchical encoder with CRF](https://arxiv.org/abs/1709.04250) | [Link](https://github.com/YanWenqiang/HBLSTM-CRF) |
@@ -122,7 +123,7 @@ An example from the dataset is:
* `to_caller_education`: (int) Called education level 0, 1, 2, 3, 9.
* `to_caller_birth_year`: (int) Caller birth year YYYY.
* `to_caller_dialect_area`: (str) MIXED, NEW ENGLAND, NORTH MIDLAND, NORTHERN, NYC, SOUTH MIDLAND, SOUTHERN, UNK, WESTERN.
-
+
### Dialog act annotations
@@ -196,7 +197,7 @@ The development set is a subset of the training set to speed up development and
#### Initial Data Collection and Normalization
-The SwDA is not inherently linked to the Penn Treebank 3 parses of Switchboard, and it is far from straightforward to align the two resources Calhoun et al. 2010, §2.4. In addition, the SwDA is not distributed with the Switchboard's tables of metadata about the conversations and their participants.
+The SwDA is not inherently linked to the Penn Treebank 3 parses of Switchboard, and it is far from straightforward to align the two resources (Calhoun et al. 2010, §2.4). In addition, the SwDA is not distributed with the Switchboard's tables of metadata about the conversations and their participants.
#### Who are the source language producers?
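As a usage note for the metadata fields listed in the hunk above, a card's columns are exposed directly by the loaded dataset; a minimal sketch follows (the `load_dataset` call is the standard `datasets` API, the field name is taken from the card's data-fields list, and the printed value is only an example).

```python
# Minimal sketch: inspect one of the SwDA metadata fields described above.
from datasets import load_dataset

swda = load_dataset("swda", split="train")
print(swda[0]["to_caller_dialect_area"])  # e.g. "SOUTH MIDLAND"
```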
diff --git a/datasets/swedish_reviews/README.md b/datasets/swedish_reviews/README.md
index 698220a92c9..429e68ac198 100644
--- a/datasets/swedish_reviews/README.md
+++ b/datasets/swedish_reviews/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- sentiment-classification
paperswithcode_id: null
+pretty_name: Swedish Reviews
---
# Dataset Card for Swedish Reviews
@@ -155,4 +156,4 @@ No paper exists currently.
### Contributions
-Thanks to [@timpal0l](https://github.com/timpal0l) for adding this dataset.
\ No newline at end of file
+Thanks to [@timpal0l](https://github.com/timpal0l) for adding this dataset.
diff --git a/datasets/tab_fact/README.md b/datasets/tab_fact/README.md
index b31a8416117..63b723235c0 100644
--- a/datasets/tab_fact/README.md
+++ b/datasets/tab_fact/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- fact-checking
paperswithcode_id: tabfact
+pretty_name: TabFact
---
# Dataset Card Creation Guide
@@ -152,4 +153,4 @@ The problem of verifying whether a textual hypothesis holds the truth based on t
### Contributions
-Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
\ No newline at end of file
+Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
diff --git a/datasets/tamilmixsentiment/README.md b/datasets/tamilmixsentiment/README.md
index f0577ee3ca6..280bb4f0ab5 100644
--- a/datasets/tamilmixsentiment/README.md
+++ b/datasets/tamilmixsentiment/README.md
@@ -19,6 +19,7 @@ task_categories:
task_ids:
- sentiment-classification
paperswithcode_id: null
+pretty_name: Tamilmixsentiment
---
# Dataset Card for Tamilmixsentiment
@@ -169,4 +170,4 @@ Eleven volunteers were involved in the process. All of them were native speakers
```
### Contributions
-Thanks to [@jamespaultg](https://github.com/jamespaultg) for adding this dataset.
\ No newline at end of file
+Thanks to [@jamespaultg](https://github.com/jamespaultg) for adding this dataset.
diff --git a/datasets/tanzil/README.md b/datasets/tanzil/README.md
index 29c84e92b67..d01ed32af29 100644
--- a/datasets/tanzil/README.md
+++ b/datasets/tanzil/README.md
@@ -59,6 +59,7 @@ task_categories:
task_ids:
- machine-translation
paperswithcode_id: null
+pretty_name: tanzil
---
# Dataset Card Creation Guide
@@ -192,4 +193,4 @@ Here are some examples of questions and facts:
### Contributions
-Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
\ No newline at end of file
+Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
diff --git a/datasets/tapaco/README.md b/datasets/tapaco/README.md
index 8a9185b5450..ef2ea6ab124 100644
--- a/datasets/tapaco/README.md
+++ b/datasets/tapaco/README.md
@@ -387,6 +387,7 @@ task_ids:
- machine-translation
- semantic-similarity-classification
paperswithcode_id: tapaco
+pretty_name: TaPaCo Corpus
---
# Dataset Card for TaPaCo Corpus
@@ -560,4 +561,4 @@ Creative Commons Attribution 2.0 Generic
### Contributions
-Thanks to [@pacman100](https://github.com/pacman100) for adding this dataset.
\ No newline at end of file
+Thanks to [@pacman100](https://github.com/pacman100) for adding this dataset.
diff --git a/datasets/taskmaster1/README.md b/datasets/taskmaster1/README.md
index eff77244e8c..271d79e8016 100644
--- a/datasets/taskmaster1/README.md
+++ b/datasets/taskmaster1/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- dialogue-modeling
paperswithcode_id: taskmaster-1
+pretty_name: Taskmaster-1
---
# Dataset Card Creation Guide
@@ -263,4 +264,4 @@ year = {2019}
```
### Contributions
-Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
\ No newline at end of file
+Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
diff --git a/datasets/taskmaster2/README.md b/datasets/taskmaster2/README.md
index 90390fcdaea..8945f0e38d4 100644
--- a/datasets/taskmaster2/README.md
+++ b/datasets/taskmaster2/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- dialogue-modeling
paperswithcode_id: taskmaster-2
+pretty_name: Taskmaster-2
---
# Dataset Card Creation Guide
@@ -254,4 +255,4 @@ year = {2019}
```
### Contributions
-Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
\ No newline at end of file
+Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
diff --git a/datasets/taskmaster3/README.md b/datasets/taskmaster3/README.md
index ab472660a61..2255c721374 100644
--- a/datasets/taskmaster3/README.md
+++ b/datasets/taskmaster3/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- dialogue-modeling
paperswithcode_id: null
+pretty_name: Taskmaster-3
---
# Dataset Card Creation Guide
@@ -260,4 +261,4 @@ year = {2019}
```
### Contributions
-Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
\ No newline at end of file
+Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
diff --git a/datasets/ted_iwlst2013/README.md b/datasets/ted_iwlst2013/README.md
index 83b8865de8a..9cee316b1bb 100644
--- a/datasets/ted_iwlst2013/README.md
+++ b/datasets/ted_iwlst2013/README.md
@@ -59,6 +59,7 @@ task_categories:
task_ids:
- machine-translation
paperswithcode_id: null
+pretty_name: TedIwlst2013
---
# Dataset Card Creation Guide
@@ -185,4 +186,4 @@ paperswithcode_id: null
### Contributions
-Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
\ No newline at end of file
+Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
diff --git a/datasets/ted_talks_iwslt/README.md b/datasets/ted_talks_iwslt/README.md
index 4a09966a906..41354422610 100644
--- a/datasets/ted_talks_iwslt/README.md
+++ b/datasets/ted_talks_iwslt/README.md
@@ -156,6 +156,7 @@ task_categories:
task_ids:
- machine-translation
paperswithcode_id: null
+pretty_name: Web Inventory of Transcribed & Translated (WIT) TED Talks
---
# Dataset Card for Web Inventory of Transcribed & Translated(WIT) Ted Talks
@@ -374,4 +375,4 @@ cc-by-nc-4.0
### Contributions
-Thanks to [@skyprince999](https://github.com/skyprince999) for adding this dataset.
\ No newline at end of file
+Thanks to [@skyprince999](https://github.com/skyprince999) for adding this dataset.
diff --git a/datasets/telugu_books/README.md b/datasets/telugu_books/README.md
index 94e9a785487..9ec044df8a9 100644
--- a/datasets/telugu_books/README.md
+++ b/datasets/telugu_books/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- language-modeling
paperswithcode_id: null
+pretty_name: TeluguBooks
---
# Dataset Card for [telugu_books]
@@ -142,4 +143,4 @@ TE - Telugu
### Contributions
-Thanks to [@vinaykudari](https://github.com/vinaykudari) for adding this dataset.
\ No newline at end of file
+Thanks to [@vinaykudari](https://github.com/vinaykudari) for adding this dataset.
diff --git a/datasets/telugu_news/README.md b/datasets/telugu_news/README.md
index ef756547b75..855b017cd9d 100644
--- a/datasets/telugu_news/README.md
+++ b/datasets/telugu_news/README.md
@@ -21,6 +21,7 @@ task_ids:
- multi-class-classification
- topic-classification
paperswithcode_id: null
+pretty_name: TeluguNews
---
# Dataset Card for [Dataset Name]
@@ -152,4 +153,4 @@ Sudalai Rajkumar, Anusha Motamarri
### Contributions
-Thanks to [@oostopitre](https://github.com/oostopitre) for adding this dataset.
\ No newline at end of file
+Thanks to [@oostopitre](https://github.com/oostopitre) for adding this dataset.
diff --git a/datasets/tep_en_fa_para/README.md b/datasets/tep_en_fa_para/README.md
index 498eff8261b..62dcba438a3 100644
--- a/datasets/tep_en_fa_para/README.md
+++ b/datasets/tep_en_fa_para/README.md
@@ -19,6 +19,7 @@ task_categories:
task_ids:
- machine-translation
paperswithcode_id: null
+pretty_name: TepEnFaPara
---
# Dataset Card for [tep_en_fa_para]
@@ -140,4 +141,4 @@ M. T. Pilevar, H. Faili, and A. H. Pilevar, “TEP: Tehran English-Persian Paral
### Contributions
-Thanks to [@spatil6](https://github.com/spatil6) for adding this dataset.
\ No newline at end of file
+Thanks to [@spatil6](https://github.com/spatil6) for adding this dataset.
diff --git a/datasets/thai_toxicity_tweet/README.md b/datasets/thai_toxicity_tweet/README.md
index 7a610a16bc6..2cef963b578 100644
--- a/datasets/thai_toxicity_tweet/README.md
+++ b/datasets/thai_toxicity_tweet/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- sentiment-classification
paperswithcode_id: null
+pretty_name: ThaiToxicityTweet
---
# Dataset Card for `thai_toxicity_tweet`
@@ -176,4 +177,4 @@ Please cite the following if you make use of the dataset:
### Contributions
-Thanks to [@cstorm125](https://github.com/cstorm125) for adding this dataset.
\ No newline at end of file
+Thanks to [@cstorm125](https://github.com/cstorm125) for adding this dataset.
diff --git a/datasets/thainer/README.md b/datasets/thainer/README.md
index e6f83e4ba12..a0cb4146b3c 100644
--- a/datasets/thainer/README.md
+++ b/datasets/thainer/README.md
@@ -21,6 +21,7 @@ task_ids:
- named-entity-recognition
- part-of-speech-tagging
paperswithcode_id: null
+pretty_name: thainer
---
# Dataset Card for `thainer`
@@ -164,4 +165,4 @@ Work extended from:
### Contributions
-Thanks to [@cstorm125](https://github.com/cstorm125) for adding this dataset.
\ No newline at end of file
+Thanks to [@cstorm125](https://github.com/cstorm125) for adding this dataset.
diff --git a/datasets/thaiqa_squad/README.md b/datasets/thaiqa_squad/README.md
index fd2a3699dc4..9bc7957c15e 100644
--- a/datasets/thaiqa_squad/README.md
+++ b/datasets/thaiqa_squad/README.md
@@ -19,6 +19,7 @@ task_ids:
- extractive-qa
- open-domain-qa
paperswithcode_id: null
+pretty_name: thaiqa-squad
---
# Dataset Card for `thaiqa-squad`
@@ -163,4 +164,4 @@ CC-BY-NC-SA 3.0
### Contributions
-Thanks to [@cstorm125](https://github.com/cstorm125) for adding this dataset.
\ No newline at end of file
+Thanks to [@cstorm125](https://github.com/cstorm125) for adding this dataset.
diff --git a/datasets/thaisum/README.md b/datasets/thaisum/README.md
index 98da3fff070..2a5ed72444c 100644
--- a/datasets/thaisum/README.md
+++ b/datasets/thaisum/README.md
@@ -20,9 +20,10 @@ task_ids:
- language-modeling
- summarization
paperswithcode_id: null
+pretty_name: ThaiSum
---
-# Dataset Card for `thaisum`
+# Dataset Card for ThaiSum
## Table of Contents
- [Dataset Description](#dataset-description)
@@ -93,7 +94,7 @@ train/valid/test: 358868 / 11000 / 11000
### Curation Rationale
-Sequence-to-sequence (Seq2Seq) models have shown great achievement in text summarization. However, Seq2Seq model often requires large-scale training data to achieve effective results. Although many impressive advancements in text summarization field have been made, most of summarization studies focus on resource-rich languages. The progress of Thai text summarization is still far behind. The dearth of large-scale dataset keeps Thai text summarization in its infancy. As far as our knowledge goes, there is not a large-scale dataset for Thai text summarization available anywhere. Thus, we present ThaiSum, a large-scale corpus for Thai text summarization obtained from several online news websites namely Thairath, ThaiPBS, Prachathai, and The Standard.
+Sequence-to-sequence (Seq2Seq) models have shown great achievements in text summarization. However, Seq2Seq models often require large-scale training data to achieve effective results. Although many impressive advancements have been made in the text summarization field, most summarization studies focus on resource-rich languages, and progress on Thai text summarization still lags far behind. The dearth of large-scale datasets keeps Thai text summarization in its infancy. As far as we know, no large-scale dataset for Thai text summarization is available anywhere. Thus, we present ThaiSum, a large-scale corpus for Thai text summarization obtained from several online news websites, namely Thairath, ThaiPBS, Prachathai, and The Standard.
### Source Data
@@ -102,26 +103,26 @@ Sequence-to-sequence (Seq2Seq) models have shown great achievement in text summa
We used a python library named Scrapy to crawl articles from several news websites namely Thairath, Prachatai, ThaiPBS and, The Standard. We first collected news URLs provided in their sitemaps. During web-crawling, we used HTML markup and metadata available in HTML pages to identify article text, summary, headline, tags and label. Collected articles were published online from 2014 to August 2020.
We further performed data cleansing process to minimize noisy data. We filtered out articles that their article text or summary is missing. Articles that contains article text with less than 150 words or summary with less than 15 words were removed. We also discarded articles that contain at least one of these following tags: ‘ดวง’ (horoscope), ‘นิยาย’ (novel), ‘อินสตราแกรมดารา’ (celebrity Instagram), ‘คลิปสุดฮา’(funny video) and ‘สรุปข่าว’ (highlight news). Some summaries were completely irrelevant to their original article texts. To eliminate those irrelevant summaries, we calculated abstractedness score between summary and its article text. Abstractedness score is written formally as:

-Where 𝑆 denotes set of article tokens. 𝐴 denotes set of summary tokens. 𝑟 denotes a total number of summary tokens. We omitted articles that have abstractedness score at 1-grams higher than 60%.
+Where 𝑆 denotes the set of article tokens, 𝐴 denotes the set of summary tokens, and 𝑟 denotes the total number of summary tokens. We omitted articles whose abstractedness score at 1-grams is higher than 60%.
-It is important to point out that we used [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp), version 2.2.4, tokenizing engine = newmm, to process Thai texts in this study. It is challenging to tokenize running Thai text into words or sentences because there are not clear word/sentence delimiters in Thai language. Therefore, using different tokenization engines may result in different segment of words/sentences.
+It is important to point out that we used [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp), version 2.2.4, with the newmm tokenizing engine, to process Thai texts in this study. Tokenizing running Thai text into words or sentences is challenging because the Thai language has no clear word/sentence delimiters. Therefore, using different tokenization engines may result in different word/sentence segmentations.
After data-cleansing process, ThaiSum dataset contains over 358,000 articles. The size of this dataset is comparable to a well-known English document summarization dataset, CNN/Dily mail dataset. Moreover, we analyse the characteristics of this dataset by measuring the abstractedness level, compassion rate, and content diversity. For more details, see [thaisum_exploration.ipynb](https://github.com/nakhunchumpolsathien/ThaiSum/blob/master/thaisum_exploration.ipynb).
#### Dataset Statistics
-ThaiSum dataset consists of 358,868 articles. Average lengths of article texts and summaries are approximately 530 and 37 words respectively. As mentioned earlier, we also collected headlines, tags and labels provided in each article. Tags are similar to keywords of the article. An article normally contains several tags but a few labels. Tags can be name of places or persons that article is about while labels indicate news category (politic, entertainment, etc.). Ultimatly, ThaiSum contains 538,059 unique tags and 59 unique labels. Note that not every article contains tags or labels.
+The ThaiSum dataset consists of 358,868 articles. The average lengths of article texts and summaries are approximately 530 and 37 words, respectively. As mentioned earlier, we also collected the headlines, tags, and labels provided in each article. Tags are similar to keywords of the article. An article normally contains several tags but only a few labels. Tags can be names of places or persons that the article is about, while labels indicate the news category (politics, entertainment, etc.). Ultimately, ThaiSum contains 538,059 unique tags and 59 unique labels. Note that not every article contains tags or labels.
|Dataset Size| 358,868 | articles |
|:---|---:|---:|
-|Avg. Article Length| 529.5 | words|
-|Avg. Summary Length | 37.3 | words|
-|Avg. Headline Length | 12.6 | words|
-|Unique Vocabulary Size | 407,355 | words|
-|Occurring > 10 times | 81,761 | words|
-|Unique News Tag Size | 538,059 | tags|
-|Unique News Label Size | 59 | labels|
+|Avg. Article Length| 529.5 | words|
+|Avg. Summary Length | 37.3 | words|
+|Avg. Headline Length | 12.6 | words|
+|Unique Vocabulary Size | 407,355 | words|
+|Occurring > 10 times | 81,761 | words|
+|Unique News Tag Size | 538,059 | tags|
+|Unique News Label Size | 59 | labels|
#### Who are the source language producers?
@@ -131,7 +132,7 @@ Journalists of respective articles
#### Annotation process
-`summary`, `type` and `tags` are created by journalists who wrote the articles and/or their publishers.
+`summary`, `type` and `tags` are created by journalists who wrote the articles and/or their publishers.
#### Who are the annotators?
@@ -174,13 +175,13 @@ MIT License
### Citation Information
```
-@mastersthesis{chumpolsathien_2020,
+@mastersthesis{chumpolsathien_2020,
title={Using Knowledge Distillation from Keyword Extraction to Improve the Informativeness of Neural Cross-lingual Summarization},
- author={Chumpolsathien, Nakhun},
- year={2020},
+ author={Chumpolsathien, Nakhun},
+ year={2020},
school={Beijing Institute of Technology}
```
### Contributions
-Thanks to [@cstorm125](https://github.com/cstorm125) for adding this dataset.
\ No newline at end of file
+Thanks to [@cstorm125](https://github.com/cstorm125) for adding this dataset.
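The abstractedness formula in the ThaiSum card above is rendered as an image upstream, so only its variable definitions survive in the diff. Under those definitions (𝑆 = article tokens, 𝐴 = summary tokens, 𝑟 = number of summary tokens), a plausible reading is the share of summary tokens that never appear in the article, |𝐴 − 𝑆| / 𝑟; the sketch below implements that reading with PyThaiNLP's newmm tokenizer, which the card names. Both the reconstructed formula and the helper functions are assumptions, not the authors' reference code.

```python
# Sketch of the 1-gram abstractedness filter described in the ThaiSum card.
# The formula |A - S| / r is reconstructed from the card's variable
# definitions and is an assumption, not the authors' reference code.
from pythainlp.tokenize import word_tokenize  # PyThaiNLP, as used by ThaiSum

def abstractedness(article_text, summary_text):
    s = set(word_tokenize(article_text, engine="newmm"))  # article tokens S
    a = word_tokenize(summary_text, engine="newmm")       # summary tokens A
    r = len(a)                                            # summary length r
    novel = sum(1 for token in a if token not in s)       # tokens of A not in S
    return novel / r if r else 0.0

# The card states articles with 1-gram abstractedness above 60% were omitted.
def keep_article(article_text, summary_text):
    return abstractedness(article_text, summary_text) <= 0.60
```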
diff --git a/datasets/tilde_model/README.md b/datasets/tilde_model/README.md
index 28fd961ab08..489bbb21785 100644
--- a/datasets/tilde_model/README.md
+++ b/datasets/tilde_model/README.md
@@ -47,6 +47,7 @@ task_categories:
task_ids:
- machine-translation
paperswithcode_id: tilde-model-corpus
+pretty_name: Tilde Multilingual Open Data for European Languages
---
# Dataset Card Creation Guide
@@ -178,4 +179,4 @@ Here are some examples of questions and facts:
### Contributions
-Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
\ No newline at end of file
+Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
diff --git a/datasets/tiny_shakespeare/README.md b/datasets/tiny_shakespeare/README.md
index 213eec8f497..8be3b50989a 100644
--- a/datasets/tiny_shakespeare/README.md
+++ b/datasets/tiny_shakespeare/README.md
@@ -1,5 +1,6 @@
---
paperswithcode_id: null
+pretty_name: TinyShakespeare
---
# Dataset Card for "tiny_shakespeare"
@@ -165,4 +166,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
\ No newline at end of file
+Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
diff --git a/datasets/tmu_gfm_dataset/README.md b/datasets/tmu_gfm_dataset/README.md
index ad953bb88ee..6d89c5e21d0 100644
--- a/datasets/tmu_gfm_dataset/README.md
+++ b/datasets/tmu_gfm_dataset/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- conditional-text-generation-other-grammatical-error-correction
paperswithcode_id: null
+pretty_name: TMU-GFM-Dataset
---
# Dataset Card for TMU-GFM-Dataset
@@ -186,4 +187,4 @@ Five native English annotators reqruited by using Amazon Mechaincal turk
### Contributions
-Thanks to [@forest1988](https://github.com/forest1988) for adding this dataset.
\ No newline at end of file
+Thanks to [@forest1988](https://github.com/forest1988) for adding this dataset.
diff --git a/datasets/trec/README.md b/datasets/trec/README.md
index 9ee2e11e090..8a750014ed9 100644
--- a/datasets/trec/README.md
+++ b/datasets/trec/README.md
@@ -2,6 +2,7 @@
languages:
- en
paperswithcode_id: trecqa
+pretty_name: Text Retrieval Conference Question Answering
---
# Dataset Card for "trec"
@@ -172,4 +173,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@lhoestq](https://github.com/lhoestq), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
\ No newline at end of file
+Thanks to [@lhoestq](https://github.com/lhoestq), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
diff --git a/datasets/tunizi/README.md b/datasets/tunizi/README.md
index 4539c507f8c..ba6f128b067 100644
--- a/datasets/tunizi/README.md
+++ b/datasets/tunizi/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- sentiment-classification
paperswithcode_id: tunizi
+pretty_name: TUNIZI
---
# Dataset Card Creation Guide
@@ -142,4 +143,4 @@ This dataset uses Tunisian Arabic written with Latin script (BCP-47: aeb-Latn)
### Contributions
-Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
\ No newline at end of file
+Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
diff --git a/datasets/turk/README.md b/datasets/turk/README.md
index ed324a83990..7673f9ba9c3 100644
--- a/datasets/turk/README.md
+++ b/datasets/turk/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- text-simplification
paperswithcode_id: null
+pretty_name: TURK
---
# Dataset Card for TURK
@@ -165,4 +166,4 @@ TURK was developed by researchers at the University of Pennsylvania. The work wa
```
### Contributions
-Thanks to [@mounicam](https://github.com/mounicam) for adding this dataset.
\ No newline at end of file
+Thanks to [@mounicam](https://github.com/mounicam) for adding this dataset.
diff --git a/datasets/turkish_movie_sentiment/README.md b/datasets/turkish_movie_sentiment/README.md
index 92cbd8f8661..6f103ed2c1c 100644
--- a/datasets/turkish_movie_sentiment/README.md
+++ b/datasets/turkish_movie_sentiment/README.md
@@ -19,6 +19,7 @@ task_ids:
- sentiment-classification
- sentiment-scoring
paperswithcode_id: null
+pretty_name: TurkishMovieSentiment
---
# Dataset Card for TurkishMovieSentiment: This dataset contains turkish movie reviews.
@@ -150,4 +151,4 @@ The data is under the [CC0: Public Domain](https://creativecommons.org/publicdom
### Contributions
-Thanks to [@yavuzKomecoglu](https://github.com/yavuzKomecoglu) for adding this dataset.
\ No newline at end of file
+Thanks to [@yavuzKomecoglu](https://github.com/yavuzKomecoglu) for adding this dataset.
diff --git a/datasets/turkish_ner/README.md b/datasets/turkish_ner/README.md
index b2e7ea54988..ca9b21a0ede 100644
--- a/datasets/turkish_ner/README.md
+++ b/datasets/turkish_ner/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- named-entity-recognition
paperswithcode_id: null
+pretty_name: Turkish NER
---
@@ -158,4 +159,4 @@ Creative Commons Attribution 4.0 International
### Contributions
-Thanks to [@merveenoyan](https://github.com/merveenoyan) for adding this dataset.
\ No newline at end of file
+Thanks to [@merveenoyan](https://github.com/merveenoyan) for adding this dataset.
diff --git a/datasets/turkish_shrinked_ner/README.md b/datasets/turkish_shrinked_ner/README.md
index 90e833c8d4a..bd210860fef 100644
--- a/datasets/turkish_shrinked_ner/README.md
+++ b/datasets/turkish_shrinked_ner/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- named-entity-recognition
paperswithcode_id: null
+pretty_name: Turkish Shrinked NER
---
# Dataset Card for turkish_shrinked_ner
@@ -144,4 +145,4 @@ Creative Commons Attribution 4.0 International
### Contributions
-Thanks to [@bhctsntrk](https://github.com/bhctsntrk) for adding this dataset.
\ No newline at end of file
+Thanks to [@bhctsntrk](https://github.com/bhctsntrk) for adding this dataset.
diff --git a/datasets/tweet_eval/README.md b/datasets/tweet_eval/README.md
index 32fdd82a89f..b36effbc0b7 100644
--- a/datasets/tweet_eval/README.md
+++ b/datasets/tweet_eval/README.md
@@ -87,6 +87,7 @@ task_ids:
- intent-classification
- multi-class-classification
paperswithcode_id: tweeteval
+pretty_name: TweetEval
---
# Dataset Card for tweet_eval
diff --git a/datasets/tweet_qa/README.md b/datasets/tweet_qa/README.md
index 7e1f379ebb4..3097c5eb24c 100644
--- a/datasets/tweet_qa/README.md
+++ b/datasets/tweet_qa/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- open-domain-qa
paperswithcode_id: tweetqa
+pretty_name: TweetQA
---
# Dataset Card for TweetQA
@@ -158,4 +159,4 @@ We first describe the three-step data collection process of TWEETQA: tweet crawl
### Contributions
-Thanks to [@anaerobeth](https://github.com/anaerobeth) for adding this dataset.
\ No newline at end of file
+Thanks to [@anaerobeth](https://github.com/anaerobeth) for adding this dataset.
diff --git a/datasets/twi_text_c3/README.md b/datasets/twi_text_c3/README.md
index 8eb7b38ef80..22cfd763f2b 100644
--- a/datasets/twi_text_c3/README.md
+++ b/datasets/twi_text_c3/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- language-modeling
paperswithcode_id: null
+pretty_name: Twi Text C3
---
# Dataset Card for Twi Text C3
@@ -166,4 +167,4 @@ The data is under the [Creative Commons Attribution-NonCommercial 4.0 ](https://
```
### Contributions
-Thanks to [@dadelani](https://github.com/dadelani) for adding this dataset.
\ No newline at end of file
+Thanks to [@dadelani](https://github.com/dadelani) for adding this dataset.
diff --git a/datasets/twi_wordsim353/README.md b/datasets/twi_wordsim353/README.md
index 87236710681..c34b07b6c41 100644
--- a/datasets/twi_wordsim353/README.md
+++ b/datasets/twi_wordsim353/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- semantic-similarity-scoring
paperswithcode_id: null
+pretty_name: Twi Wordsim-353
---
-# Dataset Card for Yorùbá Wordsim-353
+# Dataset Card for Twi Wordsim-353
@@ -160,4 +161,4 @@ Only the test data is available
### Contributions
-Thanks to [@dadelani](https://github.com/dadelani) for adding this dataset.
\ No newline at end of file
+Thanks to [@dadelani](https://github.com/dadelani) for adding this dataset.
diff --git a/datasets/ubuntu_dialogs_corpus/README.md b/datasets/ubuntu_dialogs_corpus/README.md
index 7052da8b49a..8e683083fbb 100644
--- a/datasets/ubuntu_dialogs_corpus/README.md
+++ b/datasets/ubuntu_dialogs_corpus/README.md
@@ -2,6 +2,7 @@
languages:
- en
paperswithcode_id: ubuntu-dialogue-corpus
+pretty_name: Ubuntu Dialogue Corpus
---
# Dataset Card for "ubuntu_dialogs_corpus"
@@ -170,4 +171,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun) for adding this dataset.
\ No newline at end of file
+Thanks to [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun) for adding this dataset.
diff --git a/datasets/udhr/README.md b/datasets/udhr/README.md
index 9727370ec8f..b9c31c4acf7 100644
--- a/datasets/udhr/README.md
+++ b/datasets/udhr/README.md
@@ -421,6 +421,7 @@ task_categories:
task_ids:
- machine-translation
paperswithcode_id: null
+pretty_name: The Universal Declaration of Human Rights (UDHR)
---
# Dataset Card for The Universal Declaration of Human Rights (UDHR)
@@ -565,4 +566,4 @@ The txt/xml data files used here were compiled by The Unicode Consortium, which
### Contributions
-Thanks to [@joeddav](https://github.com/joeddav) for adding this dataset.
\ No newline at end of file
+Thanks to [@joeddav](https://github.com/joeddav) for adding this dataset.
diff --git a/datasets/um005/README.md b/datasets/um005/README.md
index 32a484ce022..4b60babf3ca 100644
--- a/datasets/um005/README.md
+++ b/datasets/um005/README.md
@@ -19,6 +19,7 @@ task_categories:
task_ids:
- machine-translation
paperswithcode_id: umc005-english-urdu
+pretty_name: UMC005 English-Urdu
---
# Dataset Card Creation Guide
@@ -143,4 +144,4 @@ paperswithcode_id: umc005-english-urdu
[More Information Needed]
### Contributions
-Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
\ No newline at end of file
+Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
diff --git a/datasets/un_ga/README.md b/datasets/un_ga/README.md
index eda98dd3d23..accfbecb42b 100644
--- a/datasets/un_ga/README.md
+++ b/datasets/un_ga/README.md
@@ -62,6 +62,7 @@ task_categories:
task_ids:
- machine-translation
paperswithcode_id: null
+pretty_name: United Nations General Assembly Resolutions
---
-# Dataset Card for [Dataset Name]
+# Dataset Card for United Nations General Assembly Resolutions
@@ -190,4 +191,4 @@ publisher = "International Association of Machine Translation",
}
### Contributions
-Thanks to [@param087](https://github.com/param087) for adding this dataset.
\ No newline at end of file
+Thanks to [@param087](https://github.com/param087) for adding this dataset.
diff --git a/datasets/un_multi/README.md b/datasets/un_multi/README.md
index 2970674242c..11b02174779 100644
--- a/datasets/un_multi/README.md
+++ b/datasets/un_multi/README.md
@@ -80,6 +80,7 @@ task_categories:
task_ids:
- machine-translation
paperswithcode_id: multiun
+pretty_name: Multilingual Corpus from United Nations Documents
---
-# Dataset Card for [Dataset Name]
+# Dataset Card for Multilingual Corpus from United Nations Documents
@@ -231,4 +232,4 @@ The underlying task is machine translation.
```
### Contributions
-Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
\ No newline at end of file
+Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
diff --git a/datasets/un_pc/README.md b/datasets/un_pc/README.md
index 719c8fb6d23..47140f7015a 100644
--- a/datasets/un_pc/README.md
+++ b/datasets/un_pc/README.md
@@ -62,6 +62,7 @@ task_categories:
task_ids:
- machine-translation
paperswithcode_id: united-nations-parallel-corpus
+pretty_name: United Nations Parallel Corpus
---
-# Dataset Card for [Dataset Name]
+# Dataset Card for United Nations Parallel Corpus
@@ -200,4 +201,4 @@ The underlying task is machine translation.
```
### Contributions
-Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
\ No newline at end of file
+Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
diff --git a/datasets/universal_dependencies/README.md b/datasets/universal_dependencies/README.md
index 41ad8e83cfc..5acd346d1a3 100644
--- a/datasets/universal_dependencies/README.md
+++ b/datasets/universal_dependencies/README.md
@@ -384,6 +384,7 @@ task_ids:
- constituency-parsing
- dependency-parsing
paperswithcode_id: universal-dependencies
+pretty_name: Universal Dependencies Treebank
---
# Dataset Card for Universal Dependencies Treebank
@@ -505,4 +506,4 @@ paperswithcode_id: universal-dependencies
[More Information Needed]
### Contributions
-Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@jplu](https://github.com/jplu) for adding this dataset.
\ No newline at end of file
+Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@jplu](https://github.com/jplu) for adding this dataset.
diff --git a/datasets/universal_morphologies/README.md b/datasets/universal_morphologies/README.md
index 47594b0691d..141a5089091 100644
--- a/datasets/universal_morphologies/README.md
+++ b/datasets/universal_morphologies/README.md
@@ -459,6 +459,7 @@ task_ids:
- multi-label-classification
- structure-prediction-other-morphology
paperswithcode_id: null
+pretty_name: UniversalMorphologies
---
-# Dataset Card for [Dataset Name]
+# Dataset Card for UniversalMorphologies
@@ -603,4 +604,4 @@ Each instance in the dataset has the following fields:
### Contributions
-Thanks to [@yjernite](https://github.com/yjernite) for adding this dataset.
\ No newline at end of file
+Thanks to [@yjernite](https://github.com/yjernite) for adding this dataset.
diff --git a/datasets/urdu_fake_news/README.md b/datasets/urdu_fake_news/README.md
index 9144edd44d4..e00c5657c92 100644
--- a/datasets/urdu_fake_news/README.md
+++ b/datasets/urdu_fake_news/README.md
@@ -19,6 +19,7 @@ task_ids:
- fact-checking
- intent-classification
paperswithcode_id: null
+pretty_name: Bend the Truth (Urdu Fake News)
---
# Dataset Card for Bend the Truth (Urdu Fake News)
@@ -143,4 +144,4 @@ paperswithcode_id: null
### Contributions
-Thanks to [@chaitnayabasava](https://github.com/chaitnayabasava) for adding this dataset.
\ No newline at end of file
+Thanks to [@chaitnayabasava](https://github.com/chaitnayabasava) for adding this dataset.
diff --git a/datasets/urdu_sentiment_corpus/README.md b/datasets/urdu_sentiment_corpus/README.md
index ebe43bd26fc..d953f50d597 100644
--- a/datasets/urdu_sentiment_corpus/README.md
+++ b/datasets/urdu_sentiment_corpus/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- sentiment-classification
paperswithcode_id: urdu-sentiment-corpus
+pretty_name: Urdu Sentiment Corpus (USC)
---
# Dataset Card for Urdu Sentiment Corpus (USC)
@@ -141,4 +142,4 @@ paperswithcode_id: urdu-sentiment-corpus
### Contributions
-Thanks to [@chaitnayabasava](https://github.com/chaitnayabasava) for adding this dataset.
\ No newline at end of file
+Thanks to [@chaitnayabasava](https://github.com/chaitnayabasava) for adding this dataset.
diff --git a/datasets/web_of_science/README.md b/datasets/web_of_science/README.md
index b4925fa7756..b2b69192dbb 100644
--- a/datasets/web_of_science/README.md
+++ b/datasets/web_of_science/README.md
@@ -2,6 +2,7 @@
languages:
- en
paperswithcode_id: web-of-science-dataset
+pretty_name: Web of Science Dataset
---
# Dataset Card for "web_of_science"
@@ -215,4 +216,4 @@ organization={IEEE}
### Contributions
-Thanks to [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq), [@mariamabarham](https://github.com/mariamabarham), [@lewtun](https://github.com/lewtun) for adding this dataset.
\ No newline at end of file
+Thanks to [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq), [@mariamabarham](https://github.com/mariamabarham), [@lewtun](https://github.com/lewtun) for adding this dataset.
diff --git a/datasets/web_questions/README.md b/datasets/web_questions/README.md
index 4caad96ee4f..59eb757c603 100644
--- a/datasets/web_questions/README.md
+++ b/datasets/web_questions/README.md
@@ -2,6 +2,7 @@
languages:
- en
paperswithcode_id: webquestions
+pretty_name: WebQuestions
---
# Dataset Card for "web_questions"
@@ -169,4 +170,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@thomwolf](https://github.com/thomwolf), [@mariamabarham](https://github.com/mariamabarham), [@lewtun](https://github.com/lewtun) for adding this dataset.
\ No newline at end of file
+Thanks to [@thomwolf](https://github.com/thomwolf), [@mariamabarham](https://github.com/mariamabarham), [@lewtun](https://github.com/lewtun) for adding this dataset.
diff --git a/datasets/weibo_ner/README.md b/datasets/weibo_ner/README.md
index 2300d2638eb..9e952fc9a7e 100644
--- a/datasets/weibo_ner/README.md
+++ b/datasets/weibo_ner/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- named-entity-recognition
paperswithcode_id: weibo-ner
+pretty_name: Weibo NER
---
# Dataset Card Creation Guide
@@ -142,4 +143,4 @@ paperswithcode_id: weibo-ner
[More Information Needed]
### Contributions
-Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
\ No newline at end of file
+Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
diff --git a/datasets/wiki40b/README.md b/datasets/wiki40b/README.md
index 5ec7cc23a2c..87593afd278 100644
--- a/datasets/wiki40b/README.md
+++ b/datasets/wiki40b/README.md
@@ -2,6 +2,7 @@
languages:
- en
paperswithcode_id: wiki-40b
+pretty_name: Wiki-40B
---
# Dataset Card for "wiki40b"
@@ -152,4 +153,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@jplu](https://github.com/jplu), [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf), [@albertvillanova](https://github.com/albertvillanova), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
\ No newline at end of file
+Thanks to [@jplu](https://github.com/jplu), [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf), [@albertvillanova](https://github.com/albertvillanova), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
diff --git a/datasets/wiki_asp/README.md b/datasets/wiki_asp/README.md
index f57dfff46e0..841065de7f3 100644
--- a/datasets/wiki_asp/README.md
+++ b/datasets/wiki_asp/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- summarization
paperswithcode_id: wikiasp
+pretty_name: WikiAsp
---
# Dataset Card Creation Guide
@@ -170,4 +171,4 @@ An example from the "plant" configuration:
### Contributions
-Thanks to [@katnoria](https://github.com/katnoria) for adding this dataset.
\ No newline at end of file
+Thanks to [@katnoria](https://github.com/katnoria) for adding this dataset.
diff --git a/datasets/wiki_atomic_edits/README.md b/datasets/wiki_atomic_edits/README.md
index 1e8f03fbdcb..c29443bff2e 100644
--- a/datasets/wiki_atomic_edits/README.md
+++ b/datasets/wiki_atomic_edits/README.md
@@ -81,6 +81,7 @@ task_ids:
- explanation-generation
- summarization
paperswithcode_id: wikiatomicedits
+pretty_name: WikiAtomicEdits
---
# Dataset Card Creation Guide
@@ -208,4 +209,4 @@ Here are some examples of questions and facts:
### Contributions
-Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
\ No newline at end of file
+Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
diff --git a/datasets/wiki_auto/README.md b/datasets/wiki_auto/README.md
index a073d79039e..74fd655f794 100644
--- a/datasets/wiki_auto/README.md
+++ b/datasets/wiki_auto/README.md
@@ -27,6 +27,7 @@ task_categories:
task_ids:
- text-simplification
paperswithcode_id: null
+pretty_name: WikiAuto
---
# Dataset Card for WikiAuto
diff --git a/datasets/wiki_bio/README.md b/datasets/wiki_bio/README.md
index a7ad5870729..c3dc2edaba3 100644
--- a/datasets/wiki_bio/README.md
+++ b/datasets/wiki_bio/README.md
@@ -19,6 +19,7 @@ task_ids:
- explanation-generation
- table-to-text
paperswithcode_id: wikibio
+pretty_name: WikiBio
---
-# Dataset Card for [Dataset Name]
+# Dataset Card for WikiBio
@@ -202,4 +203,4 @@ For referring to the original paper in BibTeX format:
### Contributions
-Thanks to [@alejandrocros](https://github.com/alejandrocros) for adding this dataset.
\ No newline at end of file
+Thanks to [@alejandrocros](https://github.com/alejandrocros) for adding this dataset.
diff --git a/datasets/wiki_hop/README.md b/datasets/wiki_hop/README.md
index 2fc3a582d3f..302f7b9d0c7 100644
--- a/datasets/wiki_hop/README.md
+++ b/datasets/wiki_hop/README.md
@@ -19,6 +19,7 @@ task_ids:
- extractive-qa
- question-answering-other-multi-hop
paperswithcode_id: wikihop
+pretty_name: WikiHop
---
# Dataset Card Creation Guide
@@ -143,4 +144,4 @@ paperswithcode_id: wikihop
[More Information Needed]
### Contributions
-Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
\ No newline at end of file
+Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
diff --git a/datasets/wiki_lingua/README.md b/datasets/wiki_lingua/README.md
index a2b9dd763c2..98a4622a694 100644
--- a/datasets/wiki_lingua/README.md
+++ b/datasets/wiki_lingua/README.md
@@ -88,6 +88,7 @@ task_categories:
task_ids:
- summarization
paperswithcode_id: wikilingua
+pretty_name: WikiLingua
---
# Dataset Card for "wiki_lingua"
diff --git a/datasets/wiki_qa/README.md b/datasets/wiki_qa/README.md
index 8d23df26fe0..d0f3fabf0af 100644
--- a/datasets/wiki_qa/README.md
+++ b/datasets/wiki_qa/README.md
@@ -2,6 +2,7 @@
languages:
- en
paperswithcode_id: wikiqa
+pretty_name: Wikipedia Open-Domain Question Answering
---
# Dataset Card for "wiki_qa"
@@ -163,4 +164,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham), [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
\ No newline at end of file
+Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@mariamabarham](https://github.com/mariamabarham), [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
diff --git a/datasets/wiki_qa_ar/README.md b/datasets/wiki_qa_ar/README.md
index b0d55301b6f..cb8b45dbea5 100644
--- a/datasets/wiki_qa_ar/README.md
+++ b/datasets/wiki_qa_ar/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- open-domain-qa
paperswithcode_id: wikiqaar
+pretty_name: English-Arabic Wikipedia Question Answering
---
# Dataset Card for WikiQAar
@@ -150,4 +151,4 @@ The dataset does not contain any additional annotations.
### Contributions
-Thanks to [@zaidalyafeai](https://github.com/zaidalyafeai) for adding this dataset.
\ No newline at end of file
+Thanks to [@zaidalyafeai](https://github.com/zaidalyafeai) for adding this dataset.
diff --git a/datasets/wiki_snippets/README.md b/datasets/wiki_snippets/README.md
index 617acce6693..b20e6a5b5bc 100644
--- a/datasets/wiki_snippets/README.md
+++ b/datasets/wiki_snippets/README.md
@@ -2,6 +2,7 @@
languages:
- en
paperswithcode_id: null
+pretty_name: WikiSnippets
---
# Dataset Card for "wiki_snippets"
@@ -183,4 +184,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq), [@mariamabarham](https://github.com/mariamabarham), [@yjernite](https://github.com/yjernite) for adding this dataset.
\ No newline at end of file
+Thanks to [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq), [@mariamabarham](https://github.com/mariamabarham), [@yjernite](https://github.com/yjernite) for adding this dataset.
diff --git a/datasets/wiki_source/README.md b/datasets/wiki_source/README.md
index b477cbc2965..40e8ac13e11 100644
--- a/datasets/wiki_source/README.md
+++ b/datasets/wiki_source/README.md
@@ -19,6 +19,7 @@ task_categories:
task_ids:
- machine-translation
paperswithcode_id: null
+pretty_name: WikiSource
---
# Dataset Card Creation Guide
@@ -145,4 +146,4 @@ paperswithcode_id: null
### Contributions
-Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
\ No newline at end of file
+Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
diff --git a/datasets/wiki_split/README.md b/datasets/wiki_split/README.md
index c36b4857adb..199ad048455 100644
--- a/datasets/wiki_split/README.md
+++ b/datasets/wiki_split/README.md
@@ -2,6 +2,7 @@
languages:
- en
paperswithcode_id: wikisplit
+pretty_name: WikiSplit
---
# Dataset Card for "wiki_split"
@@ -161,4 +162,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@albertvillanova](https://github.com/albertvillanova), [@lewtun](https://github.com/lewtun) for adding this dataset.
\ No newline at end of file
+Thanks to [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@albertvillanova](https://github.com/albertvillanova), [@lewtun](https://github.com/lewtun) for adding this dataset.
diff --git a/datasets/wiki_summary/README.md b/datasets/wiki_summary/README.md
index d974c3a85ee..6ff40447f8e 100644
--- a/datasets/wiki_summary/README.md
+++ b/datasets/wiki_summary/README.md
@@ -24,6 +24,7 @@ task_ids:
- open-domain-qa
- summarization
- text-simplification
+pretty_name: WikiSummary
---
-# Dataset Card for [Needs More Information]
+# Dataset Card for WikiSummary
@@ -169,4 +170,4 @@ The dataset was created by Mehrdad Farahani.
### Contributions
-Thanks to [@tanmoyio](https://github.com/tanmoyio) for adding this dataset.
\ No newline at end of file
+Thanks to [@tanmoyio](https://github.com/tanmoyio) for adding this dataset.
diff --git a/datasets/wikihow/README.md b/datasets/wikihow/README.md
index 853d2c38ebd..edff0079c82 100644
--- a/datasets/wikihow/README.md
+++ b/datasets/wikihow/README.md
@@ -1,7 +1,8 @@
---
paperswithcode_id: wikihow
+pretty_name: WikiHow
---
### Contributions
-Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
\ No newline at end of file
+Thanks to [@thomwolf](https://github.com/thomwolf), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
diff --git a/datasets/wikisql/README.md b/datasets/wikisql/README.md
index ee84fe0c099..40c11ed423f 100644
--- a/datasets/wikisql/README.md
+++ b/datasets/wikisql/README.md
@@ -2,6 +2,7 @@
languages:
- en
paperswithcode_id: wikisql
+pretty_name: WikiSQL
---
# Dataset Card for "wikisql"
@@ -189,4 +190,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@lewtun](https://github.com/lewtun), [@ghomasHudson](https://github.com/ghomasHudson), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
\ No newline at end of file
+Thanks to [@lewtun](https://github.com/lewtun), [@ghomasHudson](https://github.com/ghomasHudson), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
diff --git a/datasets/wikitext_tl39/README.md b/datasets/wikitext_tl39/README.md
index af0067a0314..40757886354 100644
--- a/datasets/wikitext_tl39/README.md
+++ b/datasets/wikitext_tl39/README.md
@@ -19,6 +19,7 @@ task_categories:
task_ids:
- language-modeling
paperswithcode_id: wikitext-tl-39
+pretty_name: WikiText-TL-39
---
# Dataset Card for WikiText-TL-39
@@ -151,4 +152,4 @@ Tagalog Wikipedia
### Contributions
-Thanks to [@jcblaisecruz02](https://github.com/jcblaisecruz02) for adding this dataset.
\ No newline at end of file
+Thanks to [@jcblaisecruz02](https://github.com/jcblaisecruz02) for adding this dataset.
diff --git a/datasets/wili_2018/README.md b/datasets/wili_2018/README.md
index 941a55e65cc..f1e3cf51eaf 100644
--- a/datasets/wili_2018/README.md
+++ b/datasets/wili_2018/README.md
@@ -252,6 +252,7 @@ task_categories:
task_ids:
- text-classification-other-language-identification
paperswithcode_id: wili-2018
+pretty_name: WiLI-2018
---
# Dataset Card for wili_2018
@@ -390,4 +391,4 @@ ODC Open Database License v1.0
### Contributions
-Thanks to [@Shubhambindal2017](https://github.com/Shubhambindal2017) for adding this dataset.
\ No newline at end of file
+Thanks to [@Shubhambindal2017](https://github.com/Shubhambindal2017) for adding this dataset.
diff --git a/datasets/wino_bias/README.md b/datasets/wino_bias/README.md
index 28a4a0f7d0a..6f7b8d11042 100644
--- a/datasets/wino_bias/README.md
+++ b/datasets/wino_bias/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- coreference-resolution
paperswithcode_id: winobias
+pretty_name: WinoBias
---
# Dataset Card for Wino_Bias dataset
diff --git a/datasets/winograd_wsc/README.md b/datasets/winograd_wsc/README.md
index 3e4648433a6..34398c8d751 100644
--- a/datasets/winograd_wsc/README.md
+++ b/datasets/winograd_wsc/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- coreference-resolution
paperswithcode_id: wsc
+pretty_name: Winograd Schema Challenge
---
# Dataset Card for The Winograd Schema Challenge
@@ -208,4 +209,4 @@ The Winograd Schema Challenge including many of the examples here was proposed b
```
### Contributions
-Thanks to [@joeddav](https://github.com/joeddav) for adding this dataset.
\ No newline at end of file
+Thanks to [@joeddav](https://github.com/joeddav) for adding this dataset.
diff --git a/datasets/winogrande/README.md b/datasets/winogrande/README.md
index d266b85f099..7642870d809 100644
--- a/datasets/winogrande/README.md
+++ b/datasets/winogrande/README.md
@@ -2,6 +2,7 @@
languages:
- en
paperswithcode_id: winogrande
+pretty_name: WinoGrande
---
# Dataset Card for "winogrande"
@@ -229,4 +230,4 @@ year={2019}
### Contributions
-Thanks to [@thomwolf](https://github.com/thomwolf), [@TevenLeScao](https://github.com/TevenLeScao), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun) for adding this dataset.
\ No newline at end of file
+Thanks to [@thomwolf](https://github.com/thomwolf), [@TevenLeScao](https://github.com/TevenLeScao), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun) for adding this dataset.
diff --git a/datasets/wiqa/README.md b/datasets/wiqa/README.md
index def177f7462..ed610774ff3 100644
--- a/datasets/wiqa/README.md
+++ b/datasets/wiqa/README.md
@@ -2,6 +2,7 @@
languages:
- en
paperswithcode_id: wiqa
+pretty_name: What-If Question Answering
---
# Dataset Card for "wiqa"
@@ -177,4 +178,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf), [@mariamabarham](https://github.com/mariamabarham), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
\ No newline at end of file
+Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf), [@mariamabarham](https://github.com/mariamabarham), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
diff --git a/datasets/wisesight1000/README.md b/datasets/wisesight1000/README.md
index 54fc8984317..7abd0a61cf8 100644
--- a/datasets/wisesight1000/README.md
+++ b/datasets/wisesight1000/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- structure-prediction-other-word-tokenization
paperswithcode_id: null
+pretty_name: wisesight1000
---
# Dataset Card for `wisesight1000`
@@ -190,4 +191,4 @@ Character type features:
### Contributions
-Thanks to [@cstorm125](https://github.com/cstorm125) for adding this dataset.
\ No newline at end of file
+Thanks to [@cstorm125](https://github.com/cstorm125) for adding this dataset.
diff --git a/datasets/wisesight_sentiment/README.md b/datasets/wisesight_sentiment/README.md
index 2cecc814a15..6481d1af28c 100644
--- a/datasets/wisesight_sentiment/README.md
+++ b/datasets/wisesight_sentiment/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- sentiment-classification
paperswithcode_id: null
+pretty_name: WisesightSentiment
---
# Dataset Card for wisesight_sentiment
@@ -224,4 +225,4 @@ BibTeX:
### Contributions
-Thanks to [@cstorm125](https://github.com/cstorm125) for adding this dataset.
\ No newline at end of file
+Thanks to [@cstorm125](https://github.com/cstorm125) for adding this dataset.
diff --git a/datasets/wongnai_reviews/README.md b/datasets/wongnai_reviews/README.md
index 88919cf791c..51c5620c8a0 100644
--- a/datasets/wongnai_reviews/README.md
+++ b/datasets/wongnai_reviews/README.md
@@ -16,6 +16,7 @@ task_categories:
task_ids:
- sentiment-classification
paperswithcode_id: null
+pretty_name: WongnaiReviews
---
# Dataset Card for Wongnai_Reviews
@@ -77,4 +78,4 @@ See https://github.com/wongnai/wongnai-corpus
### Contributions
-Thanks to [@mapmeld](https://github.com/mapmeld), [@cstorm125](https://github.com/cstorm125) for adding this dataset.
\ No newline at end of file
+Thanks to [@mapmeld](https://github.com/mapmeld), [@cstorm125](https://github.com/cstorm125) for adding this dataset.
diff --git a/datasets/woz_dialogue/README.md b/datasets/woz_dialogue/README.md
index 0811397c8c5..2689ecea5de 100644
--- a/datasets/woz_dialogue/README.md
+++ b/datasets/woz_dialogue/README.md
@@ -33,6 +33,7 @@ task_ids:
- multi-class-classification
- parsing
paperswithcode_id: wizard-of-oz
+pretty_name: Wizard-of-Oz
---
# Dataset Card Creation Guide
@@ -157,4 +158,4 @@ paperswithcode_id: wizard-of-oz
[More Information Needed]
### Contributions
-Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
\ No newline at end of file
+Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
diff --git a/datasets/wrbsc/README.md b/datasets/wrbsc/README.md
index e545cbdaa99..f9ae177d6b5 100644
--- a/datasets/wrbsc/README.md
+++ b/datasets/wrbsc/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- semantic-similarity-classification
paperswithcode_id: null
+pretty_name: wrbsc
---
# Dataset Card for wrbsc
@@ -170,4 +171,4 @@ Attribution-ShareAlike 3.0 Unported (CC BY-SA 3.0)
```
### Contributions
-Thanks to [@kldarek](https://github.com/kldarek) for adding this dataset.
\ No newline at end of file
+Thanks to [@kldarek](https://github.com/kldarek) for adding this dataset.
diff --git a/datasets/x_stance/README.md b/datasets/x_stance/README.md
index 842a3870741..f5ffa984596 100644
--- a/datasets/x_stance/README.md
+++ b/datasets/x_stance/README.md
@@ -1,5 +1,6 @@
---
paperswithcode_id: x-stance
+pretty_name: x-stance
---
# Dataset Card for "x_stance"
@@ -172,4 +173,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@lewtun](https://github.com/lewtun), [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@jvamvas](https://github.com/jvamvas) for adding this dataset.
\ No newline at end of file
+Thanks to [@lewtun](https://github.com/lewtun), [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@jvamvas](https://github.com/jvamvas) for adding this dataset.
diff --git a/datasets/xcopa/README.md b/datasets/xcopa/README.md
index f035181deb9..52ad5be2a6f 100644
--- a/datasets/xcopa/README.md
+++ b/datasets/xcopa/README.md
@@ -1,5 +1,6 @@
---
paperswithcode_id: xcopa
+pretty_name: XCOPA
---
# Dataset Card for "xcopa"
@@ -294,4 +295,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
\ No newline at end of file
+Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
diff --git a/datasets/xed_en_fi/README.md b/datasets/xed_en_fi/README.md
index c3367742d7f..6b7b492e3f2 100644
--- a/datasets/xed_en_fi/README.md
+++ b/datasets/xed_en_fi/README.md
@@ -29,6 +29,7 @@ task_ids:
- multi-label-classification
- sentiment-classification
paperswithcode_id: xed
+pretty_name: XED English-Finnish
---
# Dataset Card for xed_english_finnish
@@ -171,4 +172,4 @@ License: Creative Commons Attribution 4.0 International License (CC-BY)
### Contributions
-Thanks to [@lhoestq](https://github.com/lhoestq), [@harshalmittal4](https://github.com/harshalmittal4) for adding this dataset.
\ No newline at end of file
+Thanks to [@lhoestq](https://github.com/lhoestq), [@harshalmittal4](https://github.com/harshalmittal4) for adding this dataset.
diff --git a/datasets/xglue/README.md b/datasets/xglue/README.md
index 7af9acf1cfd..bbff7f6924c 100644
--- a/datasets/xglue/README.md
+++ b/datasets/xglue/README.md
@@ -279,6 +279,7 @@ task_ids:
xnli:
- natural-language-inference
paperswithcode_id: null
+pretty_name: XGLUE
---
# Dataset Card for XGLUE
@@ -868,4 +869,4 @@ The licensing status of the dataset hinges on the legal status of [XGLUE](https:
### Contributions
-Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
\ No newline at end of file
+Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
diff --git a/datasets/xnli/README.md b/datasets/xnli/README.md
index 25b029aeb65..2e64209c22f 100644
--- a/datasets/xnli/README.md
+++ b/datasets/xnli/README.md
@@ -2,6 +2,7 @@
languages:
- en
paperswithcode_id: xnli
+pretty_name: Cross-lingual Natural Language Inference
---
# Dataset Card for "xnli"
@@ -261,4 +262,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@lewtun](https://github.com/lewtun), [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
\ No newline at end of file
+Thanks to [@lewtun](https://github.com/lewtun), [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf), [@lhoestq](https://github.com/lhoestq), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
diff --git a/datasets/xor_tydi_qa/README.md b/datasets/xor_tydi_qa/README.md
index 3aa0b126c23..87baba89e92 100644
--- a/datasets/xor_tydi_qa/README.md
+++ b/datasets/xor_tydi_qa/README.md
@@ -26,6 +26,7 @@ task_categories:
task_ids:
- open-domain-qa
paperswithcode_id: xor-tydi-qa
+pretty_name: XOR QA
---
# Dataset Card for XOR QA
@@ -179,4 +180,4 @@ XOR-TyDi QA is distributed under the [CC BY-SA 4.0](https://creativecommons.org/
```
### Contributions
-Thanks to [@sumanthd17](https://github.com/sumanthd17) for adding this dataset.
\ No newline at end of file
+Thanks to [@sumanthd17](https://github.com/sumanthd17) for adding this dataset.
diff --git a/datasets/xquad_r/README.md b/datasets/xquad_r/README.md
index 52abdb6f3a9..ab18f342030 100644
--- a/datasets/xquad_r/README.md
+++ b/datasets/xquad_r/README.md
@@ -40,6 +40,7 @@ task_categories:
task_ids:
- extractive-qa
paperswithcode_id: xquad-r
+pretty_name: XQuAD-R
---
-# Dataset Card for [Dataset Name]
+# Dataset Card for XQuAD-R
@@ -211,4 +212,4 @@ XQuAD-R is distributed under the [CC BY-SA 4.0 license](https://creativecommons.
### Contributions
-Thanks to [@manandey](https://github.com/manandey) for adding this dataset.
\ No newline at end of file
+Thanks to [@manandey](https://github.com/manandey) for adding this dataset.
diff --git a/datasets/xsum_factuality/README.md b/datasets/xsum_factuality/README.md
index 3825308933a..50503bc2f8b 100644
--- a/datasets/xsum_factuality/README.md
+++ b/datasets/xsum_factuality/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- summarization
paperswithcode_id: null
+pretty_name: XSum Hallucination Annotations
---
# Dataset Card for XSum Hallucination Annotations
@@ -215,4 +216,4 @@ There is only a single split for both the Faithfulness annotations dataset and F
### Contributions
-Thanks to [@vineeths96](https://github.com/vineeths96) for adding this dataset.
\ No newline at end of file
+Thanks to [@vineeths96](https://github.com/vineeths96) for adding this dataset.
diff --git a/datasets/yahoo_answers_qa/README.md b/datasets/yahoo_answers_qa/README.md
index 2447e298fac..31bd4b2220b 100644
--- a/datasets/yahoo_answers_qa/README.md
+++ b/datasets/yahoo_answers_qa/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- open-domain-qa
paperswithcode_id: null
+pretty_name: YahooAnswersQA
---
# Dataset Card Creation Guide
@@ -142,4 +143,4 @@ paperswithcode_id: null
[More Information Needed]
### Contributions
-Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
\ No newline at end of file
+Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
diff --git a/datasets/yahoo_answers_topics/README.md b/datasets/yahoo_answers_topics/README.md
index 98b03a1ed0b..439fbd5b231 100644
--- a/datasets/yahoo_answers_topics/README.md
+++ b/datasets/yahoo_answers_topics/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- topic-classification
paperswithcode_id: null
+pretty_name: YahooAnswersTopics
---
# Dataset Card Creation Guide
@@ -143,4 +144,4 @@ paperswithcode_id: null
### Contributions
-Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
\ No newline at end of file
+Thanks to [@patil-suraj](https://github.com/patil-suraj) for adding this dataset.
diff --git a/datasets/yelp_polarity/README.md b/datasets/yelp_polarity/README.md
index 0bd3e6a5720..32d935de913 100644
--- a/datasets/yelp_polarity/README.md
+++ b/datasets/yelp_polarity/README.md
@@ -2,6 +2,7 @@
languages:
- en
paperswithcode_id: null
+pretty_name: YelpPolarity
---
# Dataset Card for "yelp_polarity"
@@ -191,4 +192,4 @@ The data fields are the same among all splits.
### Contributions
-Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun), [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf), [@julien-c](https://github.com/julien-c) for adding this dataset.
\ No newline at end of file
+Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun), [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf), [@julien-c](https://github.com/julien-c) for adding this dataset.
diff --git a/datasets/yelp_review_full/README.md b/datasets/yelp_review_full/README.md
index 4a6d12628bd..53a9cd5efb3 100644
--- a/datasets/yelp_review_full/README.md
+++ b/datasets/yelp_review_full/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- sentiment-classification
paperswithcode_id: null
+pretty_name: YelpReviewFull
---
# Dataset Card for YelpReviewFull
@@ -150,4 +151,4 @@ Xiang Zhang, Junbo Zhao, Yann LeCun. Character-level Convolutional Networks for
### Contributions
-Thanks to [@hfawaz](https://github.com/hfawaz) for adding this dataset.
\ No newline at end of file
+Thanks to [@hfawaz](https://github.com/hfawaz) for adding this dataset.
diff --git a/datasets/yoruba_bbc_topics/README.md b/datasets/yoruba_bbc_topics/README.md
index 3f549ef8d24..f91e63a8dd9 100644
--- a/datasets/yoruba_bbc_topics/README.md
+++ b/datasets/yoruba_bbc_topics/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- topic-classification
paperswithcode_id: null
+pretty_name: Yoruba BBC News Topic Classification Dataset (YorubaBbcTopics)
---
# Dataset Card for Yoruba BBC News Topic Classification dataset (yoruba_bbc_topics)
@@ -145,4 +146,4 @@ An instance consists of a news title sentence and the corresponding topic label
### Contributions
-Thanks to [@michael-aloys](https://github.com/michael-aloys) for adding this dataset.
\ No newline at end of file
+Thanks to [@michael-aloys](https://github.com/michael-aloys) for adding this dataset.
diff --git a/datasets/yoruba_gv_ner/README.md b/datasets/yoruba_gv_ner/README.md
index 899f19887eb..85af9fa0f75 100644
--- a/datasets/yoruba_gv_ner/README.md
+++ b/datasets/yoruba_gv_ner/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- named-entity-recognition
paperswithcode_id: null
+pretty_name: Yoruba GV NER Corpus
---
# Dataset Card for Yoruba GV NER Corpus
@@ -176,4 +177,4 @@ The data is under the [Creative Commons Attribution 3.0 ](https://creativecommon
```
### Contributions
-Thanks to [@dadelani](https://github.com/dadelani) for adding this dataset.
\ No newline at end of file
+Thanks to [@dadelani](https://github.com/dadelani) for adding this dataset.
diff --git a/datasets/yoruba_text_c3/README.md b/datasets/yoruba_text_c3/README.md
index be9003863dd..192df47fe55 100644
--- a/datasets/yoruba_text_c3/README.md
+++ b/datasets/yoruba_text_c3/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- language-modeling
paperswithcode_id: null
+pretty_name: Yorùbá Text C3
---
# Dataset Card for Yorùbá Text C3
@@ -173,4 +174,4 @@ The data is under the [Creative Commons Attribution-NonCommercial 4.0 ](https://
### Contributions
-Thanks to [@dadelani](https://github.com/dadelani) for adding this dataset.
\ No newline at end of file
+Thanks to [@dadelani](https://github.com/dadelani) for adding this dataset.
diff --git a/datasets/yoruba_wordsim353/README.md b/datasets/yoruba_wordsim353/README.md
index 0a94e99a6a0..a687a6073b9 100644
--- a/datasets/yoruba_wordsim353/README.md
+++ b/datasets/yoruba_wordsim353/README.md
@@ -19,6 +19,7 @@ task_categories:
task_ids:
- semantic-similarity-scoring
paperswithcode_id: null
+pretty_name: Wordsim-353 in Yorùbá (YorubaWordsim353)
---
# Dataset Card for wordsim-353 in Yorùbá (yoruba_wordsim353)
@@ -145,4 +146,4 @@ An instance consists of a pair of words as well as their similarity. The dataset
### Contributions
-Thanks to [@michael-aloys](https://github.com/michael-aloys) for adding this dataset.
\ No newline at end of file
+Thanks to [@michael-aloys](https://github.com/michael-aloys) for adding this dataset.
diff --git a/datasets/youtube_caption_corrections/README.md b/datasets/youtube_caption_corrections/README.md
index 52229f0bd09..56fc987ccc4 100644
--- a/datasets/youtube_caption_corrections/README.md
+++ b/datasets/youtube_caption_corrections/README.md
@@ -21,6 +21,7 @@ task_ids:
- other-other-token-classification-of-text-errors
- slot-filling
paperswithcode_id: null
+pretty_name: YouTube Caption Corrections
---
# Dataset Card for YouTube Caption Corrections
@@ -172,4 +173,4 @@ https://github.com/2dot71mily/youtube_captions_corrections
### Contributions
-Thanks to [@2dot71mily](https://github.com/2dot71mily) for adding this dataset.
\ No newline at end of file
+Thanks to [@2dot71mily](https://github.com/2dot71mily) for adding this dataset.
diff --git a/datasets/zest/README.md b/datasets/zest/README.md
index 051c8762cfd..40141f5ba02 100644
--- a/datasets/zest/README.md
+++ b/datasets/zest/README.md
@@ -22,6 +22,7 @@ task_ids:
- question-answering-other-yes-no-qa
- structure-prediction-other-output-structure
paperswithcode_id: zest
+pretty_name: ZEST
---
# Dataset Card for "ZEST: ZEroShot learning from Task descriptions"
@@ -172,4 +173,4 @@ This dataset is licensed under [CC BY 4.0](https://creativecommons.org/licenses/
### Contributions
-Thanks to [@joeddav](https://github.com/joeddav) for adding this dataset.
\ No newline at end of file
+Thanks to [@joeddav](https://github.com/joeddav) for adding this dataset.