Merged
Changes from 1 commit
3 changes: 2 additions & 1 deletion datasets/event2Mind/README.md
Original file line number Diff line number Diff line change
@@ -2,6 +2,7 @@
languages:
- en
paperswithcode_id: event2mind
pretty_name: Event2Mind
---

# Dataset Card for "event2Mind"
@@ -164,4 +165,4 @@ The data fields are the same among all splits.

### Contributions

Thanks to [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun) for adding this dataset.
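For context, after this hunk the event2Mind card's front matter reads roughly as follows (reconstructed from the visible diff context only; any fields outside the hunk are not shown here):

```yaml
---
languages:
- en
paperswithcode_id: event2mind
pretty_name: Event2Mind
---
```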
3 changes: 2 additions & 1 deletion datasets/factckbr/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- fact-checking
paperswithcode_id: null
pretty_name: factckbr
---

# Dataset Card for [Dataset Name]
@@ -142,4 +143,4 @@ The FACTCK.BR dataset contains 1309 claims with its corresponding label.

### Contributions

Thanks to [@hugoabonizio](https://github.com/hugoabonizio) for adding this dataset.
3 changes: 2 additions & 1 deletion datasets/fake_news_english/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- multi-label-classification
paperswithcode_id: null
pretty_name: Fake News English
---

# Dataset Card for Fake News English
@@ -165,4 +166,4 @@ doi = {10.1145/3201064.3201100}

### Contributions

Thanks to [@MisbahKhan789](https://github.com/MisbahKhan789), [@lhoestq](https://github.com/lhoestq) for adding this dataset.
3 changes: 2 additions & 1 deletion datasets/fake_news_filipino/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- fact-checking
paperswithcode_id: fake-news-filipino-dataset
pretty_name: Fake News Filipino
---

# Dataset Card for Fake News Filipino
@@ -156,4 +157,4 @@ Jan Christian Blaise Cruz, Julianne Agatha Tan, and Charibeth Cheng

### Contributions

Thanks to [@anaerobeth](https://github.com/anaerobeth) for adding this dataset.
3 changes: 2 additions & 1 deletion datasets/farsi_news/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- language-modeling
paperswithcode_id: null
pretty_name: FarsiNews
---

# Dataset Card Creation Guide
@@ -153,4 +154,4 @@ https://github.com/sci2lab/Farsi-datasets

### Contributions

Thanks to [@Narsil](https://github.com/Narsil) for adding this dataset.
1 change: 1 addition & 0 deletions datasets/fashion_mnist/README.md
@@ -15,6 +15,7 @@ task_categories:
task_ids:
- other-other-image-classification
paperswithcode_id: fashion-mnist
pretty_name: FashionMNIST
---

# Dataset Card for FashionMNIST
1 change: 1 addition & 0 deletions datasets/few_rel/README.md
@@ -22,6 +22,7 @@ task_categories:
task_ids:
- other-other-relation-extraction
paperswithcode_id: fewrel
pretty_name: Few-Shot Relation Classification Dataset
---

# Dataset Card for few_rel
1 change: 1 addition & 0 deletions datasets/financial_phrasebank/README.md
@@ -19,6 +19,7 @@ task_ids:
- multi-class-classification
- sentiment-classification
paperswithcode_id: null
pretty_name: FinancialPhrasebank
---

# Dataset Card for financial_phrasebank
3 changes: 2 additions & 1 deletion datasets/finer/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- named-entity-recognition
paperswithcode_id: finer
pretty_name: Finnish News Corpus for Named Entity Recognition
---

# Dataset Card for [Dataset Name]
@@ -155,4 +156,4 @@ IOB2 labeling scheme is used.

### Contributions

Thanks to [@stefan-it](https://github.com/stefan-it) for adding this dataset.
1 change: 1 addition & 0 deletions datasets/fquad/README.md
@@ -23,6 +23,7 @@ task_ids:
- extractive-qa
- closed-domain-qa
paperswithcode_id: fquad
pretty_name: French Question Answering Dataset
---

# Dataset Card for "fquad"
1 change: 1 addition & 0 deletions datasets/freebase_qa/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- open-domain-qa
paperswithcode_id: freebaseqa
pretty_name: FreebaseQA
---

# Dataset Card for FreebaseQA
3 changes: 2 additions & 1 deletion datasets/gap/README.md
@@ -2,6 +2,7 @@
languages:
- en
paperswithcode_id: gap
pretty_name: GAP Benchmark Suite
---

# Dataset Card for "gap"
@@ -187,4 +188,4 @@ The data fields are the same among all splits.

### Contributions

Thanks to [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@otakumesi](https://github.com/otakumesi), [@lewtun](https://github.com/lewtun) for adding this dataset.
3 changes: 2 additions & 1 deletion datasets/generics_kb/README.md
@@ -25,6 +25,7 @@ task_categories:
task_ids:
- other-other-knowledge-base
paperswithcode_id: genericskb
pretty_name: GenericsKB
---

# Dataset Card for Generics KB
@@ -205,4 +206,4 @@ publisher = {Allen Institute for AI},

### Contributions

Thanks to [@bpatidar](https://github.com/bpatidar) for adding this dataset.
3 changes: 2 additions & 1 deletion datasets/german_legal_entity_recognition/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- named-entity-recognition
paperswithcode_id: legal-documents-entity-recognition
pretty_name: Legal Documents Entity Recognition
---

# Dataset Card Creation Guide
@@ -142,4 +143,4 @@ paperswithcode_id: legal-documents-entity-recognition
[More Information Needed]
### Contributions

Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
1 change: 1 addition & 0 deletions datasets/germaner/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- named-entity-recognition
paperswithcode_id: null
pretty_name: GermaNer
---

# Dataset Card Creation Guide
3 changes: 2 additions & 1 deletion datasets/germeval_14/README.md
@@ -1,5 +1,6 @@
---
paperswithcode_id: null
pretty_name: GermEval14
---

# Dataset Card for "germeval_14"
@@ -168,4 +169,4 @@ The data fields are the same among all splits.

### Contributions

Thanks to [@thomwolf](https://github.com/thomwolf), [@jplu](https://github.com/jplu), [@lewtun](https://github.com/lewtun), [@lhoestq](https://github.com/lhoestq), [@stefan-it](https://github.com/stefan-it), [@mariamabarham](https://github.com/mariamabarham) for adding this dataset.
3 changes: 2 additions & 1 deletion datasets/giga_fren/README.md
@@ -19,6 +19,7 @@ task_categories:
task_ids:
- machine-translation
paperswithcode_id: null
pretty_name: GigaFren
---

# Dataset Card Creation Guide
@@ -145,4 +146,4 @@ Here are some examples of questions and facts:
[More Information Needed]
### Contributions

Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
3 changes: 2 additions & 1 deletion datasets/gigaword/README.md
@@ -2,6 +2,7 @@
languages:
- en
paperswithcode_id: null
pretty_name: gigaword
---

# Dataset Card for "gigaword"
@@ -176,4 +177,4 @@ The data fields are the same among all splits.

### Contributions

Thanks to [@lewtun](https://github.com/lewtun), [@lhoestq](https://github.com/lhoestq), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
3 changes: 2 additions & 1 deletion datasets/glucose/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- sequence-modeling-other-common-sense-inference
paperswithcode_id: glucose
pretty_name: GLUCOSE
---

# Dataset Card for [Dataset Name]
@@ -232,4 +233,4 @@ Creative Commons Attribution-NonCommercial 4.0 International Public License
```
### Contributions

Thanks to [@TevenLeScao](https://github.com/TevenLeScao) for adding this dataset.
1 change: 1 addition & 0 deletions datasets/glue/README.md
@@ -64,6 +64,7 @@ task_ids:
wnli:
- text-classification-other-coreference-nli
paperswithcode_id: glue
pretty_name: General Language Understanding Evaluation benchmark
---

# Dataset Card for "glue"
3 changes: 2 additions & 1 deletion datasets/gnad10/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- topic-classification
paperswithcode_id: null
pretty_name: 10k German News Articles Datasets
---

# Dataset Card for 10k German News Articles Datasets
@@ -153,4 +154,4 @@ This dataset is licensed under the Creative Commons Attribution-NonCommercial-Sh

### Contributions

Thanks to [@stevhliu](https://github.com/stevhliu) for adding this dataset.
3 changes: 2 additions & 1 deletion datasets/go_emotions/README.md
@@ -23,6 +23,7 @@ task_ids:
- multi-label-classification
- text-classification-other-emotion
paperswithcode_id: goemotions
pretty_name: GoEmotions
---

# Dataset Card for GoEmotions
@@ -181,4 +182,4 @@ The GitHub repository which houses this dataset has an

### Contributions

Thanks to [@joeddav](https://github.com/joeddav) for adding this dataset.
3 changes: 2 additions & 1 deletion datasets/google_wellformed_query/README.md
@@ -16,6 +16,7 @@ size_categories:
licenses:
- CC-BY-SA-4.0
paperswithcode_id: null
pretty_name: GoogleWellformedQuery
---

# Dataset Card Creation Guide
@@ -154,4 +155,4 @@ Query-wellformedness dataset is licensed under CC BY-SA 4.0. Any third party con

### Contributions

Thanks to [@vasudevgupta7](https://github.com/vasudevgupta7) for adding this dataset.
3 changes: 2 additions & 1 deletion datasets/grail_qa/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- question-answering-other-knowledge-base-qa
paperswithcode_id: null
pretty_name: Grail QA
---

# Dataset Card for Grail QA
@@ -178,4 +179,4 @@ Test | 13,231

### Contributions

Thanks to [@mattbui](https://github.com/mattbui) for adding this dataset.
3 changes: 2 additions & 1 deletion datasets/guardian_authorship/README.md
@@ -2,6 +2,7 @@
languages:
- en
paperswithcode_id: null
pretty_name: GuardianAuthorship
---

# Dataset Card for "guardian_authorship"
@@ -266,4 +267,4 @@ The data fields are the same among all splits.

### Contributions

Thanks to [@thomwolf](https://github.com/thomwolf), [@eltoto1219](https://github.com/eltoto1219), [@malikaltakrori](https://github.com/malikaltakrori) for adding this dataset.
3 changes: 2 additions & 1 deletion datasets/gutenberg_time/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- multi-class-classification
paperswithcode_id: gutenberg-time-dataset
pretty_name: the Gutenberg Time dataset
---

# Dataset Card for the Gutenberg Time dataset
@@ -163,4 +164,4 @@ Allen Kim, Charuta Pethe and Steven Skiena, Stony Brook University
```
### Contributions

Thanks to [@TevenLeScao](https://github.com/TevenLeScao) for adding this dataset.
3 changes: 2 additions & 1 deletion datasets/hans/README.md
@@ -2,6 +2,7 @@
languages:
- en
paperswithcode_id: hans
pretty_name: Heuristic Analysis for NLI Systems
---

# Dataset Card for "hans"
@@ -170,4 +171,4 @@ The data fields are the same among all splits.

### Contributions

Thanks to [@TevenLeScao](https://github.com/TevenLeScao), [@thomwolf](https://github.com/thomwolf) for adding this dataset.
3 changes: 2 additions & 1 deletion datasets/hansards/README.md
@@ -1,5 +1,6 @@
---
paperswithcode_id: null
pretty_name: hansards
---

# Dataset Card for "hansards"
@@ -186,4 +187,4 @@ The data fields are the same among all splits.

### Contributions

Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten), [@thomwolf](https://github.com/thomwolf), [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
3 changes: 2 additions & 1 deletion datasets/hard/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- multi-class-classification
paperswithcode_id: hard
pretty_name: Hotel Arabic-Reviews Dataset
---

# Dataset Card for Hard
@@ -137,4 +138,4 @@ The dataset is not split.

### Contributions

Thanks to [@zaidalyafeai](https://github.com/zaidalyafeai) for adding this dataset.
3 changes: 2 additions & 1 deletion datasets/harem/README.md
@@ -18,6 +18,7 @@ task_categories:
task_ids:
- named-entity-recognition
paperswithcode_id: null
pretty_name: harem
---

# Dataset Card for [Dataset Name]
@@ -177,4 +178,4 @@ The data is split into train, validation and test set for each of the two versio

### Contributions

Thanks to [@jonatasgrosman](https://github.com/jonatasgrosman) for adding this dataset.
Thanks to [@jonatasgrosman](https://github.com/jonatasgrosman) for adding this dataset.
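Every hunk in this PR makes the same one-line change: adding a `pretty_name` key to the YAML front matter of a dataset card README. As a sanity check for a change like this, a stdlib-only sketch along the following lines (a hypothetical helper, not part of the PR or of the `datasets` tooling) could list the cards still missing the field:

```python
import pathlib
import re


def front_matter(text: str) -> str:
    """Return the YAML front-matter block between the leading '---' fences."""
    match = re.match(r"^---\n(.*?)\n---\n", text, flags=re.DOTALL)
    return match.group(1) if match else ""


def missing_pretty_name(root: str) -> list:
    """List dataset card READMEs under root whose front matter lacks pretty_name."""
    missing = []
    for readme in sorted(pathlib.Path(root).glob("*/README.md")):
        block = front_matter(readme.read_text(encoding="utf-8"))
        # pretty_name is a top-level key, so match it at the start of a line
        if not re.search(r"^pretty_name:", block, flags=re.MULTILINE):
            missing.append(str(readme))
    return missing
```

Running this over the `datasets/` directory before and after the PR would show the per-card coverage of `pretty_name` going up by exactly the files listed above.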