# Adding HendrycksTest dataset #2370
Merged
Commits (10)
- 87fa6a6: adding hendrycks_test
- 8a3e1f7: minor adj
- 9feed80: update README
- ed459bb: update README
- 09e2515: update README
- 594cd7f: minor modifications
- 73e0580: fix cropped csv lines
- 9566c47 (lhoestq): remove json import
- 04ea29e (lhoestq): Merge remote-tracking branch 'upstream/master' into hendrycks_test
- 4186899 (lhoestq): fix tags
New file (+178 lines): the HendrycksTest dataset card.
---
annotations_creators:
- no-annotation
language_creators:
- expert-generated
languages:
- en-US
licenses:
- mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- multiple-choice-qa
---

# Dataset Card for HendrycksTest

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

[Measuring Massive Multitask Language Understanding](https://arxiv.org/pdf/2009.03300) by [Dan Hendrycks](https://people.eecs.berkeley.edu/~hendrycks/), [Collin Burns](http://collinpburns.com), [Steven Basart](https://stevenbas.art), Andy Zou, Mantas Mazeika, [Dawn Song](https://people.eecs.berkeley.edu/~dawnsong/), and [Jacob Steinhardt](https://www.stat.berkeley.edu/~jsteinhardt/) (ICLR 2021).

- **Repository**: https://github.com/hendrycks/test
- **Paper**: https://arxiv.org/abs/2009.03300

A complete list of tasks: ['abstract_algebra', 'anatomy', 'astronomy', 'business_ethics', 'clinical_knowledge', 'college_biology', 'college_chemistry', 'college_computer_science', 'college_mathematics', 'college_medicine', 'college_physics', 'computer_security', 'conceptual_physics', 'econometrics', 'electrical_engineering', 'elementary_mathematics', 'formal_logic', 'global_facts', 'high_school_biology', 'high_school_chemistry', 'high_school_computer_science', 'high_school_european_history', 'high_school_geography', 'high_school_government_and_politics', 'high_school_macroeconomics', 'high_school_mathematics', 'high_school_microeconomics', 'high_school_physics', 'high_school_psychology', 'high_school_statistics', 'high_school_us_history', 'high_school_world_history', 'human_aging', 'human_sexuality', 'international_law', 'jurisprudence', 'logical_fallacies', 'machine_learning', 'management', 'marketing', 'medical_genetics', 'miscellaneous', 'moral_disputes', 'moral_scenarios', 'nutrition', 'philosophy', 'prehistory', 'professional_accounting', 'professional_law', 'professional_medicine', 'professional_psychology', 'public_relations', 'security_studies', 'sociology', 'us_foreign_policy', 'virology', 'world_religions']

### Dataset Summary

This is a massive multitask test consisting of multiple-choice questions from various branches of knowledge. The test spans subjects in the humanities, social sciences, hard sciences, and other areas that are important for some people to learn. It covers 57 tasks, including elementary mathematics, US history, computer science, law, and more. To attain high accuracy on this test, models must possess extensive world knowledge and problem-solving ability.

### Supported Tasks and Leaderboards

| Model | Authors | Humanities | Social Science | STEM | Other | Average |
|------------------------------------|----------|:-------:|:-------:|:-------:|:-------:|:-------:|
| [UnifiedQA](https://arxiv.org/abs/2005.00700) | Khashabi et al., 2020 | 45.6 | 56.6 | 40.2 | 54.6 | 48.9 |
| [GPT-3](https://arxiv.org/abs/2005.14165) (few-shot) | Brown et al., 2020 | 40.8 | 50.4 | 36.7 | 48.8 | 43.9 |
| GPT-2 | Radford et al., 2019 | 32.8 | 33.3 | 30.2 | 33.1 | 32.4 |
| Random Baseline | N/A | 25.0 | 25.0 | 25.0 | 25.0 | 25.0 |

### Languages

English

## Dataset Structure

### Data Instances

An example from the anatomy subtask looks as follows:
```
{
  "question": "What is the embryological origin of the hyoid bone?",
  "choices": ["The first pharyngeal arch", "The first and second pharyngeal arches", "The second pharyngeal arch", "The second and third pharyngeal arches"],
  "answer": "D"
}
```
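The correct choice can be recovered from the `answer` letter, since the letters A through D index the four `choices` in order. A quick sketch of that lookup (not part of the dataset card itself):

```python
# Example instance from the anatomy subtask, copied from above.
example = {
    "question": "What is the embryological origin of the hyoid bone?",
    "choices": [
        "The first pharyngeal arch",
        "The first and second pharyngeal arches",
        "The second pharyngeal arch",
        "The second and third pharyngeal arches",
    ],
    "answer": "D",
}

# The answer letter indexes into the four choices (A=0 ... D=3).
answer_index = "ABCD".index(example["answer"])
correct_choice = example["choices"][answer_index]
print(correct_choice)  # The second and third pharyngeal arches
```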

### Data Fields

- `question`: a string feature
- `choices`: a list of 4 string features
- `answer`: a ClassLabel feature
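The loading script declares `answer` as `ClassLabel(num_classes=4, names=["A", "B", "C", "D"])`, so loaded examples carry an integer class index rather than the raw letter. A minimal stdlib sketch of that encode/decode mapping (the function names here are illustrative, not the `datasets` API):

```python
# Class names as declared by the loading script's ClassLabel feature.
names = ["A", "B", "C", "D"]

def str2int(label: str) -> int:
    # Encode an answer letter as its class index.
    return names.index(label)

def int2str(index: int) -> str:
    # Decode a class index back to its answer letter.
    return names[index]

print(str2int("D"))  # 3
print(int2str(0))    # A
```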

### Data Splits

- `auxiliary_train`: auxiliary multiple-choice training questions from ARC, MC_TEST, OBQA, RACE, etc.
- `dev`: 5 examples per subtask, meant for the few-shot setting
- `val`: validation examples, e.g. for tuning few-shot prompts
- `test`: at least 100 examples per subtask

|       | auxiliary_train | dev | val  | test  |
| ----- | :-------------: | :-: | :--: | :---: |
| TOTAL | 99842           | 285 | 1531 | 14042 |
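The five `dev` examples per subtask are intended as few-shot demonstrations for the matching `test` questions. A hypothetical sketch of how such a prompt might be assembled (the exact prompt format used in the paper may differ; the sample questions are invented):

```python
def format_example(question, choices, answer=None):
    # Render one multiple-choice question; include the answer line
    # only for demonstration (few-shot) examples.
    lines = [question]
    for letter, choice in zip("ABCD", choices):
        lines.append(f"{letter}. {choice}")
    lines.append(f"Answer: {answer}" if answer is not None else "Answer:")
    return "\n".join(lines)

def build_prompt(dev_examples, test_question, test_choices):
    # dev_examples: list of (question, choices, answer) tuples from `dev`.
    parts = [format_example(q, c, a) for q, c, a in dev_examples]
    parts.append(format_example(test_question, test_choices))
    return "\n\n".join(parts)

prompt = build_prompt(
    [("2 + 2 = ?", ["1", "2", "3", "4"], "D")],
    "3 + 3 = ?",
    ["4", "5", "6", "7"],
)
print(prompt)
```

The model is then scored on which answer letter it assigns the highest likelihood after the trailing `Answer:`.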

## Dataset Creation

### Curation Rationale

Transformer models have driven recent progress by pretraining on massive text corpora, including all of Wikipedia, thousands of books, and numerous websites. These models consequently see extensive information about specialized topics, most of which is not assessed by existing NLP benchmarks. To bridge the gap between the wide-ranging knowledge that models see during pretraining and the existing measures of success, we introduce a new benchmark for assessing models across a diverse set of subjects that humans learn.

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[MIT License](https://github.com/hendrycks/test/blob/master/LICENSE)

### Citation Information

If you find this useful in your research, please consider citing the test and also the [ETHICS](https://arxiv.org/abs/2008.02275) dataset it draws from:
```
@article{hendryckstest2021,
  title={Measuring Massive Multitask Language Understanding},
  author={Dan Hendrycks and Collin Burns and Steven Basart and Andy Zou and Mantas Mazeika and Dawn Song and Jacob Steinhardt},
  journal={Proceedings of the International Conference on Learning Representations (ICLR)},
  year={2021}
}

@article{hendrycks2021ethics,
  title={Aligning AI With Shared Human Values},
  author={Dan Hendrycks and Collin Burns and Steven Basart and Andrew Critch and Jerry Li and Dawn Song and Jacob Steinhardt},
  journal={Proceedings of the International Conference on Learning Representations (ICLR)},
  year={2021}
}
```

### Contributions

Thanks to [@andyzoujm](https://github.com/andyzoujm) for adding this dataset.
Binary dummy data files added (contents not shown). Paths and sizes reported in the diff:

- datasets/hendrycks_test/dummy/clinical_knowledge/1.0.0/dummy_data.zip (+11.7 KB)
- datasets/hendrycks_test/dummy/college_computer_science/1.0.0/dummy_data.zip (+17.3 KB)
- datasets/hendrycks_test/dummy/college_mathematics/1.0.0/dummy_data.zip (+17.3 KB)
- datasets/hendrycks_test/dummy/conceptual_physics/1.0.0/dummy_data.zip (+11.6 KB)
- datasets/hendrycks_test/dummy/electrical_engineering/1.0.0/dummy_data.zip (+11.8 KB)
- datasets/hendrycks_test/dummy/elementary_mathematics/1.0.0/dummy_data.zip (+11.8 KB)
- datasets/hendrycks_test/dummy/high_school_biology/1.0.0/dummy_data.zip (+12.6 KB)
- datasets/hendrycks_test/dummy/high_school_chemistry/1.0.0/dummy_data.zip (+12.1 KB)
- datasets/hendrycks_test/dummy/high_school_computer_science/1.0.0/dummy_data.zip (+18 KB)
- datasets/hendrycks_test/dummy/high_school_european_history/1.0.0/dummy_data.zip (+23.9 KB)
- datasets/hendrycks_test/dummy/high_school_geography/1.0.0/dummy_data.zip (+12.1 KB)
- datasets/hendrycks_test/dummy/high_school_government_and_politics/1.0.0/dummy_data.zip (+13.2 KB)
- datasets/hendrycks_test/dummy/high_school_macroeconomics/1.0.0/dummy_data.zip (+12.2 KB)
- datasets/hendrycks_test/dummy/high_school_mathematics/1.0.0/dummy_data.zip (+12.4 KB)
- datasets/hendrycks_test/dummy/high_school_microeconomics/1.0.0/dummy_data.zip (+12.1 KB)
- datasets/hendrycks_test/dummy/high_school_physics/1.0.0/dummy_data.zip (+12.6 KB)
- datasets/hendrycks_test/dummy/high_school_psychology/1.0.0/dummy_data.zip (+12.5 KB)
- datasets/hendrycks_test/dummy/high_school_statistics/1.0.0/dummy_data.zip (+18.1 KB)
- datasets/hendrycks_test/dummy/high_school_us_history/1.0.0/dummy_data.zip (+20.5 KB)
- datasets/hendrycks_test/dummy/high_school_world_history/1.0.0/dummy_data.zip (+19 KB)
- datasets/hendrycks_test/dummy/professional_accounting/1.0.0/dummy_data.zip (+13.3 KB)
- datasets/hendrycks_test/dummy/professional_medicine/1.0.0/dummy_data.zip (+16.1 KB)
- datasets/hendrycks_test/dummy/professional_psychology/1.0.0/dummy_data.zip (+13.1 KB)
New file (+192 lines): the dataset loading script.
```python
# coding=utf-8
# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import csv
import os

import datasets


_CITATION = """\
@article{hendryckstest2021,
    title={Measuring Massive Multitask Language Understanding},
    author={Dan Hendrycks and Collin Burns and Steven Basart and Andy Zou and Mantas Mazeika and Dawn Song and Jacob Steinhardt},
    journal={Proceedings of the International Conference on Learning Representations (ICLR)},
    year={2021}
}
"""

_DESCRIPTION = """\
This is a massive multitask test consisting of multiple-choice questions from various branches of knowledge, covering 57 tasks including elementary mathematics, US history, computer science, law, and more.
"""

_HOMEPAGE = "https://github.com/hendrycks/test"

_URL = "https://people.eecs.berkeley.edu/~hendrycks/data.tar"

_SUBJECTS = [
    "abstract_algebra",
    "anatomy",
    "astronomy",
    "business_ethics",
    "clinical_knowledge",
    "college_biology",
    "college_chemistry",
    "college_computer_science",
    "college_mathematics",
    "college_medicine",
    "college_physics",
    "computer_security",
    "conceptual_physics",
    "econometrics",
    "electrical_engineering",
    "elementary_mathematics",
    "formal_logic",
    "global_facts",
    "high_school_biology",
    "high_school_chemistry",
    "high_school_computer_science",
    "high_school_european_history",
    "high_school_geography",
    "high_school_government_and_politics",
    "high_school_macroeconomics",
    "high_school_mathematics",
    "high_school_microeconomics",
    "high_school_physics",
    "high_school_psychology",
    "high_school_statistics",
    "high_school_us_history",
    "high_school_world_history",
    "human_aging",
    "human_sexuality",
    "international_law",
    "jurisprudence",
    "logical_fallacies",
    "machine_learning",
    "management",
    "marketing",
    "medical_genetics",
    "miscellaneous",
    "moral_disputes",
    "moral_scenarios",
    "nutrition",
    "philosophy",
    "prehistory",
    "professional_accounting",
    "professional_law",
    "professional_medicine",
    "professional_psychology",
    "public_relations",
    "security_studies",
    "sociology",
    "us_foreign_policy",
    "virology",
    "world_religions",
]


class HendrycksTest(datasets.GeneratorBasedBuilder):
    """Massive multitask multiple-choice test consisting of 57 tasks."""

    # One builder configuration per subject, so each subject loads as its own config.
    BUILDER_CONFIGS = [
        datasets.BuilderConfig(
            name=sub, version=datasets.Version("1.0.0"), description=f"Hendrycks Test Subject {sub}"
        )
        for sub in _SUBJECTS
    ]

    def _info(self):
        features = datasets.Features(
            {
                "question": datasets.Value("string"),
                "choices": datasets.features.Sequence(datasets.Value("string")),
                "answer": datasets.features.ClassLabel(num_classes=4, names=["A", "B", "C", "D"]),
            }
        )
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=features,
            # No default (input, target) pair for as_supervised=True.
            supervised_keys=None,
            homepage=_HOMEPAGE,
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        """Returns SplitGenerators."""
        data_dir = dl_manager.download_and_extract(_URL)
        return [
            datasets.SplitGenerator(
                name=datasets.Split("auxiliary_train"),
                # These kwargs will be passed to _generate_examples.
                gen_kwargs={
                    "datadir": os.path.join(data_dir, "data", "auxiliary_train"),
                    "split": "auxiliary_train",
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.TEST,
                gen_kwargs={"datadir": os.path.join(data_dir, "data", "test"), "split": "test"},
            ),
            datasets.SplitGenerator(
                name=datasets.Split.VALIDATION,
                gen_kwargs={"datadir": os.path.join(data_dir, "data", "val"), "split": "val"},
            ),
            datasets.SplitGenerator(
                name=datasets.Split("dev"),
                gen_kwargs={"datadir": os.path.join(data_dir, "data", "dev"), "split": "dev"},
            ),
        ]

    def _generate_examples(self, datadir, split):
        """Yields examples as (key, example) tuples.

        The key exists for legacy (tfds) reasons and is not important in itself.
        Each CSV row holds a question, four answer choices, and the answer letter.
        """
        id_ = 0
        if split == "auxiliary_train":
            # The auxiliary training set is spread across several CSV files.
            for f in sorted(os.listdir(datadir)):
                with open(os.path.join(datadir, f), "r", encoding="utf-8") as csv_file:
                    reader = csv.reader(csv_file, quotechar='"', delimiter=",")
                    for data in reader:
                        yield id_, {"question": data[0], "choices": data[1:5], "answer": data[5]}
                        id_ += 1
        else:
            path = os.path.join(datadir, f"{self.config.name}_{split}.csv")
            with open(path, "r", encoding="utf-8") as csv_file:
                reader = csv.reader(csv_file, quotechar='"', delimiter=",")
                for data in reader:
                    yield id_, {"question": data[0], "choices": data[1:5], "answer": data[5]}
                    id_ += 1
```
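The CSV rows the script consumes have the shape `question, choice_A..choice_D, answer`. A minimal stdlib sketch of that row layout and parsing, using an invented sample row rather than real dataset content:

```python
import csv
import io

# One invented row in the layout _generate_examples expects:
# a question, four choices, then the answer letter.
sample = 'What is 2 + 2?,"1","2","3","4",D\n'

reader = csv.reader(io.StringIO(sample), quotechar='"', delimiter=",")
row = next(reader)
example = {"question": row[0], "choices": row[1:5], "answer": row[5]}
print(example["answer"])  # D
```

The `quotechar='"'` setting matters because questions and choices can themselves contain commas.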