Commit 9e8b671 (1 parent: 360b71c)

first commit

File tree: 1 file changed (+4, −3 lines)

1 file changed

+4
-3
lines changed

README.md

Lines changed: 4 additions & 3 deletions

````diff
@@ -1,6 +1,6 @@
 # ScienceQA: Science Question Answering
 
-![VQA](https://img.shields.io/badge/Task-VQA-orange) ![Science Problems](https://img.shields.io/badge/Task-Science Problems-orange) ![ScienceQA](https://img.shields.io/badge/Dataset-ScienceQA-blue) ![Chain-of-Thought](https://img.shields.io/badge/Model-Chain of Thought-red) ![GPT-3](https://img.shields.io/badge/Model-GPT3-red) ![LLM](https://img.shields.io/badge/Model-LLM-red)
+![VQA](https://img.shields.io/badge/Task-VQA-orange) ![Science Problems](https://img.shields.io/badge/Task-Science_Problems-orange) ![ScienceQA](https://img.shields.io/badge/Dataset-ScienceQA-blue) ![Chain-of-Thought](https://img.shields.io/badge/Model-Chain_of_Thought-red) ![GPT-3](https://img.shields.io/badge/Model-GPT--3-red) ![LLM](https://img.shields.io/badge/Model-LLM-red)
 
 Data and code for NeurIPS 2022 Paper "[Learn to Explain: Multimodal Reasoning via
 Thought Chains for Science Question Answering](http://lupantech.github.io/papers/neurips22_scienceqa.pdf)".
@@ -25,7 +25,7 @@ For more details, you can find our project page [here](https://scienceqa.github.
 
 ## Download the dataset
 
-The text part of the **ScienceQA** dataset is provided in [data/scienceqa/problems.json](https://github.com/lupantech/ScienceQA/data/scienceqa/problems.json). You can download the image data of ScienceQA by running:
+The text part of the **ScienceQA** dataset is provided in [data/scienceqa/problems.json](https://github.com/lupantech/ScienceQA/blob/main/data/scienceqa/problems.json). You can download the image data of ScienceQA by running:
 
 ```sh
 . tools/download.sh
@@ -62,7 +62,7 @@ pip install -r requirements.txt
 
 ### Generate the image captions
 
-We use the image captioning model to generate the text content for images in ScienceQA. The pre-generated image captions are provided in [data/captions.json](https://github.com/lupantech/ScienceQA/data/problems.json).
+We use the image captioning model to generate the text content for images in ScienceQA. The pre-generated image captions are provided in [data/captions.json](https://github.com/lupantech/ScienceQA/blob/main/data/captions.json).
 
 (Optionally) You can generate the image captions with user-specific arguments with the following command, which will save the caption data in `data/captions_user.json`.
 
@@ -138,6 +138,7 @@ This work is licensed under a [MIT License](http://creativecommons.org/licenses/
 The ScienceQA dataset is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-nc-sa/4.0/).
 
 
+
 ## Cite
 
 If the paper, codes, or the dataset inspire you, please cite us:
````
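The badge change in the first hunk follows shields.io's documented escaping rules for static-badge URL segments: a space becomes `_`, a literal dash becomes `--`, and a literal underscore becomes `__` (which is why `GPT-3` is written `GPT--3`). A minimal sketch of that escaping, assuming only those three rules; `escape_badge_text` is an illustrative helper, not part of the repository:

```python
def escape_badge_text(text: str) -> str:
    """Escape text for use in a shields.io static badge URL segment.

    Assumed rules (per shields.io static-badge docs): literal '-' becomes
    '--', literal '_' becomes '__', and ' ' becomes '_'. Dashes and
    underscores are doubled first, so the underscores produced from
    spaces are not themselves escaped.
    """
    return text.replace("-", "--").replace("_", "__").replace(" ", "_")


# The exact replacements made in this commit's badge line:
print(escape_badge_text("Science Problems"))  # Science_Problems
print(escape_badge_text("Chain of Thought"))  # Chain_of_Thought
print(escape_badge_text("GPT-3"))             # GPT--3
```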
