Closed
42 commits
4e18df1
Add parquet scan options and docs (#7801)
lhoestq Oct 9, 2025
cfcdfce
More Parquet streaming docs (#7803)
lhoestq Oct 9, 2025
02ee330
Less api calls when resolving data_files (#7805)
lhoestq Oct 9, 2025
5eec91a
Parquet: add `on_bad_file` argument to error/warn/skip bad files (#7806)
lhoestq Oct 9, 2025
fd8d287
typo (#7807)
lhoestq Oct 9, 2025
7e1350b
release: 4.2.0 (#7808)
lhoestq Oct 9, 2025
f25661f
Set dev version (#7809)
lhoestq Oct 9, 2025
88d53e2
fix conda deps (#7810)
lhoestq Oct 9, 2025
63c933a
Add pyarrow's binary view to features (#7795)
delta003 Oct 10, 2025
aa7f2a9
Fix polars cast column image (#7800)
CloseChoice Oct 13, 2025
3e13d30
Allow streaming hdf5 files (#7814)
lhoestq Oct 13, 2025
12f5aca
Retry open hf file (#7822)
lhoestq Oct 17, 2025
0b2a4c2
Keep hffs cache in workers when streaming (#7820)
lhoestq Oct 17, 2025
74c7154
Fix batch_size default description in to_polars docstrings (#7824)
albertvillanova Oct 20, 2025
fb445ff
docs: document_dataset PDFs & OCR (#7812)
ethanknights Oct 20, 2025
d10e846
Add custom fingerprint support to `from_generator` (#7533)
simonreise Oct 23, 2025
9332649
picklable batch_fn (#7826)
lhoestq Oct 23, 2025
41c0529
release: 4.3.0 (#7827)
lhoestq Oct 23, 2025
159a645
set dev version (#7828)
lhoestq Oct 23, 2025
5138876
Add nifti support (#7815)
CloseChoice Oct 24, 2025
a7600ac
Fix random seed on shuffle and interleave_datasets (#7823)
CloseChoice Oct 24, 2025
6d985d9
fix ci compressionfs (#7830)
lhoestq Oct 24, 2025
f7c8e46
fix: better args passthrough for `_batch_setitems()` (#7817)
sghng Oct 27, 2025
627ed2e
Fix: Properly render [!TIP] block in stream.shuffle documentation (#7…
art-test-stack Oct 28, 2025
9e5b0e6
resolves the ValueError: Unable to avoid copy while creating an array…
ArjunJagdale Oct 28, 2025
8b1bd4e
Python 3.14 (#7836)
lhoestq Oct 31, 2025
0e7c6ca
Add num channels to audio (#7840)
CloseChoice Nov 3, 2025
03c16ec
fix column with transform (#7843)
lhoestq Nov 3, 2025
fc7f97c
support fsspec 2025.10.0 (#7844)
lhoestq Nov 3, 2025
232cb10
Release: 4.4.0 (#7845)
lhoestq Nov 4, 2025
5cb2925
set dev version (#7846)
lhoestq Nov 4, 2025
f2f58b3
Better streaming retries (504 and 429) (#7847)
lhoestq Nov 4, 2025
d32a1f7
DOC: remove mode parameter in docstring of pdf and video feature (#7848)
CloseChoice Nov 5, 2025
6a6983a
release: 4.4.1 (#7849)
lhoestq Nov 5, 2025
91f96a0
dev version (#7850)
lhoestq Nov 5, 2025
3356d74
Fix embed storage nifti (#7853)
CloseChoice Nov 6, 2025
cf647ab
ArXiv -> HF Papers (#7855)
qgallouedec Nov 10, 2025
17f40a3
fix some broken links (#7859)
julien-c Nov 10, 2025
c97e757
Nifti visualization support (#7874)
CloseChoice Nov 21, 2025
004a5bf
Replace papaya with niivue (#7878)
CloseChoice Nov 27, 2025
872490c
fix(nifti): enable lazy loading for Nifti1ImageWrapper
The-Obstacle-Is-The-Way Nov 29, 2025
c706e73
chore: trigger CI
The-Obstacle-Is-The-Way Nov 29, 2025
6 changes: 4 additions & 2 deletions .github/conda/meta.yaml
@@ -20,11 +20,12 @@ requirements:
- dill
- pandas
- requests >=2.19.0
- httpx <1.0.0
- tqdm >=4.66.3
- dataclasses
- multiprocess
- fsspec
- huggingface_hub >=0.24.0,<1.0.0
- huggingface_hub >=0.25.0,<2.0.0
- packaging
run:
- python
@@ -35,11 +36,12 @@ requirements:
- dill
- pandas
- requests >=2.19.0
- httpx <1.0.0
- tqdm >=4.66.3
- dataclasses
- multiprocess
- fsspec
- huggingface_hub >=0.24.0,<1.0.0
- huggingface_hub >=0.25.0,<2.0.0
- packaging

test:
16 changes: 8 additions & 8 deletions .github/workflows/ci.yml
@@ -82,7 +82,7 @@ jobs:
run: |
python -m pytest -rfExX -m ${{ matrix.test }} -n 2 --dist loadfile -sv ./tests/

test_py312:
test_py314:
needs: check_code_quality
strategy:
matrix:
@@ -100,18 +100,18 @@ jobs:
run: |
sudo apt update
sudo apt install -y ffmpeg
- name: Set up Python 3.12
- name: Set up Python 3.14
uses: actions/setup-python@v5
with:
python-version: "3.12"
python-version: "3.14"
- name: Setup conda env (windows)
if: ${{ matrix.os == 'windows-latest' }}
uses: conda-incubator/setup-miniconda@v2
with:
auto-update-conda: true
miniconda-version: "latest"
activate-environment: test
python-version: "3.12"
python-version: "3.14"
- name: Setup FFmpeg (windows)
if: ${{ matrix.os == 'windows-latest' }}
run: conda install "ffmpeg=7.0.1" -c conda-forge
@@ -127,7 +127,7 @@ jobs:
run: |
python -m pytest -rfExX -m ${{ matrix.test }} -n 2 --dist loadfile -sv ./tests/

test_py312_future:
test_py314_future:
needs: check_code_quality
strategy:
matrix:
@@ -145,18 +145,18 @@ jobs:
run: |
sudo apt update
sudo apt install -y ffmpeg
- name: Set up Python 3.12
- name: Set up Python 3.14
uses: actions/setup-python@v5
with:
python-version: "3.12"
python-version: "3.14"
- name: Setup conda env (windows)
if: ${{ matrix.os == 'windows-latest' }}
uses: conda-incubator/setup-miniconda@v2
with:
auto-update-conda: true
miniconda-version: "latest"
activate-environment: test
python-version: "3.12"
python-version: "3.14"
- name: Setup FFmpeg (windows)
if: ${{ matrix.os == 'windows-latest' }}
run: conda install "ffmpeg=7.0.1" -c conda-forge
2 changes: 1 addition & 1 deletion CONTRIBUTING.md
@@ -120,7 +120,7 @@ If you are a **dataset author**... you know what to do, it is your dataset after

If you are a **user of a dataset**, the main source of information should be the dataset paper if it is available: we recommend pulling information from there into the relevant paragraphs of the template. We also eagerly welcome discussions on the [Considerations for Using the Data](https://github.com/huggingface/datasets/blob/main/templates/README_guide.md#considerations-for-using-the-data) based on existing scholarship or personal experience that would benefit the whole community.

Finally, if you want more information on the how and why of dataset cards, we strongly recommend reading the foundational works [Datasheets for Datasets](https://arxiv.org/abs/1803.09010) and [Data Statements for NLP](https://www.aclweb.org/anthology/Q18-1041/).
Finally, if you want more information on the how and why of dataset cards, we strongly recommend reading the foundational works [Datasheets for Datasets](https://huggingface.co/papers/1803.09010) and [Data Statements for NLP](https://www.aclweb.org/anthology/Q18-1041/).

Thank you for your contribution!

2 changes: 1 addition & 1 deletion README.md
@@ -136,7 +136,7 @@ If you're a dataset owner and wish to update any part of it (description, citati

## BibTeX

If you want to cite our 🤗 Datasets library, you can use our [paper](https://arxiv.org/abs/2109.02846):
If you want to cite our 🤗 Datasets library, you can use our [paper](https://huggingface.co/papers/2109.02846):

```bibtex
@inproceedings{lhoest-etal-2021-datasets,
2 changes: 2 additions & 0 deletions docs/source/_toctree.yml
@@ -88,6 +88,8 @@
title: Load document data
- local: document_dataset
title: Create a document dataset
- local: nifti_dataset
title: Create a medical imaging dataset
title: "Vision"
- sections:
- local: nlp_load
4 changes: 2 additions & 2 deletions docs/source/dataset_card.mdx
@@ -1,7 +1,7 @@
# Create a dataset card

Each dataset should have a dataset card to promote responsible usage and inform users of any potential biases within the dataset.
This idea was inspired by the Model Cards proposed by [Mitchell, 2018](https://arxiv.org/abs/1810.03993).
This idea was inspired by the Model Cards proposed by [Mitchell, 2018](https://huggingface.co/papers/1810.03993).
Dataset cards help users understand a dataset's contents, the context for using the dataset, how it was created, and any other considerations a user should be aware of.

Creating a dataset card is easy and can be done in just a few steps:
@@ -24,4 +24,4 @@ Creating a dataset card is easy and can be done in just a few steps:

YAML also allows you to customize the way your dataset is loaded by [defining splits and/or configurations](./repository_structure#define-your-splits-and-subsets-in-yaml) without the need to write any code.

Feel free to take a look at the [SNLI](https://huggingface.co/datasets/snli), [CNN/DailyMail](https://huggingface.co/datasets/cnn_dailymail), and [Allociné](https://huggingface.co/datasets/allocine) dataset cards as examples to help you get started.
Feel free to take a look at the [SNLI](https://huggingface.co/datasets/stanfordnlp/snli), [CNN/DailyMail](https://huggingface.co/datasets/abisee/cnn_dailymail), and [Allociné](https://huggingface.co/datasets/tblard/allocine) dataset cards as examples to help you get started.
20 changes: 10 additions & 10 deletions docs/source/document_dataset.mdx
@@ -1,13 +1,13 @@
# Create a document dataset

This guide will show you how to create a document dataset with `PdfFolder` and some metadata. This is a no-code solution for quickly creating a document dataset with several thousand pdfs.
This guide will show you how to create a document dataset with `PdfFolder` and some metadata. This is a no-code solution for quickly creating a document dataset with several thousand PDFs.

> [!TIP]
> You can control access to your dataset by requiring users to share their contact information first. Check out the [Gated datasets](https://huggingface.co/docs/hub/datasets-gated) guide for more information about how to enable this feature on the Hub.

## PdfFolder

The `PdfFolder` is a dataset builder designed to quickly load a document dataset with several thousand pdfs without requiring you to write any code.
The `PdfFolder` is a dataset builder designed to quickly load a document dataset with several thousand PDFs without requiring you to write any code.

> [!TIP]
> 💡 Take a look at the [Split pattern hierarchy](repository_structure#split-pattern-hierarchy) to learn more about how `PdfFolder` creates dataset splits based on your dataset repository structure.
@@ -72,32 +72,32 @@ file_name,additional_feature
or using `metadata.jsonl`:

```jsonl
{"file_name": "0001.pdf", "additional_feature": "This is a first value of a text feature you added to your pdfs"}
{"file_name": "0002.pdf", "additional_feature": "This is a second value of a text feature you added to your pdfs"}
{"file_name": "0003.pdf", "additional_feature": "This is a third value of a text feature you added to your pdfs"}
{"file_name": "0001.pdf", "additional_feature": "This is a first value of a text feature you added to your PDFs"}
{"file_name": "0002.pdf", "additional_feature": "This is a second value of a text feature you added to your PDFs"}
{"file_name": "0003.pdf", "additional_feature": "This is a third value of a text feature you added to your PDFs"}
```

Here the `file_name` must be the name of the PDF file next to the metadata file. More generally, it must be the relative path from the directory containing the metadata to the PDF file.

It's possible to point to more than one pdf in each row in your dataset, for example if both your input and output are pdfs:
It's possible to point to more than one PDF in each row in your dataset, for example if both your input and output are PDFs:

```jsonl
{"input_file_name": "0001.pdf", "output_file_name": "0001_output.pdf"}
{"input_file_name": "0002.pdf", "output_file_name": "0002_output.pdf"}
{"input_file_name": "0003.pdf", "output_file_name": "0003_output.pdf"}
```

You can also define lists of pdfs. In that case you need to name the field `file_names` or `*_file_names`. Here is an example:
You can also define lists of PDFs. In that case you need to name the field `file_names` or `*_file_names`. Here is an example:

```jsonl
{"pdfs_file_names": ["0001_part1.pdf", "0001_part2.pdf"], "label": "urgent"}
{"pdfs_file_names": ["0002_part1.pdf", "0002_part2.pdf"], "label": "urgent"}
{"pdfs_file_names": ["0003_part1.pdf", "0002_part2.pdf"], "label": "normal"}
```

### OCR (Optical character recognition)
### OCR (Optical Character Recognition)

OCR datasets have the text contained in a pdf. An example `metadata.csv` may look like:
OCR datasets have the text contained in a PDF. An example `metadata.csv` may look like:

```csv
file_name,text
@@ -106,7 +106,7 @@ file_name,text
0003.pdf,Attention is all you need. Abstract. The ...
```

Load the dataset with `PdfFolder`, and it will create a `text` column for the pdf captions:
Load the dataset with `PdfFolder`, and it will create a `text` column for the PDF captions:

```py
>>> dataset = load_dataset("pdffolder", data_dir="/path/to/folder", split="train")
4 changes: 2 additions & 2 deletions docs/source/faiss_es.mdx
@@ -22,7 +22,7 @@ FAISS retrieves documents based on the similarity of their vector representation

```py
>>> from datasets import load_dataset
>>> ds = load_dataset('crime_and_punish', split='train[:100]')
>>> ds = load_dataset('community-datasets/crime_and_punish', split='train[:100]')
>>> ds_with_embeddings = ds.map(lambda example: {'embeddings': ctx_encoder(**ctx_tokenizer(example["line"], return_tensors="pt"))[0][0].numpy()})
```

@@ -62,7 +62,7 @@ FAISS retrieves documents based on the similarity of their vector representation
7. Reload it at a later time with [`Dataset.load_faiss_index`]:

```py
>>> ds = load_dataset('crime_and_punish', split='train[:100]')
>>> ds = load_dataset('community-datasets/crime_and_punish', split='train[:100]')
>>> ds.load_faiss_index('embeddings', 'my_index.faiss')
```

4 changes: 2 additions & 2 deletions docs/source/image_load.mdx
@@ -10,7 +10,7 @@ When you load an image dataset and call the image column, the images are decoded
```py
>>> from datasets import load_dataset, Image

>>> dataset = load_dataset("beans", split="train")
>>> dataset = load_dataset("AI-Lab-Makerere/beans", split="train")
>>> dataset[0]["image"]
```

@@ -33,7 +33,7 @@ You can load a dataset from the image path. Use the [`~Dataset.cast_column`] fun
If you only want to load the underlying path to the image dataset without decoding the image object, set `decode=False` in the [`Image`] feature:

```py
>>> dataset = load_dataset("beans", split="train").cast_column("image", Image(decode=False))
>>> dataset = load_dataset("AI-Lab-Makerere/beans", split="train").cast_column("image", Image(decode=False))
>>> dataset[0]["image"]
{'bytes': None,
'path': '/root/.cache/huggingface/datasets/downloads/extracted/b0a21163f78769a2cf11f58dfc767fb458fc7cea5c05dccc0144a2c0f0bc1292/train/bean_rust/bean_rust_train.29.jpg'}
2 changes: 1 addition & 1 deletion docs/source/loading.mdx
@@ -327,7 +327,7 @@ Select specific rows of the `train` split:
```py
>>> train_10_20_ds = datasets.load_dataset("ajibawa-2023/General-Stories-Collection", split="train[10:20]")
===STRINGAPI-READINSTRUCTION-SPLIT===
>>> train_10_20_ds = datasets.load_dataset("bookcorpu", split=datasets.ReadInstruction("train", from_=10, to=20, unit="abs"))
>>> train_10_20_ds = datasets.load_dataset("rojagtap/bookcorpus", split=datasets.ReadInstruction("train", from_=10, to=20, unit="abs"))
```

Or select a percentage of a split with:
130 changes: 130 additions & 0 deletions docs/source/nifti_dataset.mdx
@@ -0,0 +1,130 @@
# Create a NIfTI dataset

This page shows how to create and share a dataset of medical images in NIfTI format (.nii / .nii.gz) using the `datasets` library.

You can share a dataset with your team or with anyone in the community by creating a dataset repository on the Hugging Face Hub:

```py
from datasets import load_dataset

dataset = load_dataset("<username>/my_nifti_dataset")
```

There are two common ways to create a NIfTI dataset:

- Create a dataset from local NIfTI files in Python and upload it with `Dataset.push_to_hub`.
- Use a folder-based convention (one file per example) and a small helper to convert it into a `Dataset`.

> [!TIP]
> You can control access to your dataset by requiring users to share their contact information first. Check out the [Gated datasets](https://huggingface.co/docs/hub/datasets-gated) guide for more information.

## Local files

If you already have a list of file paths to NIfTI files, the easiest workflow is to create a `Dataset` from that list and cast the column to the `Nifti` feature.

```py
from datasets import Dataset
from datasets import Nifti

# simple example: create a dataset from file paths
files = ["/path/to/scan_001.nii.gz", "/path/to/scan_002.nii.gz"]
ds = Dataset.from_dict({"nifti": files}).cast_column("nifti", Nifti())

# access a decoded nibabel image (if decode=True)
# ds[0]["nifti"] will be a nibabel.Nifti1Image object when decode=True
# or a dict {'bytes': None, 'path': '...'} when decode=False
```

The `Nifti` feature supports a `decode` parameter. When `decode=True` (the default), it loads the NIfTI file into a `nibabel.nifti1.Nifti1Image` object. You can access the image data as a numpy array with `img.get_fdata()`. When `decode=False`, it returns a dict with the file path and bytes.

```py
from datasets import Dataset, Nifti

ds = Dataset.from_dict({"nifti": ["/path/to/scan.nii.gz"]}).cast_column("nifti", Nifti(decode=True))
img = ds[0]["nifti"] # instance of: nibabel.nifti1.Nifti1Image
arr = img.get_fdata()
```
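
If you only need the file references, for example to hand paths to another tool, you can cast the same column with `decode=False` instead. A minimal sketch, reusing the hypothetical path from above:

```py
from datasets import Dataset, Nifti

# decode=False keeps a reference to the file instead of loading it with nibabel
ds = Dataset.from_dict({"nifti": ["/path/to/scan.nii.gz"]}).cast_column("nifti", Nifti(decode=False))
ds[0]["nifti"]  # e.g. {'bytes': None, 'path': '/path/to/scan.nii.gz'}
```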

After preparing the dataset you can push it to the Hub:

```py
ds.push_to_hub("<username>/my_nifti_dataset")
```

This will create a dataset repository containing your NIfTI dataset with a `data/` folder of parquet shards.

## Folder conventions and metadata

If you organize your dataset in folders you can create splits automatically (train/test/validation) by following a structure like:

```
dataset/train/scan_0001.nii
dataset/train/scan_0002.nii
dataset/validation/scan_1001.nii
dataset/test/scan_2001.nii
```

If you have labels or other metadata, provide a `metadata.csv`, `metadata.jsonl`, or `metadata.parquet` in the folder so files can be linked to metadata rows. The metadata must contain a `file_name` (or `*_file_name`) field with the relative path to the NIfTI file next to the metadata file.

Example `metadata.csv`:

```csv
file_name,patient_id,age,diagnosis
scan_0001.nii.gz,P001,45,healthy
scan_0002.nii.gz,P002,59,disease_x
```
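
If you build the `Dataset` yourself rather than rely on a loader, a small helper can tie this layout and metadata together, as mentioned at the top of this page. The sketch below assumes the folder structure and `metadata.csv` shown above; the helper name `load_nifti_split` is hypothetical and not part of the `datasets` API:

```py
import csv
from pathlib import Path

from datasets import Dataset, Nifti

def load_nifti_split(split_dir):
    """Build a Dataset from one split folder containing NIfTI files and a metadata.csv (hypothetical helper)."""
    split_path = Path(split_dir)
    with open(split_path / "metadata.csv", newline="") as f:
        rows = list(csv.DictReader(f))
    columns = {
        # resolve file_name relative to the folder containing metadata.csv
        "nifti": [str(split_path / row["file_name"]) for row in rows],
        "patient_id": [row["patient_id"] for row in rows],
        "age": [int(row["age"]) for row in rows],
        "diagnosis": [row["diagnosis"] for row in rows],
    }
    return Dataset.from_dict(columns).cast_column("nifti", Nifti())

train_ds = load_nifti_split("dataset/train")
```

You can then combine the splits into a `DatasetDict` and push them with `push_to_hub` as shown earlier.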

The `Nifti` feature works with zipped datasets too — each zip can contain NIfTI files and a metadata file. This is useful when uploading large datasets as archives.
This means your dataset structure could look like this (mixed compressed and uncompressed files):
```
dataset/train/scan_0001.nii.gz
dataset/train/scan_0002.nii
dataset/validation/scan_1001.nii.gz
dataset/test/scan_2001.nii
```

## Converting to PyTorch tensors

Use the [`~Dataset.set_transform`] function to apply the transformation on-the-fly to batches of the dataset:

```py
import torch

def transform_to_pytorch(example):
    # convert each decoded nibabel image in the batch to a torch tensor
    example["nifti_torch"] = [torch.tensor(ex.get_fdata()) for ex in example["nifti"]]
    return example

ds.set_transform(transform_to_pytorch)

```
Accessing elements now (e.g. `ds[0]`) will yield torch tensors in the `"nifti_torch"` key.
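
As a quick sanity check, the sketch below inspects the transformed output (the printed shape is illustrative and depends on your scans):

```py
sample = ds[0]
print(type(sample["nifti_torch"]))  # <class 'torch.Tensor'>
print(sample["nifti_torch"].shape)  # e.g. torch.Size([64, 64, 32])
```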


## Usage of Nifti1Image

NIfTI is a format for storing the results of 3- (or even 4-)dimensional brain scans. It covers three spatial dimensions (x, y, z) and optionally a time dimension (t). Furthermore, the positions given here are only relative to the scanner; therefore dimensions 4, 5, and 6 are used to lift them to real-world coordinates.

You can visualize NIfTI files, for instance with `matplotlib`, as follows:
```python
import matplotlib.pyplot as plt
from datasets import load_dataset

def show_slices(slices):
    """Display a row of image slices."""
    fig, axes = plt.subplots(1, len(slices))
    for i, slc in enumerate(slices):
        axes[i].imshow(slc.T, cmap="gray", origin="lower")

# load a single split so that iterating yields examples (dicts), not split names
nifti_ds = load_dataset("<username>/my_nifti_dataset", split="train")
for example in nifti_ds:
    nifti_img = example["nifti"].get_fdata()
    show_slices([nifti_img[:, :, 16], nifti_img[26, :, :], nifti_img[:, 30, :]])
    plt.show()
```

For further reading, we refer to the [nibabel documentation](https://nipy.org/nibabel/index.html) and especially [this nibabel tutorial](https://nipy.org/nibabel/coordinate_systems.html).
---
4 changes: 2 additions & 2 deletions docs/source/object_detection.mdx
@@ -8,14 +8,14 @@ To run these examples, make sure you have up-to-date versions of [albumentations
pip install -U albumentations opencv-python
```

In this example, you'll use the [`cppe-5`](https://huggingface.co/datasets/cppe-5) dataset for identifying medical personal protective equipment (PPE) in the context of the COVID-19 pandemic.
In this example, you'll use the [`cppe-5`](https://huggingface.co/datasets/rishitdagli/cppe-5) dataset for identifying medical personal protective equipment (PPE) in the context of the COVID-19 pandemic.

Load the dataset and take a look at an example:

```py
>>> from datasets import load_dataset

>>> ds = load_dataset("cppe-5")
>>> ds = load_dataset("rishitdagli/cppe-5")
>>> example = ds['train'][0]
>>> example
{'height': 663,