| url stringlengths 58-61 | repository_url stringclasses 1 value | labels_url stringlengths 72-75 | comments_url stringlengths 67-70 | events_url stringlengths 65-68 | html_url stringlengths 46-51 | id int64 599M-3.71B | node_id stringlengths 18-32 | number int64 1-7.9k | title stringlengths 1-290 | user dict | labels listlengths 0-4 | state stringclasses 2 values | locked bool 1 class | assignee dict | assignees listlengths 0-4 | milestone dict | comments listlengths 0-30 | created_at timestamp[ns, tz=UTC] 2020-04-14 10:18:02 to 2025-12-09 16:41:47 | updated_at timestamp[ns, tz=UTC] 2020-04-27 16:04:17 to 2025-12-09 18:18:36 | closed_at timestamp[ns, tz=UTC] 2020-04-14 12:01:40 to 2025-12-09 14:45:13 ⌀ | author_association stringclasses 4 values | type float64 | active_lock_reason float64 | sub_issues_summary dict | issue_dependencies_summary dict | body stringlengths 0-228k ⌀ | closed_by dict | reactions dict | timeline_url stringlengths 67-70 | performed_via_github_app float64 | state_reason stringclasses 4 values | draft float64 0-1 ⌀ | pull_request dict | is_pull_request bool 2 classes |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/7900
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7900/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7900/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7900/events
|
https://github.com/huggingface/datasets/issues/7900
| 3,711,751,590
|
I_kwDODunzps7dPNWm
| 7,900
|
`Permission denied` when sharing cache between users
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/19497738?v=4",
"events_url": "https://api.github.com/users/qthequartermasterman/events{/privacy}",
"followers_url": "https://api.github.com/users/qthequartermasterman/followers",
"following_url": "https://api.github.com/users/qthequartermasterman/following{/other_user}",
"gists_url": "https://api.github.com/users/qthequartermasterman/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/qthequartermasterman",
"id": 19497738,
"login": "qthequartermasterman",
"node_id": "MDQ6VXNlcjE5NDk3NzM4",
"organizations_url": "https://api.github.com/users/qthequartermasterman/orgs",
"received_events_url": "https://api.github.com/users/qthequartermasterman/received_events",
"repos_url": "https://api.github.com/users/qthequartermasterman/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/qthequartermasterman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qthequartermasterman/subscriptions",
"type": "User",
"url": "https://api.github.com/users/qthequartermasterman",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-12-09T16:41:47
| 2025-12-09T16:42:19
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
|
### Describe the bug
We want to use `datasets` and `transformers` on a shared machine. Right now, each user has a separate HF_HOME in their home directory. To reduce duplicates of the datasets, we want to share that cache. While experimenting, we are running into `Permission denied` errors.
It looks like this was supported in the past (see #6589)?
Is there a correct way to share caches across users?
### Steps to reproduce the bug
1. Create a directory `/models/hf_hub_shared_experiment` with read/write permissions for two different users
2. For each user run the script below
```python
import os
os.environ["HF_HOME"] = "/models/hf_hub_shared_experiment"
os.environ["HF_DATASETS_CACHE"] = "/models/hf_hub_shared_experiment/data"
import datasets
import transformers
DATASET = "tatsu-lab/alpaca"
MODEL = "meta-llama/Llama-3.2-1B-Instruct"
model = transformers.AutoModelForCausalLM.from_pretrained(MODEL)
tokenizer = transformers.AutoTokenizer.from_pretrained(MODEL)
dataset = datasets.load_dataset(DATASET)
```
The first user is able to download and use the model and dataset. The second user gets these errors:
```
$ python ./experiment_with_shared.py
Could not cache non-existence of file. Will ignore error and continue. Error: [Errno 13] Permission denied: '/models/hf_hub_shared_experiment/hub/models--meta-llama--Llama-3.2-1B-Instruct/.no_exist/9213176726f574b556790deb65791e0c5aa438b6/custom_generate/generate.py'
Could not cache non-existence of file. Will ignore error and continue. Error: [Errno 13] Permission denied: '/models/hf_hub_shared_experiment/hub/datasets--tatsu-lab--alpaca/.no_exist/dce01c9b08f87459cf36a430d809084718273017/alpaca.py'
Could not cache non-existence of file. Will ignore error and continue. Error: [Errno 13] Permission denied: '/models/hf_hub_shared_experiment/hub/datasets--tatsu-lab--alpaca/.no_exist/dce01c9b08f87459cf36a430d809084718273017/.huggingface.yaml'
Could not cache non-existence of file. Will ignore error and continue. Error: [Errno 13] Permission denied: '/models/hf_hub_shared_experiment/hub/datasets--tatsu-lab--alpaca/.no_exist/dce01c9b08f87459cf36a430d809084718273017/dataset_infos.json'
Traceback (most recent call last):
File "/home/user2/.venv/experiment_with_shared.py", line 17, in <module>
dataset = datasets.load_dataset(DATASET)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user2/.venv/lib/python3.12/site-packages/datasets/load.py", line 1397, in load_dataset
builder_instance = load_dataset_builder(
^^^^^^^^^^^^^^^^^^^^^
File "/home/user2/.venv/lib/python3.12/site-packages/datasets/load.py", line 1171, in load_dataset_builder
builder_instance: DatasetBuilder = builder_cls(
^^^^^^^^^^^^
File "/home/user2/.venv/lib/python3.12/site-packages/datasets/builder.py", line 390, in __init__
with FileLock(lock_path):
File "/home/user2/.venv/lib/python3.12/site-packages/filelock/_api.py", line 377, in __enter__
self.acquire()
File "/home/user2/.venv/lib/python3.12/site-packages/filelock/_api.py", line 333, in acquire
self._acquire()
File "/home/user2/.venv/lib/python3.12/site-packages/filelock/_unix.py", line 45, in _acquire
fd = os.open(self.lock_file, open_flags, self._context.mode)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
PermissionError: [Errno 13] Permission denied: '/models/hf_hub_shared_experiment/data/_models_hf_hub_shared_experiment_data_tatsu-lab___alpaca_default_0.0.0_dce01c9b08f87459cf36a430d809084718273017.lock'
```
### Expected behavior
The second user should be able to read the shared cache files.
### Environment info
$ datasets-cli env
- `datasets` version: 4.4.1
- Platform: Linux-6.8.0-88-generic-x86_64-with-glibc2.39
- Python version: 3.12.3
- `huggingface_hub` version: 0.36.0
- PyArrow version: 22.0.0
- Pandas version: 2.3.3
- `fsspec` version: 2025.10.0
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7900/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7900/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7899
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7899/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7899/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7899/events
|
https://github.com/huggingface/datasets/pull/7899
| 3,707,063,236
|
PR_kwDODunzps63t1LS
| 7,899
|
Add inspect_ai eval logs support
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://huggingface.co/proxy/moon-ci-docs.huggingface.co/docs/datasets/pr_7899). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Very cool! \r\n\r\nAny reason not to directly use inspect for loading/converting from the binary format to JSON? https://inspect.aisi.org.uk/reference/inspect_ai.log.html#convert_eval_logs ",
"The format is simple enough to not have to rely on an additional dependency :)"
] | 2025-12-08T16:14:40
| 2025-12-09T14:45:15
| 2025-12-09T14:45:13
|
MEMBER
| null | null | null | null |
Support for .eval log files from inspect_ai
They are actually ZIP files according to the source code at https://github.com/UKGovernmentBEIS/inspect_ai/blob/main/src/inspect_ai/log/_log.py
Unfortunately their format can't be converted to Parquet, so I had to JSON-encode all the nested values
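For reference, a minimal sketch of that general idea (reading the ZIP members and JSON-encoding nested values); the member layout, the JSON-only filter, and the helper name are assumptions for illustration, not the actual inspect_ai format or this PR's code:
```python
import json
import zipfile

def read_eval_log(path: str) -> list[dict]:
    """Read an .eval log (a ZIP archive) and JSON-encode nested values per entry."""
    rows = []
    with zipfile.ZipFile(path) as zf:
        for name in zf.namelist():
            if not name.endswith(".json"):
                continue  # only look at JSON members (illustrative filter)
            record = json.loads(zf.read(name))
            # Keep scalars as-is; JSON-encode nested dicts/lists so rows stay flat
            # and Parquet-friendly.
            rows.append({
                k: v if isinstance(v, (str, int, float, bool, type(None))) else json.dumps(v)
                for k, v in record.items()
            })
    return rows
```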
```python
ds = load_dataset("dvilasuero/kimi-bfcl")
```
this will enable the Viewer for datasets like https://huggingface.co/datasets/dvilasuero/kimi-bfcl
original tweet for context: https://x.com/dvilasuero/status/1996936988176343220?s=20
cc @dvsrepo @julien-c @davanstrien
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 1,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7899/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7899/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7899.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7899",
"merged_at": "2025-12-09T14:45:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7899.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7899"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7898
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7898/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7898/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7898/events
|
https://github.com/huggingface/datasets/pull/7898
| 3,698,376,429
|
PR_kwDODunzps63Q9BO
| 7,898
|
docs: making PyPi to PyPI ensuring no spelling errors
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/152784163?v=4",
"events_url": "https://api.github.com/users/kapoor1309/events{/privacy}",
"followers_url": "https://api.github.com/users/kapoor1309/followers",
"following_url": "https://api.github.com/users/kapoor1309/following{/other_user}",
"gists_url": "https://api.github.com/users/kapoor1309/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kapoor1309",
"id": 152784163,
"login": "kapoor1309",
"node_id": "U_kgDOCRtNIw",
"organizations_url": "https://api.github.com/users/kapoor1309/orgs",
"received_events_url": "https://api.github.com/users/kapoor1309/received_events",
"repos_url": "https://api.github.com/users/kapoor1309/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kapoor1309/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kapoor1309/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kapoor1309",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-12-05T10:20:48
| 2025-12-05T10:20:48
| null |
NONE
| null | null | null | null |
This PR fixes a spelling error in the README, where the Python package index PyPI was mistakenly written as PyPi.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7898/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7898/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7898.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7898",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7898.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7898"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7897
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7897/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7897/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7897/events
|
https://github.com/huggingface/datasets/pull/7897
| 3,691,300,022
|
PR_kwDODunzps624-k2
| 7,897
|
Save input shard lengths
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://huggingface.co/proxy/moon-ci-docs.huggingface.co/docs/datasets/pr_7897). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-12-03T17:56:55
| 2025-12-05T16:21:06
| 2025-12-05T16:21:03
|
MEMBER
| null | null | null | null |
will be useful for the Viewer, to know what (original) shard each row belongs to
cc @cfahlgren1
next step is to use it in the Dataset Viewer and expose an API that returns the file containing the row at rowId
(took the opportunity to remove unused code)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7897/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7897/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7897.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7897",
"merged_at": "2025-12-05T16:21:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7897.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7897"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7896
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7896/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7896/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7896/events
|
https://github.com/huggingface/datasets/pull/7896
| 3,688,480,675
|
PR_kwDODunzps62vZtn
| 7,896
|
fix: force contiguous copy for sliced list arrays in embed_array_storage
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/175985783?v=4",
"events_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/events{/privacy}",
"followers_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/followers",
"following_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/following{/other_user}",
"gists_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/The-Obstacle-Is-The-Way",
"id": 175985783,
"login": "The-Obstacle-Is-The-Way",
"node_id": "U_kgDOCn1Udw",
"organizations_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/orgs",
"received_events_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/received_events",
"repos_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/subscriptions",
"type": "User",
"url": "https://api.github.com/users/The-Obstacle-Is-The-Way",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://huggingface.co/proxy/moon-ci-docs.huggingface.co/docs/datasets/pr_7896). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-12-03T04:34:26
| 2025-12-09T15:37:49
| null |
NONE
| null | null | null | null |
## Summary
Fixes SIGKILL crash in `embed_array_storage` when processing sliced/sharded datasets with nested types like `Sequence(Nifti())` or `Sequence(Image())`.
**Root cause**: When `ds.shard()` or `ds.select()` creates a sliced view, `array.values` on a sliced `ListArray` returns values with internal offset references. For nested types, PyArrow's C++ layer can crash (SIGKILL, exit code 137) when materializing these sliced nested structs.
**Fix**: Force a contiguous copy via `pa.concat_arrays([array])` when the array has a non-zero offset before processing list/large_list arrays.
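As an aside, a minimal PyArrow sketch of the offset behaviour this fix relies on (illustrative only, not the patched `embed_array_storage` code):
```python
import pyarrow as pa

# A sliced ListArray is a zero-copy view that keeps an offset into its parent buffers.
arr = pa.array([[1, 2], [3], [4, 5, 6]], type=pa.list_(pa.int64()))
sliced = arr.slice(1)
print(sliced.offset)  # 1

# Concatenating the single sliced array materializes a contiguous copy
# without references back to the parent array.
contiguous = pa.concat_arrays([sliced]) if sliced.offset > 0 else sliced
print(contiguous.offset)  # 0
```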
## Changes
- Add offset check in `embed_array_storage` for list/large_list arrays
- Force contiguous copy when `array.offset > 0` to break internal references
- Add regression tests for sliced arrays with Image, Nifti, and LargeList types
## Test plan
- [x] Added `tests/features/test_embed_storage_sliced.py` with 3 tests:
- `test_embed_array_storage_sliced_list_image`
- `test_embed_array_storage_sliced_list_nifti`
- `test_embed_array_storage_sliced_large_list`
- [x] All tests verify `embedded.offset == 0` (contiguous result)
- [x] All tests pass locally
- [x] ruff check passes
## Context
This was discovered while uploading a 270GB neuroimaging dataset (ARC) with `Sequence(Nifti())` columns. The process crashed with SIGKILL (no Python traceback) when `embed_table_storage` was called on sharded data.
Workaround that confirmed the fix: pandas round-trip (`shard.to_pandas()` → `Dataset.from_pandas()`) which forces a contiguous copy.
Fixes #7894
Related: #6686, #7852, #6790
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7896/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7896/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7896.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7896",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7896.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7896"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7895
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7895/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7895/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7895/events
|
https://github.com/huggingface/datasets/pull/7895
| 3,688,479,825
|
PR_kwDODunzps62vZik
| 7,895
|
fix: use temp files in push_to_hub to prevent OOM on large datasets
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/175985783?v=4",
"events_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/events{/privacy}",
"followers_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/followers",
"following_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/following{/other_user}",
"gists_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/The-Obstacle-Is-The-Way",
"id": 175985783,
"login": "The-Obstacle-Is-The-Way",
"node_id": "U_kgDOCn1Udw",
"organizations_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/orgs",
"received_events_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/received_events",
"repos_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/subscriptions",
"type": "User",
"url": "https://api.github.com/users/The-Obstacle-Is-The-Way",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"Closing this PR.\n\nAfter further investigation (prompted by @lhoestq's feedback on #7893), I discovered that `free_memory=True` does work correctly - memory doesn't actually accumulate in the `additions` list as I originally thought.\n\nI tested standard `push_to_hub()` with 902 shards and it processed 625 shards (69%) without OOM issues. The crash we were experiencing was actually **#7894** (`embed_table_storage` crash on `Sequence()` types), not memory accumulation.\n\nI've closed #7893 as invalid. This PR is no longer needed.\n\nApologies for the noise - lesson learned about testing before filing."
] | 2025-12-03T04:33:55
| 2025-12-06T13:58:44
| 2025-12-05T22:47:50
|
NONE
| null | null | null | null |
## Summary
Fixes memory accumulation in `_push_parquet_shards_to_hub_single` that causes OOM when uploading large datasets with many shards.
**Root cause**: The current implementation stores ALL parquet shard bytes in memory via `BytesIO`, accumulating in the `additions` list. For N shards of ~300MB each, this requires N × 300MB RAM.
**Fix**: Write parquet to temp file instead of `BytesIO`, pass file path to `CommitOperationAdd`. Delete temp file after `preupload_lfs_files` completes (for LFS uploads only - regular uploads need the file until `create_commit`).
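As a rough illustration of this approach (a minimal sketch, not the PR's exact implementation; the shard naming, temp-file handling, and helper structure are assumptions), the per-shard flow could look like:
```python
import os
import tempfile

from huggingface_hub import CommitOperationAdd, HfApi

def upload_shards(ds, repo_id: str, num_shards: int):
    api = HfApi()
    additions = []
    for i in range(num_shards):
        shard = ds.shard(num_shards=num_shards, index=i, contiguous=True)
        fd, tmp_path = tempfile.mkstemp(suffix=".parquet")
        os.close(fd)
        try:
            shard.to_parquet(tmp_path)  # stream to disk instead of holding bytes in BytesIO
            op = CommitOperationAdd(
                path_in_repo=f"data/train-{i:05d}-of-{num_shards:05d}.parquet",
                path_or_fileobj=tmp_path,
            )
            api.preupload_lfs_files(repo_id, additions=[op], repo_type="dataset")
            additions.append(op)
        finally:
            # Assumes an LFS upload, as described above: the bytes are already on the Hub,
            # so the local temp file is no longer needed before create_commit.
            os.remove(tmp_path)
    api.create_commit(
        repo_id,
        repo_type="dataset",
        operations=additions,
        commit_message="Upload dataset shards",
    )
```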
## Changes
- Replace `BytesIO` with `tempfile.NamedTemporaryFile` in `_push_parquet_shards_to_hub_single`
- Use file path in `CommitOperationAdd.path_or_fileobj` instead of bytes
- Delete temp file after upload (only for LFS mode - regular uploads keep file for `create_commit`)
- Add `try...finally` for safe cleanup even on errors
- Remove unused `BytesIO` import
## Test plan
- [x] Added `tests/test_push_to_hub_memory.py` with 4 tests:
- `test_push_to_hub_uses_file_path_not_bytes_in_commit_operation`
- `test_push_to_hub_cleans_up_temp_files_for_lfs_uploads`
- `test_push_to_hub_keeps_temp_files_for_regular_uploads`
- `test_push_to_hub_uploaded_size_still_calculated`
- [x] All tests pass locally
- [x] ruff check passes
## Context
This was discovered while uploading a 270GB neuroimaging dataset (ARC) with 902 shards. The process was killed by OOM after accumulating ~270GB in the `additions` list.
Fixes #7893
Related: #5990, #7400
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/175985783?v=4",
"events_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/events{/privacy}",
"followers_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/followers",
"following_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/following{/other_user}",
"gists_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/The-Obstacle-Is-The-Way",
"id": 175985783,
"login": "The-Obstacle-Is-The-Way",
"node_id": "U_kgDOCn1Udw",
"organizations_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/orgs",
"received_events_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/received_events",
"repos_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/subscriptions",
"type": "User",
"url": "https://api.github.com/users/The-Obstacle-Is-The-Way",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7895/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7895/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7895.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7895",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7895.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7895"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7894
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7894/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7894/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7894/events
|
https://github.com/huggingface/datasets/issues/7894
| 3,688,455,006
|
I_kwDODunzps7b2Vte
| 7,894
|
embed_table_storage crashes (SIGKILL) on sharded datasets with Sequence() nested types
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/175985783?v=4",
"events_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/events{/privacy}",
"followers_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/followers",
"following_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/following{/other_user}",
"gists_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/The-Obstacle-Is-The-Way",
"id": 175985783,
"login": "The-Obstacle-Is-The-Way",
"node_id": "U_kgDOCn1Udw",
"organizations_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/orgs",
"received_events_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/received_events",
"repos_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/subscriptions",
"type": "User",
"url": "https://api.github.com/users/The-Obstacle-Is-The-Way",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"I wasn't able to reproduce the crash on my side (macos arm 54, pyarrow 22 and a nifti file I found [online](https://s3.amazonaws.com/openneuro.org/ds004884/sub-M2001/ses-1076/anat/sub-M2001_ses-1076_acq-tfl3_run-4_T1w.nii.gz?versionId=9aVGb3C.VcoBgxrhNzFnL6O0MvxQsXX7&AWSAccessKeyId=AKIARTA7OOV5WQ3DGSOB&Signature=LQMLzjsuzSV7MtNAdQaFdqWqmbM%3D&Expires=1765473937))\n\ncould the issue be specific to your env ? have you tried on other environments like colab maybe ?",
"Hi @lhoestq,\n\nThank you so much for taking the time to investigate this. Your comment about not being able to reproduce it with a single NIfTI file actually helped me understand the bug better.\n\n**Key finding:** This bug is scale-dependent. It only manifests with real, full-scale data, and not with synthetic test files.\n\nI created a sandbox branch that isolates the exact state before the workaround:\n\n**🔗 Reproduction branch:** https://github.com/The-Obstacle-Is-The-Way/arc-aphasia-bids/tree/sandbox/reproduce-bug-7894\n\n### What we confirmed\n\n| Test | Result |\n|------|--------|\n| Synthetic 2x2x2 NIfTI files | ✅ No crash |\n| Synthetic 64³ NIfTI files (1MB each) | ✅ No crash |\n| Real ARC dataset (273GB, 902 sessions) | ❌ **SIGKILL at 0%** |\n\n### Environment (same as yours)\n\n- macOS ARM64\n- PyArrow 22.0.0\n- datasets 4.4.2.dev0 (git main)\n\n### Crash output\n\n```\nCasting the dataset: 100%|██████████| 902/902\nUploading Shards: 0%| | 0/902\nUserWarning: resource_tracker: There appear to be 1 leaked semaphore objects\n```\n**Exit code: 137 (SIGKILL)**\n\nThe crash happens on the very first shard, at `embed_table_storage()`, when processing `Sequence(Nifti())` columns after `ds.shard()`.\n\n### The workaround (in main branch)\n\nA pandas round-trip before embedding breaks the problematic Arrow references:\n\n```python\nshard_df = shard.to_pandas()\nfresh_shard = Dataset.from_pandas(shard_df, preserve_index=False)\nfresh_shard = fresh_shard.cast(ds.features)\n# Now embed_table_storage works\n```\n\nWe understand that downloading 273GB to reproduce this isn't practical. The reproduction guide in the branch has full details if you'd like to dig deeper. Happy to help debug further if useful.\n\nThank you again for your time and for maintaining this library. ",
"@lhoestq Brief update - I've added a reproduction that uses standard `ds.push_to_hub()` (no custom code).\n\n**Reproduction branch:** https://github.com/The-Obstacle-Is-The-Way/arc-aphasia-bids/tree/sandbox/reproduce-bug-7894\n\n**To reproduce with standard library:**\n```bash\ngit clone -b sandbox/reproduce-bug-7894 https://github.com/The-Obstacle-Is-The-Way/arc-aphasia-bids.git\ncd arc-aphasia-bids && uv sync --all-extras\n# Download dataset (~273GB): aws s3 sync --no-sign-request s3://openneuro.org/ds004884 data/openneuro/ds004884\nHF_REPO=\"your-username/test\" uv run python test_prove_7894_standard.py\n```\n\nCrashes at 0% with the same semaphore warning.\n\nFull details in [REPRODUCE_BUG_7894.md](https://github.com/The-Obstacle-Is-The-Way/arc-aphasia-bids/blob/sandbox/reproduce-bug-7894/REPRODUCE_BUG_7894.md).\n\nAlso - you were right about #7893. I closed it. `free_memory=True` works as you said. That issue was my mistake."
] | 2025-12-03T04:20:06
| 2025-12-06T13:10:34
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
|
## Summary
`embed_table_storage` crashes with SIGKILL (exit code 137) when processing sharded datasets containing `Sequence()` nested types like `Sequence(Nifti())`. Likely affects `Sequence(Image())` and `Sequence(Audio())` as well.
The crash occurs at the C++ level with no Python traceback.
### Related Issues
- #7852 - Problems with NifTI (closed, but related embedding issues)
- #6790 - PyArrow 'Memory mapping file failed' (potentially related)
- #7893 - OOM issue (separate bug, but discovered together)
### Context
Discovered while uploading the [Aphasia Recovery Cohort (ARC)](https://openneuro.org/datasets/ds004884) neuroimaging dataset to HuggingFace Hub. Even after fixing the OOM issue (#7893), this crash blocked uploads.
Working implementation with workaround: [arc-aphasia-bids](https://github.com/The-Obstacle-Is-The-Way/arc-aphasia-bids)
## Reproduction
```python
from datasets import Dataset, Features, Sequence, Value
from datasets.features import Nifti
from datasets.table import embed_table_storage
features = Features({
    "id": Value("string"),
    "images": Sequence(Nifti()),
})
ds = Dataset.from_dict({
    "id": ["a", "b"],
    "images": [["/path/to/file.nii.gz"], []],
}).cast(features)
# This works fine:
table = ds._data.table.combine_chunks()
embedded = embed_table_storage(table) # OK
# This crashes with SIGKILL:
shard = ds.shard(num_shards=2, index=0)
shard_table = shard._data.table.combine_chunks()
embedded = embed_table_storage(shard_table) # CRASH - no Python traceback
```
## Key Observations
| Scenario | Result |
|----------|--------|
| Single `Nifti()` column | Works |
| `Sequence(Nifti())` on full dataset | Works |
| `Sequence(Nifti())` after `ds.shard()` | **CRASHES** |
| `Sequence(Nifti())` after `ds.select([i])` | **CRASHES** |
| Crash with empty Sequence `[]` | **YES** - not file-size related |
## Workaround
Convert shard to pandas and recreate the Dataset to break internal Arrow references:
```python
shard = ds.shard(num_shards=num_shards, index=i, contiguous=True)
# CRITICAL: Pandas round-trip breaks problematic references
shard_df = shard.to_pandas()
fresh_shard = Dataset.from_pandas(shard_df, preserve_index=False)
fresh_shard = fresh_shard.cast(ds.features)
# Now embedding works
table = fresh_shard._data.table.combine_chunks()
embedded = embed_table_storage(table) # OK!
```
## Disproven Hypotheses
| Hypothesis | Test | Result |
|------------|------|--------|
| PyArrow 2GB binary limit | Monkey-patched `Nifti.pa_type` to `pa.large_binary()` | Still crashed |
| Memory fragmentation | Called `table.combine_chunks()` | Still crashed |
| File size issue | Tested with tiny NIfTI files | Still crashed |
## Root Cause Hypothesis
When `ds.shard()` or `ds.select()` creates a subset, the resulting Arrow table retains internal references/views to the parent table. When `embed_table_storage` processes nested struct types like `Sequence(Nifti())`, these references cause a crash in the C++ layer.
The pandas round-trip forces a full data copy, breaking these problematic references.
## Environment
- datasets version: main branch (post-0.22.0)
- Platform: macOS 14.x ARM64 (may be platform-specific)
- Python: 3.13
- PyArrow: 18.1.0
## Notes
This may ultimately be a PyArrow issue surfacing through datasets. Happy to help debug further if maintainers can point to where to look in the embedding logic.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7894/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7894/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7893
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7893/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7893/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7893/events
|
https://github.com/huggingface/datasets/issues/7893
| 3,688,454,085
|
I_kwDODunzps7b2VfF
| 7,893
|
push_to_hub OOM: _push_parquet_shards_to_hub accumulates all shard bytes in memory
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/175985783?v=4",
"events_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/events{/privacy}",
"followers_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/followers",
"following_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/following{/other_user}",
"gists_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/The-Obstacle-Is-The-Way",
"id": 175985783,
"login": "The-Obstacle-Is-The-Way",
"node_id": "U_kgDOCn1Udw",
"organizations_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/orgs",
"received_events_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/received_events",
"repos_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/subscriptions",
"type": "User",
"url": "https://api.github.com/users/The-Obstacle-Is-The-Way",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"`preupload_lfs_files` removes the parquet bytes in `shard_addition` since the default is `free_memory=True`: it doesn't accumulate in memory. Can you check this is indeed the case, i.e. that `shard_addition.path_or_fileobj` is indeed empty ?",
"@lhoestq Thank you for pushing back on this and helping me understand the code better.\n\nYou're correct, `free_memory=True` does prevent memory accumulation. I went back and tested this properly: I ran standard `push_to_hub()` with 902 shards and it processed **625 shards (69%)** without any OOM issues before I stopped it. Memory stayed reasonable throughout.\n\nI filed this issue based on reading the code and seeing `additions.append(shard_addition)`, without fully understanding that `preupload_lfs_files()` clears the bytes first. That was my mistake.\n\nThe crash we were actually experiencing was **#7894** (`embed_table_storage` crash on `Sequence(Nifti())`), which we've now isolated and reproduced separately.\n\nClosing this issue. Thanks again for the clarification; learned something important about the codebase today."
] | 2025-12-03T04:19:34
| 2025-12-05T22:45:59
| 2025-12-05T22:44:16
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
|
## Summary
Large dataset uploads crash or hang due to memory exhaustion. This appears to be the root cause of several long-standing issues.
### Related Issues
This is the root cause of:
- #5990 - Pushing a large dataset on the hub consistently hangs (46 comments, open since 2023)
- #7400 - 504 Gateway Timeout when uploading large dataset
- #6686 - Question: Is there any way for uploading a large image dataset?
### Context
Discovered while uploading the [Aphasia Recovery Cohort (ARC)](https://openneuro.org/datasets/ds004884) neuroimaging dataset (~270GB, 902 sessions) to HuggingFace Hub using the `Nifti()` feature.
Working implementation with workaround: [arc-aphasia-bids](https://github.com/The-Obstacle-Is-The-Way/arc-aphasia-bids)
## Root Cause
In `_push_parquet_shards_to_hub` (arrow_dataset.py), the `additions` list accumulates every `CommitOperationAdd` with full Parquet bytes in memory:
```python
additions = []
for shard in shards:
    parquet_content = shard.to_parquet_bytes()  # ~300 MB per shard
    shard_addition = CommitOperationAdd(path_or_fileobj=parquet_content)
    api.preupload_lfs_files(additions=[shard_addition])
    additions.append(shard_addition)  # THE BUG: bytes stay in memory forever
```
For a 902-shard dataset: **902 × 300 MB = ~270 GB RAM requested → OOM/hang**.
The bytes are held until the final `create_commit()` call, preventing garbage collection.
## Reproduction
```python
from datasets import load_dataset
# Any large dataset with embedded files (Image, Audio, Nifti, etc.)
ds = load_dataset("imagefolder", data_dir="path/to/large/dataset")
ds.push_to_hub("repo-id", num_shards=500) # Watch memory grow until crash
```
## Workaround
Process one shard at a time, upload via `HfApi.upload_file(path=...)`, delete before next iteration:
```python
from pathlib import Path

from huggingface_hub import HfApi

api = HfApi()
for i in range(num_shards):
    shard = ds.shard(num_shards=num_shards, index=i, contiguous=True)
    # Write to disk, not memory
    local_path = Path(f"shard-{i:05d}.parquet")  # any local scratch path works
    shard.to_parquet(local_path)
    # Upload from file path (streams from disk)
    api.upload_file(
        path_or_fileobj=str(local_path),
        path_in_repo=f"data/train-{i:05d}-of-{num_shards:05d}.parquet",
        repo_id=repo_id,
        repo_type="dataset",
    )
    # Clean up before next iteration
    local_path.unlink()
    del shard
```
Memory usage stays constant (~1-2 GB) instead of growing linearly.
## Suggested Fix
After `preupload_lfs_files` succeeds for each shard, release the bytes:
1. Clear `path_or_fileobj` from the `CommitOperationAdd` after preupload
2. Or write to temp file and pass file path instead of bytes
3. Or commit incrementally instead of batching all additions
## Environment
- datasets version: main branch (post-0.22.0)
- Platform: macOS 14.x ARM64
- Python: 3.13
- PyArrow: 18.1.0
- Dataset: 902 shards, ~270 GB total embedded NIfTI files
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/175985783?v=4",
"events_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/events{/privacy}",
"followers_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/followers",
"following_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/following{/other_user}",
"gists_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/The-Obstacle-Is-The-Way",
"id": 175985783,
"login": "The-Obstacle-Is-The-Way",
"node_id": "U_kgDOCn1Udw",
"organizations_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/orgs",
"received_events_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/received_events",
"repos_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/subscriptions",
"type": "User",
"url": "https://api.github.com/users/The-Obstacle-Is-The-Way",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7893/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7893/timeline
| null |
not_planned
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7892
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7892/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7892/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7892/events
|
https://github.com/huggingface/datasets/pull/7892
| 3,681,848,709
|
PR_kwDODunzps62ZFzh
| 7,892
|
encode nifti correctly when uploading lazily
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/31857876?v=4",
"events_url": "https://api.github.com/users/CloseChoice/events{/privacy}",
"followers_url": "https://api.github.com/users/CloseChoice/followers",
"following_url": "https://api.github.com/users/CloseChoice/following{/other_user}",
"gists_url": "https://api.github.com/users/CloseChoice/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/CloseChoice",
"id": 31857876,
"login": "CloseChoice",
"node_id": "MDQ6VXNlcjMxODU3ODc2",
"organizations_url": "https://api.github.com/users/CloseChoice/orgs",
"received_events_url": "https://api.github.com/users/CloseChoice/received_events",
"repos_url": "https://api.github.com/users/CloseChoice/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/CloseChoice/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CloseChoice/subscriptions",
"type": "User",
"url": "https://api.github.com/users/CloseChoice",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-12-01T16:35:07
| 2025-12-01T16:36:15
| null |
CONTRIBUTOR
| null | null | null | null |
When trying to upload NIfTI datasets lazily, I got the following error:
```python
from pathlib import Path
from datasets import load_dataset
nifti_dir = Path("<local_path>")
dataset = load_dataset(
    "niftifolder",
    data_dir=str(nifti_dir.absolute()),
    streaming=True,
)
dataset.push_to_hub(repo_id="TobiasPitters/test-nifti-papaya-testdata")
```
```python
pyarrow.lib.ArrowInvalid: Could not convert <datasets.features.nifti.Nifti1ImageWrapper object at 0x77633407af90> with type Nifti1ImageWrapper: did not recognize Python value type when inferring an Arrow data type
```
This PR fixes that by encoding the Nifti1ImageWrappers correctly to bytes.
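For illustration only, a hedged sketch of what "encoding to bytes" can look like with nibabel (assumes nibabel is installed; this is not the PR's actual encoding path, and the `{bytes, path}` layout is the typical form used by file-like features rather than a confirmed detail of this fix):
```python
import nibabel as nib
import numpy as np

# Build a tiny in-memory NIfTI image and serialize it to raw .nii bytes,
# the kind of payload a binary Arrow column can store.
img = nib.Nifti1Image(np.zeros((2, 2, 2), dtype=np.float32), affine=np.eye(4))
nii_bytes = img.to_bytes()
encoded = {"bytes": nii_bytes, "path": None}  # assumed {bytes, path} encoded form
```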
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7892/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7892/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7892.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7892",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7892.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7892"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7891
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7891/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7891/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7891/events
|
https://github.com/huggingface/datasets/pull/7891
| 3,681,592,636
|
PR_kwDODunzps62YNeR
| 7,891
|
fix(fingerprint): treat TMPDIR as strict API and fail (Issue #7877)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/133336746?v=4",
"events_url": "https://api.github.com/users/ada-ggf25/events{/privacy}",
"followers_url": "https://api.github.com/users/ada-ggf25/followers",
"following_url": "https://api.github.com/users/ada-ggf25/following{/other_user}",
"gists_url": "https://api.github.com/users/ada-ggf25/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ada-ggf25",
"id": 133336746,
"login": "ada-ggf25",
"node_id": "U_kgDOB_KOqg",
"organizations_url": "https://api.github.com/users/ada-ggf25/orgs",
"received_events_url": "https://api.github.com/users/ada-ggf25/received_events",
"repos_url": "https://api.github.com/users/ada-ggf25/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ada-ggf25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ada-ggf25/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ada-ggf25",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://huggingface.co/proxy/moon-ci-docs.huggingface.co/docs/datasets/pr_7891). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-12-01T15:37:57
| 2025-12-08T12:14:12
| null |
NONE
| null | null | null | null |
Fixes #7877 and follows up on feedback from [`#7890`](https://github.com/huggingface/datasets/pull/7890).
## Context
[`#7890`](https://github.com/huggingface/datasets/pull/7890) introduced automatic creation of the `TMPDIR` directory in `_TempCacheDir`, so that `datasets` does not silently fall back to `/tmp` when `TMPDIR` points to a non‑existent path. This addressed the original “`tempfile` silently ignores TMPDIR” issue in [`#7877`](https://github.com/huggingface/datasets/issues/7877).
During review, @stas00 pointed out (see [review comment](https://github.com/huggingface/datasets/pull/7890#discussion_r...)) that if the code treats `TMPDIR` as part of the public API, then failures to use it should **fail loudly** rather than just emit warnings, because warnings are easy to miss in complex multi‑GPU setups.
This PR implements that stricter behaviour.
## What this PR changes
### `_TempCacheDir` behaviour
In `src/datasets/fingerprint.py`:
- Continue to:
- Detect `TMPDIR` from the environment
- Normalise the path
- Auto‑create the directory when it does not exist
- Pass the (validated) directory explicitly to `tempfile.mkdtemp(...)` so `TMPDIR` is honoured even if `tempfile.gettempdir()` was already cached
- **New behaviour** (in response to review on [`#7890`](https://github.com/huggingface/datasets/pull/7890)):
- If `TMPDIR` is set, but the directory cannot be created, we now **re‑raise an `OSError`** with a clear, actionable message:
- “TMPDIR is set to '…' but the directory does not exist and could not be created: … Please create it manually or unset TMPDIR to fall back to the default temporary directory.”
- If `TMPDIR` is set but points to something that is **not a directory**, we also **raise `OSError`** with guidance:
- “TMPDIR is set to '…' but it is not a directory. Please point TMPDIR to a writable directory or unset it to fall back to the default temporary directory.”
- When `TMPDIR` is **not** set, behaviour is unchanged: we pass `dir=None` and let `tempfile` use the system default temp directory.
This aligns with @stas00's suggestion that TMPDIR should be treated as a strict API contract: if the user chooses a TMPDIR, we either use it or fail clearly, rather than silently falling back.
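A minimal sketch of this strict behaviour (illustrative only; the helper name, prefix, and messages are paraphrased, not the exact `_TempCacheDir` code):
```python
import os
import tempfile

def make_temp_cache_dir() -> str:
    tmpdir = os.environ.get("TMPDIR")
    if tmpdir is None:
        # TMPDIR not set: keep the default behaviour.
        return tempfile.mkdtemp(prefix="hf_datasets-")
    tmpdir = os.path.normpath(tmpdir)
    if os.path.exists(tmpdir) and not os.path.isdir(tmpdir):
        raise OSError(
            f"TMPDIR is set to '{tmpdir}' but it is not a directory. "
            "Please point TMPDIR to a writable directory or unset it."
        )
    try:
        os.makedirs(tmpdir, exist_ok=True)
    except OSError as err:
        raise OSError(
            f"TMPDIR is set to '{tmpdir}' but the directory does not exist and "
            f"could not be created: {err}. Please create it manually or unset TMPDIR."
        ) from err
    # Pass dir= explicitly so a cached tempfile.gettempdir() cannot override TMPDIR.
    return tempfile.mkdtemp(prefix="hf_datasets-", dir=tmpdir)
```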
### Tests
In `tests/test_fingerprint.py`:
- Updated tests that previously expected warning‑and‑fallback to now expect **hard failures**:
- `test_temp_cache_dir_tmpdir_creation_failure`
- Uses `unittest.mock.patch` to force `os.makedirs` to raise `OSError("Permission denied")`
- Asserts that constructing `_TempCacheDir()` raises `OSError` and that the message contains both “TMPDIR is set to” and “could not be created”
- `test_temp_cache_dir_tmpdir_not_directory`
- Points `TMPDIR` to a regular file and asserts that `_TempCacheDir()` raises `OSError` with a message mentioning “is not a directory”
- Left the positive‑path tests in place:
- `test_temp_cache_dir_with_tmpdir_nonexistent` – verifies that a non‑existent `TMPDIR` is created and used
- `test_temp_cache_dir_with_tmpdir_existing` – verifies that an existing `TMPDIR` directory is used as the base for the temp cache dir
- `test_temp_cache_dir_without_tmpdir` – verifies behaviour when `TMPDIR` is not set (default temp directory)
- Kept the earlier fix to `test_fingerprint_in_multiprocessing`, which now uses `Pool.map` and asserts that fingerprints are stable across processes.
## Rationale
- Treating `TMPDIR` as part of the API for cache placement means:
- Users can rely on it to move large temporary Arrow files away from small `/tmp` partitions.
- Misconfigured TMPDIR should be **immediately visible** as a hard error, not as a warning lost among many logs.
- The stricter failure mode matches the concern on [`#7890`](https://github.com/huggingface/datasets/pull/7890) that “warnings are very easy to miss in complex applications where there are already dozens of warnings multiplied by multiple GPU processes”.
## Testing
- `pytest tests/test_fingerprint.py`
- `make style`
- No new linter issues introduced.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7891/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7891/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7891.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7891",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7891.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7891"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7890
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7890/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7890/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7890/events
|
https://github.com/huggingface/datasets/pull/7890
| 3,677,077,051
|
PR_kwDODunzps62JMyH
| 7,890
|
Fix: Auto-create TMPDIR directory when it doesn't exist (Issue #7877)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/133336746?v=4",
"events_url": "https://api.github.com/users/ada-ggf25/events{/privacy}",
"followers_url": "https://api.github.com/users/ada-ggf25/followers",
"following_url": "https://api.github.com/users/ada-ggf25/following{/other_user}",
"gists_url": "https://api.github.com/users/ada-ggf25/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ada-ggf25",
"id": 133336746,
"login": "ada-ggf25",
"node_id": "U_kgDOB_KOqg",
"organizations_url": "https://api.github.com/users/ada-ggf25/orgs",
"received_events_url": "https://api.github.com/users/ada-ggf25/received_events",
"repos_url": "https://api.github.com/users/ada-ggf25/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ada-ggf25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ada-ggf25/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ada-ggf25",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"> Thank you for implementing this, @ada-ggf25\r\n> \r\n> I see there are a few more uses of `tempfile` in `datasets` besides the one treated here - should they be treated in the same way as well?\r\n\r\nI had a look through the other tempfile call sites:\r\n- In `arrow_dataset.py` the code always pass an explicit `dir=cache_dir` when using `NamedTemporaryFile`, so TMPDIR is not involved there.\r\n- In `search.py` the `NamedTemporaryFile` is only used to generate a random Elasticsearch index name, it doesn’t control where big cache files are written, so it doesn’t have the same “No space left on device” impact.\r\n\r\nGiven that, the fingerprint temp directory is the only place where respecting TMPDIR is part of the user‑visible API for disk usage. If you’d like, I’m happy to also thread TMPDIR through the `search.py` name generation, but I think this PR should keep scoped to the place that actually writes large temporary files.\r\n",
"Thank you for checking the other instances of tempfile use, @ada-ggf25 "
] | 2025-11-29T20:35:24
| 2025-12-02T12:24:17
| 2025-12-02T12:24:17
|
NONE
| null | null | null | null |
# Fix: Auto-create TMPDIR directory when it doesn't exist
Fixes #7877
## Description
This PR fixes issue #7877 by implementing automatic creation of the `TMPDIR` directory when it is set but doesn't exist. Previously, `tempfile.mkdtemp()` would silently ignore `TMPDIR` and fall back to `/tmp` if the specified directory didn't exist, causing confusion for users experiencing "No space left on device" errors.
## Problem
When users set `TMPDIR` to a non-existent directory (e.g., `export TMPDIR=/some/big/storage`), Python's `tempfile` module silently ignores it and falls back to the default temporary directory (`/tmp`). This leads to:
- Users unable to use their specified temporary directory
- Silent failures that are difficult to debug
- Continued "No space left on device" errors even after setting `TMPDIR`
## Solution
The fix automatically creates the `TMPDIR` directory if it is set but doesn't exist, ensuring that:
1. Users' `TMPDIR` settings are respected
2. Clear logging is provided when the directory is created
3. Graceful fallback with warnings if directory creation fails
4. The fix works even if `tempfile.gettempdir()` was already called and cached
## Changes
### Implementation (`src/datasets/fingerprint.py`)
- Modified `_TempCacheDir.__init__()` to check if `TMPDIR` environment variable is set
- Added logic to auto-create the directory if it doesn't exist using `os.makedirs()`
- Added informative logging when directory is created
- Added warning logging when directory creation fails, with graceful fallback
- Added path normalisation to handle path resolution issues
- Explicitly pass `dir` parameter to `tempfile.mkdtemp()` to ensure TMPDIR is respected
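As a rough illustration, here is a minimal sketch of the auto-creation behaviour described in the list above (standard-library logging is used here for simplicity; the actual code in `src/datasets/fingerprint.py` may differ in its details):
```python
import logging
import os
import tempfile
logger = logging.getLogger(__name__)
class _TempCacheDir:
    """Sketch only: respect TMPDIR, creating it if it is set but missing."""
    def __init__(self):
        tmpdir = os.environ.get("TMPDIR")
        if tmpdir and not os.path.isdir(tmpdir):
            try:
                os.makedirs(tmpdir, exist_ok=True)
                logger.info("Created TMPDIR directory: %s", tmpdir)
            except OSError as err:
                logger.warning("Could not create TMPDIR directory %s (%s); falling back to the default temp dir", tmpdir, err)
                tmpdir = None
        # Passing dir= explicitly sidesteps tempfile's cached default directory
        self.name = tempfile.mkdtemp(prefix="hf_datasets-", dir=tmpdir)
```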
### Tests (`tests/test_fingerprint.py`)
Added comprehensive test coverage:
- `test_temp_cache_dir_with_tmpdir_nonexistent`: Tests auto-creation of non-existent TMPDIR
- `test_temp_cache_dir_with_tmpdir_existing`: Tests behaviour when TMPDIR already exists
- `test_temp_cache_dir_without_tmpdir`: Tests default behaviour when TMPDIR is not set
- `test_temp_cache_dir_tmpdir_creation_failure`: Tests error handling when directory creation fails
Also fixed incomplete `test_fingerprint_in_multiprocessing` test that was missing implementation.
## Testing
- All existing tests pass
- New tests added for TMPDIR handling scenarios
- Code formatted with `make style`
- No linter errors
- Manual testing confirms the fix works as expected
## Example Usage
Before this fix:
```bash
$ export TMPDIR='/tmp/username' # Directory doesn't exist
$ python -c "import tempfile; print(tempfile.gettempdir())"
/tmp # Silently falls back, ignoring TMPDIR
```
After this fix:
```bash
$ export TMPDIR='/tmp/username' # Directory doesn't exist
$ python -c "from datasets.fingerprint import get_temporary_cache_files_directory; print(get_temporary_cache_files_directory())"
# Directory is automatically created and used
# Log: "Created TMPDIR directory: /tmp/username"
```
## Type of Change
- [x] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] Documentation update
---
**Note**: This implementation follows the approach suggested in the issue, automatically creating the TMPDIR directory when it doesn't exist, which provides the best user experience whilst maintaining security (we only create directories explicitly specified by the user via environment variable).
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/133336746?v=4",
"events_url": "https://api.github.com/users/ada-ggf25/events{/privacy}",
"followers_url": "https://api.github.com/users/ada-ggf25/followers",
"following_url": "https://api.github.com/users/ada-ggf25/following{/other_user}",
"gists_url": "https://api.github.com/users/ada-ggf25/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ada-ggf25",
"id": 133336746,
"login": "ada-ggf25",
"node_id": "U_kgDOB_KOqg",
"organizations_url": "https://api.github.com/users/ada-ggf25/orgs",
"received_events_url": "https://api.github.com/users/ada-ggf25/received_events",
"repos_url": "https://api.github.com/users/ada-ggf25/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ada-ggf25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ada-ggf25/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ada-ggf25",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7890/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7890/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7890.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7890",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7890.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7890"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7889
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7889/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7889/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7889/events
|
https://github.com/huggingface/datasets/pull/7889
| 3,676,933,025
|
PR_kwDODunzps62Iykj
| 7,889
|
fix(tests): stabilize flaky Hub LFS integration test
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/175985783?v=4",
"events_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/events{/privacy}",
"followers_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/followers",
"following_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/following{/other_user}",
"gists_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/The-Obstacle-Is-The-Way",
"id": 175985783,
"login": "The-Obstacle-Is-The-Way",
"node_id": "U_kgDOCn1Udw",
"organizations_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/orgs",
"received_events_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/received_events",
"repos_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/subscriptions",
"type": "User",
"url": "https://api.github.com/users/The-Obstacle-Is-The-Way",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-11-29T17:13:18
| 2025-12-09T15:37:49
| null |
NONE
| null | null | null | null |
## Problem
`test_push_dataset_dict_to_hub_overwrite_files` intermittently fails with:
```
BadRequestError: LFS pointer pointed to a file that does not exist
```
This has been causing the `deps-latest` integration tests to fail on main (visible in recent CI runs). I ran into this while working on the BIDS loader PR and dug into the root cause.
## Root Cause
Two race conditions in the test:
1. **LFS propagation timing** - Rapid successive `push_to_hub` calls don't wait for Hub to fully propagate LFS objects between pushes
2. **Repo name reuse** - The second test scenario reused the same repo name from scenario 1, creating a race between deletion and recreation
## Solution
- Add `_wait_for_repo_ready()` helper that polls `list_repo_files` to ensure the repo is consistent before subsequent operations (a sketch follows below)
- Use a unique repo name (`ds_name_2`) for the second scenario, eliminating the delete/create race entirely
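For reference, a rough sketch of what such a polling helper can look like (the expected file set and timeout here are illustrative, not the exact values used in the test):
```python
import time
from huggingface_hub import HfApi
def _wait_for_repo_ready(api: HfApi, repo_id: str, expected_files: set, timeout: float = 60.0) -> None:
    """Poll list_repo_files until the repo contains the expected files or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        files = set(api.list_repo_files(repo_id, repo_type="dataset"))
        if expected_files.issubset(files):
            return
        time.sleep(1.0)
    raise TimeoutError(f"Repo {repo_id} did not become consistent within {timeout}s")
```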
## Testing
All 4 integration test variants now pass:
- ✅ `ubuntu-latest, deps-latest` (was failing)
- ✅ `ubuntu-latest, deps-minimum`
- ✅ `windows-latest, deps-latest` (was failing)
- ✅ `windows-latest, deps-minimum`
Validated on fork: https://github.com/The-Obstacle-Is-The-Way/datasets/pull/4
## Related
- #7600 (push_to_hub concurrency)
- #6392 (push_to_hub connection robustness)
cc @lhoestq - small fix but should help CI reliability
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7889/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7889/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7889.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7889",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7889.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7889"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7888
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7888/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7888/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7888/events
|
https://github.com/huggingface/datasets/pull/7888
| 3,676,407,260
|
PR_kwDODunzps62HOM4
| 7,888
|
Add type overloads to load_dataset for better static type inference
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/157872593?v=4",
"events_url": "https://api.github.com/users/Aditya2755/events{/privacy}",
"followers_url": "https://api.github.com/users/Aditya2755/followers",
"following_url": "https://api.github.com/users/Aditya2755/following{/other_user}",
"gists_url": "https://api.github.com/users/Aditya2755/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Aditya2755",
"id": 157872593,
"login": "Aditya2755",
"node_id": "U_kgDOCWjx0Q",
"organizations_url": "https://api.github.com/users/Aditya2755/orgs",
"received_events_url": "https://api.github.com/users/Aditya2755/received_events",
"repos_url": "https://api.github.com/users/Aditya2755/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Aditya2755/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Aditya2755/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Aditya2755",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://huggingface.co/proxy/moon-ci-docs.huggingface.co/docs/datasets/pr_7888). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-11-29T06:19:30
| 2025-12-08T12:06:57
| 2025-12-08T12:06:57
|
CONTRIBUTOR
| null | null | null | null |
Fixes #7883
This PR adds @overload decorators to load_dataset() to help type checkers like Pylance and mypy correctly infer the return type based on the split and streaming parameters.
Changes:
- Added typing imports (Literal, overload) to load.py
- Added 4 @overload signatures that map argument combinations to specific return types:
* split=None, streaming=False -> DatasetDict
* split specified, streaming=False -> Dataset
* split=None, streaming=True -> IterableDatasetDict
* split specified, streaming=True -> IterableDataset
This resolves the Pylance error where to_csv() was not recognized on Dataset objects returned by load_dataset(..., split='train'), since the type checker previously saw the return type as a Union that included types without to_csv().
No runtime behavior changes - this is purely a static typing improvement.
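For illustration, a trimmed-down version of the overload pattern (parameter lists are shortened here; the merged signatures in `load.py` carry the full argument list):
```python
from typing import Literal, overload
from datasets import Dataset, DatasetDict, IterableDataset, IterableDatasetDict
@overload
def load_dataset(path: str, *, split: None = ..., streaming: Literal[False] = ...) -> DatasetDict: ...
@overload
def load_dataset(path: str, *, split: str, streaming: Literal[False] = ...) -> Dataset: ...
@overload
def load_dataset(path: str, *, split: None = ..., streaming: Literal[True]) -> IterableDatasetDict: ...
@overload
def load_dataset(path: str, *, split: str, streaming: Literal[True]) -> IterableDataset: ...
def load_dataset(path, *, split=None, streaming=False):
    ...  # runtime implementation unchanged
```
With overloads like these, `load_dataset("csv", split="train").to_csv(...)` type-checks, because the second overload narrows the return type to `Dataset`.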
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7888/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7888/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7888.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7888",
"merged_at": "2025-12-08T12:06:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7888.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7888"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7887
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7887/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7887/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7887/events
|
https://github.com/huggingface/datasets/pull/7887
| 3,676,203,387
|
PR_kwDODunzps62GltX
| 7,887
|
fix(nifti): enable lazy loading for Nifti1ImageWrapper
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/175985783?v=4",
"events_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/events{/privacy}",
"followers_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/followers",
"following_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/following{/other_user}",
"gists_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/The-Obstacle-Is-The-Way",
"id": 175985783,
"login": "The-Obstacle-Is-The-Way",
"node_id": "U_kgDOCn1Udw",
"organizations_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/orgs",
"received_events_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/received_events",
"repos_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/subscriptions",
"type": "User",
"url": "https://api.github.com/users/The-Obstacle-Is-The-Way",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-11-29T01:40:27
| 2025-12-09T15:37:50
| null |
NONE
| null | null | null | null |
## Summary
- **Single-line fix**: Change `dataobj=nifti_image.get_fdata()` → `dataobj=nifti_image.dataobj`
- Preserves nibabel's `ArrayProxy` for true lazy loading instead of eagerly loading entire NIfTI files into memory
- Improves error handling: corrupted files now fail at access time with clear context instead of silently during decode
## Problem
The `Nifti1ImageWrapper.__init__` was calling `get_fdata()` which immediately loads the entire image into memory. For large 4D fMRI files (often 1-2GB), this causes:
1. **Memory issues** - Full data loaded during decode, not on demand
2. **Poor error handling** - Corrupted files crash at access time with unclear error messages (e.g., `EOFError` with no file path)
3. **No graceful recovery** - Entire dataset iteration fails on one bad file
## Solution
Use `nifti_image.dataobj` which preserves the underlying `ArrayProxy`, deferring actual I/O to `get_fdata()` calls.
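A minimal illustration of the difference, using nibabel directly (the file path is hypothetical; the wrapper class in `datasets` adds more on top of this):
```python
import nibabel as nib
img = nib.load("sub-01_T1w.nii.gz")  # hypothetical path to a 3D volume
# Eager: reads and scales the whole volume into a float array immediately
eager_data = img.get_fdata()
# Lazy: keeps nibabel's ArrayProxy, so bytes are read only when sliced
proxy = img.dataobj
one_slice = proxy[:, :, 0]  # triggers I/O for just this slice
```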
## Test Plan
- [x] Added `test_nifti_lazy_loading` to verify `ArrayProxy` is preserved
- [x] All 22 existing NIfTI tests pass
- [x] End-to-end tested with real OpenNeuro data (ds000102)
- [x] CodeRabbit approved: "Switch to dataobj correctly restores nibabel's lazy loading semantics... This looks solid"
## Related
- Discovered while testing BIDS loader PR #7886
- Complements the NIfTI + NiiVue viewer work from #7878 and #7874
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/175985783?v=4",
"events_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/events{/privacy}",
"followers_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/followers",
"following_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/following{/other_user}",
"gists_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/The-Obstacle-Is-The-Way",
"id": 175985783,
"login": "The-Obstacle-Is-The-Way",
"node_id": "U_kgDOCn1Udw",
"organizations_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/orgs",
"received_events_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/received_events",
"repos_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/subscriptions",
"type": "User",
"url": "https://api.github.com/users/The-Obstacle-Is-The-Way",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7887/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7887/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7887.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7887",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7887.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7887"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7886
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7886/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7886/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7886/events
|
https://github.com/huggingface/datasets/pull/7886
| 3,676,185,151
|
PR_kwDODunzps62Gh1c
| 7,886
|
feat(bids): Add BIDS dataset loader for neuroimaging data
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/175985783?v=4",
"events_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/events{/privacy}",
"followers_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/followers",
"following_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/following{/other_user}",
"gists_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/The-Obstacle-Is-The-Way",
"id": 175985783,
"login": "The-Obstacle-Is-The-Way",
"node_id": "U_kgDOCn1Udw",
"organizations_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/orgs",
"received_events_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/received_events",
"repos_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/subscriptions",
"type": "User",
"url": "https://api.github.com/users/The-Obstacle-Is-The-Way",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-11-29T01:22:06
| 2025-12-09T15:37:51
| null |
NONE
| null | null | null | null |
## Summary
Adds native BIDS (Brain Imaging Data Structure) dataset loading support using PyBIDS, enabling `load_dataset('bids', data_dir='/path/to/bids')` workflow for neuroimaging researchers.
**Contributes to #7804** (Support scientific data formats) - BIDS is a widely-used standard for organizing neuroimaging data built on NIfTI files.
## Changes
### Core Implementation
- `src/datasets/packaged_modules/bids/bids.py` - GeneratorBasedBuilder implementation (see the sketch after this list)
- `src/datasets/packaged_modules/bids/__init__.py` - Module exports
- `src/datasets/packaged_modules/__init__.py` - Registration with module registry
- `src/datasets/config.py` - `PYBIDS_AVAILABLE` config flag
- `setup.py` - Optional `pybids>=0.21.0` + nibabel dependency
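For orientation, a stripped-down sketch of such a builder, assuming pybids' `BIDSLayout` (the actual module in this PR exposes more config options and decodes NIfTI files via the existing `Nifti` feature):
```python
import datasets
class Bids(datasets.GeneratorBasedBuilder):
    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features(
                {
                    "subject": datasets.Value("string"),
                    "datatype": datasets.Value("string"),
                    "path": datasets.Value("string"),
                }
            )
        )
    def _split_generators(self, dl_manager):
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"data_dir": self.config.data_dir},
            )
        ]
    def _generate_examples(self, data_dir):
        from bids import BIDSLayout  # pybids
        layout = BIDSLayout(data_dir)
        for key, bids_file in enumerate(layout.get(extension=[".nii", ".nii.gz"])):
            entities = bids_file.get_entities()
            yield key, {
                "subject": entities.get("subject", ""),
                "datatype": entities.get("datatype", ""),
                "path": bids_file.path,
            }
```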
### Features
- Automatic BIDS structure validation
- Subject/session/datatype filtering via config
- JSON sidecar metadata extraction
- NIfTI file decoding via existing Nifti feature
### Documentation & Tests
- `docs/source/bids_dataset.mdx` - User guide with examples
- `tests/packaged_modules/test_bids.py` - Unit tests (4 tests)
## Usage
```python
from datasets import load_dataset
# Load entire BIDS dataset
ds = load_dataset('bids', data_dir='/path/to/bids_dataset')
# Filter by subject/session
ds = load_dataset('bids',
data_dir='/path/to/bids_dataset',
subjects=['01', '02'],
sessions=['baseline']
)
# Access samples
sample = ds['train'][0]
print(sample['subject']) # '01'
print(sample['nifti'].shape) # (176, 256, 256)
print(sample['metadata']) # JSON sidecar data
```
## Test plan
- [x] All 4 unit tests pass (`pytest tests/packaged_modules/test_bids.py`)
- [x] `make quality` passes (ruff check)
- [x] End-to-end tested with real OpenNeuro data (ds000102)
## Context
This PR is part of the neuroimaging initiative discussed with @TobiasPitters. Follows the BIDS 1.10.1 specification and leverages the existing Nifti feature for NIfTI file handling.
Related PRs:
- #7874 (Nifti visualization support)
- #7878 (Replace papaya with niivue)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/175985783?v=4",
"events_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/events{/privacy}",
"followers_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/followers",
"following_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/following{/other_user}",
"gists_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/The-Obstacle-Is-The-Way",
"id": 175985783,
"login": "The-Obstacle-Is-The-Way",
"node_id": "U_kgDOCn1Udw",
"organizations_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/orgs",
"received_events_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/received_events",
"repos_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/The-Obstacle-Is-The-Way/subscriptions",
"type": "User",
"url": "https://api.github.com/users/The-Obstacle-Is-The-Way",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7886/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7886/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7886.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7886",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7886.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7886"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7885
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7885/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7885/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7885/events
|
https://github.com/huggingface/datasets/pull/7885
| 3,675,116,624
|
PR_kwDODunzps62DBlN
| 7,885
|
Add visualization paragraph to nifti readme
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/31857876?v=4",
"events_url": "https://api.github.com/users/CloseChoice/events{/privacy}",
"followers_url": "https://api.github.com/users/CloseChoice/followers",
"following_url": "https://api.github.com/users/CloseChoice/following{/other_user}",
"gists_url": "https://api.github.com/users/CloseChoice/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/CloseChoice",
"id": 31857876,
"login": "CloseChoice",
"node_id": "MDQ6VXNlcjMxODU3ODc2",
"organizations_url": "https://api.github.com/users/CloseChoice/orgs",
"received_events_url": "https://api.github.com/users/CloseChoice/received_events",
"repos_url": "https://api.github.com/users/CloseChoice/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/CloseChoice/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CloseChoice/subscriptions",
"type": "User",
"url": "https://api.github.com/users/CloseChoice",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-11-28T14:31:28
| 2025-11-28T15:01:29
| null |
CONTRIBUTOR
| null | null | null | null |
Add small paragraph and video.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7885/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7885/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7885.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7885",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7885.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7885"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7884
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7884/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7884/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7884/events
|
https://github.com/huggingface/datasets/pull/7884
| 3,672,811,099
|
PR_kwDODunzps617Uk6
| 7,884
|
Fix 7846: add_column and add_item erroneously(?) require new_fingerprint parameter
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/44121755?v=4",
"events_url": "https://api.github.com/users/sajmaru/events{/privacy}",
"followers_url": "https://api.github.com/users/sajmaru/followers",
"following_url": "https://api.github.com/users/sajmaru/following{/other_user}",
"gists_url": "https://api.github.com/users/sajmaru/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sajmaru",
"id": 44121755,
"login": "sajmaru",
"node_id": "MDQ6VXNlcjQ0MTIxNzU1",
"organizations_url": "https://api.github.com/users/sajmaru/orgs",
"received_events_url": "https://api.github.com/users/sajmaru/received_events",
"repos_url": "https://api.github.com/users/sajmaru/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sajmaru/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sajmaru/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sajmaru",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"@lhoestq can you help review this pull request?",
"The docs for this PR live [here](https://huggingface.co/proxy/moon-ci-docs.huggingface.co/docs/datasets/pr_7884). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-11-27T19:32:10
| 2025-12-04T16:09:51
| 2025-12-04T16:09:51
|
CONTRIBUTOR
| null | null | null | null |
Summary of the change:
- Made `new_fingerprint` optional (`Optional[str] = None`) in `Dataset.add_column` and `Dataset.add_item`
- Added a simple test to verify both methods work without providing a fingerprint
Why this change is safe:
- The `Dataset` constructor already handles `fingerprint=None` by generating a new fingerprint automatically
- No internal logic is broken; if a user provides a fingerprint, it is still used as before
- The change only affects the function signature, making it more user-friendly without changing any functionality
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7884/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7884/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7884.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7884",
"merged_at": "2025-12-04T16:09:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7884.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7884"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7883
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7883/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7883/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7883/events
|
https://github.com/huggingface/datasets/issues/7883
| 3,668,182,561
|
I_kwDODunzps7apAYh
| 7,883
|
Data.to_csv() cannot be recognized by pylance
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/154290630?v=4",
"events_url": "https://api.github.com/users/xi4ngxin/events{/privacy}",
"followers_url": "https://api.github.com/users/xi4ngxin/followers",
"following_url": "https://api.github.com/users/xi4ngxin/following{/other_user}",
"gists_url": "https://api.github.com/users/xi4ngxin/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/xi4ngxin",
"id": 154290630,
"login": "xi4ngxin",
"node_id": "U_kgDOCTJJxg",
"organizations_url": "https://api.github.com/users/xi4ngxin/orgs",
"received_events_url": "https://api.github.com/users/xi4ngxin/received_events",
"repos_url": "https://api.github.com/users/xi4ngxin/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/xi4ngxin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xi4ngxin/subscriptions",
"type": "User",
"url": "https://api.github.com/users/xi4ngxin",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2025-11-26T16:16:56
| 2025-12-08T12:06:58
| 2025-12-08T12:06:58
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
|
### Describe the bug
Hi everyone! I am a beginner with datasets.
I am testing reading multiple CSV files from a zip archive. Loading the dataset succeeds, and the result can ultimately be saved to CSV correctly.
Intermediate results:
```
Generating train split: 62973 examples [00:00, 175939.01 examples/s]
DatasetDict({
train: Dataset({
features: ['交易时间\t', '收支方向\t', '业务(产品)种类\t', '交易金额\t', '币种\t', '时点余额\t', '对手方名称\t', '对方机构名称\t', ' 对方钱包ID/账号\t', '交易对手名称\t', '交易对手编号\t', '交易流水号\t', '摘要\t', '附言\t', '备注\t', '用途\t', '客户流水号\t'],
num_rows: 62973
})
})
```
However, Pylance gives me the following error:
```
Cannot access attribute "to_csv" for class "DatasetDict"
Attribute "to_csv" is unknownPylance[reportAttributeAccessIssue](https://github.com/microsoft/pylance-release/blob/main/docs/diagnostics/reportAttributeAccessIssue.md)```
Cannot access attribute "to_csv" for class "IterableDatasetDict"
Attribute "to_csv" is unknownPylance[reportAttributeAccessIssue](https://github.com/microsoft/pylance-release/blob/main/docs/diagnostics/reportAttributeAccessIssue.md)
(method) to_csv: Unknown | ((path_or_buf: datasets.utils.typing.PathLike | BinaryIO, batch_size: int | None = None, num_proc: int | None = None, storage_options: dict[Unknown, Unknown] | None = None, **to_csv_kwargs: Unknown) -> int) | ((path_or_buf: datasets.utils.typing.PathLike | BinaryIO, batch_size: int | None = None, storage_options: dict[Unknown, Unknown] | None = None, **to_csv_kwargs: Unknown) -> int)
```
I ignored the error and continued executing to get the correct result:
```
Dataset({
features: ['交易时间\t', '收支方向\t', '业务(产品)种类\t', '交易金额\t', '币种\t', '时点余额\t', '对手方名称\t', '对方机构名称\t', '对方 钱包ID/账号\t', '交易对手名称\t', '交易对手编号\t', '交易流水号\t', '摘要\t', '附言\t', '备注\t', '用途\t', '客户流水号\t'],
num_rows: 62973
})
```
Since the data volume is small, I manually merged the CSV files, and the final result is consistent with what the program saved.
looks like :
<img width="1264" height="150" alt="Image" src="https://github.com/user-attachments/assets/743540d7-ad8c-4531-ae7e-de71a5243a32" />
### Steps to reproduce the bug
this is my code.
```python
from datasets import load_dataset

def main():
    url = "data/test.zip"
    data_files = {"train": url}
    dataset = load_dataset("csv", data_files=data_files, split="train", encoding="gbk", skiprows=2)
    # print(dataset)
    dataset.to_csv("data/test.csv")

if __name__ == "__main__":
    main()
```
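As a side note, until the typing of `load_dataset` is improved, one way to satisfy Pylance at the call site is an explicit cast (a workaround sketch, not a change in `datasets` itself):
```python
from typing import cast
from datasets import Dataset, load_dataset
dataset = cast(Dataset, load_dataset("csv", data_files={"train": "data/test.zip"}, split="train", encoding="gbk", skiprows=2))
dataset.to_csv("data/test.csv")  # now recognized by the type checker
```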
### Expected behavior
I want to know why this happens. Is there something wrong with my code?
### Environment info
OS: Windows 11 (upgraded from Windows_NT x64 10.0.22631)
Editor: VS Code 1.106.2 (user setup)
datasets version: 4.4.1
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7883/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7883/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7882
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7882/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7882/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7882/events
|
https://github.com/huggingface/datasets/issues/7882
| 3,667,664,527
|
I_kwDODunzps7anB6P
| 7,882
|
Inconsistent loading of LFS-hosted files in epfml/FineWeb-HQ dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/6270922?v=4",
"events_url": "https://api.github.com/users/Oligou/events{/privacy}",
"followers_url": "https://api.github.com/users/Oligou/followers",
"following_url": "https://api.github.com/users/Oligou/following{/other_user}",
"gists_url": "https://api.github.com/users/Oligou/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Oligou",
"id": 6270922,
"login": "Oligou",
"node_id": "MDQ6VXNlcjYyNzA5MjI=",
"organizations_url": "https://api.github.com/users/Oligou/orgs",
"received_events_url": "https://api.github.com/users/Oligou/received_events",
"repos_url": "https://api.github.com/users/Oligou/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Oligou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Oligou/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Oligou",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-11-26T14:06:02
| 2025-11-26T14:06:02
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
|
### Describe the bug
Some files in the `epfml/FineWeb-HQ` dataset fail to load via the Hugging Face `datasets` library.
- xet-hosted files load fine
- LFS-hosted files sometimes fail
Example:
- Fails: https://huggingface.co/datasets/epfml/FineWeb-HQ/blob/main/data/CC-MAIN-2024-26/000_00003.parquet
- Works: https://huggingface.co/datasets/epfml/FineWeb-HQ/blob/main/data/CC-MAIN-2024-42/000_00027.parquet
Discussion: https://huggingface.co/datasets/epfml/FineWeb-HQ/discussions/2
### Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset(
"epfml/FineWeb-HQ",
data_files="data/CC-MAIN-2024-26/000_00003.parquet",
)
```
Error message:
```
HfHubHTTPError: 403 Forbidden: None.
Cannot access content at: https://cdn-lfs-us-1.hf.co/repos/...
Make sure your token has the correct permissions.
...
<Error><Code>AccessDenied</Code><Message>Access Denied</Message></Error>
```
### Expected behavior
It should load the dataset for all files.
### Environment info
- python 3.10
- datasets 4.4.1
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7882/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7882/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7881
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7881/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7881/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7881/events
|
https://github.com/huggingface/datasets/pull/7881
| 3,667,642,524
|
PR_kwDODunzps61qI8F
| 7,881
|
Fix spurious label column when directories match split names
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/132138786?v=4",
"events_url": "https://api.github.com/users/neha222222/events{/privacy}",
"followers_url": "https://api.github.com/users/neha222222/followers",
"following_url": "https://api.github.com/users/neha222222/following{/other_user}",
"gists_url": "https://api.github.com/users/neha222222/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/neha222222",
"id": 132138786,
"login": "neha222222",
"node_id": "U_kgDOB-BHIg",
"organizations_url": "https://api.github.com/users/neha222222/orgs",
"received_events_url": "https://api.github.com/users/neha222222/received_events",
"repos_url": "https://api.github.com/users/neha222222/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/neha222222/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neha222222/subscriptions",
"type": "User",
"url": "https://api.github.com/users/neha222222",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-11-26T13:59:46
| 2025-12-08T12:21:54
| null |
NONE
| null | null | null | null |
Issue - https://github.com/huggingface/datasets/issues/7880
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7881/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7881/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7881.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7881",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7881.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7881"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7880
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7880/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7880/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7880/events
|
https://github.com/huggingface/datasets/issues/7880
| 3,667,561,864
|
I_kwDODunzps7amo2I
| 7,880
|
Spurious label column created when audiofolder/imagefolder directories match split names
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/132138786?v=4",
"events_url": "https://api.github.com/users/neha222222/events{/privacy}",
"followers_url": "https://api.github.com/users/neha222222/followers",
"following_url": "https://api.github.com/users/neha222222/following{/other_user}",
"gists_url": "https://api.github.com/users/neha222222/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/neha222222",
"id": 132138786,
"login": "neha222222",
"node_id": "U_kgDOB-BHIg",
"organizations_url": "https://api.github.com/users/neha222222/orgs",
"received_events_url": "https://api.github.com/users/neha222222/received_events",
"repos_url": "https://api.github.com/users/neha222222/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/neha222222/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neha222222/subscriptions",
"type": "User",
"url": "https://api.github.com/users/neha222222",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-11-26T13:36:24
| 2025-11-26T13:36:24
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
|
## Describe the bug
When using `audiofolder` or `imagefolder` with directories for **splits** (train/test) rather than class labels, a spurious `label` column is incorrectly created.
**Example:** https://huggingface.co/datasets/datasets-examples/doc-audio-4
```python
from datasets import load_dataset

ds = load_dataset("datasets-examples/doc-audio-4")
print(ds["train"].features)
```
Shows a `label` column with `ClassLabel(names=['test', 'train'])` - incorrect!
## Root cause
In `folder_based_builder.py`, the `labels` set is accumulated across ALL splits (line 77). When directories are `train/` and `test/`:
- `labels = {"train", "test"}` → `len(labels) > 1` → `add_labels = True`
- Spurious label column is created with split names as class labels
## Expected behavior
No `label` column should be added when directory names match split names.
## Proposed fix
Skip label inference when inferred labels match split names.
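A minimal sketch of the proposed guard (variable names follow the description above; the real logic in `folder_based_builder.py` has more branches):
```python
# Illustrative only: inside the split analysis of folder_based_builder.py
labels = {"train", "test"}       # inferred from directory names, as described above
split_names = {"train", "test"}  # the splits being generated
# Proposed: don't add a label column when the inferred "labels" are just split names
add_labels = len(labels) > 1 and not labels.issubset(split_names)
```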
cc @lhoestq
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7880/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7880/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7879
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7879/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7879/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7879/events
|
https://github.com/huggingface/datasets/issues/7879
| 3,657,249,446
|
I_kwDODunzps7Z_TKm
| 7,879
|
python core dump when downloading dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/5960219?v=4",
"events_url": "https://api.github.com/users/hansewetz/events{/privacy}",
"followers_url": "https://api.github.com/users/hansewetz/followers",
"following_url": "https://api.github.com/users/hansewetz/following{/other_user}",
"gists_url": "https://api.github.com/users/hansewetz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hansewetz",
"id": 5960219,
"login": "hansewetz",
"node_id": "MDQ6VXNlcjU5NjAyMTk=",
"organizations_url": "https://api.github.com/users/hansewetz/orgs",
"received_events_url": "https://api.github.com/users/hansewetz/received_events",
"repos_url": "https://api.github.com/users/hansewetz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hansewetz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hansewetz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hansewetz",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"Hi @hansewetz I'm curious, for me it works just fine. Are you still observing the issue?",
"Yup ... still the same issue.\nHowever, after adding a ```sleep(1)``` call after the ``` for``` loop by accident during debugging, the program terminates properly (not a good solution though ... :-) ).\nAre there some threads created that handles the download that are still running when the program exits?\nHaven't had time yet to go through the code in ```iterable_dataset.py::IterableDataset```\n",
"Interesting, I was able to reproduce it, on a jupyter notebook the code runs just fine, as a Python script indeed it seems to never finish running (which is probably leading to the core dumped error). I'll try and take a look at the source code as well to see if I can figure it out.",
"Hi @hansewetz ,\nIf possible can I be assigned with this issue?\n\n",
"```If possible can I be assigned with this issue?```\nHi, I don't know how assignments work here and who can take decisions about assignments ... ",
"Hi @hansewetz and @Aymuos22, I have made some progress:\n\n1) Confirmed last working version is 3.1.0\n\n2) From 3.1.0 to 3.2.0, there was a change in how parquet files are read (see [here](https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/parquet/parquet.py/#168).\n\nThe issue seems to be the following code:\n\n```\nparquet_fragment.to_batches(\n batch_size=batch_size,\n columns=self.config.columns,\n filter=filter_expr,\n batch_readahead=0,\n fragment_readahead=0,\n )\n```\n\nAdding a `use_threads=False` parameter to the `to_batches` call solves the bug. However, this seems far from an optimal solution, since we'd like to be able to use multiple threads for reading the fragments. \n\nI'll keep investigating to see if there's a better solution.",
"Hi @lhoestq, may I ask if the current behaviour was expected by you folks and you don't think it needs solving, or should I keep on investigating a compromise between using multithreading / avoid unexpected behaviour? Thanks in advance :) ",
"Having the same issue. the code never stops executing. Using datasets 4.4.1\nTried with \"islice\" as well. When the streaming flag is True, the code doesn't end execution. On vs-code.",
"The issue on pyarrow side is here: https://github.com/apache/arrow/issues/45214 and the original issue in `datasets` here: https://github.com/huggingface/datasets/issues/7357\n\nIt would be cool to have a fix on the pyarrow side",
"Thank you very much @lhoestq, I'm reading the issue thread in pyarrow and realizing you've been raising awareness around this for a long time now. When I have some time I'll look at @pitrou's PR to see if I can get a better understanding of what's going on on pyarrow. "
] | 2025-11-24T06:22:53
| 2025-11-25T20:45:55
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
|
### Describe the bug
When downloading a dataset in streamed mode and exiting the program before the download completes, the python program core dumps when exiting:
```
terminate called without an active exception
Aborted (core dumped)
```
Tested with python 3.12.3, python 3.9.21
### Steps to reproduce the bug
Create python venv:
```bash
python -m venv venv
source venv/bin/activate
pip install datasets==4.4.1
```
Execute the following program:
```python
from datasets import load_dataset

ds = load_dataset("HuggingFaceFW/fineweb-2", 'hrv_Latn', split="test", streaming=True)
for sample in ds:
    break
```
### Expected behavior
Clean program exit
### Environment info
described above
**note**: the example works correctly when using `datasets==3.1.0`
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7879/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7879/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7878
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7878/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7878/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7878/events
|
https://github.com/huggingface/datasets/pull/7878
| 3,653,262,027
|
PR_kwDODunzps606R81
| 7,878
|
Replace papaya with niivue
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/31857876?v=4",
"events_url": "https://api.github.com/users/CloseChoice/events{/privacy}",
"followers_url": "https://api.github.com/users/CloseChoice/followers",
"following_url": "https://api.github.com/users/CloseChoice/following{/other_user}",
"gists_url": "https://api.github.com/users/CloseChoice/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/CloseChoice",
"id": 31857876,
"login": "CloseChoice",
"node_id": "MDQ6VXNlcjMxODU3ODc2",
"organizations_url": "https://api.github.com/users/CloseChoice/orgs",
"received_events_url": "https://api.github.com/users/CloseChoice/received_events",
"repos_url": "https://api.github.com/users/CloseChoice/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/CloseChoice/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CloseChoice/subscriptions",
"type": "User",
"url": "https://api.github.com/users/CloseChoice",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"@CloseChoice thanks for your work on this. As you mentioned, the prime developers for Papaya have moved on, so it is in maintenance mode, albeit it is mature and may fill all your requirements. \r\n\r\nPapaya does reflect the era of its creation, so it uses WebGL1 (which only supports 2D textures) for display and pako for decompression. In contrast, NiiVue uses WebGL2 (where 3D textures provide a native representation for volumes) and compression streams (x4 decoding speed). A major benefit of 3D textures is simple support for 3D volume rendering using ray casting. Note the Papaya README shows an isosurface rendering based on a triangulated mesh. In contrast, NiiVue can show both volume rendering (good for data with fuzzy boundaries) as well as surface rendering (good when a clean isosurface can be defined). I think the [gallery](https://niivue.com/gallery) provides a nice example of NiiVue capabilities as well as minimal recipes.\r\n\r\nI do agree that Papaya UI is more advanced: by design NiiVue is a graphic widget that can be embedded into a container that provides your preferred user interface (React, Angular, Vue, pure html, or even jupyter notebooks). \r\n\r\nI think DICOM support is a challenge for any tool for several reasons: the diversity of the implementations and compression methods (transfer syntaxes), the fact that in classic DICOM each 2D slice is saved as a separate file (though note modern enhanced DICOM can save an entire 3D volume or even 4D timeseries in a single file), and the rate that this format has evolved over time. Papaya uses [Daikon](https://github.com/rii-mango/Daikon) to handle DICOM images, and I think it is only one file at a time. In contrast, NiiVue provides plugins for complex image formats, so you can choose your desired tool. We do provide illustrate how to use [dcm2niix WASM](https://github.com/niivue/niivue-dcm2niix) as a DICOM loader, and it can extract coherent volumes from a random assortment of files or a manifest of files - see the [live demo](https://github.com/niivue/niivue-dcm2niix). Note that diakon development has halted, while dcm2niix is actively maintained, which impacts support for emerging compression methods (e.g. JPEG2000-HT). Having said that, if your primary focus is DICOM, [cornerstonejs](https://www.cornerstonejs.org/) is probably a better choice than NiiVue or Papaya.\r\n\r\nAnother feature that may or may not be worth noting is that NiiVue has a plugin model that allows you to use a lot of mature image processing tools. So you can do image conversion, image processing (itk-wasm, niimath), image registration (flirt, elastix) and edge-based AI models. [brainchop](https://brainchop.org/) illustrates edge-based AI model inference for brain segmentation, extraction and parcellation, though we provide minimal examples for ONNX, tensorflowjs and tinygrad. This would provide a convenient way for huggingface inference models to be shared. After training, the models could be converted to ONNX and deployed on a web page, allowing the user to drag-and-drop images and process them regardless of operating system or graphics card manufacturer. Since the AI model inference leverages the users own graphics card, the privacy issues and hardware scaling concerns of cloud distribution are mitigated.\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n",
"@neurolabusc thanks so much for the nuanced and informative reply.\r\nI am convinced that niivue is the better option here, having 3D support is huge and Papaya's UI features are actually not necessary at all, and AFAIS we can get what we need and more with some additional configuration for niivue as well.\r\nThanks a lot for words about DICOM, though the focus of this PR is not NifTI and not DICOM, I think having one tool being able to load both (and potentially more formats) is best, I'll definitely test the live demo. My primary interest in your thoughts about DICOM is to enable visualization as a follow-up to this PR #https://github.com/huggingface/datasets/pull/7835. Even for the DICOM case NiiVue seems like a great option using the [dcm2niix](https://github.com/niivue/niivue-dcm2niix) webassembly plugin, I think the main challenge is here how we let the user organize files in an intuitive way (e.g. provide DICOM folder class, and a DICOM document class where one folder can contain multiple documents and 3d visualization is on the folder level). \r\n\r\nGiven that NiiVue is a modern neuroimaging viewer, well maintained and widely used and we have @neurolabusc attention in case of questions/problems I think we should go ahead with NiiVue.\r\n\r\n@lhoestq your thoughts are highly appreciated.",
"Following the @neurolabusc 's suggestion I updated to [ipyniivue](https://github.com/niivue/ipyniivue?tab=readme-ov-file) which helps so that we don't need to bother with javascript and speeds up load times since ipyniivue comes with a bundled niivue version and therefore avoids to download. Since DICOM is out of the picture for now, I consider this ready to be reviewed.",
"The docs for this PR live [here](https://huggingface.co/proxy/moon-ci-docs.huggingface.co/docs/datasets/pr_7878). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-11-21T22:19:56
| 2025-11-27T20:37:04
| 2025-11-27T18:00:19
|
CONTRIBUTOR
| null | null | null | null |
I was contacted by Chris Rorden, whose group is developing NiiVue (see https://github.com/niivue/niivue), which leverages WebGL2 (in contrast to Papaya, which is WebGL1 based). He also offered support with the implementation, which might come in handy in case of any questions later on (see DICOM implementation). I completely overlooked NiiVue when searching for frameworks.
Development speed, or the lack thereof, was already mentioned as a potential risk with Papaya. NiiVue is well and actively maintained; simply compare these two contribution charts:
NiiVue:
<img width="920" height="378" alt="image" src="https://github.com/user-attachments/assets/37a0a256-60aa-4758-bb07-97e421c68ae1" />
Papaya:
<img width="920" height="378" alt="image" src="https://github.com/user-attachments/assets/1e1cf0c9-ec0a-4ffc-ae03-a79ea12bcb3b" />
I gave NiiVue a try and it supports all the features Papaya does, though I find Papaya's UI slightly more appealing; that is just personal taste. There is also a 3D image of the scanned object included in the NiiVue UI, but that is possible for Papaya as well (at least in some way, check the image in their GitHub repo README.md).
```python
from datasets import load_dataset
# new dataset compared to papaya PR, this has more interesting images
ds = load_dataset("TobiasPitters/nifti-papaya-testdata",
split="train")
ds[1]['nifti'] # ds[2]['nifti'] is also interesting
```
Here's a brief video how this looks with NiiVue: https://github.com/user-attachments/assets/3f2a52d4-2109-45e2-aca8-e4a4b1e46b32
NOTE: I explicitly created this as a draft PR since I suspect DICOM support to be a crucial factor in deciding which of these two is better suited for our needs. DICOM is supported by Papaya, and by NiiVue as well via a plugin, but as far as I understand one DICOM file contains one 2D image, so support for loading a whole folder containing all 2D layers of a complete 3D image is desired. NiiVue supports this according to their docs; I am unsure about Papaya.
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7878/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7878/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7878.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7878",
"merged_at": "2025-11-27T18:00:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7878.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7878"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7877
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7877/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7877/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7877/events
|
https://github.com/huggingface/datasets/issues/7877
| 3,652,906,788
|
I_kwDODunzps7Zuu8k
| 7,877
|
work around `tempfile` silently ignoring `TMPDIR` if the dir doesn't exist
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stas00",
"id": 10676103,
"login": "stas00",
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"repos_url": "https://api.github.com/users/stas00/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stas00",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"Hi! Just created a Pull Request (#7890) to try to fix this using your suggestions. I hope it helps!"
] | 2025-11-21T19:51:48
| 2025-11-29T20:37:42
| null |
CONTRIBUTOR
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
|
This should help a lot of users running into `No space left on device` while using `datasets`. Normally the issue is that `/tmp` is too small and the user needs to use another path, which they would normally set as `export TMPDIR=/some/big/storage`
However, the `tempfile` facility that `datasets` and `pyarrow` use is somewhat broken. If the path doesn't exist, it ignores it and falls back to using `/tmp`. Watch this:
```
$ export TMPDIR='/tmp/username'
$ python -c "\
import os
import tempfile
print(os.environ['TMPDIR'])
print(tempfile.gettempdir())"
/tmp/username
/tmp
```
Now let's ensure the path exists:
```
$ export TMPDIR='/tmp/username'
$ mkdir -p $TMPDIR
$ python -c "\
import os
import tempfile
print(os.environ['TMPDIR'])
print(tempfile.gettempdir())"
/tmp/username
/tmp/username
```
So I recommend `datasets` do either of the 2:
1. assert if `$TMPDIR` dir doesn't exist, telling the user to create it
2. auto-create it
The reason for (1) is that I don't know why `tempfile` doesn't auto-create the dir - perhaps there is some security implication? I will let you guys make the decision, but the key is not to let things silently fall through, leaving the user puzzled why, no matter what they do, they can't get past `No space left on device` while using `datasets`.
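For illustration, here is a minimal sketch of option (2), assuming the check runs early, before anything has called `tempfile.gettempdir()` (which caches its result); this is not the actual `datasets` implementation, just the idea:
```python
import os
import tempfile

# Validate TMPDIR before tempfile caches its fallback to /tmp.
tmpdir = os.environ.get("TMPDIR")
if tmpdir and not os.path.isdir(tmpdir):
    # Option (1) would be: raise OSError(f"TMPDIR={tmpdir} does not exist; please create it")
    # Option (2): auto-create it so tempfile.gettempdir() picks it up.
    os.makedirs(tmpdir, exist_ok=True)

print(tempfile.gettempdir())  # now resolves to $TMPDIR instead of /tmp
```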
Thank you.
I found this via https://stackoverflow.com/questions/37229398/python-tempfile-gettempdir-does-not-respect-tmpdir while trying to help a colleague to solve this exact issue.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7877/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7877/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7876
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7876/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7876/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7876/events
|
https://github.com/huggingface/datasets/pull/7876
| 3,652,170,832
|
PR_kwDODunzps602lac
| 7,876
|
test: add verification for HuggingFaceM4/InterleavedWebDocuments
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/122142345?v=4",
"events_url": "https://api.github.com/users/venkatsai2004/events{/privacy}",
"followers_url": "https://api.github.com/users/venkatsai2004/followers",
"following_url": "https://api.github.com/users/venkatsai2004/following{/other_user}",
"gists_url": "https://api.github.com/users/venkatsai2004/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/venkatsai2004",
"id": 122142345,
"login": "venkatsai2004",
"node_id": "U_kgDOB0e-iQ",
"organizations_url": "https://api.github.com/users/venkatsai2004/orgs",
"received_events_url": "https://api.github.com/users/venkatsai2004/received_events",
"repos_url": "https://api.github.com/users/venkatsai2004/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/venkatsai2004/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/venkatsai2004/subscriptions",
"type": "User",
"url": "https://api.github.com/users/venkatsai2004",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-11-21T15:42:09
| 2025-11-21T15:42:09
| null |
NONE
| null | null | null | null |
Adds an integration test for the `HuggingFaceM4/InterleavedWebDocuments` dataset.
- Gracefully skips if the dataset is not yet available on the Hub
- Checks basic loading and structure once it becomes available
Closes #7394
First-time contributor to `datasets` — really excited about this! Happy to make any adjustments needed. 🙂
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7876/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7876/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7876.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7876",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7876.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7876"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7875
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7875/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7875/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7875/events
|
https://github.com/huggingface/datasets/pull/7875
| 3,649,326,175
|
PR_kwDODunzps60s9my
| 7,875
|
Add quickstart example to datasets README
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/101023542?v=4",
"events_url": "https://api.github.com/users/hajermabrouk/events{/privacy}",
"followers_url": "https://api.github.com/users/hajermabrouk/followers",
"following_url": "https://api.github.com/users/hajermabrouk/following{/other_user}",
"gists_url": "https://api.github.com/users/hajermabrouk/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hajermabrouk",
"id": 101023542,
"login": "hajermabrouk",
"node_id": "U_kgDOBgV_Ng",
"organizations_url": "https://api.github.com/users/hajermabrouk/orgs",
"received_events_url": "https://api.github.com/users/hajermabrouk/received_events",
"repos_url": "https://api.github.com/users/hajermabrouk/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hajermabrouk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hajermabrouk/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hajermabrouk",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-11-20T22:13:52
| 2025-11-20T22:13:52
| null |
NONE
| null | null | null | null | null | null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7875/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7875/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7875.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7875",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7875.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7875"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7874
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7874/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7874/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7874/events
|
https://github.com/huggingface/datasets/pull/7874
| 3,644,558,046
|
PR_kwDODunzps60c4sg
| 7,874
|
Nifti visualization support
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/31857876?v=4",
"events_url": "https://api.github.com/users/CloseChoice/events{/privacy}",
"followers_url": "https://api.github.com/users/CloseChoice/followers",
"following_url": "https://api.github.com/users/CloseChoice/following{/other_user}",
"gists_url": "https://api.github.com/users/CloseChoice/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/CloseChoice",
"id": 31857876,
"login": "CloseChoice",
"node_id": "MDQ6VXNlcjMxODU3ODc2",
"organizations_url": "https://api.github.com/users/CloseChoice/orgs",
"received_events_url": "https://api.github.com/users/CloseChoice/received_events",
"repos_url": "https://api.github.com/users/CloseChoice/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/CloseChoice/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CloseChoice/subscriptions",
"type": "User",
"url": "https://api.github.com/users/CloseChoice",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"The docs for this PR live [here](https://huggingface.co/proxy/moon-ci-docs.huggingface.co/docs/datasets/pr_7874). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"I tested in Colab and it works perfectly :) now I want to add `_repr_html_` everywhere xD\r\n\r\nRe: testing, I think it's fine to test manually such features"
] | 2025-11-19T21:56:56
| 2025-11-21T12:41:43
| 2025-11-21T12:31:18
|
CONTRIBUTOR
| null | null | null | null |
closes #7870
leverage Papaya to visualize NIfTI images. For this I created a wrapper class for `nibabel.nifti1.Nifti1Image` that provides the same interface but exposes an additional `_repr_html_` method, which is needed to visualize the image in Jupyter (didn't test in Colab, but that should work equivalently).
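For reference, a minimal sketch of the general wrapper pattern (the class name and HTML body are hypothetical, not the PR's actual implementation):
```python
import nibabel as nib

class NiftiHTMLWrapper:
    """Wraps a Nifti1Image and adds a notebook display hook."""

    def __init__(self, nifti_image: nib.nifti1.Nifti1Image):
        self._image = nifti_image

    def __getattr__(self, name):
        # Delegate everything else to the wrapped Nifti1Image.
        return getattr(self._image, name)

    def _repr_html_(self):
        # Jupyter calls this to render the object; a real implementation
        # would embed the Papaya viewer plus the image data here.
        return f"<b>NIfTI image</b>, shape={self._image.shape}"
```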
Code to test (execute in a notebook):
```python
from datasets import load_dataset
ds = load_dataset("TobiasPitters/nifti-nitest-extracted",
split="train")
image = ds[1]
image
```
Here's a small video, not the most exciting scan though:
https://github.com/user-attachments/assets/1cca5f01-6fd2-48ef-a4d7-a92c1259c224
I am open to good ways to test this.
EDIT: Papaya also supports DICOM, though I didn't test it yet
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7874/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7874/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7874.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7874",
"merged_at": "2025-11-21T12:31:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7874.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7874"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7873
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7873/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7873/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7873/events
|
https://github.com/huggingface/datasets/pull/7873
| 3,643,993,705
|
PR_kwDODunzps60a_IZ
| 7,873
|
Fix chunk casting and schema unification in dataset
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
"events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
"followers_url": "https://api.github.com/users/ArjunJagdale/followers",
"following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
"gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ArjunJagdale",
"id": 142811259,
"login": "ArjunJagdale",
"node_id": "U_kgDOCIMgew",
"organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
"received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
"repos_url": "https://api.github.com/users/ArjunJagdale/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ArjunJagdale",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"\r\n@lhoestq would like to hear from you!\r\n"
] | 2025-11-19T18:43:47
| 2025-11-22T19:51:30
| null |
CONTRIBUTOR
| null | null | null | null |
Updated chunk handling to cast to expected schema when features are provided or to unify schemas when not. This ensures proper schema alignment for the yielded batches.
fixes #7872
This PR fixes a bug where `IterableDataset` created from a generator with explicit `features` parameter would fail during arrow operations (like `.to_pandas()`) when the data contains missing or null values.
## Problem
When an `IterableDataset` is created with explicit features but the generator yields data with missing values (e.g., empty lists), PyArrow would infer different schemas for different batches based on the actual data rather than using the provided schema. This caused `ArrowInvalid` errors when trying to concatenate batches with mismatched schemas.
### Example error:
```python
pyarrow.lib.ArrowInvalid: Schema at index 1 was different:
a: int64
b: list<item: null>
vs
a: int64
b: list<item: struct<c: int64>>
```
## Solution
Modified `RebatchedArrowExamplesIterable._iter_arrow()` to:
1. Cast chunks to the expected schema when explicit features are provided
2. Unify schemas across chunks when no explicit features are set
3. Gracefully handle cast failures by falling back to the original chunk
This ensures that the user-provided schema is respected throughout the iteration process.
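To make the idea concrete, here is a rough sketch of the casting/unification logic (illustrative only, not the exact code inside `RebatchedArrowExamplesIterable._iter_arrow`, which operates on record batches):
```python
import pyarrow as pa

def align_chunks(chunks, expected_schema=None):
    """Cast chunks to the user schema if given, otherwise unify the inferred schemas."""
    if expected_schema is not None:
        casted = []
        for chunk in chunks:
            try:
                casted.append(chunk.cast(expected_schema))  # 1. cast to the expected schema
            except (pa.ArrowInvalid, pa.ArrowNotImplementedError):
                casted.append(chunk)  # 3. fall back to the original chunk
        return pa.concat_tables(casted)
    # 2. no explicit features: unify the schemas inferred from the data
    unified = pa.unify_schemas([chunk.schema for chunk in chunks])
    return pa.concat_tables([chunk.cast(unified) for chunk in chunks])
```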
## Testing
Verified the fix with the following test case:
```python
import datasets
from datasets import features
def test_to_pandas_works_with_explicit_schema():
common_features = features.Features(
{
"a": features.Value("int64"),
"b": features.List({"c": features.Value("int64")}),
}
)
def row_generator():
data = [{"a": 1, "b": []}, {"a": 1, "b": [{"c": 1}]}]
for row in data:
yield row
d = datasets.IterableDataset.from_generator(row_generator, features=common_features)
print("Iterating…")
for _ in d.to_pandas():
pass
test_to_pandas_works_with_explicit_schema()
```
Before Patch -
```
@ArjunJagdale ➜ /workspaces/datasets (main) $ python test_arjun.py
Iterating…
Traceback (most recent call last):
File "/workspaces/datasets/test_arjun.py", line 24, in <module>
test_to_pandas_works_with_explicit_schema()
File "/workspaces/datasets/test_arjun.py", line 21, in test_to_pandas_works_with_explicit_schema
for _ in d.to_pandas():
File "/workspaces/datasets/src/datasets/iterable_dataset.py", line 3736, in to_pandas
table = pa.concat_tables(list(self.with_format("arrow").iter(batch_size=1000)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspaces/datasets/src/datasets/iterable_dataset.py", line 2596, in iter
for key, pa_table in iterator:
File "/workspaces/datasets/src/datasets/iterable_dataset.py", line 2111, in _iter_arrow
for key, pa_table in self.ex_iterable._iter_arrow():
File "/workspaces/datasets/src/datasets/iterable_dataset.py", line 632, in _iter_arrow
yield new_key, pa.Table.from_batches(chunks_buffer)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "pyarrow/table.pxi", line 5039, in pyarrow.lib.Table.from_batches
File "pyarrow/error.pxi", line 155, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 92, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Schema at index 1 was different:
a: int64
b: list<item: null>
vs
a: int64
b: list<item: struct<c: int64>>
```
After Patch -
```
@ArjunJagdale ➜ /workspaces/datasets (main) $ python test_arjun.py
Iterating…
@ArjunJagdale ➜ /workspaces/datasets (main) $
```
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7873/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7873/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7873.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7873",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7873.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7873"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/7872
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7872/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7872/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7872/events
|
https://github.com/huggingface/datasets/issues/7872
| 3,643,681,893
|
I_kwDODunzps7ZLixl
| 7,872
|
IterableDataset does not use features information in to_pandas
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/790640?v=4",
"events_url": "https://api.github.com/users/bonext/events{/privacy}",
"followers_url": "https://api.github.com/users/bonext/followers",
"following_url": "https://api.github.com/users/bonext/following{/other_user}",
"gists_url": "https://api.github.com/users/bonext/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bonext",
"id": 790640,
"login": "bonext",
"node_id": "MDQ6VXNlcjc5MDY0MA==",
"organizations_url": "https://api.github.com/users/bonext/orgs",
"received_events_url": "https://api.github.com/users/bonext/received_events",
"repos_url": "https://api.github.com/users/bonext/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bonext/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bonext/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bonext",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"Created A PR!",
"Another test script that can be used to test the behavior - \n\n```\nimport datasets\nfrom datasets import features\n\ndef test_crash():\n common_features = features.Features({\n \"a\": features.Value(\"int64\"),\n \"b\": features.List({\"c\": features.Value(\"int64\")}),\n })\n\n def row_generator():\n yield {\"a\": 1, \"b\": []}\n yield {\"a\": 1, \"b\": [{\"c\": 1}]}\n\n d = datasets.IterableDataset.from_generator(row_generator, features=common_features)\n\n list(d.to_pandas()) # <-- this triggers the crash\n\n```"
] | 2025-11-19T17:12:59
| 2025-11-19T18:52:14
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
|
### Describe the bug
An `IterableDataset` created from a generator with an explicit `features=` parameter seems to ignore the provided features description for certain operations, e.g. `.to_pandas(...)`, when the data coming from the generator has missing values.
### Steps to reproduce the bug
```python
import datasets
from datasets import features
def test_to_pandas_works_with_explicit_schema():
common_features = features.Features(
{
"a": features.Value("int64"),
"b": features.List({"c": features.Value("int64")}),
}
)
def row_generator():
data = [{"a": 1, "b": []}, {"a": 1, "b": [{"c": 1}]}]
for row in data:
yield row
d = datasets.IterableDataset.from_generator(row_generator, features=common_features)
for _ in d.to_pandas():
pass
# _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
# .venv/lib/python3.13/site-packages/datasets/iterable_dataset.py:3703: in to_pandas
# table = pa.concat_tables(list(self.with_format("arrow").iter(batch_size=1000)))
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# .venv/lib/python3.13/site-packages/datasets/iterable_dataset.py:2563: in iter
# for key, pa_table in iterator:
# ^^^^^^^^
# .venv/lib/python3.13/site-packages/datasets/iterable_dataset.py:2078: in _iter_arrow
# for key, pa_table in self.ex_iterable._iter_arrow():
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# .venv/lib/python3.13/site-packages/datasets/iterable_dataset.py:599: in _iter_arrow
# yield new_key, pa.Table.from_batches(chunks_buffer)
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
# pyarrow/table.pxi:5039: in pyarrow.lib.Table.from_batches
# ???
# pyarrow/error.pxi:155: in pyarrow.lib.pyarrow_internal_check_status
# ???
# _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
# > ???
# E pyarrow.lib.ArrowInvalid: Schema at index 1 was different:
# E a: int64
# E b: list<item: null>
# E vs
# E a: int64
# E b: list<item: struct<c: int64>>
# pyarrow/error.pxi:92: ArrowInvalid
```
### Expected behavior
Arrow operations should use the schema provided through `features=` and not the one inferred from the data
### Environment info
- datasets version: 4.4.1
- Platform: macOS-15.7.1-arm64-arm-64bit-Mach-O
- Python version: 3.13.1
- huggingface_hub version: 1.1.4
- PyArrow version: 22.0.0
- Pandas version: 2.3.3
- fsspec version: 2025.10.0
| null |
{
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7872/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7872/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7871
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7871/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7871/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7871/events
|
https://github.com/huggingface/datasets/issues/7871
| 3,643,607,371
|
I_kwDODunzps7ZLQlL
| 7,871
|
Reqwest Error: HTTP status client error (429 Too Many Requests)
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/26405281?v=4",
"events_url": "https://api.github.com/users/yanan1116/events{/privacy}",
"followers_url": "https://api.github.com/users/yanan1116/followers",
"following_url": "https://api.github.com/users/yanan1116/following{/other_user}",
"gists_url": "https://api.github.com/users/yanan1116/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yanan1116",
"id": 26405281,
"login": "yanan1116",
"node_id": "MDQ6VXNlcjI2NDA1Mjgx",
"organizations_url": "https://api.github.com/users/yanan1116/orgs",
"received_events_url": "https://api.github.com/users/yanan1116/received_events",
"repos_url": "https://api.github.com/users/yanan1116/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yanan1116/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yanan1116/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yanan1116",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"the dataset repo: `https://huggingface.co/datasets/nvidia/PhysicalAI-Robotics-GR00T-X-Embodiment-Sim`",
"Hi @yanan1116,\n\nThanks for the detailed report! However, this issue was filed in the wrong repository. This is a `huggingface_hub` issue, not a `datasets` issue.\n\nLooking at your traceback, you're using the `hf download` CLI command (from `huggingface_hub`), and the error occurs in `huggingface_hub/file_download.py` at line 571 in the `xet_get` function. The `datasets` library is not involved in this download at all.\n\nThe 429 error means the CAS (Content Addressable Storage) service at `https://cas-server.xethub.hf.co` is rate-limiting your requests. The `huggingface_hub` library currently doesn't have automatic retry logic for 429 errors from the CAS service.\n\nPlease reopen this issue at: https://github.com/huggingface/huggingface_hub/issues"
] | 2025-11-19T16:52:24
| 2025-11-30T13:38:32
| 2025-11-30T13:38:32
|
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
|
### Describe the bug
full error message:
```
Traceback (most recent call last):
File "/home/yanan/miniconda3/bin/hf", line 7, in <module>
sys.exit(main())
~~~~^^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/cli/hf.py", line 56, in main
app()
~~~^^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/typer/main.py", line 327, in __call__
raise e
File "/home/yanan/miniconda3/lib/python3.13/site-packages/typer/main.py", line 310, in __call__
return get_command(self)(*args, **kwargs)
~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/click/core.py", line 1161, in __call__
return self.main(*args, **kwargs)
~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/typer/core.py", line 803, in main
return _main(
self,
...<6 lines>...
**extra,
)
File "/home/yanan/miniconda3/lib/python3.13/site-packages/typer/core.py", line 192, in _main
rv = self.invoke(ctx)
File "/home/yanan/miniconda3/lib/python3.13/site-packages/click/core.py", line 1697, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/click/core.py", line 1443, in invoke
return ctx.invoke(self.callback, **ctx.params)
~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/click/core.py", line 788, in invoke
return __callback(*args, **kwargs)
File "/home/yanan/miniconda3/lib/python3.13/site-packages/typer/main.py", line 691, in wrapper
return callback(**use_params)
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/cli/download.py", line 188, in download
_print_result(run_download())
~~~~~~~~~~~~^^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/cli/download.py", line 149, in run_download
return snapshot_download(
repo_id=repo_id,
...<10 lines>...
dry_run=dry_run,
)
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/utils/_validators.py", line 89, in _inner_fn
return fn(*args, **kwargs)
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/_snapshot_download.py", line 451, in snapshot_download
thread_map(
~~~~~~~~~~^
_inner_hf_hub_download,
^^^^^^^^^^^^^^^^^^^^^^^
...<3 lines>...
tqdm_class=tqdm_class,
^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/tqdm/contrib/concurrent.py", line 69, in thread_map
return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs)
File "/home/yanan/miniconda3/lib/python3.13/site-packages/tqdm/contrib/concurrent.py", line 51, in _executor_map
return list(tqdm_class(ex.map(fn, *iterables, chunksize=chunksize), **kwargs))
File "/home/yanan/miniconda3/lib/python3.13/site-packages/tqdm/std.py", line 1181, in __iter__
for obj in iterable:
^^^^^^^^
File "/home/yanan/miniconda3/lib/python3.13/concurrent/futures/_base.py", line 619, in result_iterator
yield _result_or_cancel(fs.pop())
~~~~~~~~~~~~~~~~~^^^^^^^^^^
File "/home/yanan/miniconda3/lib/python3.13/concurrent/futures/_base.py", line 317, in _result_or_cancel
return fut.result(timeout)
~~~~~~~~~~^^^^^^^^^
File "/home/yanan/miniconda3/lib/python3.13/concurrent/futures/_base.py", line 449, in result
return self.__get_result()
~~~~~~~~~~~~~~~~~^^
File "/home/yanan/miniconda3/lib/python3.13/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "/home/yanan/miniconda3/lib/python3.13/concurrent/futures/thread.py", line 59, in run
result = self.fn(*self.args, **self.kwargs)
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/_snapshot_download.py", line 431, in _inner_hf_hub_download
hf_hub_download( # type: ignore
~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
repo_id,
^^^^^^^^
...<14 lines>...
dry_run=dry_run,
^^^^^^^^^^^^^^^^
)
^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/utils/_validators.py", line 89, in _inner_fn
return fn(*args, **kwargs)
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/file_download.py", line 986, in hf_hub_download
return _hf_hub_download_to_local_dir(
# Destination
...<16 lines>...
dry_run=dry_run,
)
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/file_download.py", line 1390, in _hf_hub_download_to_local_dir
_download_to_tmp_and_move(
~~~~~~~~~~~~~~~~~~~~~~~~~^
incomplete_path=paths.incomplete_path(etag),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<8 lines>...
tqdm_class=tqdm_class,
^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/file_download.py", line 1791, in _download_to_tmp_and_move
xet_get(
~~~~~~~^
incomplete_path=incomplete_path,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...<4 lines>...
tqdm_class=tqdm_class,
^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/file_download.py", line 571, in xet_get
download_files(
~~~~~~~~~~~~~~^
xet_download_info,
^^^^^^^^^^^^^^^^^^
...<3 lines>...
progress_updater=[progress_updater],
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
RuntimeError: Data processing error: CAS service error : Reqwest Error: HTTP status client error (429 Too Many Requests), domain: https://cas-server.xethub.hf.co/reconstructions/04b8a4667b84b3b874a6a2f070cec88920f6289e71185d69fa87e3cf29834710
```
### Steps to reproduce the bug
my command
```bash
hf download nvidia/PhysicalAI-Robotics-GR00T-X-Embodiment-Sim --repo-type dataset --include "single_panda_gripper.CoffeePressButton/**" --local-dir /home/yanan/robotics/Isaac-GR00T/gr00t_dataset_official/
```
### Expected behavior
expect the data can be downloaded without any issue
### Environment info
huggingface_hub 1.1.4
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/26405281?v=4",
"events_url": "https://api.github.com/users/yanan1116/events{/privacy}",
"followers_url": "https://api.github.com/users/yanan1116/followers",
"following_url": "https://api.github.com/users/yanan1116/following{/other_user}",
"gists_url": "https://api.github.com/users/yanan1116/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yanan1116",
"id": 26405281,
"login": "yanan1116",
"node_id": "MDQ6VXNlcjI2NDA1Mjgx",
"organizations_url": "https://api.github.com/users/yanan1116/orgs",
"received_events_url": "https://api.github.com/users/yanan1116/received_events",
"repos_url": "https://api.github.com/users/yanan1116/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yanan1116/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yanan1116/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yanan1116",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7871/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7871/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7870
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7870/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7870/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7870/events
|
https://github.com/huggingface/datasets/issues/7870
| 3,642,209,953
|
I_kwDODunzps7ZF7ah
| 7,870
|
Visualization for Medical Imaging Datasets
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/31857876?v=4",
"events_url": "https://api.github.com/users/CloseChoice/events{/privacy}",
"followers_url": "https://api.github.com/users/CloseChoice/followers",
"following_url": "https://api.github.com/users/CloseChoice/following{/other_user}",
"gists_url": "https://api.github.com/users/CloseChoice/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/CloseChoice",
"id": 31857876,
"login": "CloseChoice",
"node_id": "MDQ6VXNlcjMxODU3ODc2",
"organizations_url": "https://api.github.com/users/CloseChoice/orgs",
"received_events_url": "https://api.github.com/users/CloseChoice/received_events",
"repos_url": "https://api.github.com/users/CloseChoice/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/CloseChoice/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CloseChoice/subscriptions",
"type": "User",
"url": "https://api.github.com/users/CloseChoice",
"user_view_type": "public"
}
|
[] |
closed
| false
| null |
[] | null |
[
"It would be amazing to be able to show the Papaya UI in google colab / jupyter notebook. IIRC both allow serving javascript via nbextensions that we can surely use in HTML() objects.\n\nAlternatively we could also start with a simple approach and dump the medical image data as a video file that goes through the slices, so we don't need javascript."
] | 2025-11-19T11:05:39
| 2025-11-21T12:31:19
| 2025-11-21T12:31:19
|
CONTRIBUTOR
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
|
This is a followup to: https://github.com/huggingface/datasets/pull/7815.
I checked the possibilities for visualizing NIfTI (and potentially DICOM) files, and here's what I found:
- https://github.com/aces/brainbrowser, AGPL3 license, last commit 3 months ago, latest (github) release from 2017. It's available on jsdelivr: https://www.jsdelivr.com/package/npm/brainbrowser (but that is from 2015!)
- https://github.com/rii-mango/Papaya, custom but BSD-style license that would require datasets to list the conditions in their README somewhere; last commit June 2024. I looked into this library and it looks mature and good enough for our use case, but working on it only for a short time I wasn't able to get it running; I am sure we could get it working, though it would probably require some JS on datasets' end. Available on jsDelivr as well: https://www.jsdelivr.com/package/npm/papaya-viewer. Seems like it's frequently loaded.
- https://github.com/hanayik/niivue, BSD3 license, last commit May 26, 2021. Archived. Doesn't look like an option.
I think the only real option for us is Papaya, but there is also the risk that we'll end up with an unmaintained package after a while, since its development seems to be slow or even halted.
I think conceptually we need to figure out how to build a good solution for visualizing medical image data. In shap, we have a separate javascript folder in which we render visualizations; this could be a blueprint, but it would require a bundler, etc. Alternatively, one could go with a naive approach and just write some HTML in a Python string that loads the package via jsDelivr, as sketched below.
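A minimal sketch of that naive approach (the CDN path, div attributes, and parameter names are illustrative placeholders, not the verified papaya-viewer API):
```python
from IPython.display import HTML

def show_volume(base64_nifti: str) -> HTML:
    # Emit an HTML snippet that pulls the viewer bundle from jsDelivr and
    # hands it a base64-encoded NIfTI volume.
    return HTML(f"""
    <script src="https://cdn.jsdelivr.net/npm/papaya-viewer/release/current/standard/papaya.js"></script>
    <div class="papaya" data-params='{{"encodedImages": ["{base64_nifti}"]}}'></div>
    """)
```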
@lhoestq thoughts?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
|
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7870/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7870/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7869
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7869/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7869/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7869/events
|
https://github.com/huggingface/datasets/issues/7869
| 3,636,808,734
|
I_kwDODunzps7YxUwe
| 7,869
|
Why does dataset merge fail when tools have different parameters?
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/116297296?v=4",
"events_url": "https://api.github.com/users/hitszxs/events{/privacy}",
"followers_url": "https://api.github.com/users/hitszxs/followers",
"following_url": "https://api.github.com/users/hitszxs/following{/other_user}",
"gists_url": "https://api.github.com/users/hitszxs/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hitszxs",
"id": 116297296,
"login": "hitszxs",
"node_id": "U_kgDOBu6OUA",
"organizations_url": "https://api.github.com/users/hitszxs/orgs",
"received_events_url": "https://api.github.com/users/hitszxs/received_events",
"repos_url": "https://api.github.com/users/hitszxs/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hitszxs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hitszxs/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hitszxs",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"Hi @hitszxs,\n This is indeed by design,\n\nThe `datasets` library is built on top of [Apache Arrow](https://arrow.apache.org/), which uses a **columnar storage format** with strict schema requirements. When you try to concatenate/merge datasets, the library checks if features can be aligned using the [`_check_if_features_can_be_aligned`](https://github.com/huggingface/datasets/blob/main/src/datasets/features/features.py#L2297-L2316) function.\n\nTwo datasets can be merged if:\n1. Columns with the same name have the **same type**, OR\n2. One of them has `Value(\"null\")` (representing missing data)\n\nFor struct types (nested dictionaries like your tool schemas), **all fields must match exactly**. This ensures type safety and efficient columnar storage.\n\n## Workarounds for Your Use Case\n Store tools as JSON strings\n\nInstead of using nested struct types, store the tool definitions as JSON strings\n\n\n"
] | 2025-11-18T08:33:04
| 2025-11-30T03:52:07
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
|
Hi, I have a question about SFT (Supervised Fine-tuning) for an agent model.
Suppose I want to fine-tune an agent model that may receive two different tools: tool1 and tool2. These tools have different parameters and types in their schema definitions.
When I try to merge datasets containing different tool definitions, I get the following error:
TypeError: Couldn't cast array of type
struct<refundFee: struct<description: string, type: string>, ... , servicerId: struct<description: string, type: string>>
to
{
'refundFee': {'description': Value(dtype='string'), 'type': Value(dtype='string')},
...
'templateId': {'description': Value(dtype='string'), 'type': Value(dtype='string')}
}
From my understanding, the merge fails because the tools column's nested structure is different across datasets — e.g., one struct contains an extra field servicerId while the other does not. This causes HuggingFace Datasets (and its underlying Apache Arrow schema) to reject the merge.
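A minimal sketch of one possible workaround I am considering — storing the heterogeneous `tools` column as JSON strings so both datasets share the same flat schema (the field names below are just illustrative):
```python
import json
from datasets import Dataset, concatenate_datasets

# Toy datasets whose nested `tools` schemas do not match.
ds1 = Dataset.from_list([{"text": "use tool1", "tools": {"refundFee": {"type": "string"}}}])
ds2 = Dataset.from_list([{"text": "use tool2", "tools": {"servicerId": {"type": "string"}}}])

def tools_to_json(example):
    # Replace the struct column with a plain JSON string column.
    return {"tools": json.dumps(example["tools"])}

merged = concatenate_datasets([ds1.map(tools_to_json), ds2.map(tools_to_json)])
print(merged.features)  # `tools` is now a string column in both halves
```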
My question is: why is it designed this way?
Is this strict schema matching a hard requirement of the library?
Is there a recommended way to merge datasets with different tool schemas (different parameters and types)?
For an agent model supporting multiple tools, what's the best practice for preparing/merging training data without losing flexibility?
Any guidance or design rationale would be greatly appreciated. Thanks!
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7869/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7869/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7868
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7868/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7868/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7868/events
|
https://github.com/huggingface/datasets/issues/7868
| 3,632,429,308
|
I_kwDODunzps7Ygnj8
| 7,868
|
Data duplication with `split_dataset_by_node` and `interleaved_dataset`
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/42485228?v=4",
"events_url": "https://api.github.com/users/ValMystletainn/events{/privacy}",
"followers_url": "https://api.github.com/users/ValMystletainn/followers",
"following_url": "https://api.github.com/users/ValMystletainn/following{/other_user}",
"gists_url": "https://api.github.com/users/ValMystletainn/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ValMystletainn",
"id": 42485228,
"login": "ValMystletainn",
"node_id": "MDQ6VXNlcjQyNDg1MjI4",
"organizations_url": "https://api.github.com/users/ValMystletainn/orgs",
"received_events_url": "https://api.github.com/users/ValMystletainn/received_events",
"repos_url": "https://api.github.com/users/ValMystletainn/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ValMystletainn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ValMystletainn/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ValMystletainn",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"Hi @ValMystletainn ,\nCan I be assigned this issue?",
"> split_dataset_by_node\n\nHello, I have some questions about your intended use: (1) It seems unnecessary to use interleaving for a single dataset. (2) For multiple datasets, it seems possible to interleave first and then split by node?"
] | 2025-11-17T09:15:24
| 2025-11-29T03:21:34
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
|
### Describe the bug
Data duplication across ranks when processing an iterable dataset with `split_dataset_by_node` first and then `interleave_datasets`.
### Steps to reproduce the bug
I have provided a minimal script
```python
import os
from datasets import interleave_datasets, load_dataset
from datasets.distributed import split_dataset_by_node
path = "/mnt/wwx/datasets/fineweb/data/CC-MAIN-2013-20/"
files = [os.path.join(path, fn) for fn in os.listdir(path)]
dataset = load_dataset("parquet", split="train", data_files=files, streaming=True)
print(f"{dataset.n_shards=}")
dataset_rank0 = split_dataset_by_node(dataset, 0, 4)
dataset_rank1 = split_dataset_by_node(dataset, 1, 4)
dataset_rank0_interleaved = interleave_datasets([dataset_rank0], seed=42, probabilities=[1.0])
dataset_rank1_interleaved = interleave_datasets([dataset_rank1], seed=42, probabilities=[1.0])
print("print the first sample id from all datasets")
print("dataset", next(iter(dataset))['id'])
print("dataset_rank0", next(iter(dataset_rank0))['id'])
print("dataset_rank1", next(iter(dataset_rank1))['id'])
print("dataset_rank0_interleaved", next(iter(dataset_rank0_interleaved))['id'])
print("dataset_rank1_interleaved", next(iter(dataset_rank1_interleaved))['id'])
dataset_rank0_shard = dataset.shard(4, 0)
dataset_rank1_shard = dataset.shard(4, 1)
dataset_rank0_shard_interleaved = interleave_datasets([dataset_rank0_shard], seed=42, probabilities=[1.0])
dataset_rank1_shard_interleaved = interleave_datasets([dataset_rank1_shard], seed=42, probabilities=[1.0])
print("dataset_rank0_shard", next(iter(dataset_rank0_shard))['id'])
print("dataset_rank1_shard", next(iter(dataset_rank1_shard))['id'])
print("dataset_rank0_shard_interleaved", next(iter(dataset_rank0_shard_interleaved))['id'])
print("dataset_rank1_shard_interleaved", next(iter(dataset_rank1_shard_interleaved))['id'])
```
I just used a subfolder of C4 with 14 parquet files to do a quick run and got
```
dataset.n_shards=14
print the first sample id from all datasets
dataset <urn:uuid:c84a7f00-f3e8-4b67-baa4-df5adaf23bae>
dataset_rank0 <urn:uuid:c84a7f00-f3e8-4b67-baa4-df5adaf23bae>
dataset_rank1 <urn:uuid:6b7da64f-c26e-4086-aef5-4b6f01106223>
dataset_rank0_interleaved <urn:uuid:c84a7f00-f3e8-4b67-baa4-df5adaf23bae>
dataset_rank1_interleaved <urn:uuid:c84a7f00-f3e8-4b67-baa4-df5adaf23bae>
dataset_rank0_shard <urn:uuid:c84a7f00-f3e8-4b67-baa4-df5adaf23bae>
dataset_rank1_shard <urn:uuid:67cf7216-dd05-4f55-a28a-1a1c96989c51>
dataset_rank0_shard_interleaved <urn:uuid:c84a7f00-f3e8-4b67-baa4-df5adaf23bae>
dataset_rank1_shard_interleaved <urn:uuid:67cf7216-dd05-4f55-a28a-1a1c96989c51>
```
### Expected behavior
the first sample of `dataset_rank0_interleaved` and `dataset_rank1_interleaved` should be different, as with the other `rank0`/`rank1` pairs.
I have dug into the functions to understand how the `split -> interleave` process works.
For an iterable dataset, `split_dataset_by_node` does not change the `._ex_iterable` attribute of the dataset. It just sets the distributed config on the dataset, and that config is used in the actual `__iter__` call to handle shard splitting or sample skipping.
However, `interleave_datasets` for iterable datasets copies out the `._ex_iterable` of the provided datasets and builds a new `_ex_iterable`, so the distributed config is not carried over, which causes the data duplication across DP ranks.
So let me first ask: is this an unsupported order of those functions, meaning one should:
- always do `split_dataset_by_node` last rather than midway through the pipeline, or
- use `dataset.shard(dp_size, dp_rank)` rather than `split_dataset_by_node` in cases similar to mine.
If this order is permitted, I think it is a bug, and I can do a PR to fix it (a sketch of the "split last" workaround is below).
(I met this bug in real training; the related issue is https://github.com/ByteDance-Seed/VeOmni/issues/200 if it helps.)
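A minimal sketch of the "split last" ordering that avoids the duplication, reusing `dataset` from the reproduction script above (this is the workaround I would expect to behave correctly, not a verified fix of the library internals):
```python
from datasets import interleave_datasets
from datasets.distributed import split_dataset_by_node

# Interleave first, then split by node, so the distributed config is applied
# to the final iterable and is not dropped by interleave_datasets.
interleaved = interleave_datasets([dataset], seed=42, probabilities=[1.0])
rank0 = split_dataset_by_node(interleaved, rank=0, world_size=4)
rank1 = split_dataset_by_node(interleaved, rank=1, world_size=4)
print(next(iter(rank0))["id"], next(iter(rank1))["id"])  # expected to differ
```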
### Environment info
datasets 4.4.1
ubuntu 20.04
python 3.11.4
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7868/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7868/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7867
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7867/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7867/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7867/events
|
https://github.com/huggingface/datasets/issues/7867
| 3,620,931,722
|
I_kwDODunzps7X0wiK
| 7,867
|
NonMatchingSplitsSizesError when loading partial dataset files
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/13678719?v=4",
"events_url": "https://api.github.com/users/QingGo/events{/privacy}",
"followers_url": "https://api.github.com/users/QingGo/followers",
"following_url": "https://api.github.com/users/QingGo/following{/other_user}",
"gists_url": "https://api.github.com/users/QingGo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/QingGo",
"id": 13678719,
"login": "QingGo",
"node_id": "MDQ6VXNlcjEzNjc4NzE5",
"organizations_url": "https://api.github.com/users/QingGo/orgs",
"received_events_url": "https://api.github.com/users/QingGo/received_events",
"repos_url": "https://api.github.com/users/QingGo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/QingGo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/QingGo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/QingGo",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[
"While using verification_mode='no_checks' parameter in load_dataset() can bypass this validation, this solution is not intuitive or convenient for most users, especially those who are not familiar with all the parameters of the load_dataset() function.\n\n```python\nbook_corpus_ds = load_dataset(\n \"SaylorTwift/the_pile_books3_minus_gutenberg\",\n name=\"default\",\n data_files=\"data/train-00000-of-00213-312fd8d7a3c58a63.parquet\",\n split=\"train\",\n cache_dir=\"./data\",\n verification_mode='no_checks'\n)\n```",
"Thanks for the report and reproduction steps @QingGo \n@lhoestq which one of the following looks like a nicer way to handle this?\n\n1] Skip split-size validation entirely for partial loads\nIf the user passes data_files manually and it represents only a subset, then verify_splits() should simply not run, or skip validation only for that split.\n\n2] Replace the error with a warning\n\n3] Automatically detect partial-load cases(i mean we can try this out!)\n\nAssume this, \nIf data_files is provided AND\nthe number of provided files ≠ number of expected files in metadata,\nthen treat it as a partial load and disable strict verification.\n"
] | 2025-11-13T12:03:23
| 2025-11-16T15:39:23
| null |
NONE
| null | null |
{
"completed": 0,
"percent_completed": 0,
"total": 0
}
|
{
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
}
|
### Describe the bug
When loading only a subset of dataset files while the dataset's README.md contains split metadata, the system throws a `NonMatchingSplitsSizesError`. This prevents users from loading partial datasets for quick validation in cases of poor network conditions or very large datasets.
### Steps to reproduce the bug
1. Use the Hugging Face `datasets` library to load a dataset with only specific files specified
2. Ensure the dataset repository has split metadata defined in README.md
3. Observe the error when attempting to load a subset of files
```python
# Example code that triggers the error
from datasets import load_dataset
book_corpus_ds = load_dataset(
"SaylorTwift/the_pile_books3_minus_gutenberg",
name="default",
data_files="data/train-00000-of-00213-312fd8d7a3c58a63.parquet",
split="train",
cache_dir="./data"
)
```
### Error Message
```
Traceback (most recent call last):
File "/Users/QingGo/code/llm_learn/src/data/clean_cc_bc.py", line 13, in <module>
book_corpus_ds = load_dataset(
"SaylorTwift/the_pile_books3_minus_gutenberg",
...
File "/Users/QingGo/code/llm_learn/.venv/lib/python3.13/site-packages/datasets/utils/info_utils.py", line 77, in verify_splits
raise NonMatchingSplitsSizesError(str(bad_splits))
datasets.exceptions.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=106199627990.47722, num_examples=192661, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='train', num_bytes=454897326, num_examples=905, shard_lengths=None, dataset_name='the_pile_books3_minus_gutenberg')}]
```
### Expected behavior
When loading partial dataset files, the system should:
1. Skip the `NonMatchingSplitsSizesError` validation, OR
2. Only log a warning message instead of raising an error
### Environment info
- `datasets` version: 4.3.0
- Platform: macOS-15.7.1-arm64-arm-64bit-Mach-O
- Python version: 3.13.2
- `huggingface_hub` version: 0.36.0
- PyArrow version: 22.0.0
- Pandas version: 2.3.3
- `fsspec` version: 2025.9.0
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7867/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7867/timeline
| null | null | null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/7866
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/7866/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/7866/comments
|
https://api.github.com/repos/huggingface/datasets/issues/7866/events
|
https://github.com/huggingface/datasets/pull/7866
| 3,620,436,248
|
PR_kwDODunzps6zL7Sz
| 7,866
|
docs: add Python version requirement note to installation section
|
{
"avatar_url": "https://avatars.githubusercontent.com/u/222381706?v=4",
"events_url": "https://api.github.com/users/ananthasai-2006/events{/privacy}",
"followers_url": "https://api.github.com/users/ananthasai-2006/followers",
"following_url": "https://api.github.com/users/ananthasai-2006/following{/other_user}",
"gists_url": "https://api.github.com/users/ananthasai-2006/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ananthasai-2006",
"id": 222381706,
"login": "ananthasai-2006",
"node_id": "U_kgDODUFGig",
"organizations_url": "https://api.github.com/users/ananthasai-2006/orgs",
"received_events_url": "https://api.github.com/users/ananthasai-2006/received_events",
"repos_url": "https://api.github.com/users/ananthasai-2006/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ananthasai-2006/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ananthasai-2006/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ananthasai-2006",
"user_view_type": "public"
}
|
[] |
open
| false
| null |
[] | null |
[] | 2025-11-13T09:54:35
| 2025-11-13T09:54:35
| null |
NONE
| null | null | null | null |
Added note about Python version requirement for conda installation.
| null |
{
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7866/reactions"
}
|
https://api.github.com/repos/huggingface/datasets/issues/7866/timeline
| null | null | 0
|
{
"diff_url": "https://github.com/huggingface/datasets/pull/7866.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7866",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7866.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7866"
}
| true
|