| url
				 string | repository_url
				 string | labels_url
				 string | comments_url
				 string | events_url
				 string | html_url
				 string | id
				 int64 | node_id
				 string | number
				 int64 | title
				 string | user
				 dict | labels
				 list | state
				 string | locked
				 bool | assignee
				 dict | assignees
				 list | milestone
				 dict | comments
				 list | created_at
				 timestamp[s] | updated_at
				 timestamp[s] | closed_at
				 timestamp[s] | author_association
				 string | type
				 null | active_lock_reason
				 null | draft
				 bool | pull_request
				 dict | body
				 string | closed_by
				 dict | reactions
				 dict | timeline_url
				 string | performed_via_github_app
				 null | state_reason
				 string | sub_issues_summary
				 dict | is_pull_request
				 bool | 
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 
	https://api.github.com/repos/huggingface/datasets/issues/7737 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7737/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7737/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7737/events | 
	https://github.com/huggingface/datasets/pull/7737 | 3,318,670,801 | 
	PR_kwDODunzps6jf5io | 7,737 | 
	docs: Add column overwrite example to batch mapping guide | 
	{
  "login": "Sanjaykumar030",
  "id": 183703408,
  "node_id": "U_kgDOCvMXcA",
  "avatar_url": "https://avatars.githubusercontent.com/u/183703408?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/Sanjaykumar030",
  "html_url": "https://github.com/Sanjaykumar030",
  "followers_url": "https://api.github.com/users/Sanjaykumar030/followers",
  "following_url": "https://api.github.com/users/Sanjaykumar030/following{/other_user}",
  "gists_url": "https://api.github.com/users/Sanjaykumar030/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/Sanjaykumar030/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/Sanjaykumar030/subscriptions",
  "organizations_url": "https://api.github.com/users/Sanjaykumar030/orgs",
  "repos_url": "https://api.github.com/users/Sanjaykumar030/repos",
  "events_url": "https://api.github.com/users/Sanjaykumar030/events{/privacy}",
  "received_events_url": "https://api.github.com/users/Sanjaykumar030/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	[] | 
	open | false | null | 
	[] | null | 
	[] | 2025-08-13T14:20:19 | 2025-08-13T14:20:19 | null | 
	NONE | null | null | false | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/pulls/7737",
  "html_url": "https://github.com/huggingface/datasets/pull/7737",
  "diff_url": "https://github.com/huggingface/datasets/pull/7737.diff",
  "patch_url": "https://github.com/huggingface/datasets/pull/7737.patch",
  "merged_at": null
} | 
	This PR adds a complementary example showing the **column-overwriting** pattern, which is both more direct and more flexible for many transformations.
### Proposed Change
The original `remove_columns` example remains untouched. Below it, this PR introduces an alternative approach that overwrites an existing column during batch mapping.
This teaches users a core `.map()` capability for in-place transformations without extra intermediate steps.
**New Example:**
> ```python
> >>> from datasets import Dataset
> >>> dataset = Dataset.from_dict({"a": [0, 1, 2]})
> # Overwrite "a" directly to duplicate each value
> >>> duplicated_dataset = dataset.map(
> ...     lambda batch: {"a": [x for x in batch["a"] for _ in range(2)]},
> ...     batched=True
> ... )
> >>> duplicated_dataset
> Dataset({
>     features: ['a'],
>     num_rows: 6
> })
> >>> duplicated_dataset["a"]
> [0, 0, 1, 1, 2, 2]
> ```
 | null | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7737/reactions",
  "total_count": 0,
  "+1": 0,
  "-1": 0,
  "laugh": 0,
  "hooray": 0,
  "confused": 0,
  "heart": 0,
  "rocket": 0,
  "eyes": 0
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7737/timeline | null | null | null | true | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7736 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7736/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7736/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7736/events | 
	https://github.com/huggingface/datasets/pull/7736 | 3,311,618,096 | 
	PR_kwDODunzps6jIWQ3 | 7,736 | 
	Fix type hint `train_test_split` | 
	{
  "login": "qgallouedec",
  "id": 45557362,
  "node_id": "MDQ6VXNlcjQ1NTU3MzYy",
  "avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/qgallouedec",
  "html_url": "https://github.com/qgallouedec",
  "followers_url": "https://api.github.com/users/qgallouedec/followers",
  "following_url": "https://api.github.com/users/qgallouedec/following{/other_user}",
  "gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions",
  "organizations_url": "https://api.github.com/users/qgallouedec/orgs",
  "repos_url": "https://api.github.com/users/qgallouedec/repos",
  "events_url": "https://api.github.com/users/qgallouedec/events{/privacy}",
  "received_events_url": "https://api.github.com/users/qgallouedec/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	[] | 
	closed | false | null | 
	[] | null | 
	[
  "The docs for this PR live [here](/static-proxy?url=https%3A%2F%2Fmoon-ci-docs.huggingface.co%2Fdocs%2Fdatasets%2Fpr_7736). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-08-11T20:46:53 | 2025-08-13T13:13:50 | 2025-08-13T13:13:48 | 
	MEMBER | null | null | false | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/pulls/7736",
  "html_url": "https://github.com/huggingface/datasets/pull/7736",
  "diff_url": "https://github.com/huggingface/datasets/pull/7736.diff",
  "patch_url": "https://github.com/huggingface/datasets/pull/7736.patch",
  "merged_at": "2025-08-13T13:13:48"
} | null | 
	{
  "login": "lhoestq",
  "id": 42851186,
  "node_id": "MDQ6VXNlcjQyODUxMTg2",
  "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/lhoestq",
  "html_url": "https://github.com/lhoestq",
  "followers_url": "https://api.github.com/users/lhoestq/followers",
  "following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
  "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
  "organizations_url": "https://api.github.com/users/lhoestq/orgs",
  "repos_url": "https://api.github.com/users/lhoestq/repos",
  "events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
  "received_events_url": "https://api.github.com/users/lhoestq/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7736/reactions",
  "total_count": 0,
  "+1": 0,
  "-1": 0,
  "laugh": 0,
  "hooray": 0,
  "confused": 0,
  "heart": 0,
  "rocket": 0,
  "eyes": 0
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7736/timeline | null | null | null | true | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7735 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7735/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7735/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7735/events | 
	https://github.com/huggingface/datasets/pull/7735 | 3,310,514,828 | 
	PR_kwDODunzps6jEq5w | 7,735 | 
	fix largelist repr | 
	{
  "login": "lhoestq",
  "id": 42851186,
  "node_id": "MDQ6VXNlcjQyODUxMTg2",
  "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/lhoestq",
  "html_url": "https://github.com/lhoestq",
  "followers_url": "https://api.github.com/users/lhoestq/followers",
  "following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
  "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
  "organizations_url": "https://api.github.com/users/lhoestq/orgs",
  "repos_url": "https://api.github.com/users/lhoestq/repos",
  "events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
  "received_events_url": "https://api.github.com/users/lhoestq/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	[] | 
	closed | false | null | 
	[] | null | 
	[
  "The docs for this PR live [here](/static-proxy?url=https%3A%2F%2Fmoon-ci-docs.huggingface.co%2Fdocs%2Fdatasets%2Fpr_7735). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-08-11T15:17:42 | 2025-08-11T15:39:56 | 2025-08-11T15:39:54 | 
	MEMBER | null | null | false | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/pulls/7735",
  "html_url": "https://github.com/huggingface/datasets/pull/7735",
  "diff_url": "https://github.com/huggingface/datasets/pull/7735.diff",
  "patch_url": "https://github.com/huggingface/datasets/pull/7735.patch",
  "merged_at": "2025-08-11T15:39:54"
} | null | 
	{
  "login": "lhoestq",
  "id": 42851186,
  "node_id": "MDQ6VXNlcjQyODUxMTg2",
  "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/lhoestq",
  "html_url": "https://github.com/lhoestq",
  "followers_url": "https://api.github.com/users/lhoestq/followers",
  "following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
  "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
  "organizations_url": "https://api.github.com/users/lhoestq/orgs",
  "repos_url": "https://api.github.com/users/lhoestq/repos",
  "events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
  "received_events_url": "https://api.github.com/users/lhoestq/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7735/reactions",
  "total_count": 0,
  "+1": 0,
  "-1": 0,
  "laugh": 0,
  "hooray": 0,
  "confused": 0,
  "heart": 0,
  "rocket": 0,
  "eyes": 0
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7735/timeline | null | null | null | true | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7734 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7734/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7734/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7734/events | 
	https://github.com/huggingface/datasets/pull/7734 | 3,306,519,239 | 
	PR_kwDODunzps6i4pmA | 7,734 | 
	Fixing __getitem__ of datasets which behaves inconsistent to documentation when setting _format_type to None | 
	{
  "login": "awagen",
  "id": 40367113,
  "node_id": "MDQ6VXNlcjQwMzY3MTEz",
  "avatar_url": "https://avatars.githubusercontent.com/u/40367113?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/awagen",
  "html_url": "https://github.com/awagen",
  "followers_url": "https://api.github.com/users/awagen/followers",
  "following_url": "https://api.github.com/users/awagen/following{/other_user}",
  "gists_url": "https://api.github.com/users/awagen/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/awagen/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/awagen/subscriptions",
  "organizations_url": "https://api.github.com/users/awagen/orgs",
  "repos_url": "https://api.github.com/users/awagen/repos",
  "events_url": "https://api.github.com/users/awagen/events{/privacy}",
  "received_events_url": "https://api.github.com/users/awagen/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	[] | 
	open | false | null | 
	[] | null | 
	[
  "this breaking change is actually expected, happy to help with a fix in sentencetransformers to account for this"
] | 2025-08-09T15:52:54 | 2025-08-13T13:10:14 | null | 
	NONE | null | null | false | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/pulls/7734",
  "html_url": "https://github.com/huggingface/datasets/pull/7734",
  "diff_url": "https://github.com/huggingface/datasets/pull/7734.diff",
  "patch_url": "https://github.com/huggingface/datasets/pull/7734.patch",
  "merged_at": null
} | 
	Setting _format_type to None, should return plain python object but as of 4.0.0 returns Column. This fails in libs such as sentencetransformers (such as in generation of hard negatives) where plain python is expected. | null | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7734/reactions",
  "total_count": 0,
  "+1": 0,
  "-1": 0,
  "laugh": 0,
  "hooray": 0,
  "confused": 0,
  "heart": 0,
  "rocket": 0,
  "eyes": 0
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7734/timeline | null | null | null | true | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7733 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7733/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7733/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7733/events | 
	https://github.com/huggingface/datasets/issues/7733 | 3,304,979,299 | 
	I_kwDODunzps7E_ftj | 7,733 | 
	Dataset Repo Paths to Locally Stored Images Not Being Appended to Image Path | 
	{
  "login": "dennys246",
  "id": 27898715,
  "node_id": "MDQ6VXNlcjI3ODk4NzE1",
  "avatar_url": "https://avatars.githubusercontent.com/u/27898715?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/dennys246",
  "html_url": "https://github.com/dennys246",
  "followers_url": "https://api.github.com/users/dennys246/followers",
  "following_url": "https://api.github.com/users/dennys246/following{/other_user}",
  "gists_url": "https://api.github.com/users/dennys246/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/dennys246/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/dennys246/subscriptions",
  "organizations_url": "https://api.github.com/users/dennys246/orgs",
  "repos_url": "https://api.github.com/users/dennys246/repos",
  "events_url": "https://api.github.com/users/dennys246/events{/privacy}",
  "received_events_url": "https://api.github.com/users/dennys246/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	[] | 
	open | false | null | 
	[] | null | 
	[
  "This is the download issues I come into, about ever other time it fails...\n<img width=\"1719\" height=\"1226\" alt=\"Image\" src=\"https://github.com/user-attachments/assets/2e5b4b3e-7c13-4bad-a77c-34b47a932831\" />"
] | 2025-08-08T19:10:58 | 2025-08-12T00:54:58 | null | 
	NONE | null | null | null | null | 
	### Describe the bug
I’m not sure if this is a bug or a feature and I just don’t fully understand how dataset loading is to work, but it appears there may be a bug with how locally stored Image() are being accessed. I’ve uploaded a new dataset to hugging face (rmdig/rocky_mountain_snowpack) but I’ve come into a ton of trouble trying to have the images handled properly (at least in the way I’d expect them to be handled). 
I find that I cannot use relative paths for loading images remotely from the Hugging Face repo or from a local repository. Any time I do it always simply appends my current working directory to the dataset. As a result to use the datasets library with my dataset I have to change my working directory to the dataset folder or abandon the dataset object structure, which I cannot imagine you intended. As a result I have to use URL’s since an absolute path on my system obviously wouldn’t work for others. The URL works ok, but despite me having it locally downloaded, it appears to be redownloading the dataset every time I train my snowGAN model on it (and often times I’m coming into HTTPS errors for over requesting the data).
Or maybe image relative paths aren't intended to be loaded directly through your datasets library as images and should be kept as strings for the user to handle? If so I feel like you’re missing out on some pretty seamless functionality
### Steps to reproduce the bug
1. Download a local copy of the dataset (rmdig/rocky_mountain_snowpack) through git or whatever you prefer.
2. Alter the README.md YAML for file_path (the relative path to each image) to be type Image instead of type string
`
---
dataset_info:
  features:
    - name: image
      dtype: Image
    - name: file_path
      dtype: Image
`
3. Initialize the dataset locally, make sure your working directory is not the dataset directory root 
`dataset = datasets.load_dataset(‘path/to/local/rocky_mountain_snowpack/‘)`
 
4. Call to one of the samples and you’ll get an error that the image was not found in current/working/directory/preprocessed/cores/image_1.png. Showing that it’s simply looking in the current working directory + relative path
`
>>> dataset['train'][0]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/arrow_dataset.py", line 2859, in __getitem__
    return self._getitem(key)
           ^^^^^^^^^^^^^^^^^^
  File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/arrow_dataset.py", line 2841, in _getitem
    formatted_output = format_table(
                       ^^^^^^^^^^^^^
  File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/formatting/formatting.py", line 657, in format_table
    return formatter(pa_table, query_type=query_type)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/formatting/formatting.py", line 410, in __call__
    return self.format_row(pa_table)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/formatting/formatting.py", line 459, in format_row
    row = self.python_features_decoder.decode_row(row)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/formatting/formatting.py", line 223, in decode_row
    return self.features.decode_example(row, token_per_repo_id=self.token_per_repo_id) if self.features else row
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/features/features.py", line 2093, in decode_example
    column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/features/features.py", line 1405, in decode_nested_example
    return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) if obj is not None else None
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/datasets/features/image.py", line 171, in decode_example
    image = PIL.Image.open(path)
            ^^^^^^^^^^^^^^^^^^^^
  File "/Users/dennyschaedig/miniconda3/lib/python3.12/site-packages/PIL/Image.py", line 3277, in open
    fp = builtins.open(filename, "rb")
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: '/Users/dennyschaedig/Datasets/preprocessed/cores/image_1.png'
`
### Expected behavior
I expect the datasets and Image() to load the locally hosted data using path/to/local/rocky_mountain_snowpack/ (that I pass in with my datasets.load_dataset() or the you all handle on the backend) call + relative path. 
Instead it appears to load from my current working directory + relative path.
### Environment info
Tested on…
Windows 11, Ubuntu Linux 22.04 and Mac Sequoia 15.5 Silicone M2
datasets version 4.0.0
Python 3.12 and 3.13 | null | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7733/reactions",
  "total_count": 0,
  "+1": 0,
  "-1": 0,
  "laugh": 0,
  "hooray": 0,
  "confused": 0,
  "heart": 0,
  "rocket": 0,
  "eyes": 0
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7733/timeline | null | null | 
	{
  "total": 0,
  "completed": 0,
  "percent_completed": 0
} | false | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7732 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7732/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7732/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7732/events | 
	https://github.com/huggingface/datasets/issues/7732 | 3,304,673,383 | 
	I_kwDODunzps7E-VBn | 7,732 | 
	webdataset: key errors when `field_name` has upper case characters | 
	{
  "login": "YassineYousfi",
  "id": 29985433,
  "node_id": "MDQ6VXNlcjI5OTg1NDMz",
  "avatar_url": "https://avatars.githubusercontent.com/u/29985433?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/YassineYousfi",
  "html_url": "https://github.com/YassineYousfi",
  "followers_url": "https://api.github.com/users/YassineYousfi/followers",
  "following_url": "https://api.github.com/users/YassineYousfi/following{/other_user}",
  "gists_url": "https://api.github.com/users/YassineYousfi/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/YassineYousfi/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/YassineYousfi/subscriptions",
  "organizations_url": "https://api.github.com/users/YassineYousfi/orgs",
  "repos_url": "https://api.github.com/users/YassineYousfi/repos",
  "events_url": "https://api.github.com/users/YassineYousfi/events{/privacy}",
  "received_events_url": "https://api.github.com/users/YassineYousfi/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	[] | 
	open | false | null | 
	[] | null | 
	[] | 2025-08-08T16:56:42 | 2025-08-08T16:56:42 | null | 
	NONE | null | null | null | null | 
	### Describe the bug
When using a webdataset each sample can be a collection of different "fields" 
like this:
```
images17/image194.left.jpg
images17/image194.right.jpg
images17/image194.json
images17/image12.left.jpg
images17/image12.right.jpg
images17/image12.json
```
if the field_name contains upper case characters, the HF webdataset integration throws a key error when trying to load the dataset:
e.g. from a dataset (now updated so that it doesn't throw this error)
```
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
Cell In[1], line 2
      1 from datasets import load_dataset
----> 2 ds = load_dataset("commaai/comma2k19", data_files={'train': ['data-00000.tar.gz']}, num_proc=1)
File ~/xx/.venv/lib/python3.11/site-packages/datasets/load.py:1412, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, keep_in_memory, save_infos, revision, token, streaming, num_proc, storage_options, **config_kwargs)
   1409     return builder_instance.as_streaming_dataset(split=split)
   1411 # Download and prepare data
-> 1412 builder_instance.download_and_prepare(
   1413     download_config=download_config,
   1414     download_mode=download_mode,
   1415     verification_mode=verification_mode,
   1416     num_proc=num_proc,
   1417     storage_options=storage_options,
   1418 )
   1420 # Build dataset for splits
   1421 keep_in_memory = (
   1422     keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
   1423 )
File ~/xx/.venv/lib/python3.11/site-packages/datasets/builder.py:894, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, dl_manager, base_path, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
    892 if num_proc is not None:
    893     prepare_split_kwargs["num_proc"] = num_proc
--> 894 self._download_and_prepare(
    895     dl_manager=dl_manager,
    896     verification_mode=verification_mode,
    897     **prepare_split_kwargs,
    898     **download_and_prepare_kwargs,
    899 )
    900 # Sync info
    901 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())
File ~/xx/.venv/lib/python3.11/site-packages/datasets/builder.py:1609, in GeneratorBasedBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs)
   1608 def _download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs):
-> 1609     super()._download_and_prepare(
   1610         dl_manager,
   1611         verification_mode,
   1612         check_duplicate_keys=verification_mode == VerificationMode.BASIC_CHECKS
   1613         or verification_mode == VerificationMode.ALL_CHECKS,
   1614         **prepare_splits_kwargs,
   1615     )
File ~/xx/.venv/lib/python3.11/site-packages/datasets/builder.py:948, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)
    946 split_dict = SplitDict(dataset_name=self.dataset_name)
    947 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 948 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
    950 # Checksums verification
    951 if verification_mode == VerificationMode.ALL_CHECKS and dl_manager.record_checksums:
File ~/xx/.venv/lib/python3.11/site-packages/datasets/packaged_modules/webdataset/webdataset.py:81, in WebDataset._split_generators(self, dl_manager)
     78 if not self.info.features:
     79     # Get one example to get the feature types
     80     pipeline = self._get_pipeline_from_tar(tar_paths[0], tar_iterators[0])
---> 81     first_examples = list(islice(pipeline, self.NUM_EXAMPLES_FOR_FEATURES_INFERENCE))
     82     if any(example.keys() != first_examples[0].keys() for example in first_examples):
     83         raise ValueError(
     84             "The TAR archives of the dataset should be in WebDataset format, "
     85             "but the files in the archive don't share the same prefix or the same types."
     86         )
File ~/xx/.venv/lib/python3.11/site-packages/datasets/packaged_modules/webdataset/webdataset.py:55, in WebDataset._get_pipeline_from_tar(cls, tar_path, tar_iterator)
     53         data_extension = field_name.split(".")[-1]
     54     if data_extension in cls.DECODERS:
---> 55         current_example[field_name] = cls.DECODERS[data_extension](current_example[field_name])
     56 if current_example:
     57     yield current_example
KeyError: 'processed_log_IMU_magnetometer_value.npy'
```
### Steps to reproduce the bug
unit test was added in: https://github.com/huggingface/datasets/pull/7726
it fails without the fixed proposed in the same PR
### Expected behavior
Not throwing a key error. 
### Environment info
```
- `datasets` version: 4.0.0
- Platform: Linux-6.8.0-51-generic-x86_64-with-glibc2.39
- Python version: 3.11.4
- `huggingface_hub` version: 0.33.4
- PyArrow version: 21.0.0
- Pandas version: 2.3.1
- `fsspec` version: 2025.7.0
``` | null | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7732/reactions",
  "total_count": 0,
  "+1": 0,
  "-1": 0,
  "laugh": 0,
  "hooray": 0,
  "confused": 0,
  "heart": 0,
  "rocket": 0,
  "eyes": 0
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7732/timeline | null | null | 
	{
  "total": 0,
  "completed": 0,
  "percent_completed": 0
} | false | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7731 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7731/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7731/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7731/events | 
	https://github.com/huggingface/datasets/issues/7731 | 3,303,637,075 | 
	I_kwDODunzps7E6YBT | 7,731 | 
	Add the possibility of a backend for audio decoding | 
	{
  "login": "intexcor",
  "id": 142020129,
  "node_id": "U_kgDOCHcOIQ",
  "avatar_url": "https://avatars.githubusercontent.com/u/142020129?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/intexcor",
  "html_url": "https://github.com/intexcor",
  "followers_url": "https://api.github.com/users/intexcor/followers",
  "following_url": "https://api.github.com/users/intexcor/following{/other_user}",
  "gists_url": "https://api.github.com/users/intexcor/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/intexcor/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/intexcor/subscriptions",
  "organizations_url": "https://api.github.com/users/intexcor/orgs",
  "repos_url": "https://api.github.com/users/intexcor/repos",
  "events_url": "https://api.github.com/users/intexcor/events{/privacy}",
  "received_events_url": "https://api.github.com/users/intexcor/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	[
  {
    "id": 1935892871,
    "node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
    "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
    "name": "enhancement",
    "color": "a2eeef",
    "default": true,
    "description": "New feature or request"
  }
] | 
	open | false | null | 
	[] | null | 
	[] | 2025-08-08T11:08:56 | 2025-08-08T11:08:56 | null | 
	NONE | null | null | null | null | 
	### Feature request
Add the possibility of a backend for audio decoding. Before version 4.0.0, soundfile was used, and now torchcodec is used, but the problem is that torchcodec requires ffmpeg, which is problematic to install on the same colab. Therefore, I suggest adding a decoder selection when loading the dataset.
### Motivation
I use a service for training models in which ffmpeg cannot be installed.
### Your contribution
I use a service for training models in which ffmpeg cannot be installed. | null | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7731/reactions",
  "total_count": 0,
  "+1": 0,
  "-1": 0,
  "laugh": 0,
  "hooray": 0,
  "confused": 0,
  "heart": 0,
  "rocket": 0,
  "eyes": 0
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7731/timeline | null | null | 
	{
  "total": 0,
  "completed": 0,
  "percent_completed": 0
} | false | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7730 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7730/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7730/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7730/events | 
	https://github.com/huggingface/datasets/pull/7730 | 3,301,907,242 | 
	PR_kwDODunzps6iqTZI | 7,730 | 
	Grammar fix: correct "showed" to "shown" in fingerprint.py | 
	{
  "login": "brchristian",
  "id": 2460418,
  "node_id": "MDQ6VXNlcjI0NjA0MTg=",
  "avatar_url": "https://avatars.githubusercontent.com/u/2460418?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/brchristian",
  "html_url": "https://github.com/brchristian",
  "followers_url": "https://api.github.com/users/brchristian/followers",
  "following_url": "https://api.github.com/users/brchristian/following{/other_user}",
  "gists_url": "https://api.github.com/users/brchristian/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/brchristian/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/brchristian/subscriptions",
  "organizations_url": "https://api.github.com/users/brchristian/orgs",
  "repos_url": "https://api.github.com/users/brchristian/repos",
  "events_url": "https://api.github.com/users/brchristian/events{/privacy}",
  "received_events_url": "https://api.github.com/users/brchristian/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	[] | 
	closed | false | null | 
	[] | null | 
	[] | 2025-08-07T21:22:56 | 2025-08-13T18:34:30 | 2025-08-13T13:12:56 | 
	CONTRIBUTOR | null | null | false | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/pulls/7730",
  "html_url": "https://github.com/huggingface/datasets/pull/7730",
  "diff_url": "https://github.com/huggingface/datasets/pull/7730.diff",
  "patch_url": "https://github.com/huggingface/datasets/pull/7730.patch",
  "merged_at": "2025-08-13T13:12:56"
} | 
	This PR corrects a small grammatical issue in the outputs of fingerprint.py:
```diff
- "This warning is only showed once. Subsequent hashing failures won't be showed."
+ "This warning is only shown once. Subsequent hashing failures won't be shown."
```
 | 
	{
  "login": "lhoestq",
  "id": 42851186,
  "node_id": "MDQ6VXNlcjQyODUxMTg2",
  "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/lhoestq",
  "html_url": "https://github.com/lhoestq",
  "followers_url": "https://api.github.com/users/lhoestq/followers",
  "following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
  "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
  "organizations_url": "https://api.github.com/users/lhoestq/orgs",
  "repos_url": "https://api.github.com/users/lhoestq/repos",
  "events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
  "received_events_url": "https://api.github.com/users/lhoestq/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7730/reactions",
  "total_count": 0,
  "+1": 0,
  "-1": 0,
  "laugh": 0,
  "hooray": 0,
  "confused": 0,
  "heart": 0,
  "rocket": 0,
  "eyes": 0
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7730/timeline | null | null | null | true | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7729 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7729/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7729/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7729/events | 
	https://github.com/huggingface/datasets/issues/7729 | 3,300,672,954 | 
	I_kwDODunzps7EvEW6 | 7,729 | 
	OSError: libcudart.so.11.0: cannot open shared object file: No such file or directory | 
	{
  "login": "SaleemMalik632",
  "id": 115183904,
  "node_id": "U_kgDOBt2RIA",
  "avatar_url": "https://avatars.githubusercontent.com/u/115183904?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/SaleemMalik632",
  "html_url": "https://github.com/SaleemMalik632",
  "followers_url": "https://api.github.com/users/SaleemMalik632/followers",
  "following_url": "https://api.github.com/users/SaleemMalik632/following{/other_user}",
  "gists_url": "https://api.github.com/users/SaleemMalik632/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/SaleemMalik632/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/SaleemMalik632/subscriptions",
  "organizations_url": "https://api.github.com/users/SaleemMalik632/orgs",
  "repos_url": "https://api.github.com/users/SaleemMalik632/repos",
  "events_url": "https://api.github.com/users/SaleemMalik632/events{/privacy}",
  "received_events_url": "https://api.github.com/users/SaleemMalik632/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	[] | 
	open | false | null | 
	[] | null | 
	[] | 2025-08-07T14:07:23 | 2025-08-07T14:07:23 | null | 
	NONE | null | null | null | null | 
	> Hi  is there any solution for that eror i try to install this one 
pip install torch==1.12.1+cpu torchaudio==0.12.1+cpu -f https://download.pytorch.org/whl/torch_stable.html  
this is working fine but tell me how to install pytorch version that is fit for gpu  | null | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7729/reactions",
  "total_count": 0,
  "+1": 0,
  "-1": 0,
  "laugh": 0,
  "hooray": 0,
  "confused": 0,
  "heart": 0,
  "rocket": 0,
  "eyes": 0
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7729/timeline | null | null | 
	{
  "total": 0,
  "completed": 0,
  "percent_completed": 0
} | false | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7728 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7728/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7728/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7728/events | 
	https://github.com/huggingface/datasets/issues/7728 | 3,298,854,904 | 
	I_kwDODunzps7EoIf4 | 7,728 | 
	NonMatchingSplitsSizesError and ExpectedMoreSplitsError | 
	{
  "login": "efsotr",
  "id": 104755879,
  "node_id": "U_kgDOBj5ypw",
  "avatar_url": "https://avatars.githubusercontent.com/u/104755879?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/efsotr",
  "html_url": "https://github.com/efsotr",
  "followers_url": "https://api.github.com/users/efsotr/followers",
  "following_url": "https://api.github.com/users/efsotr/following{/other_user}",
  "gists_url": "https://api.github.com/users/efsotr/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/efsotr/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/efsotr/subscriptions",
  "organizations_url": "https://api.github.com/users/efsotr/orgs",
  "repos_url": "https://api.github.com/users/efsotr/repos",
  "events_url": "https://api.github.com/users/efsotr/events{/privacy}",
  "received_events_url": "https://api.github.com/users/efsotr/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	[] | 
	open | false | null | 
	[] | null | 
	[] | 2025-08-07T04:04:50 | 2025-08-07T07:31:47 | null | 
	NONE | null | null | null | null | 
	### Describe the bug
When loading dataset, the info specified by `data_files` did not overwrite the original info.
### Steps to reproduce the bug
```python
from datasets import load_dataset
traindata = load_dataset(
            "allenai/c4",
            "en",
            data_files={"train": "en/c4-train.00000-of-01024.json.gz", 
                        "validation": "en/c4-validation.00000-of-00008.json.gz"},
        )
```
```log
NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=828589180707, num_examples=364868892, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='train', num_bytes=809262831, num_examples=356317, shard_lengths=[223006, 133311], dataset_name='c4')}, {'expected': SplitInfo(name='validation', num_bytes=825767266, num_examples=364608, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='validation', num_bytes=102199431, num_examples=45576, shard_lengths=None, dataset_name='c4')}]
```
```python
from datasets import load_dataset
traindata = load_dataset(
            "allenai/c4",
            "en",
            data_files={"train": "en/c4-train.00000-of-01024.json.gz"},
            split="train"
        )
```
```log
ExpectedMoreSplitsError: {'validation'}
```
### Expected behavior
No error
### Environment info
datasets 4.0.0 | null | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7728/reactions",
  "total_count": 0,
  "+1": 0,
  "-1": 0,
  "laugh": 0,
  "hooray": 0,
  "confused": 0,
  "heart": 0,
  "rocket": 0,
  "eyes": 0
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7728/timeline | null | null | 
	{
  "total": 0,
  "completed": 0,
  "percent_completed": 0
} | false | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7727 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7727/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7727/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7727/events | 
	https://github.com/huggingface/datasets/issues/7727 | 3,295,718,578 | 
	I_kwDODunzps7EcKyy | 7,727 | 
	config paths that start with ./ are not valid as hf:// accessed repos, but are valid when accessed locally | 
	{
  "login": "doctorpangloss",
  "id": 2229300,
  "node_id": "MDQ6VXNlcjIyMjkzMDA=",
  "avatar_url": "https://avatars.githubusercontent.com/u/2229300?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/doctorpangloss",
  "html_url": "https://github.com/doctorpangloss",
  "followers_url": "https://api.github.com/users/doctorpangloss/followers",
  "following_url": "https://api.github.com/users/doctorpangloss/following{/other_user}",
  "gists_url": "https://api.github.com/users/doctorpangloss/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/doctorpangloss/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/doctorpangloss/subscriptions",
  "organizations_url": "https://api.github.com/users/doctorpangloss/orgs",
  "repos_url": "https://api.github.com/users/doctorpangloss/repos",
  "events_url": "https://api.github.com/users/doctorpangloss/events{/privacy}",
  "received_events_url": "https://api.github.com/users/doctorpangloss/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	[] | 
	open | false | null | 
	[] | null | 
	[] | 2025-08-06T08:21:37 | 2025-08-06T08:21:37 | null | 
	NONE | null | null | null | null | 
	### Describe the bug
```
- config_name: some_config
  data_files:
  - split: train
    path:
    - images/xyz/*.jpg
```
will correctly download but
```
- config_name: some_config
  data_files:
  - split: train
    path:
    - ./images/xyz/*.jpg
```
will error with `FileNotFoundError` due to improper url joining. `load_dataset` on the same directory locally works fine.
### Steps to reproduce the bug
1. create a README.md with the front matter of the form
```
- config_name: some_config
  data_files:
  - split: train
    path:
    - ./images/xyz/*.jpg
```
2. `touch ./images/xyz/1.jpg`
3. Observe this directory loads with `load_dataset("filesystem_path", "some_config")` correctly.
4. Observe exceptions when you load this with `load_dataset("repoid/filesystem_path", "some_config")`
### Expected behavior
`./` prefix should be interpreted correctly
### Environment info
datasets 4.0.0
datasets 3.4.0
reproduce | null | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7727/reactions",
  "total_count": 0,
  "+1": 0,
  "-1": 0,
  "laugh": 0,
  "hooray": 0,
  "confused": 0,
  "heart": 0,
  "rocket": 0,
  "eyes": 0
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7727/timeline | null | null | 
	{
  "total": 0,
  "completed": 0,
  "percent_completed": 0
} | false | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7726 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7726/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7726/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7726/events | 
	https://github.com/huggingface/datasets/pull/7726 | 3,293,789,832 | 
	PR_kwDODunzps6iO_oF | 7,726 | 
	fix(webdataset): don't .lower() field_name | 
	{
  "login": "YassineYousfi",
  "id": 29985433,
  "node_id": "MDQ6VXNlcjI5OTg1NDMz",
  "avatar_url": "https://avatars.githubusercontent.com/u/29985433?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/YassineYousfi",
  "html_url": "https://github.com/YassineYousfi",
  "followers_url": "https://api.github.com/users/YassineYousfi/followers",
  "following_url": "https://api.github.com/users/YassineYousfi/following{/other_user}",
  "gists_url": "https://api.github.com/users/YassineYousfi/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/YassineYousfi/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/YassineYousfi/subscriptions",
  "organizations_url": "https://api.github.com/users/YassineYousfi/orgs",
  "repos_url": "https://api.github.com/users/YassineYousfi/repos",
  "events_url": "https://api.github.com/users/YassineYousfi/events{/privacy}",
  "received_events_url": "https://api.github.com/users/YassineYousfi/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	[] | 
	open | false | null | 
	[] | null | 
	[
  "fixes: https://github.com/huggingface/datasets/issues/7732"
] | 2025-08-05T16:57:09 | 2025-08-13T13:12:22 | null | 
	NONE | null | null | false | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/pulls/7726",
  "html_url": "https://github.com/huggingface/datasets/pull/7726",
  "diff_url": "https://github.com/huggingface/datasets/pull/7726.diff",
  "patch_url": "https://github.com/huggingface/datasets/pull/7726.patch",
  "merged_at": null
} | 
	This fixes cases where keys have upper case identifiers | null | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7726/reactions",
  "total_count": 0,
  "+1": 0,
  "-1": 0,
  "laugh": 0,
  "hooray": 0,
  "confused": 0,
  "heart": 0,
  "rocket": 0,
  "eyes": 0
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7726/timeline | null | null | null | true | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7724 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7724/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7724/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7724/events | 
	https://github.com/huggingface/datasets/issues/7724 | 3,292,315,241 | 
	I_kwDODunzps7EPL5p | 7,724 | 
	Can not stepinto load_dataset.py? | 
	{
  "login": "micklexqg",
  "id": 13776012,
  "node_id": "MDQ6VXNlcjEzNzc2MDEy",
  "avatar_url": "https://avatars.githubusercontent.com/u/13776012?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/micklexqg",
  "html_url": "https://github.com/micklexqg",
  "followers_url": "https://api.github.com/users/micklexqg/followers",
  "following_url": "https://api.github.com/users/micklexqg/following{/other_user}",
  "gists_url": "https://api.github.com/users/micklexqg/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/micklexqg/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/micklexqg/subscriptions",
  "organizations_url": "https://api.github.com/users/micklexqg/orgs",
  "repos_url": "https://api.github.com/users/micklexqg/repos",
  "events_url": "https://api.github.com/users/micklexqg/events{/privacy}",
  "received_events_url": "https://api.github.com/users/micklexqg/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	[] | 
	open | false | null | 
	[] | null | 
	[] | 2025-08-05T09:28:51 | 2025-08-05T09:28:51 | null | 
	NONE | null | null | null | null | 
	I set a breakpoint in "load_dataset.py" and try to debug my data load codes, but it does not stop at any breakpoints, so "load_dataset.py" can not be stepped into ?
<!-- Failed to upload "截图 2025-08-05 17-25-18.png" --> | null | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7724/reactions",
  "total_count": 0,
  "+1": 0,
  "-1": 0,
  "laugh": 0,
  "hooray": 0,
  "confused": 0,
  "heart": 0,
  "rocket": 0,
  "eyes": 0
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7724/timeline | null | null | 
	{
  "total": 0,
  "completed": 0,
  "percent_completed": 0
} | false | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7723 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7723/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7723/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7723/events | 
	https://github.com/huggingface/datasets/issues/7723 | 3,289,943,261 | 
	I_kwDODunzps7EGIzd | 7,723 | 
	Don't remove `trust_remote_code` arg!!! | 
	{
  "login": "autosquid",
  "id": 758925,
  "node_id": "MDQ6VXNlcjc1ODkyNQ==",
  "avatar_url": "https://avatars.githubusercontent.com/u/758925?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/autosquid",
  "html_url": "https://github.com/autosquid",
  "followers_url": "https://api.github.com/users/autosquid/followers",
  "following_url": "https://api.github.com/users/autosquid/following{/other_user}",
  "gists_url": "https://api.github.com/users/autosquid/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/autosquid/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/autosquid/subscriptions",
  "organizations_url": "https://api.github.com/users/autosquid/orgs",
  "repos_url": "https://api.github.com/users/autosquid/repos",
  "events_url": "https://api.github.com/users/autosquid/events{/privacy}",
  "received_events_url": "https://api.github.com/users/autosquid/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	[
  {
    "id": 1935892871,
    "node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
    "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
    "name": "enhancement",
    "color": "a2eeef",
    "default": true,
    "description": "New feature or request"
  }
] | 
	open | false | null | 
	[] | null | 
	[] | 2025-08-04T15:42:07 | 2025-08-04T15:42:07 | null | 
	NONE | null | null | null | null | 
	### Feature request
defaulting it to False is nice balance. we need manully setting it to True in certain scenarios!
Add `trust_remote_code` arg back please!
### Motivation
defaulting it to False is nice balance. we need manully setting it to True in certain scenarios!
### Your contribution
defaulting it to False is nice balance. we need manully setting it to True in certain scenarios!
 | null | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7723/reactions",
  "total_count": 0,
  "+1": 0,
  "-1": 0,
  "laugh": 0,
  "hooray": 0,
  "confused": 0,
  "heart": 0,
  "rocket": 0,
  "eyes": 0
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7723/timeline | null | null | 
	{
  "total": 0,
  "completed": 0,
  "percent_completed": 0
} | false | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7722 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7722/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7722/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7722/events | 
	https://github.com/huggingface/datasets/issues/7722 | 3,289,741,064 | 
	I_kwDODunzps7EFXcI | 7,722 | 
	Out of memory even though using load_dataset(..., streaming=True) | 
	{
  "login": "padmalcom",
  "id": 3961950,
  "node_id": "MDQ6VXNlcjM5NjE5NTA=",
  "avatar_url": "https://avatars.githubusercontent.com/u/3961950?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/padmalcom",
  "html_url": "https://github.com/padmalcom",
  "followers_url": "https://api.github.com/users/padmalcom/followers",
  "following_url": "https://api.github.com/users/padmalcom/following{/other_user}",
  "gists_url": "https://api.github.com/users/padmalcom/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/padmalcom/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/padmalcom/subscriptions",
  "organizations_url": "https://api.github.com/users/padmalcom/orgs",
  "repos_url": "https://api.github.com/users/padmalcom/repos",
  "events_url": "https://api.github.com/users/padmalcom/events{/privacy}",
  "received_events_url": "https://api.github.com/users/padmalcom/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	[] | 
	open | false | null | 
	[] | null | 
	[] | 2025-08-04T14:41:55 | 2025-08-04T14:41:55 | null | 
	NONE | null | null | null | null | 
	### Describe the bug
I am iterating over a large dataset that I load using streaming=True to avoid running out of memory. Unfortunately, I am observing that memory usage increases over time and I'm finally running in an oom.
### Steps to reproduce the bug
```
  ds = load_dataset("openslr/librispeech_asr", split="train.clean.360", streaming=True)
  for i,sample in enumerate(tqdm(ds)):
    target_file = os.path.join(NSFW_TARGET_FOLDER, f'audio{i}.wav')
    try:
      sf.write(target_file, sample['audio']['array'], samplerate=sample['audio']['sampling_rate'])
    except Exception as e:
      print(f"Could not write audio {i} in ds: {e}")
```
### Expected behavior
I'd expect to have a small memory footprint and memory being freed after each iteration of the for loop. Instead the memory usage is increasing. I tried to remove the logic to write the sound file and just print the sample but the issue remains the same.
### Environment info
Python 3.12.11
Ubuntu 24
datasets 4.0.0 and 3.6.0 | null | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7722/reactions",
  "total_count": 0,
  "+1": 0,
  "-1": 0,
  "laugh": 0,
  "hooray": 0,
  "confused": 0,
  "heart": 0,
  "rocket": 0,
  "eyes": 0
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7722/timeline | null | null | 
	{
  "total": 0,
  "completed": 0,
  "percent_completed": 0
} | false | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7721 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7721/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7721/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7721/events | 
	https://github.com/huggingface/datasets/issues/7721 | 3,289,426,104 | 
	I_kwDODunzps7EEKi4 | 7,721 | 
	Bad split error message when using percentages | 
	{
  "login": "padmalcom",
  "id": 3961950,
  "node_id": "MDQ6VXNlcjM5NjE5NTA=",
  "avatar_url": "https://avatars.githubusercontent.com/u/3961950?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/padmalcom",
  "html_url": "https://github.com/padmalcom",
  "followers_url": "https://api.github.com/users/padmalcom/followers",
  "following_url": "https://api.github.com/users/padmalcom/following{/other_user}",
  "gists_url": "https://api.github.com/users/padmalcom/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/padmalcom/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/padmalcom/subscriptions",
  "organizations_url": "https://api.github.com/users/padmalcom/orgs",
  "repos_url": "https://api.github.com/users/padmalcom/repos",
  "events_url": "https://api.github.com/users/padmalcom/events{/privacy}",
  "received_events_url": "https://api.github.com/users/padmalcom/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	[] | 
	open | false | null | 
	[] | null | 
	[
  "I'd like to work on this: add clearer validation/messages for percent-based splits + tests",
  "The most basic example is this code:\n`load_dataset(\"openslr/librispeech_asr\", split=\"train[10%:20%]\")`\n\nThis results in this ValueError:\n```\n    raise ValueError(f'Unknown split \"{split}\". Should be one of {list(name2len)}.')\nValueError: Unknown split \"train\". Should be one of ['test.clean', 'test.other', 'train.clean.100', 'train.clean.360', 'train.other.500', 'validation.clean', 'validation.other'].\n```\n"
] | 2025-08-04T13:20:25 | 2025-08-14T14:42:24 | null | 
	NONE | null | null | null | null | 
	### Describe the bug
Hi, I'm trying to download a dataset. To not load the entire dataset in memory, I split it as described [here](https://huggingface.co/docs/datasets/v4.0.0/loading#slice-splits) in 10% steps.
When doing so, the library returns this error:
    raise ValueError(f"Bad split: {split}. Available splits: {list(splits_generators)}")
ValueError: Bad split: train[0%:10%]. Available splits: ['train']
Edit: Same happens with a split like _train[:90000]_
### Steps to reproduce the bug
```
  for split in range(10):
    split_str = f"train[{split*10}%:{(split+1)*10}%]"
    print(f"Processing split {split_str}...")
    ds = load_dataset("user/dataset", split=split_str, streaming=True)
```
### Expected behavior
I'd expect the library to split my dataset in 10% steps.
### Environment info
python 3.12.11
ubuntu 24
dataset 4.0.0 | null | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7721/reactions",
  "total_count": 0,
  "+1": 0,
  "-1": 0,
  "laugh": 0,
  "hooray": 0,
  "confused": 0,
  "heart": 0,
  "rocket": 0,
  "eyes": 0
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7721/timeline | null | null | 
	{
  "total": 0,
  "completed": 0,
  "percent_completed": 0
} | false | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7720 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7720/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7720/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7720/events | 
	https://github.com/huggingface/datasets/issues/7720 | 3,287,150,513 | 
	I_kwDODunzps7D7e-x | 7,720 | 
	Datasets 4.0 map function causing column not found | 
	{
  "login": "Darejkal",
  "id": 55143337,
  "node_id": "MDQ6VXNlcjU1MTQzMzM3",
  "avatar_url": "https://avatars.githubusercontent.com/u/55143337?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/Darejkal",
  "html_url": "https://github.com/Darejkal",
  "followers_url": "https://api.github.com/users/Darejkal/followers",
  "following_url": "https://api.github.com/users/Darejkal/following{/other_user}",
  "gists_url": "https://api.github.com/users/Darejkal/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/Darejkal/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/Darejkal/subscriptions",
  "organizations_url": "https://api.github.com/users/Darejkal/orgs",
  "repos_url": "https://api.github.com/users/Darejkal/repos",
  "events_url": "https://api.github.com/users/Darejkal/events{/privacy}",
  "received_events_url": "https://api.github.com/users/Darejkal/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	[] | 
	open | false | null | 
	[] | null | 
	[
  "Hi, I tried to reproduce this issue on the latest `main` branch but it seems to be working correctly now. My test script (which creates a dummy dataset and applies the `.map()` function) successfully creates and accesses the new column without a `KeyError`.\n\nIt's possible this was fixed by a recent commit. The maintainers might want to consider closing this issue.",
  "Hi, have you tried on a large dataset (200GB+) perhaps? I will try my best to do a rerun with main branch when I have the time.",
  "I ran it on a small dataset, maybe that’s why I didn’t hit the issue. If it still shows up on your side with the latest main, let me know. I can try it on a bigger set too."
] | 2025-08-03T12:52:34 | 2025-08-07T19:23:34 | null | 
	NONE | null | null | null | null | 
	### Describe the bug
Column returned after mapping is not found in new instance of the dataset.
### Steps to reproduce the bug
Code for reproduction. After running get_total_audio_length, it is errored out due to `data` not having `duration`
```
def compute_duration(x):
    return {"duration": len(x["audio"]["array"]) / x["audio"]["sampling_rate"]}
def get_total_audio_length(dataset):
    data = dataset.map(compute_duration, num_proc=NUM_PROC)
    print(data)
    durations=data["duration"]
    total_seconds = sum(durations)
    return total_seconds
```
### Expected behavior
New datasets.Dataset instance should have new columns attached.
### Environment info
- `datasets` version: 4.0.0
- Platform: Linux-5.4.0-124-generic-x86_64-with-glibc2.31
- Python version: 3.10.13
- `huggingface_hub` version: 0.33.2
- PyArrow version: 20.0.0
- Pandas version: 2.3.0
- `fsspec` version: 2023.12.2 | null | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7720/reactions",
  "total_count": 0,
  "+1": 0,
  "-1": 0,
  "laugh": 0,
  "hooray": 0,
  "confused": 0,
  "heart": 0,
  "rocket": 0,
  "eyes": 0
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7720/timeline | null | null | 
	{
  "total": 0,
  "completed": 0,
  "percent_completed": 0
} | false | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7719 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7719/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7719/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7719/events | 
	https://github.com/huggingface/datasets/issues/7719 | 3,285,928,491 | 
	I_kwDODunzps7D20or | 7,719 | 
	Specify dataset columns types in typehint | 
	{
  "login": "Samoed",
  "id": 36135455,
  "node_id": "MDQ6VXNlcjM2MTM1NDU1",
  "avatar_url": "https://avatars.githubusercontent.com/u/36135455?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/Samoed",
  "html_url": "https://github.com/Samoed",
  "followers_url": "https://api.github.com/users/Samoed/followers",
  "following_url": "https://api.github.com/users/Samoed/following{/other_user}",
  "gists_url": "https://api.github.com/users/Samoed/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/Samoed/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/Samoed/subscriptions",
  "organizations_url": "https://api.github.com/users/Samoed/orgs",
  "repos_url": "https://api.github.com/users/Samoed/repos",
  "events_url": "https://api.github.com/users/Samoed/events{/privacy}",
  "received_events_url": "https://api.github.com/users/Samoed/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	[
  {
    "id": 1935892871,
    "node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
    "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
    "name": "enhancement",
    "color": "a2eeef",
    "default": true,
    "description": "New feature or request"
  }
] | 
	open | false | null | 
	[] | null | 
	[] | 2025-08-02T13:22:31 | 2025-08-02T13:22:31 | null | 
	NONE | null | null | null | null | 
	### Feature request
Make dataset optionaly generic to datasets usage with type annotations like it was done in `torch.Dataloader` https://github.com/pytorch/pytorch/blob/134179474539648ba7dee1317959529fbd0e7f89/torch/utils/data/dataloader.py#L131
### Motivation
In MTEB we're using a lot of datasets objects, but they're a bit poor in typehints. E.g. we can specify this for dataloder
```python
from typing import TypedDict
from torch.utils.data import DataLoader
class CorpusInput(TypedDict):
    title: list[str]
    body: list[str]
class QueryInput(TypedDict):
    query: list[str]
    instruction: list[str]
def queries_loader() -> DataLoader[QueryInput]:
    ...
def corpus_loader() -> DataLoader[CorpusInput]:
   ...
```
But for datasets we can only specify columns in type in comments 
```python
from datasets import Dataset
QueryDataset = Dataset
"""Query dataset should have `query` and `instructions` columns as `str` """
```
### Your contribution
I can create draft implementation | null | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7719/reactions",
  "total_count": 2,
  "+1": 2,
  "-1": 0,
  "laugh": 0,
  "hooray": 0,
  "confused": 0,
  "heart": 0,
  "rocket": 0,
  "eyes": 0
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7719/timeline | null | null | 
	{
  "total": 0,
  "completed": 0,
  "percent_completed": 0
} | false | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7718 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7718/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7718/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7718/events | 
	https://github.com/huggingface/datasets/pull/7718 | 3,284,221,177 | 
	PR_kwDODunzps6hvJ6R | 7,718 | 
	add support for pyarrow string view in features | 
	{
  "login": "onursatici",
  "id": 5051569,
  "node_id": "MDQ6VXNlcjUwNTE1Njk=",
  "avatar_url": "https://avatars.githubusercontent.com/u/5051569?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/onursatici",
  "html_url": "https://github.com/onursatici",
  "followers_url": "https://api.github.com/users/onursatici/followers",
  "following_url": "https://api.github.com/users/onursatici/following{/other_user}",
  "gists_url": "https://api.github.com/users/onursatici/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/onursatici/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/onursatici/subscriptions",
  "organizations_url": "https://api.github.com/users/onursatici/orgs",
  "repos_url": "https://api.github.com/users/onursatici/repos",
  "events_url": "https://api.github.com/users/onursatici/events{/privacy}",
  "received_events_url": "https://api.github.com/users/onursatici/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	[] | 
	open | false | null | 
	[] | null | 
	[
  "@lhoestq who do you think would be the best to have a look at this? Any pointers would be appreciated, thanks!"
] | 2025-08-01T14:58:39 | 2025-08-13T13:09:44 | null | 
	NONE | null | null | false | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/pulls/7718",
  "html_url": "https://github.com/huggingface/datasets/pull/7718",
  "diff_url": "https://github.com/huggingface/datasets/pull/7718.diff",
  "patch_url": "https://github.com/huggingface/datasets/pull/7718.patch",
  "merged_at": null
} | null | null | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7718/reactions",
  "total_count": 3,
  "+1": 0,
  "-1": 0,
  "laugh": 0,
  "hooray": 0,
  "confused": 0,
  "heart": 3,
  "rocket": 0,
  "eyes": 0
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7718/timeline | null | null | null | true | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7717 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7717/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7717/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7717/events | 
	https://github.com/huggingface/datasets/issues/7717 | 3,282,855,127 | 
	I_kwDODunzps7DrGTX | 7,717 | 
	Cached dataset is not used when explicitly passing the cache_dir parameter | 
	{
  "login": "padmalcom",
  "id": 3961950,
  "node_id": "MDQ6VXNlcjM5NjE5NTA=",
  "avatar_url": "https://avatars.githubusercontent.com/u/3961950?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/padmalcom",
  "html_url": "https://github.com/padmalcom",
  "followers_url": "https://api.github.com/users/padmalcom/followers",
  "following_url": "https://api.github.com/users/padmalcom/following{/other_user}",
  "gists_url": "https://api.github.com/users/padmalcom/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/padmalcom/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/padmalcom/subscriptions",
  "organizations_url": "https://api.github.com/users/padmalcom/orgs",
  "repos_url": "https://api.github.com/users/padmalcom/repos",
  "events_url": "https://api.github.com/users/padmalcom/events{/privacy}",
  "received_events_url": "https://api.github.com/users/padmalcom/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	[] | 
	open | false | null | 
	[] | null | 
	[
  "Hi, I've investigated this issue and can confirm the bug. Here are my findings:\n\n**1. Reproduction:**\nI was able to reproduce the issue on the latest `main` branch. Using the provided code snippet, `snapshot_download` correctly populates the custom `cache_dir`, but `load_dataset` with the same `cache_dir` triggers a full re-download and re-processing of the dataset, ignoring the existing cache.\n\n**2. Investigation:**\nI traced the `cache_dir` parameter from `load_dataset` down to the `DatasetBuilder` class in `src/datasets/builder.py`. The root cause seems to be a mismatch between the cache path structure created by `snapshot_download` and the path structure expected by the `DatasetBuilder`.\n\nSpecifically, the `_relative_data_dir` method in `DatasetBuilder` constructs a path using `namespace___dataset_name` (with three underscores), while the cache from `snapshot_download` appears to use a `repo_id` based format like `datasets--namespace--dataset_name` (with double hyphens).\n\n**3. Attempted Fix & Result:**\nI attempted a fix by modifying the `_relative_data_dir` method to replace the path separator \"/\" in `self.repo_id` with \"--\", to align it with the `snapshot_download` structure.\n\nThis partially worked: `load_dataset` no longer re-downloads the files. However, it still re-processes them every time (triggering \"Generating train split...\", etc.) instead of loading the already processed Arrow files from the cache.\n\nThis suggests the issue is deeper than just the directory name and might be related to how the builder verifies the integrity or presence of the processed cache files.\n\nI hope these findings are helpful for whoever picks up this issue."
] | 2025-08-01T07:12:41 | 2025-08-05T19:19:36 | null | 
	NONE | null | null | null | null | 
	### Describe the bug
Hi, we are pre-downloading a dataset using snapshot_download(). When loading this exact dataset with load_dataset() the cached snapshot is not used. In both calls, I provide the cache_dir parameter.
### Steps to reproduce the bug
```
from datasets import load_dataset, concatenate_datasets
from huggingface_hub import snapshot_download
    
def download_ds(name: str):
    snapshot_download(repo_id=name, repo_type="dataset", cache_dir="G:/Datasets/cache")
def prepare_ds():
    audio_ds = load_dataset("openslr/librispeech_asr", num_proc=4, cache_dir="G:/Datasets/cache")
    print(sfw_ds.features)
if __name__ == '__main__':
    download_ds("openslr/librispeech_asr")
    prepare_ds()
```
### Expected behavior
I'd expect that the cached version of the dataset is used. Instead, the same dataset is downloaded again to the default cache directory.
### Environment info
Windows 11
datasets==4.0.0
Python 3.12.11 | null | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7717/reactions",
  "total_count": 0,
  "+1": 0,
  "-1": 0,
  "laugh": 0,
  "hooray": 0,
  "confused": 0,
  "heart": 0,
  "rocket": 0,
  "eyes": 0
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7717/timeline | null | null | 
	{
  "total": 0,
  "completed": 0,
  "percent_completed": 0
} | false | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7716 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7716/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7716/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7716/events | 
	https://github.com/huggingface/datasets/pull/7716 | 3,281,204,362 | 
	PR_kwDODunzps6hk4Mq | 7,716 | 
	typo | 
	{
  "login": "lhoestq",
  "id": 42851186,
  "node_id": "MDQ6VXNlcjQyODUxMTg2",
  "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/lhoestq",
  "html_url": "https://github.com/lhoestq",
  "followers_url": "https://api.github.com/users/lhoestq/followers",
  "following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
  "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
  "organizations_url": "https://api.github.com/users/lhoestq/orgs",
  "repos_url": "https://api.github.com/users/lhoestq/repos",
  "events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
  "received_events_url": "https://api.github.com/users/lhoestq/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	[] | 
	closed | false | null | 
	[] | null | 
	[
  "The docs for this PR live [here](/static-proxy?url=https%3A%2F%2Fmoon-ci-docs.huggingface.co%2Fdocs%2Fdatasets%2Fpr_7716). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-07-31T17:14:45 | 2025-07-31T17:17:15 | 2025-07-31T17:14:51 | 
	MEMBER | null | null | false | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/pulls/7716",
  "html_url": "https://github.com/huggingface/datasets/pull/7716",
  "diff_url": "https://github.com/huggingface/datasets/pull/7716.diff",
  "patch_url": "https://github.com/huggingface/datasets/pull/7716.patch",
  "merged_at": "2025-07-31T17:14:51"
} | null | 
	{
  "login": "lhoestq",
  "id": 42851186,
  "node_id": "MDQ6VXNlcjQyODUxMTg2",
  "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/lhoestq",
  "html_url": "https://github.com/lhoestq",
  "followers_url": "https://api.github.com/users/lhoestq/followers",
  "following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
  "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
  "organizations_url": "https://api.github.com/users/lhoestq/orgs",
  "repos_url": "https://api.github.com/users/lhoestq/repos",
  "events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
  "received_events_url": "https://api.github.com/users/lhoestq/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7716/reactions",
  "total_count": 0,
  "+1": 0,
  "-1": 0,
  "laugh": 0,
  "hooray": 0,
  "confused": 0,
  "heart": 0,
  "rocket": 0,
  "eyes": 0
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7716/timeline | null | null | null | true | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7715 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7715/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7715/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7715/events | 
	https://github.com/huggingface/datasets/pull/7715 | 3,281,189,955 | 
	PR_kwDODunzps6hk1CK | 7,715 | 
	Docs: Use Image(mode="F") for PNG/JPEG depth maps  | 
	{
  "login": "lhoestq",
  "id": 42851186,
  "node_id": "MDQ6VXNlcjQyODUxMTg2",
  "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/lhoestq",
  "html_url": "https://github.com/lhoestq",
  "followers_url": "https://api.github.com/users/lhoestq/followers",
  "following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
  "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
  "organizations_url": "https://api.github.com/users/lhoestq/orgs",
  "repos_url": "https://api.github.com/users/lhoestq/repos",
  "events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
  "received_events_url": "https://api.github.com/users/lhoestq/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	[] | 
	closed | false | null | 
	[] | null | 
	[
  "The docs for this PR live [here](/static-proxy?url=https%3A%2F%2Fmoon-ci-docs.huggingface.co%2Fdocs%2Fdatasets%2Fpr_7715). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-07-31T17:09:49 | 2025-07-31T17:12:23 | 2025-07-31T17:10:10 | 
	MEMBER | null | null | false | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/pulls/7715",
  "html_url": "https://github.com/huggingface/datasets/pull/7715",
  "diff_url": "https://github.com/huggingface/datasets/pull/7715.diff",
  "patch_url": "https://github.com/huggingface/datasets/pull/7715.patch",
  "merged_at": "2025-07-31T17:10:10"
} | null | 
	{
  "login": "lhoestq",
  "id": 42851186,
  "node_id": "MDQ6VXNlcjQyODUxMTg2",
  "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/lhoestq",
  "html_url": "https://github.com/lhoestq",
  "followers_url": "https://api.github.com/users/lhoestq/followers",
  "following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
  "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
  "organizations_url": "https://api.github.com/users/lhoestq/orgs",
  "repos_url": "https://api.github.com/users/lhoestq/repos",
  "events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
  "received_events_url": "https://api.github.com/users/lhoestq/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7715/reactions",
  "total_count": 0,
  "+1": 0,
  "-1": 0,
  "laugh": 0,
  "hooray": 0,
  "confused": 0,
  "heart": 0,
  "rocket": 0,
  "eyes": 0
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7715/timeline | null | null | null | true | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7714 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7714/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7714/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7714/events | 
	https://github.com/huggingface/datasets/pull/7714 | 3,281,090,499 | 
	PR_kwDODunzps6hkfHj | 7,714 | 
	fix num_proc=1 ci test | 
	{
  "login": "lhoestq",
  "id": 42851186,
  "node_id": "MDQ6VXNlcjQyODUxMTg2",
  "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/lhoestq",
  "html_url": "https://github.com/lhoestq",
  "followers_url": "https://api.github.com/users/lhoestq/followers",
  "following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
  "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
  "organizations_url": "https://api.github.com/users/lhoestq/orgs",
  "repos_url": "https://api.github.com/users/lhoestq/repos",
  "events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
  "received_events_url": "https://api.github.com/users/lhoestq/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	[] | 
	closed | false | null | 
	[] | null | 
	[
  "The docs for this PR live [here](/static-proxy?url=https%3A%2F%2Fmoon-ci-docs.huggingface.co%2Fdocs%2Fdatasets%2Fpr_7714). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-07-31T16:36:32 | 2025-07-31T16:39:03 | 2025-07-31T16:38:03 | 
	MEMBER | null | null | false | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/pulls/7714",
  "html_url": "https://github.com/huggingface/datasets/pull/7714",
  "diff_url": "https://github.com/huggingface/datasets/pull/7714.diff",
  "patch_url": "https://github.com/huggingface/datasets/pull/7714.patch",
  "merged_at": "2025-07-31T16:38:03"
} | null | 
	{
  "login": "lhoestq",
  "id": 42851186,
  "node_id": "MDQ6VXNlcjQyODUxMTg2",
  "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/lhoestq",
  "html_url": "https://github.com/lhoestq",
  "followers_url": "https://api.github.com/users/lhoestq/followers",
  "following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
  "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
  "organizations_url": "https://api.github.com/users/lhoestq/orgs",
  "repos_url": "https://api.github.com/users/lhoestq/repos",
  "events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
  "received_events_url": "https://api.github.com/users/lhoestq/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7714/reactions",
  "total_count": 0,
  "+1": 0,
  "-1": 0,
  "laugh": 0,
  "hooray": 0,
  "confused": 0,
  "heart": 0,
  "rocket": 0,
  "eyes": 0
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7714/timeline | null | null | null | true | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7713 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7713/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7713/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7713/events | 
	https://github.com/huggingface/datasets/pull/7713 | 3,280,813,699 | 
	PR_kwDODunzps6hjik2 | 7,713 | 
	Update cli.mdx to refer to the new "hf" CLI | 
	{
  "login": "evalstate",
  "id": 1936278,
  "node_id": "MDQ6VXNlcjE5MzYyNzg=",
  "avatar_url": "https://avatars.githubusercontent.com/u/1936278?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/evalstate",
  "html_url": "https://github.com/evalstate",
  "followers_url": "https://api.github.com/users/evalstate/followers",
  "following_url": "https://api.github.com/users/evalstate/following{/other_user}",
  "gists_url": "https://api.github.com/users/evalstate/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/evalstate/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/evalstate/subscriptions",
  "organizations_url": "https://api.github.com/users/evalstate/orgs",
  "repos_url": "https://api.github.com/users/evalstate/repos",
  "events_url": "https://api.github.com/users/evalstate/events{/privacy}",
  "received_events_url": "https://api.github.com/users/evalstate/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	[] | 
	closed | false | null | 
	[] | null | 
	[
  "The docs for this PR live [here](/static-proxy?url=https%3A%2F%2Fmoon-ci-docs.huggingface.co%2Fdocs%2Fdatasets%2Fpr_7713). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-07-31T15:06:11 | 2025-07-31T16:37:56 | 2025-07-31T16:37:55 | 
	CONTRIBUTOR | null | null | false | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/pulls/7713",
  "html_url": "https://github.com/huggingface/datasets/pull/7713",
  "diff_url": "https://github.com/huggingface/datasets/pull/7713.diff",
  "patch_url": "https://github.com/huggingface/datasets/pull/7713.patch",
  "merged_at": "2025-07-31T16:37:55"
} | 
	Update to refer to `hf auth login` | 
	{
  "login": "lhoestq",
  "id": 42851186,
  "node_id": "MDQ6VXNlcjQyODUxMTg2",
  "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/lhoestq",
  "html_url": "https://github.com/lhoestq",
  "followers_url": "https://api.github.com/users/lhoestq/followers",
  "following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
  "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
  "organizations_url": "https://api.github.com/users/lhoestq/orgs",
  "repos_url": "https://api.github.com/users/lhoestq/repos",
  "events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
  "received_events_url": "https://api.github.com/users/lhoestq/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7713/reactions",
  "total_count": 0,
  "+1": 0,
  "-1": 0,
  "laugh": 0,
  "hooray": 0,
  "confused": 0,
  "heart": 0,
  "rocket": 0,
  "eyes": 0
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7713/timeline | null | null | null | true | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7712 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7712/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7712/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7712/events | 
	https://github.com/huggingface/datasets/pull/7712 | 3,280,706,762 | 
	PR_kwDODunzps6hjLF5 | 7,712 | 
	Retry intermediate commits too | 
	{
  "login": "lhoestq",
  "id": 42851186,
  "node_id": "MDQ6VXNlcjQyODUxMTg2",
  "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/lhoestq",
  "html_url": "https://github.com/lhoestq",
  "followers_url": "https://api.github.com/users/lhoestq/followers",
  "following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
  "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
  "organizations_url": "https://api.github.com/users/lhoestq/orgs",
  "repos_url": "https://api.github.com/users/lhoestq/repos",
  "events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
  "received_events_url": "https://api.github.com/users/lhoestq/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	[] | 
	closed | false | null | 
	[] | null | 
	[
  "The docs for this PR live [here](/static-proxy?url=https%3A%2F%2Fmoon-ci-docs.huggingface.co%2Fdocs%2Fdatasets%2Fpr_7712). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-07-31T14:33:33 | 2025-07-31T14:37:43 | 2025-07-31T14:36:43 | 
	MEMBER | null | null | false | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/pulls/7712",
  "html_url": "https://github.com/huggingface/datasets/pull/7712",
  "diff_url": "https://github.com/huggingface/datasets/pull/7712.diff",
  "patch_url": "https://github.com/huggingface/datasets/pull/7712.patch",
  "merged_at": "2025-07-31T14:36:43"
} | null | 
	{
  "login": "lhoestq",
  "id": 42851186,
  "node_id": "MDQ6VXNlcjQyODUxMTg2",
  "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/lhoestq",
  "html_url": "https://github.com/lhoestq",
  "followers_url": "https://api.github.com/users/lhoestq/followers",
  "following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
  "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
  "organizations_url": "https://api.github.com/users/lhoestq/orgs",
  "repos_url": "https://api.github.com/users/lhoestq/repos",
  "events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
  "received_events_url": "https://api.github.com/users/lhoestq/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7712/reactions",
  "total_count": 0,
  "+1": 0,
  "-1": 0,
  "laugh": 0,
  "hooray": 0,
  "confused": 0,
  "heart": 0,
  "rocket": 0,
  "eyes": 0
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7712/timeline | null | null | null | true | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7711 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7711/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7711/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7711/events | 
	https://github.com/huggingface/datasets/pull/7711 | 3,280,471,353 | 
	PR_kwDODunzps6hiXm0 | 7,711 | 
	Update dataset_dict push_to_hub | 
	{
  "login": "lhoestq",
  "id": 42851186,
  "node_id": "MDQ6VXNlcjQyODUxMTg2",
  "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/lhoestq",
  "html_url": "https://github.com/lhoestq",
  "followers_url": "https://api.github.com/users/lhoestq/followers",
  "following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
  "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
  "organizations_url": "https://api.github.com/users/lhoestq/orgs",
  "repos_url": "https://api.github.com/users/lhoestq/repos",
  "events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
  "received_events_url": "https://api.github.com/users/lhoestq/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	[] | 
	closed | false | null | 
	[] | null | 
	[
  "The docs for this PR live [here](/static-proxy?url=https%3A%2F%2Fmoon-ci-docs.huggingface.co%2Fdocs%2Fdatasets%2Fpr_7711). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-07-31T13:25:03 | 2025-07-31T14:18:55 | 2025-07-31T14:18:53 | 
	MEMBER | null | null | false | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/pulls/7711",
  "html_url": "https://github.com/huggingface/datasets/pull/7711",
  "diff_url": "https://github.com/huggingface/datasets/pull/7711.diff",
  "patch_url": "https://github.com/huggingface/datasets/pull/7711.patch",
  "merged_at": "2025-07-31T14:18:53"
} | 
	following https://github.com/huggingface/datasets/pull/7708 | 
	{
  "login": "lhoestq",
  "id": 42851186,
  "node_id": "MDQ6VXNlcjQyODUxMTg2",
  "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/lhoestq",
  "html_url": "https://github.com/lhoestq",
  "followers_url": "https://api.github.com/users/lhoestq/followers",
  "following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
  "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
  "organizations_url": "https://api.github.com/users/lhoestq/orgs",
  "repos_url": "https://api.github.com/users/lhoestq/repos",
  "events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
  "received_events_url": "https://api.github.com/users/lhoestq/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7711/reactions",
  "total_count": 0,
  "+1": 0,
  "-1": 0,
  "laugh": 0,
  "hooray": 0,
  "confused": 0,
  "heart": 0,
  "rocket": 0,
  "eyes": 0
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7711/timeline | null | null | null | true | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7710 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7710/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7710/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7710/events | 
	https://github.com/huggingface/datasets/pull/7710 | 3,279,878,230 | 
	PR_kwDODunzps6hgXxW | 7,710 | 
	Concurrent IterableDataset push_to_hub | 
	{
  "login": "lhoestq",
  "id": 42851186,
  "node_id": "MDQ6VXNlcjQyODUxMTg2",
  "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/lhoestq",
  "html_url": "https://github.com/lhoestq",
  "followers_url": "https://api.github.com/users/lhoestq/followers",
  "following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
  "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
  "organizations_url": "https://api.github.com/users/lhoestq/orgs",
  "repos_url": "https://api.github.com/users/lhoestq/repos",
  "events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
  "received_events_url": "https://api.github.com/users/lhoestq/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	[] | 
	closed | false | null | 
	[] | null | 
	[
  "The docs for this PR live [here](/static-proxy?url=https%3A%2F%2Fmoon-ci-docs.huggingface.co%2Fdocs%2Fdatasets%2Fpr_7710). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-07-31T10:11:31 | 2025-07-31T10:14:00 | 2025-07-31T10:12:52 | 
	MEMBER | null | null | false | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/pulls/7710",
  "html_url": "https://github.com/huggingface/datasets/pull/7710",
  "diff_url": "https://github.com/huggingface/datasets/pull/7710.diff",
  "patch_url": "https://github.com/huggingface/datasets/pull/7710.patch",
  "merged_at": "2025-07-31T10:12:52"
} | 
	Same as https://github.com/huggingface/datasets/pull/7708 but for `IterableDataset` | 
	{
  "login": "lhoestq",
  "id": 42851186,
  "node_id": "MDQ6VXNlcjQyODUxMTg2",
  "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/lhoestq",
  "html_url": "https://github.com/lhoestq",
  "followers_url": "https://api.github.com/users/lhoestq/followers",
  "following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
  "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
  "organizations_url": "https://api.github.com/users/lhoestq/orgs",
  "repos_url": "https://api.github.com/users/lhoestq/repos",
  "events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
  "received_events_url": "https://api.github.com/users/lhoestq/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7710/reactions",
  "total_count": 0,
  "+1": 0,
  "-1": 0,
  "laugh": 0,
  "hooray": 0,
  "confused": 0,
  "heart": 0,
  "rocket": 0,
  "eyes": 0
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7710/timeline | null | null | null | true | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7709 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7709/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7709/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7709/events | 
	https://github.com/huggingface/datasets/issues/7709 | 3,276,677,990 | 
	I_kwDODunzps7DTiNm | 7,709 | 
	Release 4.0.0 breaks usage patterns of with_format | 
	{
  "login": "wittenator",
  "id": 9154515,
  "node_id": "MDQ6VXNlcjkxNTQ1MTU=",
  "avatar_url": "https://avatars.githubusercontent.com/u/9154515?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/wittenator",
  "html_url": "https://github.com/wittenator",
  "followers_url": "https://api.github.com/users/wittenator/followers",
  "following_url": "https://api.github.com/users/wittenator/following{/other_user}",
  "gists_url": "https://api.github.com/users/wittenator/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/wittenator/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/wittenator/subscriptions",
  "organizations_url": "https://api.github.com/users/wittenator/orgs",
  "repos_url": "https://api.github.com/users/wittenator/repos",
  "events_url": "https://api.github.com/users/wittenator/events{/privacy}",
  "received_events_url": "https://api.github.com/users/wittenator/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	[] | 
	closed | false | null | 
	[] | null | 
	[
  "This is a breaking change with 4.0 which introduced `Column` objects. To get the numpy array from a `Column` you can `col[i]`, `col[i:j]` or even `col[:]` if you want the full column as a numpy array:\n\n```python\nfrom datasets import load_dataset\ndataset = load_dataset(...)\ndataset = dataset.with_format(\"numpy\")\nprint(dataset[\"star\"][:].ndim)\n```",
  "Ah perfect, thanks for clearing this up. I would close this ticket then."
] | 2025-07-30T11:34:53 | 2025-08-07T08:27:18 | 2025-08-07T08:27:18 | 
	NONE | null | null | null | null | 
	### Describe the bug
Previously it was possible to access a whole column that was e.g. in numpy format via `with_format` by indexing the column. Now this possibility seems to be gone with the new Column() class. As far as I see, this makes working on a whole column (in-memory) more complex, i.e. normalizing an in-memory dataset for which iterating would be too slow. Is this intended behaviour? I couldn't find much documentation on the intended usage of the new Column class yet.
### Steps to reproduce the bug
Steps to reproduce:
```
from datasets import load_dataset
dataset = load_dataset("lhoestq/demo1")
dataset = dataset.with_format("numpy")
print(dataset["star"].ndim)
```
### Expected behavior
Working on whole columns should be possible.
### Environment info
- `datasets` version: 4.0.0
- Platform: Linux-6.8.0-63-generic-x86_64-with-glibc2.36
- Python version: 3.12.11
- `huggingface_hub` version: 0.34.3
- PyArrow version: 21.0.0
- Pandas version: 2.3.1
- `fsspec` version: 2025.3.0 | 
	{
  "login": "wittenator",
  "id": 9154515,
  "node_id": "MDQ6VXNlcjkxNTQ1MTU=",
  "avatar_url": "https://avatars.githubusercontent.com/u/9154515?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/wittenator",
  "html_url": "https://github.com/wittenator",
  "followers_url": "https://api.github.com/users/wittenator/followers",
  "following_url": "https://api.github.com/users/wittenator/following{/other_user}",
  "gists_url": "https://api.github.com/users/wittenator/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/wittenator/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/wittenator/subscriptions",
  "organizations_url": "https://api.github.com/users/wittenator/orgs",
  "repos_url": "https://api.github.com/users/wittenator/repos",
  "events_url": "https://api.github.com/users/wittenator/events{/privacy}",
  "received_events_url": "https://api.github.com/users/wittenator/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7709/reactions",
  "total_count": 0,
  "+1": 0,
  "-1": 0,
  "laugh": 0,
  "hooray": 0,
  "confused": 0,
  "heart": 0,
  "rocket": 0,
  "eyes": 0
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7709/timeline | null | 
	completed | 
	{
  "total": 0,
  "completed": 0,
  "percent_completed": 0
} | false | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7708 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7708/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7708/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7708/events | 
	https://github.com/huggingface/datasets/pull/7708 | 3,273,614,584 | 
	PR_kwDODunzps6hLVip | 7,708 | 
	Concurrent push_to_hub | 
	{
  "login": "lhoestq",
  "id": 42851186,
  "node_id": "MDQ6VXNlcjQyODUxMTg2",
  "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/lhoestq",
  "html_url": "https://github.com/lhoestq",
  "followers_url": "https://api.github.com/users/lhoestq/followers",
  "following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
  "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
  "organizations_url": "https://api.github.com/users/lhoestq/orgs",
  "repos_url": "https://api.github.com/users/lhoestq/repos",
  "events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
  "received_events_url": "https://api.github.com/users/lhoestq/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	[] | 
	closed | false | null | 
	[] | null | 
	[
  "The docs for this PR live [here](/static-proxy?url=https%3A%2F%2Fmoon-ci-docs.huggingface.co%2Fdocs%2Fdatasets%2Fpr_7708). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2025-07-29T13:14:30 | 2025-07-31T10:00:50 | 2025-07-31T10:00:49 | 
	MEMBER | null | null | false | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/pulls/7708",
  "html_url": "https://github.com/huggingface/datasets/pull/7708",
  "diff_url": "https://github.com/huggingface/datasets/pull/7708.diff",
  "patch_url": "https://github.com/huggingface/datasets/pull/7708.patch",
  "merged_at": "2025-07-31T10:00:49"
} | 
	Retry the step that (download + update + upload) the README.md using `create_commit(..., parent_commit=...)` if there was a commit in the meantime. This should enable concurrent `push_to_hub()` since it won't overwrite the README.md metadata anymore.
Note: we fixed an issue server side to make this work:
<details>
DO NOT MERGE FOR NOW since it seems there is one bug that prevents this logic from working:
I'm using parent_commit to enable concurrent push_to_hub() in datasets for a retry mechanism, but for some reason I always run into a weird situation.
Sometimes create_commit(.., parent_commit=...) returns error 500 but the commit did happen on the Hub side without respecting parent_commit
e.g. request id
```
huggingface_hub.errors.HfHubHTTPError: 500 Server Error: Internal Server Error for url: https://huggingface.co/api/datasets/lhoestq/tmp/commit/main (Request ID: Root=1-6888d8af-2ce517bc60c69cb378b51526;d1b17993-c5d0-4ccd-9926-060c45f9ed61)
```
fix coming in [internal](https://github.com/huggingface-internal/moon-landing/pull/14617)
</details>
close https://github.com/huggingface/datasets/issues/7600 | 
	{
  "login": "lhoestq",
  "id": 42851186,
  "node_id": "MDQ6VXNlcjQyODUxMTg2",
  "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/lhoestq",
  "html_url": "https://github.com/lhoestq",
  "followers_url": "https://api.github.com/users/lhoestq/followers",
  "following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
  "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
  "organizations_url": "https://api.github.com/users/lhoestq/orgs",
  "repos_url": "https://api.github.com/users/lhoestq/repos",
  "events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
  "received_events_url": "https://api.github.com/users/lhoestq/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7708/reactions",
  "total_count": 0,
  "+1": 0,
  "-1": 0,
  "laugh": 0,
  "hooray": 0,
  "confused": 0,
  "heart": 0,
  "rocket": 0,
  "eyes": 0
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7708/timeline | null | null | null | true | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7707 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7707/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7707/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7707/events | 
	https://github.com/huggingface/datasets/issues/7707 | 3,271,867,998 | 
	I_kwDODunzps7DBL5e | 7,707 | 
	load_dataset() in 4.0.0 failed when decoding audio | 
	{
  "login": "jiqing-feng",
  "id": 107918818,
  "node_id": "U_kgDOBm614g",
  "avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/jiqing-feng",
  "html_url": "https://github.com/jiqing-feng",
  "followers_url": "https://api.github.com/users/jiqing-feng/followers",
  "following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}",
  "gists_url": "https://api.github.com/users/jiqing-feng/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/jiqing-feng/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/jiqing-feng/subscriptions",
  "organizations_url": "https://api.github.com/users/jiqing-feng/orgs",
  "repos_url": "https://api.github.com/users/jiqing-feng/repos",
  "events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}",
  "received_events_url": "https://api.github.com/users/jiqing-feng/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	[] | 
	closed | false | null | 
	[] | null | 
	[
  "Hi @lhoestq . Would you please have a look at it? I use the official NV Docker ([NV official docker image](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch): `nvcr.io/nvidia/pytorch:25.06-py3`) on A100 and encountered this issue, but I don't know how to fix it.",
  "Use !pip install -U datasets[audio] rather than !pip install datasets\n\nI got the solution from this link [https://github.com/huggingface/datasets/issues/7678](https://github.com/huggingface/datasets/issues/7678), and it processes the data; however,  it led to certain transformer importnerrors",
  "> https://github.com/huggingface/datasets/issues/7678\n\nHi @asantewaa-bremang . Thanks for your reply, but sadly it does not work for me.",
  "It looks like a torchcodec issue, have you tried to look at the torchcodec issues here in case someone has the same issue ? https://github.com/pytorch/torchcodec/issues\n\notherwise feel free to open a new issue there",
  "@jiqing-feng, are you running the code on Colab? If you are, you should restart after making this installation ! pip install -U datasets[audio]. ",
  "> [@jiqing-feng](https://github.com/jiqing-feng), are you running the code on Colab? If you are, you should restart after making this installation ! pip install -U datasets[audio].\n\nNo, I ran the script on the A100 instance locally.",
  "> It looks like a torchcodec issue, have you tried to look at the torchcodec issues here in case someone has the same issue ? https://github.com/pytorch/torchcodec/issues\n> \n> otherwise feel free to open a new issue there\n\nThanks! I've opened a new issue on torchcodec. Could we have a fallback implementation without torchcodec (just like datasets==3.6.0) ?",
  "> Thanks! I've opened a new issue on torchcodec. Could we have a fallback implementation without torchcodec (just like datasets==3.6.0) ?\n\nFor now I'd recommend using `datasets==3.6.0` if this issue is blocking for you",
  "Resolved by installing the pre-release torchcodec. Thanks!"
] | 2025-07-29T03:25:03 | 2025-08-01T05:15:45 | 2025-08-01T05:15:45 | 
	NONE | null | null | null | null | 
	### Describe the bug
Cannot decode audio data.
### Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
print(dataset[0]["audio"]["array"])
```
1st round run, got
```
  File "/usr/local/lib/python3.12/dist-packages/datasets/features/audio.py", line 172, in decode_example
    raise ImportError("To support decoding audio data, please install 'torchcodec'.")
ImportError: To support decoding audio data, please install 'torchcodec'.
```
After `pip install torchcodec` and run, got
```
  File "/usr/local/lib/python3.12/dist-packages/torchcodec/_core/_metadata.py", line 16, in <module>
    from torchcodec._core.ops import (
  File "/usr/local/lib/python3.12/dist-packages/torchcodec/_core/ops.py", line 84, in <module>
    load_torchcodec_shared_libraries()
  File "/usr/local/lib/python3.12/dist-packages/torchcodec/_core/ops.py", line 69, in load_torchcodec_shared_libraries
    raise RuntimeError(
RuntimeError: Could not load libtorchcodec. Likely causes:
          1. FFmpeg is not properly installed in your environment. We support
             versions 4, 5, 6 and 7.
          2. The PyTorch version (2.8.0a0+5228986c39.nv25.06) is not compatible with
             this version of TorchCodec. Refer to the version compatibility
             table:
             https://github.com/pytorch/torchcodec?tab=readme-ov-file#installing-torchcodec.
          3. Another runtime dependency; see exceptions below.
        The following exceptions were raised as we tried to load libtorchcodec:
[start of libtorchcodec loading traceback]
FFmpeg version 7: libavutil.so.59: cannot open shared object file: No such file or directory
FFmpeg version 6: libavutil.so.58: cannot open shared object file: No such file or directory
FFmpeg version 5: libavutil.so.57: cannot open shared object file: No such file or directory
FFmpeg version 4: libavutil.so.56: cannot open shared object file: No such file or directory
[end of libtorchcodec loading traceback].
```
After `apt update && apt install ffmpeg -y`, got
```
Traceback (most recent call last):
  File "/workspace/jiqing/test_datasets.py", line 4, in <module>
    print(dataset[0]["audio"]["array"])
          ~~~~~~~^^^
  File "/usr/local/lib/python3.12/dist-packages/datasets/arrow_dataset.py", line 2859, in __getitem__
    return self._getitem(key)
           ^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/datasets/arrow_dataset.py", line 2841, in _getitem
    formatted_output = format_table(
                       ^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/datasets/formatting/formatting.py", line 657, in format_table
    return formatter(pa_table, query_type=query_type)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/datasets/formatting/formatting.py", line 410, in __call__
    return self.format_row(pa_table)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/datasets/formatting/formatting.py", line 459, in format_row
    row = self.python_features_decoder.decode_row(row)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/datasets/formatting/formatting.py", line 223, in decode_row
    return self.features.decode_example(row, token_per_repo_id=self.token_per_repo_id) if self.features else row
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/datasets/features/features.py", line 2093, in decode_example
    column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/datasets/features/features.py", line 1405, in decode_nested_example
    return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) if obj is not None else None
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/datasets/features/audio.py", line 198, in decode_example
    audio = AudioDecoder(bytes, stream_index=self.stream_index, sample_rate=self.sampling_rate)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/torchcodec/decoders/_audio_decoder.py", line 62, in __init__
    self._decoder = create_decoder(source=source, seek_mode="approximate")
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/torchcodec/decoders/_decoder_utils.py", line 33, in create_decoder
    return core.create_from_bytes(source, seek_mode)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/torchcodec/_core/ops.py", line 144, in create_from_bytes
    return create_from_tensor(buffer, seek_mode)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/torch/_ops.py", line 756, in __call__
    return self._op(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
NotImplementedError: Could not run 'torchcodec_ns::create_from_tensor' with arguments from the 'CPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'torchcodec_ns::create_from_tensor' is only available for these backends: [Meta, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradMPS, AutogradXPU, AutogradHPU, AutogradLazy, AutogradMTIA, AutogradMAIA, AutogradMeta, Tracer, AutocastCPU, AutocastMTIA, AutocastMAIA, AutocastXPU, AutocastMPS, AutocastCUDA, FuncTorchBatched, BatchedNestedTensor, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher].
Meta: registered at /dev/null:214 [kernel]
BackendSelect: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at /__w/torchcodec/torchcodec/pytorch/torchcodec/src/torchcodec/_core/custom_ops.cpp:694 [kernel]
FuncTorchDynamicLayerBackMode: registered at /opt/pytorch/pytorch/aten/src/ATen/functorch/DynamicLayer.cpp:479 [backend fallback]
Functionalize: registered at /opt/pytorch/pytorch/aten/src/ATen/FunctionalizeFallbackKernel.cpp:349 [backend fallback]
Named: registered at /opt/pytorch/pytorch/aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]
Conjugate: registered at /opt/pytorch/pytorch/aten/src/ATen/ConjugateFallback.cpp:17 [backend fallback]
Negative: registered at /opt/pytorch/pytorch/aten/src/ATen/native/NegateFallback.cpp:18 [backend fallback]
ZeroTensor: registered at /opt/pytorch/pytorch/aten/src/ATen/ZeroTensorFallback.cpp:86 [backend fallback]
ADInplaceOrView: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:104 [backend fallback]
AutogradOther: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:63 [backend fallback]
AutogradCPU: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:67 [backend fallback]
AutogradCUDA: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:75 [backend fallback]
AutogradXLA: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:87 [backend fallback]
AutogradMPS: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:95 [backend fallback]
AutogradXPU: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:71 [backend fallback]
AutogradHPU: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:108 [backend fallback]
AutogradLazy: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:91 [backend fallback]
AutogradMTIA: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:79 [backend fallback]
AutogradMAIA: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:83 [backend fallback]
AutogradMeta: registered at /opt/pytorch/pytorch/aten/src/ATen/core/VariableFallbackKernel.cpp:99 [backend fallback]
Tracer: registered at /opt/pytorch/pytorch/torch/csrc/autograd/TraceTypeManual.cpp:294 [backend fallback]
AutocastCPU: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:322 [backend fallback]
AutocastMTIA: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:466 [backend fallback]
AutocastMAIA: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:504 [backend fallback]
AutocastXPU: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:542 [backend fallback]
AutocastMPS: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:209 [backend fallback]
AutocastCUDA: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/autocast_mode.cpp:165 [backend fallback]
FuncTorchBatched: registered at /opt/pytorch/pytorch/aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:731 [backend fallback]
BatchedNestedTensor: registered at /opt/pytorch/pytorch/aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp:758 [backend fallback]
FuncTorchVmapMode: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/functorch/VmapModeRegistrations.cpp:27 [backend fallback]
Batched: registered at /opt/pytorch/pytorch/aten/src/ATen/LegacyBatchingRegistrations.cpp:1075 [backend fallback]
VmapMode: fallthrough registered at /opt/pytorch/pytorch/aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]
FuncTorchGradWrapper: registered at /opt/pytorch/pytorch/aten/src/ATen/functorch/TensorWrapper.cpp:208 [backend fallback]
PythonTLSSnapshot: registered at /opt/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:202 [backend fallback]
FuncTorchDynamicLayerFrontMode: registered at /opt/pytorch/pytorch/aten/src/ATen/functorch/DynamicLayer.cpp:475 [backend fallback]
PreDispatch: registered at /opt/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:206 [backend fallback]
PythonDispatcher: registered at /opt/pytorch/pytorch/aten/src/ATen/core/PythonFallbackKernel.cpp:198 [backend fallback]
```
### Expected behavior
The result is 
```
[0.00238037 0.0020752  0.00198364 ... 0.00042725 0.00057983 0.0010376 ]
```
on `datasets==3.6.0`
### Environment info
[NV official docker image](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch): `nvcr.io/nvidia/pytorch:25.06-py3` 
```
- `datasets` version: 4.0.0
- Platform: Linux-5.4.292-1.el8.elrepo.x86_64-x86_64-with-glibc2.39
- Python version: 3.12.3
- `huggingface_hub` version: 0.34.2
- PyArrow version: 19.0.1
- Pandas version: 2.2.3
- `fsspec` version: 2025.3.0
``` | 
	{
  "login": "jiqing-feng",
  "id": 107918818,
  "node_id": "U_kgDOBm614g",
  "avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/jiqing-feng",
  "html_url": "https://github.com/jiqing-feng",
  "followers_url": "https://api.github.com/users/jiqing-feng/followers",
  "following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}",
  "gists_url": "https://api.github.com/users/jiqing-feng/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/jiqing-feng/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/jiqing-feng/subscriptions",
  "organizations_url": "https://api.github.com/users/jiqing-feng/orgs",
  "repos_url": "https://api.github.com/users/jiqing-feng/repos",
  "events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}",
  "received_events_url": "https://api.github.com/users/jiqing-feng/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7707/reactions",
  "total_count": 1,
  "+1": 1,
  "-1": 0,
  "laugh": 0,
  "hooray": 0,
  "confused": 0,
  "heart": 0,
  "rocket": 0,
  "eyes": 0
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7707/timeline | null | 
	completed | 
	{
  "total": 0,
  "completed": 0,
  "percent_completed": 0
} | false | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7706 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7706/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7706/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7706/events | 
	https://github.com/huggingface/datasets/pull/7706 | 3,271,129,240 | 
	PR_kwDODunzps6hC5uD | 7,706 | 
	Reimplemented partial split download support (revival of #6832) | 
	{
  "login": "ArjunJagdale",
  "id": 142811259,
  "node_id": "U_kgDOCIMgew",
  "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/ArjunJagdale",
  "html_url": "https://github.com/ArjunJagdale",
  "followers_url": "https://api.github.com/users/ArjunJagdale/followers",
  "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
  "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
  "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
  "repos_url": "https://api.github.com/users/ArjunJagdale/repos",
  "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
  "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	[] | 
	open | false | null | 
	[] | null | 
	[
  " Mario’s Patch (in PR #6832):\r\n```\r\ndef _make_split_generators_kwargs(self, prepare_split_kwargs):\r\n    # Pass `pipeline` into `_split_generators()` from `prepare_split_kwargs` if\r\n    # it's in the call signature of `_split_generators()`.\r\n    # This allows for global preprocessing in beam.\r\n    split_generators_kwargs = {}\r\n    if \"pipeline\" in inspect.signature(self._split_generators).parameters:\r\n        split_generators_kwargs[\"pipeline\"] = prepare_split_kwargs[\"pipeline\"]\r\n    split_generators_kwargs.update(super()._make_split_generators_kwargs(prepare_split_kwargs))\r\n    return split_generators_kwargs\r\n```\r\n\r\nIn the latest main(in my fork and og repo's main):\r\n```\r\ndef _make_split_generators_kwargs(self, prepare_split_kwargs):\r\n    \"\"\"Get kwargs for `self._split_generators()` from `prepare_split_kwargs`.\"\"\"\r\n    splits = prepare_split_kwargs.pop(\"splits\", None)\r\n    if self._supports_partial_generation():\r\n        return {\"splits\": splits}\r\n    return {}\r\n```\r\nIt enables passing splits into _split_generators() only for builders that support it(if i am not wrong..). So ignored Beam logic for now!"
] | 2025-07-28T19:40:40 | 2025-07-29T09:25:12 | null | 
	CONTRIBUTOR | null | null | false | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/pulls/7706",
  "html_url": "https://github.com/huggingface/datasets/pull/7706",
  "diff_url": "https://github.com/huggingface/datasets/pull/7706.diff",
  "patch_url": "https://github.com/huggingface/datasets/pull/7706.patch",
  "merged_at": null
} | 
	(revival of #6832)
https://github.com/huggingface/datasets/pull/7648#issuecomment-3084050130
Close https://github.com/huggingface/datasets/issues/4101, and more
---
### PR under work!!!!
 | null | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7706/reactions",
  "total_count": 0,
  "+1": 0,
  "-1": 0,
  "laugh": 0,
  "hooray": 0,
  "confused": 0,
  "heart": 0,
  "rocket": 0,
  "eyes": 0
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7706/timeline | null | null | null | true | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7705 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7705/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7705/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7705/events | 
	https://github.com/huggingface/datasets/issues/7705 | 3,269,070,499 | 
	I_kwDODunzps7C2g6j | 7,705 | 
	Can Not read installed dataset in dataset.load(.) | 
	{
  "login": "HuangChiEn",
  "id": 52521165,
  "node_id": "MDQ6VXNlcjUyNTIxMTY1",
  "avatar_url": "https://avatars.githubusercontent.com/u/52521165?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/HuangChiEn",
  "html_url": "https://github.com/HuangChiEn",
  "followers_url": "https://api.github.com/users/HuangChiEn/followers",
  "following_url": "https://api.github.com/users/HuangChiEn/following{/other_user}",
  "gists_url": "https://api.github.com/users/HuangChiEn/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/HuangChiEn/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/HuangChiEn/subscriptions",
  "organizations_url": "https://api.github.com/users/HuangChiEn/orgs",
  "repos_url": "https://api.github.com/users/HuangChiEn/repos",
  "events_url": "https://api.github.com/users/HuangChiEn/events{/privacy}",
  "received_events_url": "https://api.github.com/users/HuangChiEn/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	[] | 
	open | false | null | 
	[] | null | 
	[
  "You can download the dataset locally using [huggingface_hub.snapshot_download](https://huggingface.co/docs/huggingface_hub/v0.34.3/en/package_reference/file_download#huggingface_hub.snapshot_download) and then do\n\n```python\ndataset = load_dataset(local_directory_path)\n```",
  "> You can download the dataset locally using [huggingface_hub.snapshot_download](https://huggingface.co/docs/huggingface_hub/v0.34.3/en/package_reference/file_download#huggingface_hub.snapshot_download) and then do\n> \n> dataset = load_dataset(local_directory_path)\n\nIt's good suggestion, but my server env is network restriction. It can not directly fetch data from huggingface. I spent lot of time to download and transfer it to the server.\nSo, I attempt to make load_dataset connect to my local dataset.  ",
  "Just Solved it few day before. Will post solution later...\nalso thanks folks quick reply.."
] | 2025-07-28T09:43:54 | 2025-08-05T01:24:32 | null | 
	NONE | null | null | null | null | 
	Hi, folks, I'm newbie in huggingface dataset api.
As title, i'm facing the issue that the dataset.load api can not connect to the installed dataset.
code snippet :
<img width="572" height="253" alt="Image" src="https://github.com/user-attachments/assets/10f48aaf-d6ca-4239-b1cf-145d74f125d1" />
data path : 
"/xxx/joseph/llava_ds/vlm_ds"
it contains all video clips i want!
<img width="1398" height="261" alt="Image" src="https://github.com/user-attachments/assets/bf213b66-e344-4311-97e7-bc209677ae77" />
i run the py script by 
<img width="1042" height="38" alt="Image" src="https://github.com/user-attachments/assets/8b3fcee4-e1a6-41b8-bee1-91567b00d9d2" />
But bad happended, even i provide dataset path by "HF_HUB_CACHE", it still attempt to download data from remote side : 
<img width="1697" height="813" alt="Image" src="https://github.com/user-attachments/assets/baa6cff1-a724-4710-a8c4-4805459deffb" />
Any suggestion will be appreciated!! | null | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7705/reactions",
  "total_count": 0,
  "+1": 0,
  "-1": 0,
  "laugh": 0,
  "hooray": 0,
  "confused": 0,
  "heart": 0,
  "rocket": 0,
  "eyes": 0
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7705/timeline | null | null | 
	{
  "total": 0,
  "completed": 0,
  "percent_completed": 0
} | false | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7704 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7704/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7704/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7704/events | 
	https://github.com/huggingface/datasets/pull/7704 | 3,265,730,177 | 
	PR_kwDODunzps6gwtb8 | 7,704 | 
	Fix map() example in datasets documentation: define tokenizer before use | 
	{
  "login": "Sanjaykumar030",
  "id": 183703408,
  "node_id": "U_kgDOCvMXcA",
  "avatar_url": "https://avatars.githubusercontent.com/u/183703408?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/Sanjaykumar030",
  "html_url": "https://github.com/Sanjaykumar030",
  "followers_url": "https://api.github.com/users/Sanjaykumar030/followers",
  "following_url": "https://api.github.com/users/Sanjaykumar030/following{/other_user}",
  "gists_url": "https://api.github.com/users/Sanjaykumar030/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/Sanjaykumar030/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/Sanjaykumar030/subscriptions",
  "organizations_url": "https://api.github.com/users/Sanjaykumar030/orgs",
  "repos_url": "https://api.github.com/users/Sanjaykumar030/repos",
  "events_url": "https://api.github.com/users/Sanjaykumar030/events{/privacy}",
  "received_events_url": "https://api.github.com/users/Sanjaykumar030/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	[] | 
	closed | false | null | 
	[] | null | 
	[
  "Hi @lhoestq, just a gentle follow-up on this doc fix PR (#7704). Let me know if any changes are needed — happy to update.\r\nHope this improvement helps users run the example without confusion!",
  "the modified file is the readme of the docs, not about map() specifically"
] | 2025-07-26T14:18:17 | 2025-08-13T13:23:18 | 2025-08-13T13:06:37 | 
	NONE | null | null | false | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/pulls/7704",
  "html_url": "https://github.com/huggingface/datasets/pull/7704",
  "diff_url": "https://github.com/huggingface/datasets/pull/7704.diff",
  "patch_url": "https://github.com/huggingface/datasets/pull/7704.patch",
  "merged_at": null
} | 
	## Problem
The current datasets.Dataset.map() example in the documentation demonstrates batched processing using a tokenizer object without defining or importing it. This causes a NameError when users copy and run the example as-is, breaking the expected seamless experience.
## Correction
This PR fixes the issue by explicitly importing and initializing the tokenizer using the Transformers library (AutoTokenizer.from_pretrained("bert-base-uncased")), making the example self-contained and runnable without errors.
This will help new users understand the workflow and apply the method correctly.
Closes #7703 
 | 
	{
  "login": "lhoestq",
  "id": 42851186,
  "node_id": "MDQ6VXNlcjQyODUxMTg2",
  "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/lhoestq",
  "html_url": "https://github.com/lhoestq",
  "followers_url": "https://api.github.com/users/lhoestq/followers",
  "following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
  "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
  "organizations_url": "https://api.github.com/users/lhoestq/orgs",
  "repos_url": "https://api.github.com/users/lhoestq/repos",
  "events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
  "received_events_url": "https://api.github.com/users/lhoestq/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7704/reactions",
  "total_count": 0,
  "+1": 0,
  "-1": 0,
  "laugh": 0,
  "hooray": 0,
  "confused": 0,
  "heart": 0,
  "rocket": 0,
  "eyes": 0
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7704/timeline | null | null | null | true | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7703 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7703/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7703/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7703/events | 
	https://github.com/huggingface/datasets/issues/7703 | 3,265,648,942 | 
	I_kwDODunzps7Cpdku | 7,703 | 
	[Docs] map() example uses undefined `tokenizer` — causes NameError | 
	{
  "login": "Sanjaykumar030",
  "id": 183703408,
  "node_id": "U_kgDOCvMXcA",
  "avatar_url": "https://avatars.githubusercontent.com/u/183703408?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/Sanjaykumar030",
  "html_url": "https://github.com/Sanjaykumar030",
  "followers_url": "https://api.github.com/users/Sanjaykumar030/followers",
  "following_url": "https://api.github.com/users/Sanjaykumar030/following{/other_user}",
  "gists_url": "https://api.github.com/users/Sanjaykumar030/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/Sanjaykumar030/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/Sanjaykumar030/subscriptions",
  "organizations_url": "https://api.github.com/users/Sanjaykumar030/orgs",
  "repos_url": "https://api.github.com/users/Sanjaykumar030/repos",
  "events_url": "https://api.github.com/users/Sanjaykumar030/events{/privacy}",
  "received_events_url": "https://api.github.com/users/Sanjaykumar030/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	[] | 
	open | false | null | 
	[] | null | 
	[
  "I've submitted PR #7704 which adds documentation to clarify the behavior of `map()` when returning `None`."
] | 2025-07-26T13:35:11 | 2025-07-27T09:44:35 | null | 
	NONE | null | null | null | null | 
	## Description
The current documentation example for `datasets.Dataset.map()` demonstrates batched processing but uses a `tokenizer` object without defining or importing it. This causes an error every time it's copied.
Here is the problematic line:
```python
# process a batch of examples
>>> ds = ds.map(lambda example: tokenizer(example["text"]), batched=True)
```
This assumes the user has already set up a tokenizer, which contradicts the goal of having self-contained, copy-paste-friendly examples. 
## Problem
Users who copy and run the example as-is will encounter:
```python
NameError: name 'tokenizer' is not defined
```
This breaks the flow for users and violates HuggingFace's documentation principle that examples should "work as expected" when copied directly. 
## Proposal
Update the example to include the required tokenizer setup using the Transformers library, like so:
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
ds_tokenized = ds.map(lambda example: tokenizer(example["text"]), batched=True)
```
This will help new users understand the workflow and apply the method correctly.
## Note
This PR complements ongoing improvements like #7700, which clarifies multiprocessing in .map(). My change focuses on undefined tokenizer — causes NameError
 | null | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7703/reactions",
  "total_count": 0,
  "+1": 0,
  "-1": 0,
  "laugh": 0,
  "hooray": 0,
  "confused": 0,
  "heart": 0,
  "rocket": 0,
  "eyes": 0
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7703/timeline | null | null | 
	{
  "total": 0,
  "completed": 0,
  "percent_completed": 0
} | false | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7702 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7702/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7702/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7702/events | 
	https://github.com/huggingface/datasets/pull/7702 | 3,265,328,549 | 
	PR_kwDODunzps6gvdYC | 7,702 | 
	num_proc=0 behave like None, num_proc=1 uses one worker (not main process) and clarify num_proc documentation | 
	{
  "login": "tanuj-rai",
  "id": 84439872,
  "node_id": "MDQ6VXNlcjg0NDM5ODcy",
  "avatar_url": "https://avatars.githubusercontent.com/u/84439872?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/tanuj-rai",
  "html_url": "https://github.com/tanuj-rai",
  "followers_url": "https://api.github.com/users/tanuj-rai/followers",
  "following_url": "https://api.github.com/users/tanuj-rai/following{/other_user}",
  "gists_url": "https://api.github.com/users/tanuj-rai/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/tanuj-rai/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/tanuj-rai/subscriptions",
  "organizations_url": "https://api.github.com/users/tanuj-rai/orgs",
  "repos_url": "https://api.github.com/users/tanuj-rai/repos",
  "events_url": "https://api.github.com/users/tanuj-rai/events{/privacy}",
  "received_events_url": "https://api.github.com/users/tanuj-rai/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	[] | 
	closed | false | null | 
	[] | null | 
	[
  "I think we can support num_proc=0 and make it equivalent to `None` to make it simpler",
  "The docs for this PR live [here](/static-proxy?url=https%3A%2F%2Fmoon-ci-docs.huggingface.co%2Fdocs%2Fdatasets%2Fpr_7702). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
  "> I think we can support num_proc=0 and make it equivalent to `None` to make it simpler\r\n\r\nThank you @lhoestq for reviewing it. Please let me know if anything needs to be updated further."
] | 2025-07-26T08:19:39 | 2025-07-31T14:52:33 | 2025-07-31T14:52:33 | 
	CONTRIBUTOR | null | null | false | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/pulls/7702",
  "html_url": "https://github.com/huggingface/datasets/pull/7702",
  "diff_url": "https://github.com/huggingface/datasets/pull/7702.diff",
  "patch_url": "https://github.com/huggingface/datasets/pull/7702.patch",
  "merged_at": "2025-07-31T14:52:33"
} | 
	Fixes issue #7700
This PR makes num_proc=0 behave like None in Dataset.map(), disabling multiprocessing.
It improves UX by aligning with DataLoader(num_workers=0) behavior.
The num_proc docstring is also updated to clearly explain valid values and behavior.
@SunMarc
 | 
	{
  "login": "lhoestq",
  "id": 42851186,
  "node_id": "MDQ6VXNlcjQyODUxMTg2",
  "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/lhoestq",
  "html_url": "https://github.com/lhoestq",
  "followers_url": "https://api.github.com/users/lhoestq/followers",
  "following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
  "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
  "organizations_url": "https://api.github.com/users/lhoestq/orgs",
  "repos_url": "https://api.github.com/users/lhoestq/repos",
  "events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
  "received_events_url": "https://api.github.com/users/lhoestq/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7702/reactions",
  "total_count": 0,
  "+1": 0,
  "-1": 0,
  "laugh": 0,
  "hooray": 0,
  "confused": 0,
  "heart": 0,
  "rocket": 0,
  "eyes": 0
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7702/timeline | null | null | null | true | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7701 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7701/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7701/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7701/events | 
	https://github.com/huggingface/datasets/pull/7701 | 3,265,236,296 | 
	PR_kwDODunzps6gvJ83 | 7,701 | 
	Update fsspec max version to current release 2025.7.0 | 
	{
  "login": "rootAvish",
  "id": 5445560,
  "node_id": "MDQ6VXNlcjU0NDU1NjA=",
  "avatar_url": "https://avatars.githubusercontent.com/u/5445560?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/rootAvish",
  "html_url": "https://github.com/rootAvish",
  "followers_url": "https://api.github.com/users/rootAvish/followers",
  "following_url": "https://api.github.com/users/rootAvish/following{/other_user}",
  "gists_url": "https://api.github.com/users/rootAvish/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/rootAvish/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/rootAvish/subscriptions",
  "organizations_url": "https://api.github.com/users/rootAvish/orgs",
  "repos_url": "https://api.github.com/users/rootAvish/repos",
  "events_url": "https://api.github.com/users/rootAvish/events{/privacy}",
  "received_events_url": "https://api.github.com/users/rootAvish/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	[] | 
	closed | false | null | 
	[] | null | 
	[
  "@lhoestq I ran the test suite locally and while some tests were failing those failures are present on the main branch too. Could you please review and trigger the CI?",
  "The docs for this PR live [here](/static-proxy?url=https%3A%2F%2Fmoon-ci-docs.huggingface.co%2Fdocs%2Fdatasets%2Fpr_7701). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
  "Which release will this be available in ? I'm running into this issue with `datasets=3.6.0`"
] | 2025-07-26T06:47:59 | 2025-08-13T17:32:07 | 2025-07-28T11:58:11 | 
	CONTRIBUTOR | null | null | false | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/pulls/7701",
  "html_url": "https://github.com/huggingface/datasets/pull/7701",
  "diff_url": "https://github.com/huggingface/datasets/pull/7701.diff",
  "patch_url": "https://github.com/huggingface/datasets/pull/7701.patch",
  "merged_at": "2025-07-28T11:58:11"
} | 
	Diffusers currently asks for a max fsspec version of `2025.3.0`. This change updates it to the current latest version. This change is mainly required to resolve conflicts with other packages in an environment. In my particular case, `aider-chat` which is a part of my environment installs `2025.5.1` which is incompatible with `datasets`. | 
	{
  "login": "lhoestq",
  "id": 42851186,
  "node_id": "MDQ6VXNlcjQyODUxMTg2",
  "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/lhoestq",
  "html_url": "https://github.com/lhoestq",
  "followers_url": "https://api.github.com/users/lhoestq/followers",
  "following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
  "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
  "organizations_url": "https://api.github.com/users/lhoestq/orgs",
  "repos_url": "https://api.github.com/users/lhoestq/repos",
  "events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
  "received_events_url": "https://api.github.com/users/lhoestq/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7701/reactions",
  "total_count": 0,
  "+1": 0,
  "-1": 0,
  "laugh": 0,
  "hooray": 0,
  "confused": 0,
  "heart": 0,
  "rocket": 0,
  "eyes": 0
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7701/timeline | null | null | null | true | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7700 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7700/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7700/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7700/events | 
	https://github.com/huggingface/datasets/issues/7700 | 3,263,922,255 | 
	I_kwDODunzps7Ci4BP | 7,700 | 
	[doc] map.num_proc needs clarification | 
	{
  "login": "sfc-gh-sbekman",
  "id": 196988264,
  "node_id": "U_kgDOC73NaA",
  "avatar_url": "https://avatars.githubusercontent.com/u/196988264?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/sfc-gh-sbekman",
  "html_url": "https://github.com/sfc-gh-sbekman",
  "followers_url": "https://api.github.com/users/sfc-gh-sbekman/followers",
  "following_url": "https://api.github.com/users/sfc-gh-sbekman/following{/other_user}",
  "gists_url": "https://api.github.com/users/sfc-gh-sbekman/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/sfc-gh-sbekman/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/sfc-gh-sbekman/subscriptions",
  "organizations_url": "https://api.github.com/users/sfc-gh-sbekman/orgs",
  "repos_url": "https://api.github.com/users/sfc-gh-sbekman/repos",
  "events_url": "https://api.github.com/users/sfc-gh-sbekman/events{/privacy}",
  "received_events_url": "https://api.github.com/users/sfc-gh-sbekman/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	[] | 
	open | false | null | 
	[] | null | 
	[] | 2025-07-25T17:35:09 | 2025-07-25T17:39:36 | null | 
	NONE | null | null | null | null | 
	https://huggingface.co/docs/datasets/v4.0.0/en/package_reference/main_classes#datasets.Dataset.map.num_proc
```
num_proc (int, optional, defaults to None) — Max number of processes when generating cache. Already cached
shards are loaded sequentially.
```
for batch:
```
num_proc (int, optional, defaults to None): The number of processes to use for multiprocessing. If None, no 
multiprocessing is used. This can significantly speed up batching for large datasets.
```
So what happens to `map.num_proc` - is it the same behavior as `batch.num_proc` - so only if `num_proc=None` then no multiprocessing is used?
Let's update the doc to be unambiguous.
**bonus**: we could make all of these behave similarly to `DataLoader.num_workers` - where `num_workers==0` implies no multiprocessing. I think that's the most intuitive, IMHO. 0 workers - the main process has to do all the work. `None` could be the same as `0`.
context: debugging a failing `map`
Thank you! | null | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7700/reactions",
  "total_count": 1,
  "+1": 1,
  "-1": 0,
  "laugh": 0,
  "hooray": 0,
  "confused": 0,
  "heart": 0,
  "rocket": 0,
  "eyes": 0
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7700/timeline | null | null | 
	{
  "total": 0,
  "completed": 0,
  "percent_completed": 0
} | false | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7699 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7699/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7699/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7699/events | 
	https://github.com/huggingface/datasets/issues/7699 | 3,261,053,171 | 
	I_kwDODunzps7CX7jz | 7,699 | 
	Broken link in documentation for "Create a video dataset" | 
	{
  "login": "cleong110",
  "id": 122366389,
  "node_id": "U_kgDOB0sptQ",
  "avatar_url": "https://avatars.githubusercontent.com/u/122366389?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/cleong110",
  "html_url": "https://github.com/cleong110",
  "followers_url": "https://api.github.com/users/cleong110/followers",
  "following_url": "https://api.github.com/users/cleong110/following{/other_user}",
  "gists_url": "https://api.github.com/users/cleong110/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/cleong110/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/cleong110/subscriptions",
  "organizations_url": "https://api.github.com/users/cleong110/orgs",
  "repos_url": "https://api.github.com/users/cleong110/repos",
  "events_url": "https://api.github.com/users/cleong110/events{/privacy}",
  "received_events_url": "https://api.github.com/users/cleong110/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	[] | 
	open | false | null | 
	[] | null | 
	[
  "The URL is ok but it seems the webdataset website is down. There seems to be a related issue here: https://github.com/webdataset/webdataset/issues/155\n\nFeel free to ask the authors there for an update. Otherwise happy to witch the link to the mirror shared in that issue"
] | 2025-07-24T19:46:28 | 2025-07-25T15:27:47 | null | 
	NONE | null | null | null | null | 
	The link to "the [WebDataset documentation](https://webdataset.github.io/webdataset)." is broken. 
https://huggingface.co/docs/datasets/main/en/video_dataset#webdataset 
<img width="2048" height="264" alt="Image" src="https://github.com/user-attachments/assets/975dd10c-aad8-42fc-9fbc-de0e2747a326" /> | null | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7699/reactions",
  "total_count": 0,
  "+1": 0,
  "-1": 0,
  "laugh": 0,
  "hooray": 0,
  "confused": 0,
  "heart": 0,
  "rocket": 0,
  "eyes": 0
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7699/timeline | null | null | 
	{
  "total": 0,
  "completed": 0,
  "percent_completed": 0
} | false | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7698 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7698/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7698/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7698/events | 
	https://github.com/huggingface/datasets/issues/7698 | 3,255,350,916 | 
	I_kwDODunzps7CCLaE | 7,698 | 
	NotImplementedError when using streaming=True in Google Colab environment | 
	{
  "login": "Aniket17200",
  "id": 100470741,
  "node_id": "U_kgDOBf0P1Q",
  "avatar_url": "https://avatars.githubusercontent.com/u/100470741?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/Aniket17200",
  "html_url": "https://github.com/Aniket17200",
  "followers_url": "https://api.github.com/users/Aniket17200/followers",
  "following_url": "https://api.github.com/users/Aniket17200/following{/other_user}",
  "gists_url": "https://api.github.com/users/Aniket17200/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/Aniket17200/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/Aniket17200/subscriptions",
  "organizations_url": "https://api.github.com/users/Aniket17200/orgs",
  "repos_url": "https://api.github.com/users/Aniket17200/repos",
  "events_url": "https://api.github.com/users/Aniket17200/events{/privacy}",
  "received_events_url": "https://api.github.com/users/Aniket17200/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	[] | 
	open | false | null | 
	[] | null | 
	[
  "Hi, @Aniket17200, try upgrading datasets using '!pip install -U datasets'. I hope this will resolve your issue.",
  "Thank you @tanuj-rai, it's working great "
] | 2025-07-23T08:04:53 | 2025-07-23T15:06:23 | null | 
	NONE | null | null | null | null | 
	### Describe the bug
When attempting to load a large dataset (like tiiuae/falcon-refinedweb or allenai/c4) using streaming=True in a standard Google Colab notebook, the process fails with a NotImplementedError: Loading a streaming dataset cached in a LocalFileSystem is not supported yet. This issue persists even after upgrading datasets and huggingface_hub and restarting the session.
### Steps to reproduce the bug
Open a new Google Colab notebook.
(Optional but recommended) Run !pip install --upgrade datasets huggingface_hub and restart the runtime.
Run the following code:
Python
from datasets import load_dataset
try:
    print("Attempting to load a stream...")
    streaming_dataset = load_dataset('tiiuae/falcon-refinedweb', streaming=True)
    print("Success!")
except Exception as e:
    print(e)
### Expected behavior
The load_dataset command should return a StreamingDataset object without raising an error, allowing iteration over the dataset.
Actual Behavior
The code fails and prints the following error traceback:
[PASTE THE FULL ERROR TRACEBACK HERE]
(Note: Copy the entire error message you received, from Traceback... to the final error line, and paste it in this section.)
### Environment info
Platform: Google Colab
datasets version: [Run !pip show datasets in Colab and paste the version here]
huggingface_hub version: [Run !pip show huggingface_hub and paste the version here]
Python version: [Run !python --version and paste the version here] | null | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7698/reactions",
  "total_count": 0,
  "+1": 0,
  "-1": 0,
  "laugh": 0,
  "hooray": 0,
  "confused": 0,
  "heart": 0,
  "rocket": 0,
  "eyes": 0
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7698/timeline | null | null | 
	{
  "total": 0,
  "completed": 0,
  "percent_completed": 0
} | false | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7697 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7697/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7697/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7697/events | 
	https://github.com/huggingface/datasets/issues/7697 | 3,254,526,399 | 
	I_kwDODunzps7B_CG_ | 7,697 | 
	- | 
	{
  "login": "kakamond",
  "id": 44517413,
  "node_id": "MDQ6VXNlcjQ0NTE3NDEz",
  "avatar_url": "https://avatars.githubusercontent.com/u/44517413?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/kakamond",
  "html_url": "https://github.com/kakamond",
  "followers_url": "https://api.github.com/users/kakamond/followers",
  "following_url": "https://api.github.com/users/kakamond/following{/other_user}",
  "gists_url": "https://api.github.com/users/kakamond/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/kakamond/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/kakamond/subscriptions",
  "organizations_url": "https://api.github.com/users/kakamond/orgs",
  "repos_url": "https://api.github.com/users/kakamond/repos",
  "events_url": "https://api.github.com/users/kakamond/events{/privacy}",
  "received_events_url": "https://api.github.com/users/kakamond/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	[] | 
	closed | false | null | 
	[] | null | 
	[] | 2025-07-23T01:30:32 | 2025-07-25T15:21:39 | 2025-07-25T15:21:39 | 
	NONE | null | null | null | null | 
	- | 
	{
  "login": "lhoestq",
  "id": 42851186,
  "node_id": "MDQ6VXNlcjQyODUxMTg2",
  "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/lhoestq",
  "html_url": "https://github.com/lhoestq",
  "followers_url": "https://api.github.com/users/lhoestq/followers",
  "following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
  "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
  "organizations_url": "https://api.github.com/users/lhoestq/orgs",
  "repos_url": "https://api.github.com/users/lhoestq/repos",
  "events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
  "received_events_url": "https://api.github.com/users/lhoestq/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7697/reactions",
  "total_count": 0,
  "+1": 0,
  "-1": 0,
  "laugh": 0,
  "hooray": 0,
  "confused": 0,
  "heart": 0,
  "rocket": 0,
  "eyes": 0
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7697/timeline | null | 
	completed | 
	{
  "total": 0,
  "completed": 0,
  "percent_completed": 0
} | false | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7696 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7696/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7696/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7696/events | 
	https://github.com/huggingface/datasets/issues/7696 | 3,253,433,350 | 
	I_kwDODunzps7B63QG | 7,696 | 
	load_dataset() in 4.0.0 returns different audio samples compared to earlier versions breaking reproducibility | 
	{
  "login": "Manalelaidouni",
  "id": 25346345,
  "node_id": "MDQ6VXNlcjI1MzQ2MzQ1",
  "avatar_url": "https://avatars.githubusercontent.com/u/25346345?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/Manalelaidouni",
  "html_url": "https://github.com/Manalelaidouni",
  "followers_url": "https://api.github.com/users/Manalelaidouni/followers",
  "following_url": "https://api.github.com/users/Manalelaidouni/following{/other_user}",
  "gists_url": "https://api.github.com/users/Manalelaidouni/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/Manalelaidouni/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/Manalelaidouni/subscriptions",
  "organizations_url": "https://api.github.com/users/Manalelaidouni/orgs",
  "repos_url": "https://api.github.com/users/Manalelaidouni/repos",
  "events_url": "https://api.github.com/users/Manalelaidouni/events{/privacy}",
  "received_events_url": "https://api.github.com/users/Manalelaidouni/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	[] | 
	closed | false | null | 
	[] | null | 
	[
  "Hi ! This is because `datasets` now uses the FFmpeg-based library `torchcodec` instead of the libsndfile-based library `soundfile` to decode audio data. Those two have different decoding implementations",
  "I’m all for torchcodec, good luck with the migration!"
] | 2025-07-22T17:02:17 | 2025-07-30T14:22:21 | 2025-07-30T14:22:21 | 
	NONE | null | null | null | null | 
	### Describe the bug
In datasets 4.0.0 release, `load_dataset()` returns different audio samples compared to earlier versions, this breaks integration tests that depend on consistent sample data across different environments (first and second envs specified below).
### Steps to reproduce the bug
```python
from datasets import Audio, load_dataset
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
ds = ds.cast_column("audio", Audio(24000))
sample= ds[0]["audio"]["array"]
print(sample)
# sample in 3.6.0 
[0.00231914 0.00245417 0.00187414 ... 0.00061956 0.00101157 0.00076325]
# sample in 4.0.0 
array([0.00238037, 0.00220794, 0.00198703, ..., 0.00057983, 0.00085863,
       0.00115309], dtype=float32)
```
### Expected behavior
The same dataset should load identical samples across versions to maintain reproducibility.
### Environment info
First env:
- datasets version: 3.6.0
- Platform: Windows-10-10.0.26100-SP0
- Python: 3.11.0
Second env:
- datasets  version: 4.0.0
- Platform: Linux-6.1.123+-x86_64-with-glibc2.35
- Python: 3.11.13 | 
	{
  "login": "Manalelaidouni",
  "id": 25346345,
  "node_id": "MDQ6VXNlcjI1MzQ2MzQ1",
  "avatar_url": "https://avatars.githubusercontent.com/u/25346345?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/Manalelaidouni",
  "html_url": "https://github.com/Manalelaidouni",
  "followers_url": "https://api.github.com/users/Manalelaidouni/followers",
  "following_url": "https://api.github.com/users/Manalelaidouni/following{/other_user}",
  "gists_url": "https://api.github.com/users/Manalelaidouni/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/Manalelaidouni/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/Manalelaidouni/subscriptions",
  "organizations_url": "https://api.github.com/users/Manalelaidouni/orgs",
  "repos_url": "https://api.github.com/users/Manalelaidouni/repos",
  "events_url": "https://api.github.com/users/Manalelaidouni/events{/privacy}",
  "received_events_url": "https://api.github.com/users/Manalelaidouni/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7696/reactions",
  "total_count": 0,
  "+1": 0,
  "-1": 0,
  "laugh": 0,
  "hooray": 0,
  "confused": 0,
  "heart": 0,
  "rocket": 0,
  "eyes": 0
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7696/timeline | null | 
	completed | 
	{
  "total": 0,
  "completed": 0,
  "percent_completed": 0
} | false | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7695 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7695/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7695/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7695/events | 
	https://github.com/huggingface/datasets/pull/7695 | 3,251,904,843 | 
	PR_kwDODunzps6gB7jS | 7,695 | 
	Support downloading specific splits in load_dataset | 
	{
  "login": "ArjunJagdale",
  "id": 142811259,
  "node_id": "U_kgDOCIMgew",
  "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/ArjunJagdale",
  "html_url": "https://github.com/ArjunJagdale",
  "followers_url": "https://api.github.com/users/ArjunJagdale/followers",
  "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
  "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
  "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
  "repos_url": "https://api.github.com/users/ArjunJagdale/repos",
  "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
  "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	[] | 
	closed | false | null | 
	[] | null | 
	[
  "I’ve completed the following steps to continue the partial split download support (from PR #6832):\r\n\r\nI did changes on top of what has been done by mario. Here are some of those changes: \r\n- Restored support for writing multiple split shards:\r\n\r\n- In _prepare_split_single, we now correctly replace JJJJJ and SSSSS placeholders in the fpath for job/shard IDs before creating the writer.\r\n\r\n- Added os.makedirs(os.path.dirname(path), exist_ok=True) after placeholder substitution to prevent FileNotFoundError.\r\n\r\n- Applied the fix to both split writers:\r\n\r\n       1] self._generate_examples version (used by most modules).\r\n\r\n       2] self._generate_tables version (used by IterableDatasetBuilder).\r\n\r\n- Confirmed 109/113 tests passing, meaning the general logic is working across the board.\r\n\r\nWhat’s still failing\r\n4 integration tests fail:\r\n\r\n`test_load_hub_dataset_with_single_config_in_metadata`\r\n\r\n`test_load_hub_dataset_with_two_config_in_metadata`\r\n\r\n`test_load_hub_dataset_with_metadata_config_in_parallel`\r\n\r\n`test_reload_old_cache_from_2_15`\r\n\r\nAll are due to FileNotFoundError from uncreated output paths, which I'm currently finalizing by ensuring os.makedirs() is correctly applied before every writer instantiation.\r\n\r\nI will update about these fixes after running tests!",
  "@lhoestq this was just an update",
  "The docs for this PR live [here](/static-proxy?url=https%3A%2F%2Fmoon-ci-docs.huggingface.co%2Fdocs%2Fdatasets%2Fpr_7695). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
  "Local DIR wasn't doing well, dk actually what happened, will PR again! Sorry :)"
] | 2025-07-22T09:33:54 | 2025-07-28T17:33:30 | 2025-07-28T17:15:45 | 
	CONTRIBUTOR | null | null | false | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/pulls/7695",
  "html_url": "https://github.com/huggingface/datasets/pull/7695",
  "diff_url": "https://github.com/huggingface/datasets/pull/7695.diff",
  "patch_url": "https://github.com/huggingface/datasets/pull/7695.patch",
  "merged_at": null
} | 
	This PR builds on #6832 by @mariosasko.
May close - #4101, #2538
Discussion - https://github.com/huggingface/datasets/pull/7648#issuecomment-3084050130
---
### Note - This PR is under work and frequent changes will be pushed. | 
	{
  "login": "ArjunJagdale",
  "id": 142811259,
  "node_id": "U_kgDOCIMgew",
  "avatar_url": "https://avatars.githubusercontent.com/u/142811259?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/ArjunJagdale",
  "html_url": "https://github.com/ArjunJagdale",
  "followers_url": "https://api.github.com/users/ArjunJagdale/followers",
  "following_url": "https://api.github.com/users/ArjunJagdale/following{/other_user}",
  "gists_url": "https://api.github.com/users/ArjunJagdale/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/ArjunJagdale/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/ArjunJagdale/subscriptions",
  "organizations_url": "https://api.github.com/users/ArjunJagdale/orgs",
  "repos_url": "https://api.github.com/users/ArjunJagdale/repos",
  "events_url": "https://api.github.com/users/ArjunJagdale/events{/privacy}",
  "received_events_url": "https://api.github.com/users/ArjunJagdale/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7695/reactions",
  "total_count": 1,
  "+1": 0,
  "-1": 0,
  "laugh": 0,
  "hooray": 0,
  "confused": 0,
  "heart": 1,
  "rocket": 0,
  "eyes": 0
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7695/timeline | null | null | null | true | 
| 
	https://api.github.com/repos/huggingface/datasets/issues/7694 | 
	https://api.github.com/repos/huggingface/datasets | 
	https://api.github.com/repos/huggingface/datasets/issues/7694/labels{/name} | 
	https://api.github.com/repos/huggingface/datasets/issues/7694/comments | 
	https://api.github.com/repos/huggingface/datasets/issues/7694/events | 
	https://github.com/huggingface/datasets/issues/7694 | 3,247,600,408 | 
	I_kwDODunzps7BknMY | 7,694 | 
	Dataset.to_json consumes excessive memory, appears to not be a streaming operation | 
	{
  "login": "ycq0125",
  "id": 49603999,
  "node_id": "MDQ6VXNlcjQ5NjAzOTk5",
  "avatar_url": "https://avatars.githubusercontent.com/u/49603999?v=4",
  "gravatar_id": "",
  "url": "https://api.github.com/users/ycq0125",
  "html_url": "https://github.com/ycq0125",
  "followers_url": "https://api.github.com/users/ycq0125/followers",
  "following_url": "https://api.github.com/users/ycq0125/following{/other_user}",
  "gists_url": "https://api.github.com/users/ycq0125/gists{/gist_id}",
  "starred_url": "https://api.github.com/users/ycq0125/starred{/owner}{/repo}",
  "subscriptions_url": "https://api.github.com/users/ycq0125/subscriptions",
  "organizations_url": "https://api.github.com/users/ycq0125/orgs",
  "repos_url": "https://api.github.com/users/ycq0125/repos",
  "events_url": "https://api.github.com/users/ycq0125/events{/privacy}",
  "received_events_url": "https://api.github.com/users/ycq0125/received_events",
  "type": "User",
  "user_view_type": "public",
  "site_admin": false
} | 
	[] | 
	open | false | null | 
	[] | null | 
	[
  "Hi ! to_json is memory efficient and writes the data by batch:\n\nhttps://github.com/huggingface/datasets/blob/d9861d86be222884dabbd534a2db770c70c9b558/src/datasets/io/json.py#L153-L159\n\nWhat memory are you mesuring ? If you are mesuring RSS, it is likely that it counts the memory mapped data of the dataset. Memory mapped data are loaded as physical memory when accessed and are automatically discarded when your OS needs more memory, and therefore doesn't OOM."
] | 2025-07-21T07:51:25 | 2025-07-25T14:42:21 | null | 
	NONE | null | null | null | null | 
	### Describe the bug
When exporting a Dataset object to a JSON Lines file using the .to_json(lines=True) method, the process consumes a very large amount of memory. The memory usage is proportional to the size of the entire Dataset object being saved, rather than being a low, constant memory operation.
This behavior is unexpected, as the JSONL format is line-oriented and ideally suited for streaming writes. This issue can easily lead to Out-of-Memory (OOM) errors when exporting large datasets, especially in memory-constrained environments like Docker containers.
<img width="1343" height="329" alt="Image" src="https://github.com/user-attachments/assets/518b4263-ad12-422d-9672-28ffe97240ce" />
### Steps to reproduce the bug
```
import os
from datasets import load_dataset, Dataset
from loguru import logger
# A public dataset to test with
REPO_ID = "adam89/TinyStoriesChinese"
SUBSET = "default"
SPLIT = "train"
NUM_ROWS_TO_LOAD = 10  # Use a reasonably large number to see the memory spike
def run_test():
    """Loads data into memory and then saves it, triggering the memory issue."""
    logger.info("Step 1: Loading data into an in-memory Dataset object...")
    # Create an in-memory Dataset object from a stream
    # This simulates having a processed dataset ready to be saved
    iterable_dataset = load_dataset(REPO_ID, name=SUBSET, split=SPLIT, streaming=True)
    limited_stream = iterable_dataset.take(NUM_ROWS_TO_LOAD)
    in_memory_dataset = Dataset.from_generator(limited_stream.__iter__)
    logger.info(f"Dataset with {len(in_memory_dataset)} rows created in memory.")
    output_path = "./test_output.jsonl"
    logger.info(f"Step 2: Saving the dataset to {output_path} using .to_json()...")
    logger.info("Please monitor memory usage during this step.")
    # This is the step that causes the massive memory allocation
    in_memory_dataset.to_json(output_path, force_ascii=False)
    logger.info("Save operation complete.")
    os.remove(output_path)
if __name__ == "__main__":
    # To see the memory usage clearly, run this script with a memory profiler:
    # python -m memray run your_script_name.py
    # python -m memray tree xxx.bin
    run_test()
```
### Expected behavior
I would expect the .to_json(lines=True) method to be a memory-efficient, streaming operation. The memory usage should remain low and relatively constant, as data is converted and written to the file line-by-line or in small batches. The memory footprint should not be proportional to the total number of rows in the in_memory_dataset.
### Environment info
datasets version:3.6.0
Python version:3.9.18
os:macOS 15.3.1 (arm64) | null | 
	{
  "url": "https://api.github.com/repos/huggingface/datasets/issues/7694/reactions",
  "total_count": 0,
  "+1": 0,
  "-1": 0,
  "laugh": 0,
  "hooray": 0,
  "confused": 0,
  "heart": 0,
  "rocket": 0,
  "eyes": 0
} | 
	https://api.github.com/repos/huggingface/datasets/issues/7694/timeline | null | null | 
	{
  "total": 0,
  "completed": 0,
  "percent_completed": 0
} | false | 
Dataset Card for GitHub Issues
Dataset Summary
GitHub Issues is a dataset consisting of GitHub issues and pull requests associated with the 🤗 Datasets repository. It is intended for educational purposes and can be used for semantic search or multilabel text classification. The contents of each GitHub issue are in English and concern the domain of datasets for NLP, computer vision, and beyond.
Supported Tasks and Leaderboards
For each of the tasks tagged for this dataset, give a brief description of the tag, metrics, and suggested models (with a link to their HuggingFace implementation if available). Give a similar description of tasks that were not covered by the structured tag set (repace the task-category-tag with an appropriate other:other-task-name).
- task-category-tag: The dataset can be used to train a model for [TASK NAME], which consists in [TASK DESCRIPTION]. Success on this task is typically measured by achieving a high/low metric name. The (model name or model class) model currently achieves the following score. [IF A LEADERBOARD IS AVAILABLE]: This task has an active leaderboard which can be found at leaderboard url and ranks models based on metric name while also reporting other metric name.
Languages
Provide a brief overview of the languages represented in the dataset. Describe relevant details about specifics of the language such as whether it is social media text, African American English,...
When relevant, please provide BCP-47 codes, which consist of a primary language subtag, with a script subtag and/or region subtag if available.
Dataset Structure
Data Instances
Provide an JSON-formatted example and brief description of a typical instance in the dataset. If available, provide a link to further examples.
{
  'example_field': ...,
  ...
}
Provide any additional information that is not covered in the other sections about the data here. In particular describe any relationships between data points and if these relationships are made explicit.
Data Fields
List and describe the fields present in the dataset. Mention their data type, and whether they are used as input or output in any of the tasks the dataset currently supports. If the data has span indices, describe their attributes, such as whether they are at the character level or word level, whether they are contiguous or not, etc. If the datasets contains example IDs, state whether they have an inherent meaning, such as a mapping to other datasets or pointing to relationships between data points.
- example_field: description of- example_field
Note that the descriptions can be initialized with the Show Markdown Data Fields output of the tagging app, you will then only need to refine the generated descriptions.
Data Splits
Describe and name the splits in the dataset if there are more than one.
Describe any criteria for splitting the data, if used. If their are differences between the splits (e.g. if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here.
Provide the sizes of each split. As appropriate, provide any descriptive statistics for the features, such as average length. For example:
| Tain | Valid | Test | |
|---|---|---|---|
| Input Sentences | |||
| Average Sentence Length | 
Dataset Creation
Curation Rationale
What need motivated the creation of this dataset? What are some of the reasons underlying the major choices involved in putting it together?
Source Data
This section describes the source data (e.g. news text and headlines, social media posts, translated sentences,...)
Initial Data Collection and Normalization
Describe the data collection process. Describe any criteria for data selection or filtering. List any key words or search terms used. If possible, include runtime information for the collection process.
If data was collected from other pre-existing datasets, link to source here and to their Hugging Face version.
If the data was modified or normalized after being collected (e.g. if the data is word-tokenized), describe the process and the tools used.
Who are the source language producers?
State whether the data was produced by humans or machine generated. Describe the people or systems who originally created the data.
If available, include self-reported demographic or identity information for the source data creators, but avoid inferring this information. Instead state that this information is unknown. See Larson 2017 for using identity categories as a variables, particularly gender.
Describe the conditions under which the data was created (for example, if the producers were crowdworkers, state what platform was used, or if the data was found, what website the data was found on). If compensation was provided, include that information here.
Describe other people represented or mentioned in the data. Where possible, link to references for the information.
Annotations
If the dataset contains annotations which are not part of the initial data collection, describe them in the following paragraphs.
Annotation process
If applicable, describe the annotation process and any tools used, or state otherwise. Describe the amount of data annotated, if not all. Describe or reference annotation guidelines provided to the annotators. If available, provide interannotator statistics. Describe any annotation validation processes.
Who are the annotators?
If annotations were collected for the source data (such as class labels or syntactic parses), state whether the annotations were produced by humans or machine generated.
Describe the people or systems who originally created the annotations and their selection criteria if applicable.
If available, include self-reported demographic or identity information for the annotators, but avoid inferring this information. Instead state that this information is unknown. See Larson 2017 for using identity categories as a variables, particularly gender.
Describe the conditions under which the data was annotated (for example, if the annotators were crowdworkers, state what platform was used, or if the data was found, what website the data was found on). If compensation was provided, include that information here.
Personal and Sensitive Information
State whether the dataset uses identity categories and, if so, how the information is used. Describe where this information comes from (i.e. self-reporting, collecting from profiles, inferring, etc.). See Larson 2017 for using identity categories as a variables, particularly gender. State whether the data is linked to individuals and whether those individuals can be identified in the dataset, either directly or indirectly (i.e., in combination with other data).
State whether the dataset contains other data that might be considered sensitive (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history).
If efforts were made to anonymize the data, describe the anonymization process.
Considerations for Using the Data
Social Impact of Dataset
Please discuss some of the ways you believe the use of this dataset will impact society.
The statement should include both positive outlooks, such as outlining how technologies developed through its use may improve people's lives, and discuss the accompanying risks. These risks may range from making important decisions more opaque to people who are affected by the technology, to reinforcing existing harmful biases (whose specifics should be discussed in the next section), among other considerations.
Also describe in this section if the proposed dataset contains a low-resource or under-represented language. If this is the case or if this task has any impact on underserved communities, please elaborate here.
Discussion of Biases
Provide descriptions of specific biases that are likely to be reflected in the data, and state whether any steps were taken to reduce their impact.
For Wikipedia text, see for example Dinan et al 2020 on biases in Wikipedia (esp. Table 1), or Blodgett et al 2020 for a more general discussion of the topic.
If analyses have been run quantifying these biases, please add brief summaries and links to the studies here.
Other Known Limitations
If studies of the datasets have outlined other limitations of the dataset, such as annotation artifacts, please outline and cite them here.
Additional Information
Dataset Curators
List the people involved in collecting the dataset and their affiliation(s). If funding information is known, include it here.
Licensing Information
Provide the license and link to the license webpage if available.
Citation Information
Provide the BibTex-formatted reference for the dataset. For example:
@article{article_id,
  author    = {Author List},
  title     = {Dataset Paper Title},
  journal   = {Publication Venue},
  year      = {2525}
}
If the dataset has a DOI, please provide it here.
Contributions
Thanks to @nathbns for adding this dataset.
- Downloads last month
- 5
