I’ve only tried buckets once, and parts of the specification may still change, so I’m not entirely sure how everything works. In any case, especially when using Python, it’s probably safest to work with buckets via the HF Python API whenever possible.
You probably did not miss a bucket-creation step or a “write mode” checkbox. Current Hugging Face docs say Space volumes can mount models, datasets, or storage buckets, and that only storage buckets support read-write mounts. The docs even show a bucket mounted at /data, and the CLI/docs say buckets are read-write by default unless you explicitly mount them read-only. (Hugging Face)
The first thing to correct is the test command. This:
```
mkdir u
```
does not test whether /data is writable. It tests whether your current working directory is writable. If you were in /, then it was trying to create /u, not /data/u. On a Space, creating arbitrary directories directly under / is normally not allowed for the app user. So your current evidence proves “my current directory is not writable,” but it does not yet prove “the mounted bucket at /data is not writable.”
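To see concretely where a relative mkdir lands, here is a small Python sketch; it uses a throwaway temp directory in place of the Space’s actual working directory:

```python
import os
import tempfile

# A relative path like "u" resolves against the current working directory,
# not against /data. A throwaway temp dir stands in for the Space's cwd here.
os.chdir(tempfile.mkdtemp())
os.mkdir("u")
print(os.path.abspath("u"))  # shows that "u" landed under the cwd
# os.mkdir("/data/u") is the call that would actually exercise the bucket mount
```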
What to test instead
Run these exact commands:
```
pwd
id
ls -ld /data
touch /data/__write_test__
mkdir /data/__dir_test__
python - <<'PY'
from pathlib import Path
p = Path("/data/__py_write_test__.txt")
p.write_text("ok\n")
print(p.read_text())
PY
```
Interpretation:

- If `touch /data/__write_test__` and `mkdir /data/__dir_test__` succeed, the bucket mount is writable, and the earlier `mkdir u` test was just targeting the wrong place.
- If they fail with `Permission denied`, the bucket is mounted but not writable by your runtime user.
Why nobody:nogroup matters
If /data really looks like this:

```
drwxr-xr-x 2 nobody nogroup ... /data
```
then only the owner has write permission. Group and others only have read/execute. So if your Gradio Space process runs as some uid other than nobody, Linux will treat /data as effectively read-only for your app. That would explain “can list and read, cannot create files.” This is standard Unix permission logic.
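You can check this logic from Python without guessing. This is a generic standard-library sketch, not an HF API, and the function name is mine:

```python
import os
import stat

def describe_writability(path: str) -> bool:
    """Print ownership/mode for path and return whether this process may write there."""
    st = os.stat(path)
    print(f"{path}: uid={st.st_uid} gid={st.st_gid} mode={oct(stat.S_IMODE(st.st_mode))}")
    print(f"process runs as uid={os.getuid()} gid={os.getgid()}")
    return os.access(path, os.W_OK)

# On the Space you would call describe_writability("/data"): a 0o755 directory
# owned by nobody is writable only if the process uid is actually nobody's uid.
```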
What makes this suspicious is that it conflicts with Hugging Face’s documented contract for bucket mounts. Buckets are supposed to be the writable volume type in Spaces. So if /data is mounted as nobody:nogroup with mode 755 and your process is not that owner, the observed runtime behavior does not match the documented feature behavior. (Hugging Face)
Did you miss something in the setup?
Probably not.
From the public docs, the Space volume interface exposes the concepts type, source, mount_path, and read_only, and the examples for buckets are just Volume(type="bucket", source="...", mount_path="/data"). I do not see documented uid/gid/permission remapping controls for Space volumes in the current interface. So in a Gradio Space, without Dockerfile-level control, there is no documented user-side knob to say “mount this bucket as my runtime uid.” That means if the mount arrives with incompatible ownership, there may be nothing you can fix from the app code itself. That is an inference from the current documented API surface, not an explicit Hugging Face statement. (Hugging Face)
Is this kind of permission problem seen elsewhere?
Yes. Not the exact same mount-ownership case every time, but the symptom family is real.
A public TGI-on-Spaces issue shows PermissionError: [Errno 13] Permission denied: '/data' on Hugging Face Spaces, with the reporter saying the container keeps trying to write to /data and fails. That issue is not specifically about buckets, but it shows that /data permission failures on Spaces are a real public class of problem. (GitHub)
There are also older Hugging Face forum posts about Spaces failing to create directories because the runtime user lacked permission for the target path, especially in Docker Spaces or paths under /. Those cases are adjacent rather than identical, but they point in the same direction: write success depends on the runtime user matching the actual mounted path permissions. (Hugging Face Forums)
Why this may be happening now
The Spaces volume-mount feature for models, datasets, and buckets is very recent. Hugging Face’s v1.9.0 release notes from April 2, 2026 introduced “Spaces Volumes: Mount Models, Datasets, and Buckets Directly” and stated that this replaces the deprecated persistent-storage feature. That recent rollout makes a mount-permission regression plausible, because there is not yet years of field-hardening or a large backlog of solved public cases for bucket mounts in Spaces. (GitHub)
What to do next
1. Verify the actual mount behavior with absolute paths
Use:

```
touch /data/test.txt
mkdir /data/testdir
```
If these work, the problem was only the earlier shell test.
If these fail, continue below.
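If it is easier to probe from app code than from a shell, the same check can be done in Python. This helper is illustrative (the name `try_write` is mine) and distinguishes a permission failure from other errors:

```python
import errno
from pathlib import Path

def try_write(path: str) -> str:
    """Attempt to create a small file at path and classify the result."""
    try:
        Path(path).write_text("ok\n")
        return "writable"
    except OSError as e:
        if e.errno == errno.EACCES:
            return "permission denied"
        return f"other error: {e}"

# On the Space: print(try_write("/data/__write_test__"))
```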
2. Check what the Space thinks is mounted
If you can run Python with a token that can inspect the Space, check the runtime volume config:
```python
from huggingface_hub import HfApi

api = HfApi(token=...)
runtime = api.get_space_runtime("your-namespace/your-space")
print(runtime.volumes)
```
That confirms whether the Space runtime still sees a bucket mounted at /data. The docs explicitly show get_space_runtime(...).volumes for this purpose. (Hugging Face)
3. Try a mount reset
Because you do not control the mount user in a Gradio Space, the only Space-side fixes you can try are operational:
- detach the bucket
- restart or factory rebuild the Space
- reattach the bucket
- test again with /data/... absolute paths
- if possible, try a different mount path such as /bucket or /output
This is not guaranteed, but it is the only realistic remount/reset you can do from the Space side.
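The restart/factory-rebuild step can be scripted with huggingface_hub’s `restart_space` (a real HfApi method); detaching and reattaching the bucket itself still has to go through the Space’s volume settings:

```python
from huggingface_hub import HfApi

def factory_rebuild(space_id: str) -> None:
    """Trigger a factory rebuild, which rebuilds the Space from scratch
    rather than just restarting the running container."""
    api = HfApi()  # picks up HF_TOKEN from the environment
    api.restart_space(space_id, factory_reboot=True)

# From your machine: factory_rebuild("your-namespace/your-space")
```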
4. If it still fails, treat it as a platform-side bug or support case
At that point the clean report is:
- bucket mounted as read-write
- Space can read/list /data
- /data shows nobody:nogroup ownership
- touch /data/test.txt fails with Permission denied
- docs say bucket mounts are the writable mount type in Spaces
That is a direct mismatch between expected and observed behavior. (Hugging Face)
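To make that support report reproducible, a small standard-library script can gather all the evidence in one paste (the function name and probe filename are mine):

```python
import os
import subprocess

def collect_report(path: str = "/data") -> str:
    """Gather ownership, mode, and an actual write probe for a support report."""
    ls = subprocess.run(["ls", "-ld", path], capture_output=True, text=True)
    lines = [
        f"process uid/gid: {os.getuid()}/{os.getgid()}",
        ls.stdout.strip() or ls.stderr.strip(),
        f"os.access(W_OK): {os.access(path, os.W_OK)}",
    ]
    try:
        probe = os.path.join(path, "__probe__")
        open(probe, "w").close()
        os.remove(probe)
        lines.append("write probe: succeeded")
    except OSError as e:
        lines.append(f"write probe: failed ({e})")
    return "\n".join(lines)

print(collect_report())  # run inside the Space and paste the output
```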
The practical workaround that avoids the mount entirely
If you need to ship now, the safest route is to write to the bucket through the Hugging Face API, not through the mounted filesystem.
This is fully supported. Hugging Face’s bucket docs say you can interact with buckets using the Python API or CLI, and batch_bucket_files() can upload from local paths or directly from raw bytes. The docs also note that HfFileSystem works for bucket paths, but recommend HfApi methods when possible for better performance and reliability. Authentication can be provided with HF_TOKEN, and the quickstart explicitly says HF_TOKEN is especially useful in a Space as a Space secret. (Hugging Face)
Minimal Python workaround
Set HF_TOKEN as a Space secret, then do this:
```python
import os
import json
import tempfile

from huggingface_hub import batch_bucket_files

BUCKET_ID = "your-namespace/your-bucket"

payload = {
    "status": "ok",
    "message": "written from Space",
}

with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(payload, f)
    local_path = f.name

batch_bucket_files(
    BUCKET_ID,
    add=[(local_path, "logs/result.json")],
    token=os.environ["HF_TOKEN"],
)
```
That matches the documented bucket upload API pattern: local file path on the left, destination path inside the bucket on the right. (Hugging Face)
Direct bytes upload
If you do not even want a temp file:
```python
import os

from huggingface_hub import batch_bucket_files

batch_bucket_files(
    "your-namespace/your-bucket",
    add=[(b'{"status":"ok"}', "logs/result.json")],
    token=os.environ["HF_TOKEN"],
)
```
The docs explicitly show raw-bytes uploads too. (Hugging Face)
The bottom line
The most likely outcomes are:

- Best case: your test was pointed at the wrong directory. `mkdir u` was not testing /data; `mkdir /data/u` may work.
- More likely, if your absolute-path test fails: the bucket is attached, but the mounted directory ownership inside the Space runtime does not match the uid that your Gradio app runs under. In that case, you probably did not misconfigure the bucket. The observed permissions simply do not match the documented promise that bucket mounts are writable. (Hugging Face)
- Fastest reliable path forward: use batch_bucket_files() with HF_TOKEN from the Space instead of relying on /data until the mount behavior is fixed. (Hugging Face)