How should I write to a bucket from a Space?

I have a Space and I have a bucket, both created from the web interface. Then I mounted the bucket to the Space (with Read & Write permission, at the default /data path) from the Space settings page.

However, inside the Space, the /data folder shows it belongs to "nobody:nogroup":

ls -l /

drwxr-xr-x.    2 nobody nogroup   0 Apr  9 12:20 data

The Space code can list and read the /data folder, but cannot write to it:

mkdir u

mkdir: cannot create directory 'u': Permission denied

Did I miss something when creating and mounting them? How should I fix it?

It’s a Gradio Space, so I don’t have a Dockerfile to control how the bucket is mounted.


I’ve tried using Buckets once, but there are still parts of the specification that might change, so I’m not entirely sure how it works…

In any case, especially when using Python, it’s probably safest to use Buckets via the HF Python API whenever possible.


You probably did not miss a bucket-creation step or a “write mode” checkbox. Current Hugging Face docs say Space volumes can mount models, datasets, or storage buckets, and that only storage buckets support read-write mounts. The docs even show a bucket mounted at /data, and the CLI/docs say buckets are read-write by default unless you explicitly mount them read-only. (Hugging Face)

The first thing to correct is the test command. This:

mkdir u

does not test whether /data is writable. It tests whether your current working directory is writable. If you were in /, then it was trying to create /u, not /data/u. On a Space, creating arbitrary directories directly under / is normally not allowed for the app user. So your current evidence proves “my current directory is not writable,” but it does not yet prove “the mounted bucket at /data is not writable.”

What to test instead

Run these exact commands:

pwd
id
ls -ld /data
touch /data/__write_test__
mkdir /data/__dir_test__
python - <<'PY'
from pathlib import Path
p = Path("/data/__py_write_test__.txt")
p.write_text("ok\n")
print(p.read_text())
PY

Interpretation:

  • if touch /data/__write_test__ and mkdir /data/__dir_test__ succeed, then the bucket mount is writable and the earlier mkdir u test was just targeting the wrong place.
  • if those fail with Permission denied, then the bucket is mounted but not writable to your runtime user.

Why nobody:nogroup matters

If /data really looks like this:

drwxr-xr-x 2 nobody nogroup ... /data

then only the owner has write permission. Group and others only have read/execute. So if your Gradio Space process runs as some uid other than nobody, Linux will treat /data as effectively read-only for your app. That would explain “can list and read, cannot create files.” This is standard Unix permission logic.
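That permission logic can be sketched in a few lines (a simplified model: the real kernel check first compares your process uid/gid against the file’s owner and group to pick which permission class applies):

```python
import stat

def class_can_write(mode: int, user_class: str) -> bool:
    """Return True if the given permission class (owner/group/other)
    has the write bit set in this mode. Simplified Unix model."""
    bit = {
        "owner": stat.S_IWUSR,
        "group": stat.S_IWGRP,
        "other": stat.S_IWOTH,
    }[user_class]
    return bool(mode & bit)

mode = 0o755  # drwxr-xr-x, as ls shows for /data
print(class_can_write(mode, "owner"))  # True:  nobody could write
print(class_can_write(mode, "group"))  # False
print(class_can_write(mode, "other"))  # False: your app's uid
```

So with mode 755 and owner nobody, any process running as a different uid lands in the "group" or "other" class and loses the write bit.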

What makes this suspicious is that it conflicts with Hugging Face’s documented contract for bucket mounts. Buckets are supposed to be the writable volume type in Spaces. So if /data is mounted as nobody:nogroup with mode 755 and your process is not that owner, the observed runtime behavior does not match the documented feature behavior. (Hugging Face)

Did you miss something in the setup?

Probably not.

From the public docs, the Space volume interface exposes the concepts type, source, mount_path, and read_only, and the examples for buckets are just Volume(type="bucket", source="...", mount_path="/data"). I do not see documented uid/gid/permission remapping controls for Space volumes in the current interface. So in a Gradio Space, without Dockerfile-level control, there is no documented user-side knob to say “mount this bucket as my runtime uid.” That means if the mount arrives with incompatible ownership, there may be nothing you can fix from the app code itself. That is an inference from the current documented API surface, not an explicit Hugging Face statement. (Hugging Face)

Is this kind of permission problem seen elsewhere?

Yes. Not the exact same mount-ownership case every time, but the symptom family is real.

A public TGI-on-Spaces issue shows PermissionError: [Errno 13] Permission denied: '/data' on Hugging Face Spaces, with the reporter saying the container keeps trying to write to /data and fails. That issue is not specifically about buckets, but it shows that /data permission failures on Spaces are a real public class of problem. (GitHub)

There are also older Hugging Face forum posts about Spaces failing to create directories because the runtime user lacked permission for the target path, especially in Docker Spaces or paths under /. Those cases are adjacent rather than identical, but they point in the same direction: write success depends on the runtime user matching the actual mounted path permissions. (Hugging Face Forums)

Why this may be happening now

The Spaces volume-mount feature for models, datasets, and buckets is very recent. Hugging Face’s v1.9.0 release notes from April 2, 2026 introduced “Spaces Volumes: Mount Models, Datasets, and Buckets Directly” and stated that this replaces the deprecated persistent-storage feature. That recent rollout makes a mount-permission regression plausible, because there is not yet years of field-hardening or a large backlog of solved public cases for bucket mounts in Spaces. (GitHub)

What to do next

1. Verify the actual mount behavior with absolute paths

Use:

touch /data/test.txt
mkdir /data/testdir

If these work, the problem was only the earlier shell test.

If these fail, continue below.

2. Check what the Space thinks is mounted

If you can run Python with a token that can inspect the Space, check the runtime volume config:

from huggingface_hub import HfApi

api = HfApi(token=...)
runtime = api.get_space_runtime("your-namespace/your-space")
print(runtime.volumes)

That confirms whether the Space runtime still sees a bucket mounted at /data. The docs explicitly show get_space_runtime(...).volumes for this purpose. (Hugging Face)

3. Try a mount reset

Because you do not control the mount user in a Gradio Space, the only Space-side fixes you can try are operational:

  • detach the bucket
  • restart or factory rebuild the Space
  • reattach the bucket
  • test again with /data/... absolute paths
  • if possible, try a different mount path such as /bucket or /output

This is not guaranteed, but it is the only realistic remount/reset you can do from the Space side.

4. If it still fails, treat it as a platform-side bug or support case

At that point the clean report is:

  • bucket mounted as read-write
  • Space can read/list /data
  • /data shows nobody:nogroup
  • touch /data/test.txt fails with Permission denied
  • docs say bucket mounts are the writable mount type in Spaces

That is a direct mismatch between expected and observed behavior. (Hugging Face)

The practical workaround that avoids the mount entirely

If you need to ship now, the safest route is to write to the bucket through the Hugging Face API, not through the mounted filesystem.

This is fully supported. Hugging Face’s bucket docs say you can interact with buckets using the Python API or CLI, and batch_bucket_files() can upload from local paths or directly from raw bytes. The docs also note that HfFileSystem works for bucket paths, but recommend HfApi methods when possible for better performance and reliability. Authentication can be provided with HF_TOKEN, and the quickstart explicitly says HF_TOKEN is especially useful in a Space as a Space secret. (Hugging Face)

Minimal Python workaround

Set HF_TOKEN as a Space secret, then do this:

import os
import json
import tempfile
from huggingface_hub import batch_bucket_files

BUCKET_ID = "your-namespace/your-bucket"

payload = {
    "status": "ok",
    "message": "written from Space"
}

with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(payload, f)
    local_path = f.name

batch_bucket_files(
    BUCKET_ID,
    add=[(local_path, "logs/result.json")],
    token=os.environ["HF_TOKEN"],
)

That matches the documented bucket upload API pattern: local file path on the left, destination path inside the bucket on the right. (Hugging Face)

Direct bytes upload

If you do not even want a temp file:

import os
from huggingface_hub import batch_bucket_files

batch_bucket_files(
    "your-namespace/your-bucket",
    add=[(b'{"status":"ok"}', "logs/result.json")],
    token=os.environ["HF_TOKEN"],
)

The docs explicitly show raw-bytes uploads too. (Hugging Face)

The bottom line

The most likely outcomes are:

  1. Best case: your test was pointed at the wrong directory.
    mkdir u was not testing /data.
    mkdir /data/u may work.

  2. More likely if your absolute-path test fails: the bucket is attached, but the mounted directory ownership inside the Space runtime does not match the uid that your Gradio app runs under.
    In that case, you probably did not misconfigure the bucket. The observed permissions simply do not match the documented promise that bucket mounts are writable. (Hugging Face)

  3. Fastest reliable path forward: use batch_bucket_files() with HF_TOKEN from the Space instead of relying on /data until the mount behavior is fixed. (Hugging Face)

I’m sure; mkdir and touch in /data do fail.

and when I execute

print(runtime)
print(runtime.volumes)

I got

SpaceRuntime(stage='APP_STARTING', hardware=None, requested_hardware='zero-a10g', sleep_time=172800, storage=None, hot_reloading=None, raw={'stage': 'APP_STARTING', 'hardware': {'current': None, 'requested': 'zero-a10g'}, 'gcTimeout': 172800, 'replicas': {'requested': 1}, 'devMode': False, 'domains': [{'domain': 'qqnyanddld-its-my-party-time.hf.space', 'stage': 'READY'}]})

Traceback (most recent call last):
  File "/home/user/app/app.py", line 36, in <module>
    eprint(runtime.volumes)
           ^^^^^^^^^^^^^^^
AttributeError: 'SpaceRuntime' object has no attribute 'volumes'

And I checked the documentation; volumes should exist.

This is super weird. I can run ls /data and cat /data/x successfully, which means the bucket is mounted, but I cannot view it through the SpaceRuntime endpoint.

And I still don’t understand why ls -l / would show

drwxr-xr-x. 2 nobody nogroup 0 Apr 9 12:20 data

Seems like the owner:group config is somehow wrong?


In short, this may be a platform-side bug or inconsistency… @meganariley @hysts


The simple reading is:

the bucket is mounted, but it is mounted with the wrong effective permissions for your app process. The docs say bucket volumes in Spaces should be mountable at paths like /data, and SpaceRuntime is documented as having a volumes field. The docs also say the raw field is the raw server response. (Hugging Face)

What your results mean

You already confirmed the important part:

  • ls /data works
  • cat /data/x works
  • touch /data/... fails
  • mkdir /data/... fails

That means:

  • the bucket is mounted
  • but your process does not have write permission on that mount

If /data shows:

drwxr-xr-x 2 nobody nogroup ... /data

then only the owner has write permission. Group and others can read and enter the directory, but cannot create files. So if your Gradio app is running as a different user, read works and write fails. That matches your symptoms exactly.
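You can also check this from inside the Space without guessing from ls output, using os.access. A sketch — inside the Space you would point it at /data; here a chmod’d temp directory stands in for the broken mount:

```python
import os
import tempfile

# Stand-in for /data: mode 555 (dr-xr-xr-x) means readable and
# enterable, but with no write bit for anyone.
d = tempfile.mkdtemp()
os.chmod(d, 0o555)

print(os.access(d, os.R_OK))  # True: ls works
print(os.access(d, os.X_OK))  # True: cat of files inside works
print(os.access(d, os.W_OK))  # False for a non-root process: touch/mkdir fail
```

If os.access("/data", os.W_OK) returns False in your Space, that is the whole story: read/list allowed, write denied.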

Why this looks wrong

Hugging Face’s current docs show bucket volumes mounted into Spaces, including examples that mount a bucket at /data. They also document runtime.volumes as a field on SpaceRuntime. (Hugging Face)

But your live result shows two mismatches:

1. Filesystem mismatch

The bucket is visible, but it behaves like a read-only mount for your app because of the owner/group and mode bits.

2. Runtime API mismatch

The docs say runtime.volumes exists, but your SpaceRuntime object has no volumes attribute, and your printed raw payload does not include volumes either. Since the docs define raw as the raw server response, that strongly suggests the response you got simply did not include volume metadata. (Hugging Face)

So this is not just one weird thing. It is two weird things at once.

The most likely explanation

The most likely explanation is:

  • the feature is new
  • your Space has the bucket mounted
  • but the mount ownership exposed inside the container does not match the user your app runs as
  • and the runtime metadata endpoint is also not reporting volumes correctly in your case

This fits the release history. Hugging Face added “Spaces Volumes” in huggingface_hub v1.9.0 on April 2, 2026, and then shipped follow-up fixes in v1.9.1 and v1.9.2. (GitHub)

So did you do something wrong?

Probably not.

Your evidence points more to a platform-side bug or rollout inconsistency than a user mistake:

  • the bucket is attached
  • the mount is visible
  • read works
  • write does not
  • runtime metadata is also inconsistent with the docs

One more thing to check

Check your installed huggingface_hub version inside the Space:

import huggingface_hub
print(huggingface_hub.__version__)

If it is older than 1.9.0, that can explain why runtime.volumes is missing on the client side, because volume support was added in v1.9.0. (GitHub)
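If you want that check to be explicit rather than eyeballed, a version comparison works; a small sketch using packaging (commonly available, and the "1.9.0" threshold comes from the release notes cited above):

```python
from packaging.version import Version

# Volumes support landed in huggingface_hub v1.9.0, so any older
# client will lack SpaceRuntime.volumes regardless of the server.
installed = "1.8.1"  # substitute huggingface_hub.__version__
print(Version(installed) < Version("1.9.0"))  # True: upgrade needed
```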

But even if you upgrade, that probably will not fix the /data write failure by itself, because the filesystem permissions problem is separate.

Practical conclusion

Use this mental model:

Mounted and readable does not mean writable by your app user.

In your case, the mount exists, but the owner/group setup appears wrong for the running process. That is why /data acts like read-only from your code.

Best next move

For now, treat /data writes as broken and use the bucket API instead of the mounted path for writes. The Spaces docs say secrets become environment variables inside the Space, and bucket/volume support is now part of the Hub tooling. (Hugging Face)
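One way to make the app resilient in the meantime is a small wrapper that tries the mounted path first and falls back to the API. A sketch, assuming the batch_bucket_files() pattern from earlier in the thread and a hypothetical bucket name; base_dir is a parameter so the same code can be exercised outside the Space:

```python
import os
from pathlib import Path

BUCKET_ID = "your-namespace/your-bucket"  # hypothetical name

def save_result(rel_path: str, data: bytes, base_dir: str = "/data") -> str:
    """Write data under the mounted bucket if it is writable;
    otherwise fall back to uploading raw bytes through the API."""
    target = Path(base_dir) / rel_path
    try:
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_bytes(data)
        return f"mount:{target}"
    except OSError:
        # Covers PermissionError and read-only-filesystem errors:
        # the mount is not writable, so use the bucket API instead.
        from huggingface_hub import batch_bucket_files
        batch_bucket_files(
            BUCKET_ID,
            add=[(data, rel_path)],
            token=os.environ["HF_TOKEN"],
        )
        return f"api:{rel_path}"
```

In the Space, save_result("logs/result.json", b'{"status":"ok"}') writes through the mount when it behaves and transparently switches to the API when it does not, so the rest of the app code does not need to care which path worked.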

Also include these exact facts in a bug report:

from huggingface_hub import HfApi
import huggingface_hub

print(huggingface_hub.__version__)
api = HfApi()
runtime = api.get_space_runtime("namespace/space")
print(runtime)
print(runtime.raw)
print(hasattr(runtime, "volumes"))

and:

id
ls -ld /data
touch /data/__write_test__
mkdir /data/__dir_test__

That is strong evidence.

In one sentence

Yes, it looks like the mount owner/group is wrong for the Space runtime user, and the missing runtime.volumes field looks like a second bug or rollout inconsistency, not something normal. (Hugging Face)

I’m unable to reproduce the problem. Please share a minimal reproducible code sample and the steps needed to reproduce it.


Very simple to reproduce; no code needed.

First, create a new Space: select the Gradio SDK and ZeroGPU (cannot reproduce with CPU Basic), and enable dev mode.

Once the Space is ready, go to its settings page and mount a bucket.

Then, when it restarts, SSH into it and run touch /data/x; you will get the permission-denied error.


Upgrading the huggingface-hub version fixes the 'SpaceRuntime' object has no attribute 'volumes' error, but its value is None.

The write-permission issue is still there.


Thanks for the repro. Ah, it only happens on ZeroGPU. I tested on CPU earlier, so I couldn’t reproduce it, but I’m getting the same error now. I’ll share this with the infra team. Thanks!
