
Sanctuaria-Gaze

Sanctuaria-Gaze is a multimodal egocentric dataset collected from visits to four architecturally and culturally significant sanctuaries in Northern Italy.
The dataset captures human gaze behavior, head motion, and visual exploration in real-world sacred environments, providing a unique resource for research on visual attention, embodied perception, and human–environment interaction.

📘 Paper: Sanctuaria-Gaze: A Multimodal Egocentric Dataset for Human Attention Analysis in Religious Sites
🧑‍🏫 Authors: Giuseppe Cartella, Vittorio Cuculo, Marcella Cornia, Marco Papasidero, Federico Ruozzi, Rita Cucchiara
🏛️ Published in: ACM Journal on Computing and Cultural Heritage (JOCCH), 2025


📦 Dataset Overview

The Sanctuaria-Gaze dataset consists of multimodal recordings of 20 participants (ages 18–65) during 40 free-exploration visits to four sanctuaries.

Each participant wore Meta Project Aria glasses, equipped with RGB, gaze, SLAM, and IMU sensors.
Each recording captures a natural, unscripted visit: participants moved and looked around freely, allowing spontaneous exploration behavior to emerge.

Recording Specifications

| Modality | Sensor | Frequency | Description |
|----------|--------|-----------|-------------|
| RGB video | RGB camera | 15 Hz | Egocentric visual stream (1408×1408 px) |
| Gaze | Eye-tracking cameras | 30 Hz | Timestamped gaze coordinates (x, y, confidence) |
| Depth & SLAM | SLAM cameras | 15 Hz | Spatial mapping and 3D reconstruction |
| IMU | Accelerometer / gyroscope | 1000 Hz / 800 Hz | Head motion and inertial dynamics |
| Magnetometer | Magnetic field sensor | 10 Hz | Orientation and spatial context |
| Barometer | Pressure sensor | 50 Hz | Altitude and environmental pressure |

Total duration: 4.47 hours
Average sequence length: 6.7 minutes
Total recordings: 40
Participants: 20 (divided into two balanced groups visiting two sanctuaries each)

Participant Information

Participants had normal or corrected-to-normal vision and were selected to ensure diversity in age, gender, and religious background (Catholic vs. non-Catholic).
All procedures were approved by the Ethical Committee of the participating research institutions, in compliance with national and international standards.


πŸ“ Dataset Structure

Each data sample corresponds to a recording identified by a church ID (XX) and a participant ID (YY).

Sanctuaria-Gaze/
│
├── annotations/
│   ├── Id01-01_annotations.csv
│   ├── Id01-02_annotations.csv
│   └── ...
│
├── gaze/
│   ├── Id01-01_gaze.csv
│   ├── Id01-02_gaze.csv
│   └── ...
│
├── pointcloud/
│   ├── Id01-01_pointcloud.ply
│   ├── Id02-03_pointcloud.ply
│   └── ...
│
├── trajectories/
│   ├── Id01-01_traj.csv
│   ├── Id01-02_traj.csv
│   └── ...
│
└── videos/
    ├── Id01-01.mp4
    ├── Id02-04.mp4
    └── ...

Folder Descriptions

  • videos/ – egocentric RGB recordings (15 fps, 1408×1408 px).
  • gaze/ – gaze coordinates and timestamps from eye-tracking cameras.
  • pointcloud/ – 3D environmental reconstructions obtained from SLAM.
  • trajectories/ – positional data capturing participants' movements within each sanctuary.
  • annotations/ – automatically or semi-automatically derived metadata (e.g., frame-level AOIs and detected object labels).

File Naming Convention

Each file follows the pattern:

IdXX-YY_<modality>.<extension>

where:

  • XX → Church ID (01–04)
  • YY → Participant ID (01–10 per site)
  • <modality> → one of {annotations, gaze, pointcloud, traj}; video files omit the modality suffix and follow the pattern IdXX-YY.mp4

Example:

Id03-07_gaze.csv → Gaze data of participant 07 at Church 03
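
In practice, the recording identifiers can be extracted from file names programmatically. A minimal Python sketch (the regex and the helper name are illustrative, not part of any official dataset tooling):

```python
import re
from pathlib import Path

# Mirrors the IdXX-YY_<modality> naming pattern described above.
FILENAME_RE = re.compile(r"Id(?P<church>\d{2})-(?P<participant>\d{2})")

def parse_recording_id(path: Path) -> tuple[int, int]:
    """Return (church_id, participant_id) for a Sanctuaria-Gaze file."""
    match = FILENAME_RE.match(path.stem)
    if match is None:
        raise ValueError(f"Unexpected file name: {path.name}")
    return int(match.group("church")), int(match.group("participant"))

church, participant = parse_recording_id(Path("Id03-07_gaze.csv"))
print(church, participant)  # -> 3 7
```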

📊 Data Fields

🟢 *_gaze.csv

| Column | Type | Description |
|--------|------|-------------|
| gaze_timestamp | float | Timestamp in seconds |
| world_index | int | Frame index corresponding to the RGB video |
| confidence | float | Confidence score (0–1) |
| norm_pos_x | float | Normalized horizontal gaze coordinate (0–1) |
| norm_pos_y | float | Normalized vertical gaze coordinate (0–1) |
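
As an illustration, the normalized coordinates can be mapped onto the 1408×1408 frames and grouped per video frame via world_index. A minimal sketch using pandas; the 0.8 confidence threshold is an arbitrary choice, and a top-left origin for norm_pos_x/norm_pos_y is assumed:

```python
import pandas as pd

FRAME_SIZE = 1408  # RGB frames are 1408x1408 px

gaze = pd.read_csv("gaze/Id03-07_gaze.csv")

# Keep only confident samples (0.8 is an illustrative threshold).
gaze = gaze[gaze["confidence"] >= 0.8].copy()

# Convert normalized coordinates to pixel positions.
gaze["px"] = gaze["norm_pos_x"] * FRAME_SIZE
gaze["py"] = gaze["norm_pos_y"] * FRAME_SIZE

# Gaze is sampled at 30 Hz and video at 15 Hz, so several samples may
# share one world_index; average them to get one point per frame.
per_frame = gaze.groupby("world_index")[["px", "py"]].mean()
print(per_frame.head())
```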

🟠 *_annotations.csv

Each annotation file contains automatically derived and gaze-conditioned annotations for every frame.

| Column | Type | Description |
|--------|------|-------------|
| frame_number | int | Frame index in the corresponding video |
| point_x | float | Horizontal pixel coordinate of the gaze point |
| point_y | float | Vertical pixel coordinate of the gaze point |
| yolo_label | str | Object label predicted by a YOLO-based detector (e.g., person, painting, altar) |
| bounding_box_max_iou | list[float] | Bounding box [x_min, y_min, x_max, y_max] of the object with the highest IoU with the gaze point |
| mask_coverage | float | Ratio of the annotated object mask area covered by the gaze point (higher = stronger fixation–object overlap) |
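
For example, frames in which the gaze lands on a given object class can be filtered directly from these columns. A minimal sketch; the 0.5 mask_coverage threshold is an arbitrary choice, and the ast.literal_eval step assumes the list column is serialized as a string in the CSV:

```python
import ast
import pandas as pd

ann = pd.read_csv("annotations/Id03-07_annotations.csv")

# bounding_box_max_iou holds [x_min, y_min, x_max, y_max]; parse it if
# it was written to the CSV as a string.
ann["bounding_box_max_iou"] = ann["bounding_box_max_iou"].apply(
    lambda v: ast.literal_eval(v) if isinstance(v, str) else v
)

# Frames where the participant looked at a painting with strong overlap.
paintings = ann[(ann["yolo_label"] == "painting") & (ann["mask_coverage"] > 0.5)]
print(f"{len(paintings)} frames with gaze on paintings")
```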

🟣 *_traj.csv

| Column | Type | Description |
|--------|------|-------------|
| timestamp | float | Timestamp in seconds |
| pos_x, pos_y, pos_z | float | 3D position of the headset in world coordinates |
| rot_x, rot_y, rot_z, rot_w | float | Quaternion rotation components |
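
A simple derived quantity is the distance a participant walked during a visit, obtained by summing step-wise displacements of the headset position. A minimal sketch; world units are assumed (but not stated here) to be meters:

```python
import numpy as np
import pandas as pd

traj = pd.read_csv("trajectories/Id03-07_traj.csv")

# Stack the 3D positions and sum the lengths of consecutive steps.
positions = traj[["pos_x", "pos_y", "pos_z"]].to_numpy()
steps = np.linalg.norm(np.diff(positions, axis=0), axis=1)
print(f"Path length: {steps.sum():.1f} (world units)")
print(f"Visit duration: {traj['timestamp'].iloc[-1] - traj['timestamp'].iloc[0]:.1f} s")
```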

🔵 *_pointcloud.ply

3D point cloud reconstruction of the environment from SLAM, aligned to headset coordinates.
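
A sketch for loading and inspecting one reconstruction, assuming the third-party Open3D library is installed (it is not bundled with the dataset):

```python
import open3d as o3d  # assumed dependency: pip install open3d

# Load one SLAM reconstruction and open an interactive viewer.
pcd = o3d.io.read_point_cloud("pointcloud/Id03-07_pointcloud.ply")
print(pcd)  # e.g. "PointCloud with N points."
o3d.visualization.draw_geometries([pcd])
```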

⚫ IdXX-YY.mp4

RGB video corresponding to the egocentric visual stream at 15 fps.
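
Combining the video with the gaze stream, one can overlay fixations on the egocentric frames. A minimal OpenCV sketch; it assumes a top-left origin for the normalized gaze coordinates (flip the y-axis if needed):

```python
import cv2
import pandas as pd

FRAME_SIZE = 1408  # videos are 1408x1408 px

# One average gaze point per video frame, keyed by world_index.
gaze = pd.read_csv("gaze/Id03-07_gaze.csv")
per_frame = gaze.groupby("world_index")[["norm_pos_x", "norm_pos_y"]].mean()

cap = cv2.VideoCapture("videos/Id03-07.mp4")
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx in per_frame.index:
        x, y = per_frame.loc[frame_idx]
        cv2.circle(frame, (int(x * FRAME_SIZE), int(y * FRAME_SIZE)),
                   radius=12, color=(0, 0, 255), thickness=2)
    cv2.imshow("gaze overlay", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
    frame_idx += 1
cap.release()
cv2.destroyAllWindows()
```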


🧭 Use Cases

The dataset enables research on:

  • Gaze-based attention modeling in dynamic, real-world environments
  • Human–object interaction in cultural and religious spaces
  • Multimodal learning combining vision, gaze, and motion data
  • 3D attention mapping via synchronized point clouds and trajectories
  • Behavioral analysis of spatial exploration and cultural engagement

βš–οΈ Ethical Considerations

All participants provided informed consent for participation and data sharing for academic research.
All faces have been blurred using EgoBlur, and no personally identifiable information is present.
The dataset fully complies with the EU GDPR, ethical guidelines, and institutional review board (IRB) approval processes.


📜 Terms of Use and Access Agreement

Access to Sanctuaria-Gaze is gated to ensure responsible research use.
By requesting access, you agree to the following terms:

  1. You will use the dataset solely for non-commercial, academic research.
  2. You will not attempt to reconstruct or identify any individual from the blurred data.
  3. You will properly cite the accompanying paper when using the dataset:

    Cartella, G., Cuculo, V., Cornia, M., Papasidero, M., Ruozzi, F., & Cucchiara, R. (2025). Sanctuaria-Gaze: A Multimodal Egocentric Dataset for Human Attention Analysis in Religious Sites. ACM JOCCH.

  4. You will not redistribute or republish the dataset in its entirety or in part without explicit written permission from the authors.
  5. You acknowledge that the dataset is provided "as is", without warranty, and that all ethical and privacy safeguards must be maintained in derivative works.

Violation of these terms may result in the revocation of access and reporting to your institution.


📚 Citation

If you use the Sanctuaria-Gaze dataset in your research, please cite:

@article{cartella2025sanctuaria,
  title={Sanctuaria-Gaze: A Multimodal Egocentric Dataset for Human Attention Analysis in Religious Sites},
  author={Cartella, Giuseppe and Cuculo, Vittorio and Cornia, Marcella and Papasidero, Marco and Ruozzi, Federico and Cucchiara, Rita},
  journal={ACM Journal on Computing and Cultural Heritage (JOCCH)},
  year={2025},
  publisher={ACM},
  doi={10.1145/3769091}
}

πŸ›οΈ Acknowledgment

This work was supported by the PNRR project "Italian Strengthening of Esfri RI Resilience (ITSERR)", funded by the European Union – NextGenerationEU (CUP B53C22001770006).

