# Sanctuaria-Gaze
Sanctuaria-Gaze is a multimodal egocentric dataset collected during visits to four architecturally and culturally significant sanctuaries in Northern Italy.
The dataset captures human gaze behavior, head motion, and visual exploration in real-world sacred environments, providing a unique resource for research on visual attention, embodied perception, and human–environment interaction.
**Paper:** Sanctuaria-Gaze: A Multimodal Egocentric Dataset for Human Attention Analysis in Religious Sites
**Authors:** Giuseppe Cartella, Vittorio Cuculo, Marcella Cornia, Marco Papasidero, Federico Ruozzi, Rita Cucchiara
**Published in:** ACM Journal on Computing and Cultural Heritage (JOCCH), 2025
## Dataset Overview
The Sanctuaria-Gaze dataset consists of multimodal recordings of 20 participants (ages 18–65) during 40 free-exploration visits to four sanctuaries.
Each participant wore Meta Project Aria glasses equipped with RGB, eye-tracking, SLAM, and IMU sensors.
The recordings capture natural visits, with participants moving and looking around freely, allowing spontaneous exploration behavior to emerge.
### Recording Specifications
| Modality | Sensor | Frequency | Description |
|---|---|---|---|
| RGB Video | RGB camera | 15 Hz | Egocentric visual stream (1408×1408 px) |
| Gaze | Eye-tracking cameras | 30 Hz | Timestamped gaze coordinates (x, y, confidence) |
| Depth & SLAM | SLAM cameras | 15 Hz | Spatial mapping and 3D reconstruction |
| IMU | Accelerometer / Gyroscope | 1000 Hz / 800 Hz | Head motion and inertial dynamics |
| Magnetometer | Magnetic field sensor | 10 Hz | Orientation and spatial context |
| Barometer | Pressure sensor | 50 Hz | Altitude and environmental pressure |
- **Total duration:** 4.47 hours
- **Average sequence length:** 6.7 minutes
- **Total recordings:** 40
- **Participants:** 20 (divided into two balanced groups, each visiting two of the four sanctuaries)
### Participant Information
Participants had normal or corrected-to-normal vision, and were selected to ensure diversity in age, gender, and religious background (Catholic vs. non-Catholic).
All procedures were approved by the Ethical Committee of the participating research institutions, in compliance with national and international standards.
## Dataset Structure
Each data sample corresponds to a recording identified by church ID (xx) and participant ID (yy).
```
Sanctuaria-Gaze/
│
├── annotations/
│   ├── Id01-01_annotations.csv
│   ├── Id01-02_annotations.csv
│   └── ...
│
├── gaze/
│   ├── Id01-01_gaze.csv
│   ├── Id01-02_gaze.csv
│   └── ...
│
├── pointcloud/
│   ├── Id01-01_pointcloud.ply
│   ├── Id02-03_pointcloud.ply
│   └── ...
│
├── trajectories/
│   ├── Id01-01_traj.csv
│   ├── Id01-02_traj.csv
│   └── ...
│
└── videos/
    ├── Id01-01.mp4
    ├── Id02-04.mp4
    └── ...
```
### Folder Descriptions

- `videos/` – egocentric RGB recordings (15 fps, 1408×1408 px)
- `gaze/` – gaze coordinates and timestamps from the eye-tracking cameras
- `pointcloud/` – 3D environmental reconstructions obtained from SLAM
- `trajectories/` – positional data capturing participants' movements within each sanctuary
- `annotations/` – automatically or semi-automatically derived metadata (e.g., frame-level AOIs and detected object labels)
### File Naming Convention
Each file follows the pattern:
```
IdXX-YY_<modality>.<extension>
```
where:

- `XX` – church ID (01–04)
- `YY` – participant ID (01–10 per site)
- `<modality>` – one of {annotations, gaze, pointcloud, traj}; video files omit the modality suffix (e.g., `Id01-01.mp4`)
Example: `Id03-07_gaze.csv` → gaze data of participant 07 at Church 03
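The convention above can be parsed with a small helper. The following is an illustrative sketch, not an official loader; the optional-modality handling for video files is inferred from the directory listing:

```python
import re

# Match Sanctuaria-Gaze file names of the form IdXX-YY_<modality>.<extension>.
# Video files omit the modality suffix (e.g. Id01-01.mp4), so that group is optional.
FILENAME_RE = re.compile(
    r"^Id(?P<church>\d{2})-(?P<participant>\d{2})(?:_(?P<modality>\w+))?\.(?P<ext>\w+)$"
)

def parse_filename(name: str):
    """Return (church_id, participant_id, modality) or None if the name does not match."""
    m = FILENAME_RE.match(name)
    if m is None:
        return None
    return m.group("church"), m.group("participant"), m.group("modality")
```

For example, `parse_filename("Id03-07_gaze.csv")` yields `("03", "07", "gaze")`, while video files return `None` for the modality.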
## Data Fields
### `*_gaze.csv`

| Column | Type | Description |
|---|---|---|
| `gaze_timestamp` | float | Timestamp in seconds |
| `world_index` | int | Frame index corresponding to the RGB video |
| `confidence` | float | Confidence score (0–1) |
| `norm_pos_x` | float | Normalized horizontal gaze coordinate (0–1) |
| `norm_pos_y` | float | Normalized vertical gaze coordinate (0–1) |
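Since gaze positions are normalized to 0–1, mapping them onto the 1408×1408 RGB frame is a simple scaling step. The sketch below assumes the usual image convention (origin at the top-left, y increasing downward); verify the axis convention against the official Project Aria tooling before relying on it:

```python
FRAME_SIZE = 1408  # RGB frames are 1408x1408 px

def gaze_to_pixels(norm_x: float, norm_y: float, size: int = FRAME_SIZE):
    """Map normalized [0, 1] gaze coordinates onto the square RGB frame.

    Clamping guards against slightly out-of-range samples. The top-left
    origin is an assumption, not stated by the dataset card.
    """
    px = min(max(norm_x, 0.0), 1.0) * (size - 1)
    py = min(max(norm_y, 0.0), 1.0) * (size - 1)
    return round(px), round(py)
```

Note that gaze is sampled at 30 Hz while video runs at 15 Hz, so the `world_index` column is what ties each gaze sample back to a specific frame.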
### `*_annotations.csv`

Each annotation file contains automatically derived, gaze-conditioned annotations for every frame.

| Column | Type | Description |
|---|---|---|
| `frame_number` | int | Frame index in the corresponding video |
| `point_x` | float | Horizontal pixel coordinate of the gaze point |
| `point_y` | float | Vertical pixel coordinate of the gaze point |
| `yolo_label` | str | Object label predicted by a YOLO-based detector (e.g., person, painting, altar) |
| `bounding_box_max_iou` | list[float] | Bounding box coordinates [x_min, y_min, x_max, y_max] of the object with the highest IoU with the gaze point |
| `mask_coverage` | float | Ratio of the annotated object mask area covered by the gaze point (higher = stronger fixation–object overlap) |
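As a minimal example of working with these columns, the sketch below tallies annotated frames per YOLO label using only the standard library. The coverage threshold is an illustrative parameter, not part of the dataset:

```python
import csv
from collections import Counter

def dwell_per_label(annotations_csv: str, min_coverage: float = 0.0) -> Counter:
    """Count annotated frames per YOLO label, optionally filtered by mask_coverage.

    At 15 fps, dividing the counts by 15 approximates dwell time in seconds.
    Column names follow the table above; rows with an empty label are skipped.
    """
    counts: Counter = Counter()
    with open(annotations_csv, newline="") as f:
        for row in csv.DictReader(f):
            if row["yolo_label"] and float(row["mask_coverage"]) >= min_coverage:
                counts[row["yolo_label"]] += 1
    return counts
```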
### `*_traj.csv`

| Column | Type | Description |
|---|---|---|
| `timestamp` | float | Timestamp in seconds |
| `pos_x`, `pos_y`, `pos_z` | float | 3D position of the headset in world coordinates |
| `rot_x`, `rot_y`, `rot_z`, `rot_w` | float | Quaternion rotation components |
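Given the `pos_x`/`pos_y`/`pos_z` columns, a participant's total path length inside a sanctuary can be approximated by summing consecutive displacements. A minimal sketch:

```python
import math

def path_length(positions) -> float:
    """Total distance travelled given a sequence of (x, y, z) headset positions.

    This is a straight-line approximation between consecutive samples;
    smoothing or resampling of the raw trajectory may be appropriate first.
    """
    return sum(math.dist(a, b) for a, b in zip(positions, positions[1:]))
```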
### `*_pointcloud.ply`
3D point cloud reconstruction of the environment from SLAM, aligned to headset coordinates.
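The PLY header is plain text even for binary-encoded files, so basic metadata such as the point count can be read without a 3D library. A stdlib-only sketch (dedicated libraries such as Open3D provide full loading):

```python
def ply_vertex_count(path: str) -> int:
    """Read the number of points from a .ply file's header.

    Works for both ASCII and binary PLY, since the header itself is
    always plain text terminated by an 'end_header' line.
    """
    with open(path, "rb") as f:
        for raw in f:
            line = raw.decode("ascii", errors="replace").strip()
            if line.startswith("element vertex"):
                return int(line.split()[-1])
            if line == "end_header":
                break
    raise ValueError("no vertex element found in PLY header")
```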
### `IdXX-YY.mp4`

RGB video corresponding to the egocentric visual stream at 15 fps (1408×1408 px).
## Use Cases
The dataset enables research on:
- Gaze-based attention modeling in dynamic, real-world environments
- Humanβobject interaction in cultural and religious spaces
- Multimodal learning combining vision, gaze, and motion data
- 3D attention mapping via synchronized point clouds and trajectories
- Behavioral analysis of spatial exploration and cultural engagement
## Ethical Considerations
All participants provided informed consent for participation and data sharing for academic research.
All faces have been blurred using EgoBlur, and no personally identifiable information is present.
The dataset fully complies with the EU GDPR, ethical guidelines, and institutional review board (IRB) approval processes.
## Terms of Use and Access Agreement
Access to Sanctuaria-Gaze is gated to ensure responsible research use.
By requesting access, you agree to the following terms:
- You will use the dataset solely for non-commercial, academic research.
- You will not attempt to reconstruct or identify any individual from the blurred data.
- You will properly cite the accompanying paper when using the dataset:
Cartella, G., Cuculo, V., Cornia, M., Papasidero, M., Ruozzi, F., & Cucchiara, R. (2025). Sanctuaria-Gaze: A Multimodal Egocentric Dataset for Human Attention Analysis in Religious Sites. ACM JOCCH.
- You will not redistribute or republish the dataset in its entirety or in part without explicit written permission from the authors.
- You acknowledge that the dataset is provided "as is", without warranty, and that all ethical and privacy safeguards must be maintained in derivative works.
Violation of these terms may result in the revocation of access and reporting to your institution.
## Citation
If you use the Sanctuaria-Gaze dataset in your research, please cite:
```bibtex
@article{cartella2025sanctuaria,
  title={Sanctuaria-Gaze: A Multimodal Egocentric Dataset for Human Attention Analysis in Religious Sites},
  author={Cartella, Giuseppe and Cuculo, Vittorio and Cornia, Marcella and Papasidero, Marco and Ruozzi, Federico and Cucchiara, Rita},
  journal={ACM Journal on Computing and Cultural Heritage (JOCCH)},
  year={2025},
  publisher={ACM},
  doi={10.1145/3769091}
}
```
## Acknowledgment

This work was supported by the PNRR project "Italian Strengthening of Esfri RI Resilience (ITSERR)", funded by the European Union – NextGenerationEU (CUP B53C22001770006).