ramyamut committed c54c7d1 (verified) · Parent: 73fc324

Upload 4 files

Files changed (4):
1. README.md (+198 −3)
2. bibtex.bib (+9 −0)
3. environment.yaml (+129 −0)
4. requirements.txt (+120 −0)

README.md CHANGED

# E(3)-Pose

In this repository, we present E(3)-Pose, the first symmetry-aware framework for 6-DoF object pose estimation from volumetric images that uses an E(3)-equivariant convolutional neural network (E(3)-CNN). Although we evaluate the utility of E(3)-Pose on fetal brain MRI, the proposed methods hold potential for broader applications.

<br />

![Teaser](images/teaser.png)

<br />

We rapidly estimate pose from volumes in a two-step process that estimates translation and rotation separately:

1. **Translation Estimation:**
    * A standard segmentation U-Net localizes the object in the volume.
    * The center-of-mass (CoM) of the predicted mask is the estimated origin of the canonical object coordinate frame.
2. **Rotation Estimation:**
    * We crop input volumes such that the predicted segmentation mask is scaled to 60% of the cropped dimensions.
    * The E(3)-CNN takes the cropped volume as input and outputs an E(3)-equivariant rotation parametrization consisting of 2 vectors and 1 pseudovector.
    * The output rotation is computed by choosing the pseudovector direction that ensures right-handedness, and orthonormalizing via singular value decomposition (SVD); see the sketch after this list.
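
For concreteness, the sketch below illustrates both steps in plain NumPy. It is an illustration only, not the repository's implementation: `mask` stands for the predicted segmentation volume, and `v1`, `v2`, `p` are hypothetical names for the two predicted vectors and the pseudovector.

```
import numpy as np

def center_of_mass(mask):
    """Translation estimate: CoM of the predicted mask (voxel coordinates)."""
    coords = np.argwhere(mask > 0)
    return coords.mean(axis=0)

def vectors_to_rotation(v1, v2, p):
    """Rotation estimate from 2 vectors and 1 pseudovector (illustrative)."""
    # Choose the pseudovector direction that yields a right-handed frame.
    if np.dot(np.cross(v1, v2), p) < 0:
        p = -p
    # Orthonormalize via SVD: nearest rotation matrix in the Frobenius sense.
    U, _, Vt = np.linalg.svd(np.stack([v1, v2, p], axis=1))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # guard against reflections
    return U @ D @ Vt
```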

Our E(3)-CNN architecture builds on prior theoretical work on [3D steerable CNNs](https://proceedings.neurips.cc/paper_files/paper/2018/file/488e4104520c6aab692863cc1dba45af-Paper.pdf)<sup>1</sup> and uses code borrowed from [e3nn-UNet](https://github.com/SCAN-NRAD/e3nn_Unet)<sup>2</sup>, which implements 3D convolutions with the [e3nn](https://e3nn.org/)<sup>3</sup> Python library for building E(3)-equivariant networks.
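
As a small illustration of this parametrization in e3nn, two vectors and one pseudovector correspond to the irreps `2x1o + 1x1e` (vectors have odd parity, `1o`; pseudovectors even parity, `1e`). Whether our network uses exactly this irreps string is an assumption; the snippet only shows the e3nn convention:

```
from e3nn.o3 import Irreps

# Two vectors (odd parity, "1o") and one pseudovector (even parity, "1e").
out_irreps = Irreps("2x1o + 1x1e")
print(out_irreps.dim)  # 9 scalars in total (3 per l=1 irrep)
```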

<br />

![Method overview](images/method_overview.png)

<br />

Overall, E(3)-Pose outperforms state-of-the-art methods for pose estimation in fetal brain MRI volumes representative of clinical applications, including strategies that rely on anatomical landmark detection ([Fetal-Align](https://github.com/mu40/fetal-align)<sup>4</sup>), template registration ([FireANTs](https://github.com/rohitrango/FireANTs)<sup>5</sup> and [EquiTrack](https://github.com/BBillot/EquiTrack)<sup>6</sup>), and direct pose regression with standard CNNs ([3DPose-Net](https://github.com/SadeghMSalehi/DeepRegistration)<sup>7</sup>, [6DRep](https://www.spiedigitallibrary.org/conference-proceedings-of-spie/12464/124640T/Automatic-brain-pose-estimation-in-fetal-MRI/10.1117/12.2647613.full)<sup>8</sup>, [RbR](https://github.com/HuXiaoling/Regre4Regis)<sup>9</sup>). See the figure below for example results. In particular, we show in our paper that regularizing network parameters to conform with physical symmetries mitigates overfitting to research-quality training datasets and permits better generalization to out-of-distribution clinical data with pose ambiguities.

<br />

![examples](images/examples.png)

<br />

The full article describing this method is available at:

**Equivariant Symmetry-Aware Head Pose Estimation for Fetal MRI** \
Muthukrishnan, Gagoski, Lee, Grant, Adalsteinsson, Golland, Billot \
arXiv (2025) \
[ [arxiv](https://arxiv.org/abs/2512.04890) | [bibtex](bibtex.bib) ]

---
### Installation

1. Clone this repository.
2. Edit the environment prefix in `environment.yaml` and then install all dependencies:

```
cd E3-Pose/
conda env create -f environment.yaml
conda activate e3pose
pip install -r requirements.txt
```

3. Install [pytorch3d](https://github.com/facebookresearch/pytorch3d/blob/main/INSTALL.md).
4. If you want to use our trained model weights for fetal brain MRI, download the model weights [here](https://drive.google.com/drive/folders/1r6FVzXG9VLH-0MtMnD2hwjzdDqss1DSE?usp=sharing).
5. If you want to train your own network on a [publicly available fetal MRI dataset](https://pubmed.ncbi.nlm.nih.gov/40800813/)<sup>10</sup>, download our manually annotated segmentations and poses [here](https://drive.google.com/file/d/1yO2o2sNNNEfcB_-ZDcVvHyCGxqk6SYyE/view?usp=sharing).

You're now ready to use E(3)-Pose!

<br />

---
### Usage

This repository contains all the code necessary to train and test your own networks. We provide separate scripts for training the segmentation U-Net and the E(3)-CNN, and a single script that deploys both for full rigid pose estimation.

#### Training a Segmentation U-Net for Translation Estimation

1. Set up separate training/validation dataset directories for images and ground-truth segmentation labels, where file names match between the image and label directories (an optional pairing check is sketched after this subsection). Ensure that all image file extensions are .nii or .nii.gz.

2. If you are training a multi-class segmentation network, ensure that the object for which you want to estimate pose has category label 1 in the ground-truth labels.

3. Name the output directory in which to save all model weights and metrics during network training.

4. To train the segmentation U-Net, run:

```
python scripts/train_unet.py train_image_dir/ train_label_dir/ val_image_dir/ val_label_dir/ output_dir/
```

For detailed descriptions of other arguments, run:

```
python scripts/train_unet.py -h
```
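
Since training assumes matching file names between the two directories, the optional check below can catch unpaired files before launching a run. It is a sketch, not part of the repository; the directory names are the placeholders used above.

```
from pathlib import Path

def nifti_names(directory):
    """File names with .nii/.nii.gz extensions in a directory."""
    return {f.name for f in Path(directory).iterdir()
            if f.name.endswith((".nii", ".nii.gz"))}

# Symmetric difference: files present in one directory but not the other.
unpaired = nifti_names("train_image_dir") ^ nifti_names("train_label_dir")
assert not unpaired, f"unpaired files: {sorted(unpaired)}"
```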

#### Training an E(3)-CNN for Rotation Estimation

1. Set up separate training/validation dataset directories for images and ground-truth segmentation labels, where file names match between the image and label directories. Ensure that all image file extensions are .nii or .nii.gz. If your segmentation labels have multiple classes, ensure that the object for which you want to estimate pose has category label 1.

2. Set up separate CSV files for rotation annotations in the training and validation datasets, in the following format:

| frame_id | rot_x | rot_y | rot_z |
|----------|-------|-------|-------|
| ... | ... | ... | ... |

where **frame_id** is the file name of the volume without the file extension, and **rot_x**, **rot_y**, **rot_z** are the Euler angles in degrees of the rotation from the volume to the canonical coordinate frame. The Euler angles follow the "xyz" ordering convention (see the sketch after this list).

3. Name the output directory in which to save all model weights and metrics during network training.

4. To train the E(3)-CNN, run:

```
python scripts/train_e3cnn.py train_image_dir/ train_label_dir/ path_to_train_annotations.csv \
    val_image_dir/ val_label_dir/ path_to_val_annotations.csv \
    output_dir/
```

For detailed descriptions of other arguments, run:

```
python scripts/train_e3cnn.py -h
```
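
The sketch below shows one way to turn a CSV row into a rotation matrix with SciPy. Note one assumption: SciPy's lowercase `"xyz"` string denotes extrinsic rotations (uppercase `"XYZ"` gives the intrinsic variant), so verify which variant matches your annotations.

```
from scipy.spatial.transform import Rotation

# Example Euler angles in degrees, as they would appear in one CSV row.
rot_x, rot_y, rot_z = 10.0, -5.0, 30.0
# Lowercase "xyz" = extrinsic; use "XYZ" if your annotations are intrinsic.
R = Rotation.from_euler("xyz", [rot_x, rot_y, rot_z], degrees=True).as_matrix()
print(R)  # 3x3 rotation from the volume to the canonical frame
```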

#### Running Rigid Pose Estimation with Trained Model Weights

1. Set up an input directory of images (all file extensions must be .nii or .nii.gz) on which to run rigid pose estimation.

2. Name the output directory in which to save all estimated poses.

3. To estimate pose on all inputs, run:

```
python scripts/inference.py input_image_dir/ output_dir/ path_to_segmentation_unet.ckpt path_to_e3cnn.pth
```

For detailed descriptions of other arguments, run:

```
python scripts/inference.py -h
```

4. Output poses are saved as 4x4 transform matrices in .npy format in the output directory, with the same file names as the inputs (see the sketch below for loading them).
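
A minimal sketch of consuming a saved pose (not part of the repository; the file name is a placeholder): load the 4x4 matrix and map a point from volume coordinates into the canonical frame.

```
import numpy as np

T = np.load("output_dir/example_volume.npy")  # 4x4 rigid transform
R, t = T[:3, :3], T[:3, 3]                    # rotation and translation parts
point = np.array([10.0, 20.0, 30.0])          # a point in volume coordinates
canonical_point = R @ point + t               # mapped to the canonical frame
```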

<br />

---
### Citation/Contact

If you find this work useful for your research, please cite:

**Equivariant Symmetry-Aware Head Pose Estimation for Fetal MRI** \
Muthukrishnan, Gagoski, Lee, Grant, Adalsteinsson, Golland, Billot \
arXiv (2025) \
[ [arxiv](https://arxiv.org/abs/2512.04890) | [bibtex](bibtex.bib) ]

If you have any questions regarding the usage of this code, or any suggestions to improve it, please raise an issue (preferred) or contact us at:

<br />

---
### References
<sup>1</sup> *3D steerable CNNs: Learning rotationally equivariant features in volumetric data* \
Weiler, Geiger, Welling, Boomsma, Cohen \
Advances in Neural Information Processing Systems, 2018

<sup>2</sup> *Leveraging SO(3)-steerable convolutions for pose-robust semantic segmentation in 3D medical data* \
Diaz, Geiger, McKinley \
Journal of Machine Learning in Biomedical Imaging, 2024

<sup>3</sup> *e3nn: Euclidean neural networks* \
Geiger and Smidt \
arXiv, 2022

<sup>4</sup> *Rapid head-pose detection for automated slice prescription of fetal-brain MRI* \
Hoffmann, Abaci Turk, Gagoski, Morgan, Wighton, Tisdall, Reuter, Adalsteinsson, Grant, Wald, van der Kouwe \
International Journal of Imaging Systems and Technology, 2021

<sup>5</sup> *FireANTs: Adaptive Riemannian optimization for multi-scale diffeomorphic registration* \
Jena, Chaudhari, Gee \
arXiv, 2024

<sup>6</sup> *SE(3)-equivariant and noise-invariant 3D rigid motion tracking in brain MRI* \
Billot, Dey, Moyer, Hoffmann, Abaci Turk, Gagoski \
IEEE Transactions on Medical Imaging, 2024

<sup>7</sup> *Real-time deep pose estimation with geodesic loss for image-to-template rigid registration* \
Salehi, Khan, Erdogmus, Gholipour \
IEEE Transactions on Medical Imaging, 2019

<sup>8</sup> *Automatic brain pose estimation in fetal MRI* \
Faghihpirayesh, Karimi, Erdogmus, Gholipour \
Proceedings of SPIE: Medical Imaging: Image Processing, 2023

<sup>9</sup> *Registration by Regression (RbR): A framework for interpretable and flexible atlas registration* \
Gopinath, Hu, Hoffmann, Puonti, Iglesias \
International Workshop on Biomedical Image Registration, 2024

<sup>10</sup> *The developing human connectome project fetal functional MRI release: Methods and data structures* \
Karolis et al. \
Imaging Neuroscience, 2025
bibtex.bib ADDED

@misc{muthukrishnan2025equivariantsymmetryawareheadpose,
  title={Equivariant Symmetry-Aware Head Pose Estimation for Fetal MRI},
  author={Ramya Muthukrishnan and Borjan Gagoski and Aryn Lee and P. Ellen Grant and Elfar Adalsteinsson and Polina Golland and Benjamin Billot},
  year={2025},
  eprint={2512.04890},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2512.04890},
}
environment.yaml ADDED

name: e3pose
channels:
  - defaults
dependencies:
  - python=3.10
  - pandas
  - pip:
      - fvcore
      - iopath
      - aiohttp==3.9.1
      - aiosignal==1.3.1
      - alabaster==0.7.16
      - async-timeout==4.0.3
      - attrs==23.2.0
      - babel==2.16.0
      - batchgenerators==0.25
      - click==8.1.7
      - contourpy==1.1.1
      - cycler==0.12.1
      - Deprecated==1.2.14
      - dicom2nifti==2.4.9
      - docopt==0.6.2
      - docutils==0.19
      - e3nn==0.5.5
      - et_xmlfile==2.0.0
      - exceptiongroup==1.2.2
      - filelock==3.13.1
      - fonttools==4.43.1
      - frozenlist==1.4.1
      - fsspec==2024.2.0
      - humanize==4.9.0
      - imageio==2.32.0
      - imagesize==1.4.1
      - iniconfig==2.1.0
      - Jinja2==3.1.2
      - joblib==1.3.2
      - kiwisolver==1.4.5
      - lazy_loader==0.3
      - lightning==2.5.0.post0
      - lightning-utilities==0.10.1
      - linecache2==1.0.0
      - markdown-it-py==3.0.0
      - MarkupSafe==2.1.3
      - matplotlib==3.10.0
      - mdurl==0.1.2
      - MedPy==0.4.0
      - monai==1.4.0
      - mpmath==1.3.0
      - multidict==6.0.4
      - networkx==3.2.1
      - nibabel==5.1.0
      - nnunet==1.7.1
      - nose==1.3.7
      - numpy==1.26.0
      - nvidia-cublas-cu11==11.10.3.66
      - nvidia-cublas-cu12==12.1.3.1
      - nvidia-cuda-cupti-cu12==12.1.105
      - nvidia-cuda-nvrtc-cu11==11.7.99
      - nvidia-cuda-nvrtc-cu12==12.1.105
      - nvidia-cuda-runtime-cu11==11.7.99
      - nvidia-cuda-runtime-cu12==12.1.105
      - nvidia-cudnn-cu11==8.5.0.96
      - nvidia-cudnn-cu12==8.9.2.26
      - nvidia-cufft-cu12==11.0.2.54
      - nvidia-curand-cu12==10.3.2.106
      - nvidia-cusolver-cu12==11.4.5.107
      - nvidia-cusparse-cu12==12.1.0.106
      - nvidia-nccl-cu12==2.19.3
      - nvidia-nvjitlink-cu12==12.4.127
      - nvidia-nvtx-cu12==12.1.105
      - opencv-python==4.10.0.84
      - openpyxl==3.1.5
      - opt-einsum==3.3.0
      - opt-einsum-fx==0.1.4
      - packaging==23.2
      - pandas==2.1.4
      - patsy==1.0.1
      - Pillow==10.0.1
      - pluggy==1.5.0
      - pydicom==2.4.3
      - Pygments==2.17.2
      - pykwalify==1.8.0
      - pyparsing==3.1.1
      - pyradiomics==3.0.1
      - pytest==8.3.5
      - python-dateutil==2.8.2
      - python-gdcm==3.0.22
      - pytorch-lightning==2.1.3
      - pytz==2023.3.post1
      - PyWavelets==1.6.0
      - PyYAML==6.0.1
      - rich==13.7.0
      - ruamel.yaml==0.18.6
      - ruamel.yaml.clib==0.2.8
      - scikit-image==0.22.0
      - scikit-learn==1.6.0
      - scipy==1.11.2
      - shellingham==1.5.4
      - SimpleITK==2.3.1
      - six==1.16.0
      - snowballstemmer==2.2.0
      - Sphinx==6.2.1
      - sphinxcontrib-applehelp==2.0.0
      - sphinxcontrib-devhelp==2.0.0
      - sphinxcontrib-htmlhelp==2.1.0
      - sphinxcontrib-jsmath==1.0.1
      - sphinxcontrib-qthelp==2.0.0
      - sphinxcontrib-serializinghtml==2.0.0
      - sphstat==1.0.6
      - statsmodels==0.14.4
      - sympy==1.13.1
      - threadpoolctl==3.2.0
      - tifffile==2023.9.26
      - tomli==2.2.1
      - torch==2.2.1
      - torchaudio==2.2.1
      - torchmetrics==1.3.0.post0
      - torchvision==0.17.1
      - torchio==0.20.6
      - traceback2==1.4.0
      - triton==2.2.0
      - typer==0.9.0
      - typing==3.7.4.3
      - typing_extensions==4.12.2
      - unittest2==1.1.0
      - webcolors==24.11.1
      - wrapt==1.16.0
      - yarl==1.9.4
prefix: /path/to/envs/e3pose
requirements.txt ADDED

aiohttp==3.9.1
aiosignal==1.3.1
alabaster==0.7.16
async-timeout==4.0.3
attrs==23.2.0
babel==2.16.0
batchgenerators==0.25
click==8.1.7
contourpy==1.1.1
cycler==0.12.1
Deprecated==1.2.14
dicom2nifti==2.4.9
docopt==0.6.2
docutils==0.19
e3nn==0.5.5
et_xmlfile==2.0.0
exceptiongroup==1.2.2
filelock==3.13.1
fonttools==4.43.1
frozenlist==1.4.1
fsspec==2024.2.0
humanize==4.9.0
imageio==2.32.0
imagesize==1.4.1
iniconfig==2.1.0
Jinja2==3.1.2
joblib==1.3.2
kiwisolver==1.4.5
lazy_loader==0.3
lightning==2.5.0.post0
lightning-utilities==0.10.1
linecache2==1.0.0
markdown-it-py==3.0.0
MarkupSafe==2.1.3
matplotlib==3.10.0
mdurl==0.1.2
MedPy==0.4.0
monai==1.4.0
mpmath==1.3.0
multidict==6.0.4
networkx==3.2.1
nibabel==5.1.0
nnunet==1.7.1
nose==1.3.7
numpy==1.26.0
nvidia-cublas-cu11==11.10.3.66
nvidia-cublas-cu12==12.1.3.1
nvidia-cuda-cupti-cu12==12.1.105
nvidia-cuda-nvrtc-cu11==11.7.99
nvidia-cuda-nvrtc-cu12==12.1.105
nvidia-cuda-runtime-cu11==11.7.99
nvidia-cuda-runtime-cu12==12.1.105
nvidia-cudnn-cu11==8.5.0.96
nvidia-cudnn-cu12==8.9.2.26
nvidia-cufft-cu12==11.0.2.54
nvidia-curand-cu12==10.3.2.106
nvidia-cusolver-cu12==11.4.5.107
nvidia-cusparse-cu12==12.1.0.106
nvidia-nccl-cu12==2.19.3
nvidia-nvjitlink-cu12==12.4.127
nvidia-nvtx-cu12==12.1.105
opencv-python==4.10.0.84
openpyxl==3.1.5
opt-einsum==3.3.0
opt-einsum-fx==0.1.4
packaging==23.2
pandas==2.1.4
patsy==1.0.1
Pillow==10.0.1
pluggy==1.5.0
pydicom==2.4.3
Pygments==2.17.2
pykwalify==1.8.0
pyparsing==3.1.1
pyradiomics==3.0.1
pytest==8.3.5
python-dateutil==2.8.2
python-gdcm==3.0.22
pytorch-lightning==2.1.3
pytz==2023.3.post1
PyWavelets==1.6.0
PyYAML==6.0.1
rich==13.7.0
ruamel.yaml==0.18.6
ruamel.yaml.clib==0.2.8
scikit-image==0.22.0
scikit-learn==1.6.0
scipy==1.11.2
shellingham==1.5.4
SimpleITK==2.3.1
six==1.16.0
snowballstemmer==2.2.0
Sphinx==6.2.1
sphinxcontrib-applehelp==2.0.0
sphinxcontrib-devhelp==2.0.0
sphinxcontrib-htmlhelp==2.1.0
sphinxcontrib-jsmath==1.0.1
sphinxcontrib-qthelp==2.0.0
sphinxcontrib-serializinghtml==2.0.0
sphstat==1.0.6
statsmodels==0.14.4
sympy==1.13.1
threadpoolctl==3.2.0
tifffile==2023.9.26
tomli==2.2.1
--extra-index-url https://download.pytorch.org/whl/cu121
torch==2.2.1
torchaudio==2.2.1
torchmetrics==1.3.0.post0
torchvision==0.17.1
torchio==0.20.6
traceback2==1.4.0
triton==2.2.0
typer==0.9.0
typing==3.7.4.3
typing_extensions==4.12.2
unittest2==1.1.0
webcolors==24.11.1
wrapt==1.16.0
yarl==1.9.4