mjpyeon committed · verified · Commit f10756c · Parent(s): 512618e

Update README.md

Files changed (1): README.md (+82 −80)
---
license: other
license_name: exaonepath
license_link: LICENSE
tags:
- lg-ai
- EXAONE-Path-2.0
- pathology
---

# EXAONE Path 2.0

## Introduction
In digital pathology, whole-slide images (WSIs) are difficult to handle due to their gigapixel scale, so most approaches train patch encoders via self-supervised learning (SSL) and then aggregate the patch-level embeddings via multiple instance learning (MIL) or slide encoders for downstream tasks.
However, patch-level SSL may overlook complex domain-specific features that are essential for biomarker prediction, such as mutation status and molecular characteristics, because SSL methods rely only on basic augmentations designed for natural images and applied to small patch-level regions.
Moreover, SSL methods remain less data-efficient than fully supervised approaches, requiring extensive computational resources and datasets to achieve competitive performance.
To address these limitations, we present EXAONE Path 2.0, a pathology foundation model that learns patch-level representations under direct slide-level supervision.
Using only 35k WSIs for training, EXAONE Path 2.0 achieves state-of-the-art average performance across 10 biomarker prediction tasks, demonstrating remarkable data efficiency.

## Quickstart
Load EXAONE Path 2.0 and extract features.

### 1. Prerequisites ###
- NVIDIA GPU with 12GB+ VRAM
- Python 3.12+

Note: This implementation requires an NVIDIA GPU and drivers. The provided environment setup uses CUDA-enabled PyTorch, so an NVIDIA GPU is mandatory for running the model.

### 2. Setup Python environment ###
```bash
git clone https://github.com/LG-AI-EXAONE/EXAONE-Path-2.0.git
cd EXAONE-Path-2.0
pip install -r requirements.txt
```

### 3. Load the model & Inference
```python
from exaonepath import EXAONEPathV20

# Hugging Face access token with permission to access the model repo
hf_token = "YOUR_HUGGING_FACE_ACCESS_TOKEN"
model = EXAONEPathV20.from_pretrained("LGAI-EXAONE/EXAONE-Path-2.0", use_auth_token=hf_token)

# Run feature extraction on a whole-slide image
svs_path = "samples/sample.svs"
patch_features = model(svs_path)[0]
```
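
The model returns patch-level embeddings rather than a single slide vector. As the introduction notes, these are typically aggregated for downstream tasks; below is a minimal sketch of the simplest aggregation (mean pooling), assuming `patch_features` behaves like an array of shape `(num_patches, dim)`. The function name and the toy values are illustrative, not part of this repo's API:

```python
import numpy as np

def mean_pool(patch_features: np.ndarray) -> np.ndarray:
    """Aggregate (num_patches, dim) patch embeddings into one slide-level vector."""
    return patch_features.mean(axis=0)

# Toy stand-in for model output: 4 patches with 3-dim embeddings
patch_features = np.array([
    [1.0, 2.0, 3.0],
    [3.0, 2.0, 1.0],
    [0.0, 0.0, 0.0],
    [4.0, 4.0, 4.0],
])
slide_vector = mean_pool(patch_features)
print(slide_vector)  # [2. 2. 2.]
```

In practice, more expressive aggregators (e.g. attention-based MIL) are commonly used on top of patch features; mean pooling is just the simplest baseline.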

## Model Performance Comparison

Performance of EXAONE Path 2.0 on 10 slide-level benchmarks (AUROC scores):

| **Benchmarks** | **TITAN** | **PRISM** | **CHIEF** | **Prov-GigaPath** | **UNI2-h** | **EXAONE Path 1.0** | **EXAONE Path 2.0** |
|---|---|---|---|---|---|---|---|
| LUAD-TMB-USA1 | 0.690 | 0.645 | 0.650 | 0.674 | 0.669 | 0.692 | 0.664 |
| LUAD-EGFR-USA1 | 0.754 | 0.815 | 0.784 | 0.709 | 0.827 | 0.784 | 0.853 |
| LUAD-KRAS-USA2 | 0.541 | 0.623 | 0.468 | 0.511 | 0.469 | 0.527 | 0.645 |
| CRC-MSI-KOR | 0.937 | 0.943 | 0.927 | 0.954 | 0.981 | 0.972 | 0.938 |
| BRCA-TP53-CPTAC | 0.788 | 0.842 | 0.788 | 0.739 | 0.808 | 0.766 | 0.757 |
| BRCA-PIK3CA-CPTAC | 0.758 | 0.893 | 0.702 | 0.735 | 0.857 | 0.735 | 0.804 |
| RCC-PBRM1-CPTAC | 0.638 | 0.557 | 0.513 | 0.527 | 0.501 | 0.526 | 0.583 |
| RCC-BAP1-CPTAC | 0.719 | 0.769 | 0.731 | 0.697 | 0.716 | 0.719 | 0.807 |
| COAD-KRAS-CPTAC | 0.764 | 0.744 | 0.699 | 0.815 | 0.943 | 0.767 | 0.912 |
| COAD-TP53-CPTAC | 0.889 | 0.816 | 0.701 | 0.712 | 0.783 | 0.819 | 0.875 |
| **Average** | 0.748 | 0.765 | 0.696 | 0.707 | 0.755 | 0.731 | **0.784** |
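
The **Average** row can be reproduced directly from the per-task rows; for example, checking the EXAONE Path 2.0 column:

```python
# EXAONE Path 2.0 AUROC values, in row order from the table above
exaone_path_2_auroc = [0.664, 0.853, 0.645, 0.938, 0.757,
                       0.804, 0.583, 0.807, 0.912, 0.875]
average = sum(exaone_path_2_auroc) / len(exaone_path_2_auroc)
print(f"{average:.3f}")  # 0.784
```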

<br>


## License
The model is licensed under the [EXAONEPath AI Model License Agreement 1.0 - NC](./LICENSE).

<!-- ## Citation
If you find EXAONE Path 2.0 useful, please cite it using this BibTeX:
```
@article{yun2024exaonepath,
    title={EXAONE Path 2.0 Technical Report},
    author={Yun, Juseung and Hu, Yi and Kim, Jinhyung and Jang, Jongseong and Lee, Soonyoung},
    journal={arXiv preprint arXiv:2408.00380},
    year={2024}
}
``` -->

## Contact
LG AI Research Technical Support: <a href="mailto:[email protected]">[email protected]</a>