Update README.md
README.md
CHANGED
---
license: cc0-1.0
---

## Dataset Description

This dataset contains refined Panoptils manual regions and bootstrapped nuclei labels for training panoptic segmentation models. All cases with incomplete or insufficient annotations (369 in total) have been excluded to ensure high annotation quality. The segmentation masks have been algorithmically post-processed to fill excessive background regions present in the original annotations, resulting in more contiguous tissue regions. Each sample includes the original image together with instance, semantic, and cell type segmentation masks, serialized as PNG bytes in a Parquet file for efficient access.

The example below loads the Parquet file, decodes one sample, and visualizes the image alongside its instance, cell type, and semantic masks:

```python
import io

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from PIL import Image
from skimage.color import label2rgb

# Load the Parquet file
df = pd.read_parquet("/path/to/panoptils_refined.parquet")

# Decode PNG bytes to a numpy array
def png_bytes_to_array(png_bytes):
    img = Image.open(io.BytesIO(png_bytes))
    return np.array(img)

# Example: decode the first row
row = df.iloc[0]
image = png_bytes_to_array(row["image"])
inst_mask = png_bytes_to_array(row["inst"])
type_mask = png_bytes_to_array(row["type"])
sem_mask = png_bytes_to_array(row["sem"])

# Show the image next to the instance, cell type, and semantic masks
fig, axes = plt.subplots(1, 4, figsize=(20, 5))
axes[0].imshow(image)
axes[1].imshow(label2rgb(inst_mask, bg_label=0))
axes[2].imshow(label2rgb(type_mask, bg_label=0))
axes[3].imshow(label2rgb(sem_mask, bg_label=0))

for ax in axes:
    ax.axis("off")

plt.tight_layout()
plt.show()
```
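
To sanity-check a decoded sample, you can inspect the label values directly. The snippet below is a small sketch that reuses the variables from the example above; the assumption that `0` marks the background follows the `bg_label=0` argument used there.

```python
# Inspect the decoded arrays (reuses image, inst_mask, type_mask, sem_mask from above)
print("image:", image.shape, image.dtype)

# Instance mask: each nucleus carries its own integer ID, with 0 assumed to be background
inst_ids = np.unique(inst_mask)
print("nuclei in this patch:", np.count_nonzero(inst_ids))

# Cell type and semantic masks: one integer label per class
print("cell type labels present:", np.unique(type_mask))
print("semantic labels present:", np.unique(sem_mask))
```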

**Note:** The original fold splits are not usable with this dataset, since many images were excluded during curation. However, you can easily create your own training/validation split using the `slide_name` and `hospital` columns to avoid data leakage. Here is an example using `sklearn`'s `GroupShuffleSplit` to split by slide:

```python
from sklearn.model_selection import GroupShuffleSplit

# Hold out roughly 20% of the slides for validation; grouping by slide keeps all
# patches from the same slide on the same side of the split
gss = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
train_idx, val_idx = next(gss.split(df, groups=df["slide_name"]))

df_train = df.iloc[train_idx]
df_val = df.iloc[val_idx]

print("Train samples:", len(df_train))
print("Validation samples:", len(df_val))
```
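
As a quick sanity check (a hypothetical addition, not part of the original example), you can confirm that no slide ends up in both splits:

```python
# No slide_name should appear in both the training and validation sets
overlap = set(df_train["slide_name"]) & set(df_val["slide_name"])
assert not overlap, f"Slides present in both splits: {sorted(overlap)}"
```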

You can also use the `hospital` column as the grouping key if you want to split by hospital.
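
For example, a hospital-held-out cross-validation could look like the sketch below. It assumes the `hospital` column holds one site identifier per row and uses `sklearn`'s `GroupKFold`; this is only an illustration, not a split prescribed by the dataset.

```python
from sklearn.model_selection import GroupKFold

# Each fold holds out a disjoint set of hospitals
# (requires at least n_splits distinct hospitals; LeaveOneGroupOut is an alternative)
gkf = GroupKFold(n_splits=5)

for fold, (train_idx, val_idx) in enumerate(gkf.split(df, groups=df["hospital"])):
    held_out = sorted(df.iloc[val_idx]["hospital"].unique())
    print(f"Fold {fold}: {len(train_idx)} train / {len(val_idx)} val samples, "
          f"held-out hospitals: {held_out}")
```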