---
license: cc-by-nc-nd-4.0
language:
  - en
pipeline_tag: image-feature-extraction
library_name: timm
---

# Model Card for StainNet

`StainNet` is a lightweight foundation model for special staining histology images.

arXiv preprint: [https://arxiv.org/abs/2512.10326](https://arxiv.org/abs/2512.10326)

The model is a Vision Transformer Small/16 with DINO [1] self-supervised pre-training on 1,418,938 patch images from 20,231 special staining whole slide images (WSIs) in HISTAI [2].

## Using StainNet to extract features from special staining pathology images

```python
import timm
import torch
import torchvision.transforms as transforms

# Load the pretrained ViT-S/16 backbone from the Hugging Face Hub
model = timm.create_model('hf_hub:JWonderLand/StainNet', pretrained=True)

# Preprocessing for real patch images: resize, tensor conversion, normalization
preprocess = transforms.Compose([
    transforms.Resize(224, interpolation=transforms.InterpolationMode.BICUBIC),
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5)),
])

model = model.to('cuda')
model.eval()

# Dummy input; for real images, apply `preprocess` to a PIL image instead
x = torch.randn([1, 3, 224, 224]).cuda()
with torch.no_grad():
    output = model(x)  # [1, 384] feature vector
```

## Citation

If `StainNet` is helpful to you, please cite our work.

```
@misc{li2025stainnet,
      title={StainNet: A Special Staining Self-Supervised Vision Transformer for Computational Pathology},
      author={Jiawen Li and Jiali Hu and Xitong Ling and Yongqiang Lv and Yuxuan Chen and Yizhi Wang and Tian Guan and Yifei Liu and Yonghong He},
      year={2025},
      eprint={2512.10326},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2512.10326},
}
```

## References

[1] Caron, M., Touvron, H., Misra, I., Jégou, H., Mairal, J., Bojanowski, P., & Joulin, A. (2021). Emerging properties in self-supervised vision transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 9650-9660).

[2] Nechaev, D., Pchelnikov, A., & Ivanova, E. (2025). HISTAI: An open-source, large-scale whole slide image dataset for computational pathology. arXiv preprint arXiv:2505.12120.
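
## Extracting features from an image file

The snippet above feeds a random tensor to the model; in practice the `preprocess` pipeline defined on this card is applied to a patch image first. Below is a minimal sketch under that assumption; the file path `patch.png` is hypothetical, and a square patch is assumed so that `Resize(224)` yields a 224×224 input.

```python
from PIL import Image
import timm
import torch
import torchvision.transforms as transforms

# Load the pretrained StainNet backbone from the Hugging Face Hub
model = timm.create_model('hf_hub:JWonderLand/StainNet', pretrained=True)
model = model.to('cuda').eval()

# Same preprocessing as on this card
preprocess = transforms.Compose([
    transforms.Resize(224, interpolation=transforms.InterpolationMode.BICUBIC),
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5)),
])

# 'patch.png' is a hypothetical path to a square special staining patch image
img = Image.open('patch.png').convert('RGB')
x = preprocess(img).unsqueeze(0).to('cuda')  # [1, 3, 224, 224]

with torch.no_grad():
    features = model(x)  # [1, 384] patch-level feature vector
```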