---
tags:
- image-classification
- birder
- pytorch
library_name: birder
license: apache-2.0
---

# Model Card for vit_parallel_s16_18x2_ls_avg_data2vec-intermediate-il-all

A ViT Parallel s16 18x2 image classification model. The model follows a three-stage training process: first data2vec pretraining, then intermediate training on a large-scale dataset containing diverse bird species from around the world, and finally fine-tuning on the `il-all` dataset. This dataset encompasses all relevant bird species found in Israel, including rarities.

The species list is derived from data available at <https://www.israbirding.com/checklist/>.

## Model Details

- **Model Type:** Image classification and detection backbone
- **Model Stats:**
  - Params (M): 64.6
  - Input image size: 384 x 384
- **Dataset:** il-all (550 classes)
  - Intermediate training involved ~8000 species from all over the world

- **Papers:**
  - Three things everyone should know about Vision Transformers: <https://arxiv.org/abs/2203.09795>
  - data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language: <https://arxiv.org/abs/2202.03555>

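The parameter count is easy to double-check from the loaded network. A minimal sketch; it assumes only that the model loads as a standard PyTorch module, as in the usage examples below:

```python
import birder

# Load the model as in the usage examples below
(net, model_info) = birder.load_pretrained_model(
    "vit_parallel_s16_18x2_ls_avg_data2vec-intermediate-il-all", inference=True
)

# Count the parameters of the underlying PyTorch module; expect roughly 64.6M
num_params = sum(p.numel() for p in net.parameters())
print(f"{num_params / 1e6:.1f}M parameters")
```
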
## Model Usage

### Image Classification

```python
import birder
from birder.inference.classification import infer_image

(net, model_info) = birder.load_pretrained_model("vit_parallel_s16_18x2_ls_avg_data2vec-intermediate-il-all", inference=True)

# Get the image size the model was trained on
size = birder.get_size_from_signature(model_info.signature)

# Create an inference transform
transform = birder.classification_transform(size, model_info.rgb_stats)

image = "path/to/image.jpeg"  # or a PIL image, must be loaded in RGB format
(out, _) = infer_image(net, image, transform)
# out is a NumPy array with shape (1, 550), representing class probabilities
```

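To turn the probability vector into readable predictions, a plain NumPy top-k lookup is enough. A minimal sketch continuing from the snippet above; `class_names` is a hypothetical list of the 550 labels in class-index order, not something provided by the example:

```python
import numpy as np

# "out" is the (1, 550) probability array from infer_image above;
# "class_names" is a hypothetical list of the 550 labels in index order
probs = out[0]
top5 = np.argsort(probs)[::-1][:5]
for idx in top5:
    print(f"{class_names[idx]}: {probs[idx]:.3f}")
```
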
### Image Embeddings

```python
import birder
from birder.inference.classification import infer_image

(net, model_info) = birder.load_pretrained_model("vit_parallel_s16_18x2_ls_avg_data2vec-intermediate-il-all", inference=True)

# Get the image size the model was trained on
size = birder.get_size_from_signature(model_info.signature)

# Create an inference transform
transform = birder.classification_transform(size, model_info.rgb_stats)

image = "path/to/image.jpeg"  # or a PIL image
(out, embedding) = infer_image(net, image, transform, return_embedding=True)
# embedding is a NumPy array with shape (1, 384)
```

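Embeddings like these are typically compared with cosine similarity, e.g. for image retrieval or clustering. A minimal NumPy sketch, assuming `emb_a` and `emb_b` are two (1, 384) embeddings produced as above:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Flatten the (1, 384) arrays and compare directions
    a = a.ravel()
    b = b.ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# emb_a and emb_b come from infer_image(..., return_embedding=True)
print(cosine_similarity(emb_a, emb_b))
```
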
### Detection Feature Map

```python
from PIL import Image
import birder

(net, model_info) = birder.load_pretrained_model("vit_parallel_s16_18x2_ls_avg_data2vec-intermediate-il-all", inference=True)

# Get the image size the model was trained on
size = birder.get_size_from_signature(model_info.signature)

# Create an inference transform
transform = birder.classification_transform(size, model_info.rgb_stats)

image = Image.open("path/to/image.jpeg")
features = net.detection_features(transform(image).unsqueeze(0))
# features is a dict (stage name -> torch.Tensor)
print([(k, v.size()) for k, v in features.items()])
# Output example:
# [('neck', torch.Size([1, 384, 24, 24]))]
```

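The 24 x 24 spatial grid follows from the 16-pixel patch size: 384 / 16 = 24. If a single global descriptor is needed from the feature map, one common option is average pooling over the spatial dimensions. A sketch continuing from the snippet above, not a birder API:

```python
# "features" is the dict returned by net.detection_features(...) above
neck = features["neck"]         # shape (1, 384, 24, 24)
pooled = neck.mean(dim=(2, 3))  # average over the 24 x 24 grid -> (1, 384)
print(pooled.size())            # torch.Size([1, 384])
```
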
## Citation

```bibtex
@misc{touvron2022thingsknowvisiontransformers,
      title={Three things everyone should know about Vision Transformers},
      author={Hugo Touvron and Matthieu Cord and Alaaeldin El-Nouby and Jakob Verbeek and Hervé Jégou},
      year={2022},
      eprint={2203.09795},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2203.09795},
}

@misc{baevski2022data2vec,
      title={data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language},
      author={Alexei Baevski and Wei-Ning Hsu and Qiantong Xu and Arun Babu and Jiatao Gu and Michael Auli},
      year={2022},
      eprint={2202.03555},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2202.03555},
}
```