jemartin committed
Commit d61a391 · verified · 1 Parent(s): 5d5a396

Upload README.md with huggingface_hub

Files changed (1): README.md (+113 -0)

---
language: en
license: apache-2.0
model_name: caffenet-12-int8.onnx
tags:
- validated
- vision
- classification
- caffenet
---
<!--- SPDX-License-Identifier: BSD-3-Clause -->

# CaffeNet

|Model|Download|Download (with sample test data)|ONNX version|Opset version|Top-1 accuracy (%)|Top-5 accuracy (%)|
|-----|--------|--------------------------------|------------|-------------|------------------|------------------|
|CaffeNet|[238 MB](model/caffenet-3.onnx)|[244 MB](model/caffenet-3.tar.gz)|1.1|3| | |
|CaffeNet|[238 MB](model/caffenet-6.onnx)|[244 MB](model/caffenet-6.tar.gz)|1.1.2|6| | |
|CaffeNet|[238 MB](model/caffenet-7.onnx)|[244 MB](model/caffenet-7.tar.gz)|1.2|7| | |
|CaffeNet|[238 MB](model/caffenet-8.onnx)|[244 MB](model/caffenet-8.tar.gz)|1.3|8| | |
|CaffeNet|[238 MB](model/caffenet-9.onnx)|[244 MB](model/caffenet-9.tar.gz)|1.4|9| | |
|CaffeNet|[233 MB](model/caffenet-12.onnx)|[216 MB](model/caffenet-12.tar.gz)|1.9|12|56.27|79.52|
|CaffeNet-int8|[58 MB](model/caffenet-12-int8.onnx)|[39 MB](model/caffenet-12-int8.tar.gz)|1.9|12|56.22|79.52|
|CaffeNet-qdq|[59 MB](model/caffenet-12-qdq.onnx)|[44 MB](model/caffenet-12-qdq.tar.gz)|1.9|12|56.25|79.45|
> Compared with the fp32 CaffeNet, the int8 CaffeNet's Top-1 accuracy drop ratio is 0.09%, its Top-5 accuracy drop ratio is 0.13%, and its performance improvement is 3.08x.
>
> **Note**
>
> Different preprocessing methods lead to different accuracies; the accuracies in the table above depend on this specific [preprocessing method](https://github.com/intel/neural-compressor/blob/master/examples/onnxrt/image_recognition/onnx_model_zoo/caffenet/quantization/ptq/main.py).
>
> Performance depends on the test hardware. The performance data above was collected with an Intel® Xeon® Platinum 8280 processor (1 socket, 4 cores per instance) on CentOS Linux 8.3, with a data batch size of 1.

## Description
CaffeNet is a variant of AlexNet. AlexNet is a convolutional neural network for image classification that competed in the ImageNet Large Scale Visual Recognition Challenge in 2012.

Differences from AlexNet:
- CaffeNet is not trained with the relighting data augmentation;
- the order of the pooling and normalization layers is switched (in CaffeNet, pooling is done before normalization).

### Dataset
[ILSVRC2012](http://www.image-net.org/challenges/LSVRC/2012/)

## Source
Caffe BVLC CaffeNet ==> Caffe2 CaffeNet ==> ONNX CaffeNet

## Model input and output
### Input
```
data_0: float[1, 3, 224, 224]
```
### Output
```
prob_1: float[1, 1000]
```
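
A quick way to sanity-check the declared names and shapes is to run the model with onnxruntime on a random input. A minimal sketch, assuming `caffenet-12.onnx` has been downloaded locally (the file name and the random tensor are illustrative only):

```python
# Minimal sketch: run CaffeNet on a random input and check the output shape.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("caffenet-12.onnx")

# Feed a random tensor matching the declared input name, shape and dtype.
dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)
(probs,) = session.run(["prob_1"], {"data_0": dummy_input})

print(probs.shape)  # expected: (1, 1000)
```
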
### Pre-processing steps
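The accuracies reported above were measured with the preprocessing implemented in the Intel® Neural Compressor script linked in the note above. As a rough illustration only (an assumption, not the exact evaluated pipeline), a common Caffe-style pipeline resizes the shorter side to 256, center-crops to 224x224, converts RGB to BGR, subtracts approximate per-channel means, and lays the result out as NCHW without scaling to [0, 1]:

```python
# Illustrative Caffe-style preprocessing (approximate; the evaluated accuracy
# uses the preprocessing of the linked main.py, which may differ in detail).
import numpy as np
from PIL import Image

def preprocess(image_path):
    img = Image.open(image_path).convert("RGB")

    # Resize the shorter side to 256, then center-crop 224x224.
    w, h = img.size
    scale = 256 / min(w, h)
    img = img.resize((round(w * scale), round(h * scale)))
    w, h = img.size
    left, top = (w - 224) // 2, (h - 224) // 2
    img = img.crop((left, top, left + 224, top + 224))

    x = np.asarray(img, dtype=np.float32)[:, :, ::-1]        # RGB -> BGR, values in [0, 255]
    x -= np.array([104.0, 117.0, 123.0], dtype=np.float32)   # approximate BGR channel means
    return x.transpose(2, 0, 1)[np.newaxis]                  # HWC -> NCHW, add batch dimension
```
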
### Post-processing steps
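`prob_1` already looks like class probabilities (the output of the Caffe `prob` softmax layer), so post-processing mainly amounts to selecting the top-k classes. A minimal sketch, assuming `probs` has shape `(1, 1000)` and `imagenet_classes` is a list of the 1000 ILSVRC2012 label names (not included here):

```python
# Illustrative top-5 readout from the (1, 1000) probability vector.
import numpy as np

def top5(probs, imagenet_classes):
    idx = np.argsort(probs[0])[::-1][:5]  # indices of the 5 highest probabilities
    return [(imagenet_classes[i], float(probs[0][i])) for i in idx]
```
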
### Sample test data
Randomly generated sample test data (a loading sketch follows the list):
- test_data_set_0
- test_data_set_1
- test_data_set_2
- test_data_set_3
- test_data_set_4
- test_data_set_5

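Each `test_data_set_*` directory follows the usual ONNX Model Zoo layout of serialized `TensorProto` pairs (e.g. `input_0.pb` / `output_0.pb`). A minimal loading sketch with the `onnx` package, assuming that layout:

```python
# Minimal sketch: load one input/output pair from the sample test data.
import onnx
from onnx import numpy_helper

def load_tensor(path):
    tensor = onnx.TensorProto()
    with open(path, "rb") as f:
        tensor.ParseFromString(f.read())
    return numpy_helper.to_array(tensor)

sample_input = load_tensor("test_data_set_0/input_0.pb")   # shape (1, 3, 224, 224)
expected_out = load_tensor("test_data_set_0/output_0.pb")  # shape (1, 1000)
```
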
## Results/accuracy on test set
This model is a snapshot of iteration 310,000. The best validation performance during training was at iteration 313,000, with a validation accuracy of 57.412% and a loss of 1.82328. This model obtains a top-1 accuracy of 57.4% and a top-5 accuracy of 80.4% on the validation set, using just the center crop. (Averaging over 10 crops, (4 + 1 center) * 2 mirrors, should give slightly higher accuracy still.)

## Quantization
CaffeNet-int8 and CaffeNet-qdq are obtained by quantizing the fp32 CaffeNet model. We use [Intel® Neural Compressor](https://github.com/intel/neural-compressor) with the onnxruntime backend to perform quantization. View the [instructions](https://github.com/intel/neural-compressor/blob/master/examples/onnxrt/image_recognition/onnx_model_zoo/caffenet/quantization/ptq/README.md) to understand how to use Intel® Neural Compressor for quantization.

### Environment
* onnx: 1.9.0
* onnxruntime: 1.8.0

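To confirm that a local environment matches the versions above (a quick check, not a requirement imposed by the model itself):

```python
# Print installed versions to compare against the ones listed above.
import onnx
import onnxruntime

print("onnx:", onnx.__version__)                # expected: 1.9.0
print("onnxruntime:", onnxruntime.__version__)  # expected: 1.8.0
```
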
### Prepare model
```shell
wget https://github.com/onnx/models/raw/main/vision/classification/caffenet/model/caffenet-12.onnx
```

### Model quantize
Make sure to specify the appropriate dataset path in the configuration file.
```bash
# --input_model is the fp32 model path (*.onnx)
bash run_tuning.sh --input_model=path/to/model \
                   --config=caffenet.yaml \
                   --data_path=/path/to/imagenet \
                   --label_path=/path/to/imagenet/label \
                   --output_model=path/to/save
```

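After quantization it is worth spot-checking that the int8 model tracks the fp32 model. A minimal sketch with onnxruntime, assuming illustrative file names and that the quantized model keeps the original input/output names:

```python
# Rough sanity check: compare fp32 and int8 predictions on the same input.
import numpy as np
import onnxruntime as ort

x = np.random.rand(1, 3, 224, 224).astype(np.float32)  # or a real preprocessed image

fp32 = ort.InferenceSession("caffenet-12.onnx")
int8 = ort.InferenceSession("caffenet-12-int8.onnx")    # illustrative output path

p_fp32 = fp32.run(["prob_1"], {"data_0": x})[0]
p_int8 = int8.run(["prob_1"], {"data_0": x})[0]

print("top-1 match:", int(p_fp32.argmax()) == int(p_int8.argmax()))
print("max abs diff:", float(np.abs(p_fp32 - p_int8).max()))
```
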
## References
* [ImageNet Classification with Deep Convolutional Neural Networks](https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf)
* [Intel® Neural Compressor](https://github.com/intel/neural-compressor)

## Contributors
* [mengniwang95](https://github.com/mengniwang95) (Intel)
* [yuwenzho](https://github.com/yuwenzho) (Intel)
* [airMeng](https://github.com/airMeng) (Intel)
* [ftian1](https://github.com/ftian1) (Intel)
* [hshen14](https://github.com/hshen14) (Intel)

## License
[BSD-3](LICENSE)