---
language: en
license: apache-2.0
model_name: version-RFB-320-int8.onnx
tags:
- validated
- vision
- body_analysis
- ultraface
---
<!--- SPDX-License-Identifier: MIT -->
# Ultra-lightweight face detection model
## Description
This model is a lightweight face detection model designed for edge computing devices.
## Model
| Model | Download | Download (with sample test data) | ONNX version | Opset version |
| ------------- | ------------- | ------------- | ------------- | ------------- |
|version-RFB-320| [1.21 MB](models/version-RFB-320.onnx) | [1.92 MB](models/version-RFB-320.tar.gz) | 1.4 | 9 |
|version-RFB-640| [1.51 MB](models/version-RFB-640.onnx) | [4.59 MB](models/version-RFB-640.tar.gz) | 1.4 | 9 |
|version-RFB-320-int8| [0.44 MB](models/version-RFB-320-int8.onnx) | [1.2 MB](models/version-RFB-320-int8.tar.gz) | 1.14 | 12 |
### Dataset
The training set is the VOC format data set generated by using the cleaned widerface labels provided by [Retinaface](https://arxiv.org/pdf/1905.00641.pdf) in conjunction with the widerface [dataset](http://shuoyang1213.me/WIDERFACE/).
### Source
You can find the source code [here](https://github.com/Linzaer/Ultra-Light-Fast-Generic-Face-Detector-1MB).
### Demo
Run the [demo.py](demo.py) Python script for an example.
## Inference
### Input
Input tensor is `1 x 3 x height x width` with mean values `127, 127, 127` and scale factor `1.0 / 128`. The input image has to be converted to `RGB` format and resized to `320 x 240` pixels for the **version-RFB-320** model (or `640 x 480` for the **version-RFB-640** model).
### Preprocessing
Given a path `image_path` to the image you would like to score:
```python
import cv2
import numpy as np

# Read the image with OpenCV and convert BGR -> RGB.
orig_image = cv2.imread(image_path)
image = cv2.cvtColor(orig_image, cv2.COLOR_BGR2RGB)
image = cv2.resize(image, (320, 240))  # width x height for version-RFB-320
# Normalize: subtract the mean and scale by 1 / 128.
image = (image - np.array([127, 127, 127])) / 128
# HWC -> CHW, add the batch dimension, and cast to float32.
image = np.transpose(image, [2, 0, 1])
image = np.expand_dims(image, axis=0)
image = image.astype(np.float32)
```
### Output
The model outputs two arrays: scores of shape `(1 x 4420 x 2)` and boxes of shape `(1 x 4420 x 4)`.
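For reference, a minimal inference sketch with `onnxruntime` is shown below; it assumes the preprocessed `image` tensor from the step above and the **version-RFB-320** model file in the working directory (the input name is read from the session rather than hard-coded):
```python
import onnxruntime as ort

# Run the model on the preprocessed input tensor from the previous step.
session = ort.InferenceSession("version-RFB-320.onnx")
input_name = session.get_inputs()[0].name
scores, boxes = session.run(None, {input_name: image})
print(scores.shape, boxes.shape)  # expected: (1, 4420, 2) (1, 4420, 4)
```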
### Postprocessing
In postprocessing, confidence-threshold filtering and [non-maximum suppression](dependencies/box_utils.py) are applied to the scores and boxes arrays, as sketched below.
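The helpers in [box_utils.py](dependencies/box_utils.py) implement this step; the following is only an illustrative stand-in. It assumes the `scores`, `boxes`, and `orig_image` variables from the previous steps, that the box coordinates are corner-form values normalized to `[0, 1]`, and it uses a plain hard-NMS with example thresholds:
```python
import numpy as np

def hard_nms(boxes, scores, iou_threshold=0.5):
    """Greedy non-maximum suppression; returns indices of boxes to keep."""
    x1, y1, x2, y2 = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[1:][iou <= iou_threshold]
    return keep

# Column 1 of the scores array is the face-class probability.
probs = scores[0, :, 1]
mask = probs > 0.7                              # confidence threshold (tunable)
candidate_boxes, candidate_probs = boxes[0][mask], probs[mask]
picked = hard_nms(candidate_boxes, candidate_probs)
# Scale the normalized boxes back to the original image size.
h, w = orig_image.shape[:2]
final_boxes = candidate_boxes[picked] * np.array([w, h, w, h])
final_probs = candidate_probs[picked]
```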
## Quantization
version-RFB-320-int8 is obtained by quantizing the fp32 version-RFB-320 model. We use [Intel® Neural Compressor](https://github.com/intel/neural-compressor) with the onnxruntime backend to perform quantization. See the [instructions](https://github.com/intel/neural-compressor/blob/master/examples/onnxrt/body_analysis/onnx_model_zoo/ultraface/quantization/ptq_static/README.md) to understand how to use Intel® Neural Compressor for quantization.
### Prepare Model
Download model from [ONNX Model Zoo](https://github.com/onnx/models).
```shell
wget https://github.com/onnx/models/raw/main/vision/body_analysis/ultraface/models/version-RFB-320.onnx
```
Convert the opset version to 12 for broader quantization support.
```python
import onnx
from onnx import version_converter
model = onnx.load('version-RFB-320.onnx')
model = version_converter.convert_version(model, 12)
onnx.save_model(model, 'version-RFB-320-12.onnx')
```
### Model quantization
```bash
cd neural-compressor/examples/onnxrt/body_analysis/onnx_model_zoo/ultraface/quantization/ptq_static
# --input_model: model path as *.onnx
bash run_tuning.sh --input_model=path/to/model \
                   --dataset_location=/path/to/data \
                   --output_model=path/to/save
```
## Contributors
* [asiryan](https://github.com/asiryan)
* [yuwenzho](https://github.com/yuwenzho) (Intel)
* [ftian1](https://github.com/ftian1) (Intel)
* [hshen14](https://github.com/hshen14) (Intel)
## License
MIT