Convert detectors to factory pattern, ability to set different model for each detector (#4635)

* refactor detectors

* move create_detector and DetectorTypeEnum

* fixed code formatting

* add detector model config models

* fix detector unit tests

* adjust SharedMemory size to largest detector model shape

* fix detector model config defaults

* enable auto-discovery of detectors

* simplify config

* simplify config changes further

* update detectors docs; detect detector configs dynamic

* add suggested changes

* remove custom detector doc

* fix grammar, adjust device defaults
Dennis George authored 2022-12-15 07:12:52 -06:00, committed by GitHub
parent 43c2761308
commit 420bcd7aa0
15 changed files with 404 additions and 229 deletions


@@ -3,11 +3,38 @@ id: detectors
title: Detectors
---
Frigate provides the following builtin detector types: `cpu`, `edgetpu`, and `openvino`. By default, Frigate will use a single CPU detector. Other detectors may require additional configuration as described below. When using multiple detectors, they will run in dedicated processes, but pull from a common queue of detection requests across all cameras.
**Note**: There is no support for Nvidia GPUs to perform object detection with tensorflow. It can be used for ffmpeg decoding, but not object detection.
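Because each detector is named and configured independently, different detectors can run different models. A minimal sketch of two detectors of different types, each with its own model (the detector names and the custom model path are illustrative, not defaults):
```yaml
detectors:
  # Coral using the bundled EdgeTPU model
  coral:
    type: edgetpu
    device: usb
  # CPU detector pointed at a separately mounted model (hypothetical path)
  cpu_fallback:
    type: cpu
    num_threads: 3
    model:
      path: "/custom_model.tflite"
```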
## CPU Detector (not recommended)
The CPU detector type runs a TensorFlow Lite model utilizing the CPU without hardware acceleration. It is recommended to use a hardware accelerated detector type instead for better performance. To configure a CPU based detector, set the `"type"` attribute to `"cpu"`.
The number of threads used by the interpreter can be specified using the `"num_threads"` attribute, and defaults to `3`.
A TensorFlow Lite model is provided in the container at `/cpu_model.tflite` and is used by this detector type by default. To provide your own model, bind mount the file into the container and provide the path with `model.path`.
```yaml
detectors:
  cpu1:
    type: cpu
    num_threads: 3
    model:
      path: "/custom_model.tflite"
  cpu2:
    type: cpu
    num_threads: 3
```
When using CPU detectors, you can add one CPU detector per camera. Adding more detectors than the number of cameras should not improve performance.
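To make a custom model file available at the `model.path` referenced above, it needs to be mounted into the container. A minimal Docker Compose sketch, assuming the model lives at `/opt/frigate/custom_model.tflite` on the host (the host path, service name, and image tag are illustrative):
```yaml
services:
  frigate:
    image: blakeblackshear/frigate:stable
    volumes:
      # map the host copy of the model to the container path used in model.path
      - /opt/frigate/custom_model.tflite:/custom_model.tflite:ro
```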
## Edge-TPU Detector
The EdgeTPU detector type runs a TensorFlow Lite model utilizing the Google Coral delegate for hardware acceleration. To configure an EdgeTPU detector, set the `"type"` attribute to `"edgetpu"`.
The EdgeTPU device can be specified using the `"device"` attribute according to the [Documentation for the TensorFlow Lite Python API](https://coral.ai/docs/edgetpu/multiple-edgetpu/#using-the-tensorflow-lite-python-api). If not set, the delegate will use the first device it finds.
A TensorFlow Lite model is provided in the container at `/edgetpu_model.tflite` and is used by this detector type by default. To provide your own model, bind mount the file into the container and provide the path with `model.path`.
### Single USB Coral
@@ -16,6 +43,8 @@ detectors:
  coral:
    type: edgetpu
    device: usb
    model:
      path: "/custom_model.tflite"
```
### Multiple USB Corals
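With more than one Coral attached, each detector can be pointed at a specific device by index, following the device naming from the TensorFlow Lite API documentation linked above. A sketch (detector names are illustrative):
```yaml
detectors:
  coral1:
    type: edgetpu
    device: usb:0
  coral2:
    type: edgetpu
    device: usb:1
```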
@@ -64,38 +93,33 @@ detectors:
device: pci
```
## OpenVINO Detector
The OpenVINO detector type runs an OpenVINO IR model on Intel CPU, GPU and VPU hardware. To configure an OpenVINO detector, set the `"type"` attribute to `"openvino"`.
The OpenVINO device to be used is specified using the `"device"` attribute according to the naming conventions in the [Device Documentation](https://docs.openvino.ai/latest/openvino_docs_OV_UG_Working_with_devices.html). Other supported devices could be `AUTO`, `CPU`, `GPU`, `MYRIAD`, etc. If not specified, the default OpenVINO device will be selected by the `AUTO` plugin.
OpenVINO is supported on 6th Gen Intel platforms (Skylake) and newer. A supported Intel platform is required to use the `GPU` device with OpenVINO. The `MYRIAD` device may be run on any platform, including Arm devices. For detailed system requirements, see [OpenVINO System Requirements](https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/system-requirements.html).
An OpenVINO model is provided in the container at `/openvino-model/ssdlite_mobilenet_v2.xml` and is used by this detector type by default. The model comes from Intel's Open Model Zoo [SSDLite MobileNet V2](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/ssdlite_mobilenet_v2) and is converted to an FP16 precision IR model. Use the model configuration shown below when using the OpenVINO detector.
```yaml
detectors:
  ov:
    type: openvino
    device: AUTO
    model:
      path: /openvino-model/ssdlite_mobilenet_v2.xml

model:
  width: 300
  height: 300
  input_tensor: nhwc
  input_pixel_format: bgr
  labelmap_path: /openvino-model/coco_91cl_bkgr.txt
```
### Intel NCS2 VPU and Myriad X Setup
Intel produces a neural net inference acceleration chip called Myriad X. This chip was sold in their Neural Compute Stick 2 (NCS2), which has been discontinued. If you intend to use the MYRIAD device for acceleration, additional setup is required to pass through the USB device. The host needs a udev rule installed to handle the NCS2 device.
@@ -123,18 +147,3 @@ device_cgroup_rules:
volumes:
  - /dev/bus/usb:/dev/bus/usb
```
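Putting the passthrough pieces together, a minimal Docker Compose sketch for exposing a USB MYRIAD device to the container could look like the following (the service name is illustrative; `c 189:* rmw` grants access to USB character devices, which use major number 189):
```yaml
services:
  frigate:
    # allow the container to reopen USB devices as the NCS2 re-enumerates at runtime
    device_cgroup_rules:
      - "c 189:* rmw"
    volumes:
      - /dev/bus/usb:/dev/bus/usb
```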


@@ -74,15 +74,13 @@ mqtt:
# Optional: Detectors configuration. Defaults to a single CPU detector
detectors:
  # Required: name of the detector
  detector_name:
    # Required: type of the detector
    # Frigate provided types include 'cpu', 'edgetpu', and 'openvino' (default: shown below)
    # Additional detector types can also be plugged in.
    # Detectors may require additional configuration.
    # Refer to the Detectors configuration page for more information.
    type: cpu
# Optional: Database configuration
database: