Compare commits


28 Commits

Author SHA1 Message Date
Blake Blackshear
0dbf909ca6 try and further improve caching (#4947) 2023-01-07 08:07:56 -06:00
Pavels Veretennikovs
47ac5ed522 fix: preset-http-jpeg-generic reference (#4946) 2023-01-07 07:21:18 -06:00
Blake Blackshear
ec7aaa18ab try and avoid caching extra large tensorrt layers (#4942) 2023-01-06 19:58:35 -06:00
Ryan G
bee965df06 Fix integration link in the installation docs (#4937)
The link to the home assistant integration documentation was missing the leading slash which caused the path to be appended to the `/frigate` path of this page.
2023-01-06 19:32:23 -06:00
Nicolas Mowen
543cad5497 Only set colors for enabled objects (#4936)
* Only create colormap for enabled labels

* Fix assigning
2023-01-06 19:31:54 -06:00
Nicolas Mowen
d9c45a76fe Don't recheck erroring hwaccel in http either (#4935)
* Don't recheck erroring hwaccel in http either

* Send error instead of empty for known erroring hwaccel

* Formatting
2023-01-06 19:31:25 -06:00
AML225
417a42b0b3 Update installation.md (#4871)
Mounting the configuration file with the ":ro" flag will prevent users from editing the config in the new v12.0 UI.
2023-01-06 07:03:48 -06:00
Nate Meyer
8ac3114f9a Cleanup Detector labelmap (#4932)
* Add missing labels to default labelmap.  Fill any holes with "unknown".  Remove unique labelmap for tensorrt.

* Replace "truck" with "car" on Openvino labelmap
2023-01-06 07:03:16 -06:00
Nicolas Mowen
740d932848 Add ffmpeg presets docs and update nvidia-smi docs (#4928)
* Add tables for ffmpeg presets and how to use them

* Make it clear that ffmpeg processes may not show when nvidia-smi is run inside the container

* Add specific example of mixed input arg presets

* Update docs/docs/configuration/ffmpeg_presets.md

Co-authored-by: Nate Meyer <Nate.Devel@gmail.com>

* typos

Co-authored-by: Nate Meyer <Nate.Devel@gmail.com>
Co-authored-by: Blake Blackshear <blakeb@blakeshome.com>
2023-01-06 07:01:53 -06:00
Nate Meyer
e645c8e007 Update TensorRT Docs (#4920)
* Remove branch from URL to tensorrt_models.sh

* Reword to make TensorRT model singular

* Add note about installing nvidia docker runtime and compatible drivers
2023-01-06 06:52:49 -06:00
Nicolas Mowen
9ee367d9e9 Fix Other Stats Access Too (#4917) 2023-01-06 06:51:58 -06:00
Blake Blackshear
8410788e99 add information about frigate plus to docs (#4919)
* add information about frigate plus to docs

* Update docs/docs/integrations/plus.md

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
2023-01-06 06:51:40 -06:00
Blake Blackshear
12235acd75 Build nginx with tmpfs (#4933)
* Update Dockerfile

* Update Dockerfile

* refactor into script and to be consistent

Co-authored-by: Sergey Krashevich <svk@svk.su>
2023-01-06 06:48:41 -06:00
yayitazale
ba5cffac55 Update index.md (#4915)
Change the RTMP restream to RTSP
2023-01-05 06:30:35 -06:00
Nicolas Mowen
64ab6580dc Send blank hwaccel-error cache so logs will show when loading the stats page manually (#4912) 2023-01-05 06:27:57 -06:00
Nicolas Mowen
0a3295aa5c Rewrite encoding logic and cleanup vaapi presets (#4898)
* Remove duplicated vaapi presets

* Move encoding to string with inputs and outputs

* Formatting

* Fix formatting

* Fix typo

* Remove vaapi encoder
2023-01-04 18:16:11 -06:00
Nicolas Mowen
ffa98a138b Don't keep attempting gpu usage stats after failure (#4904)
* Don't log intel gpu top errors

* Keep list of errored hwaccel and don't send again

* Can log on first time

* Formatting & mypy
2023-01-04 18:12:51 -06:00
Nicolas Mowen
5e71d95cb1 Docs updates (#4903)
* Make note that Firefox does not work with MSE

* Add restream recommendation for mjpeg
2023-01-04 18:11:50 -06:00
Rob-Powell
9fd13aad11 check stream specific hwaccel_args for gpu stats (#4869)
* check stream specific hwaccel_args for gpu stats

* fix indentation

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>

* check special chars for linter

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
2023-01-04 07:37:42 -06:00
Nicolas Mowen
ebef38e408 Fix href for cameras direct link (#4875) 2023-01-03 19:32:56 -06:00
Nicolas Mowen
b6592c67d1 Add None option to zones & sub labels (#4886)
* Add None option to zones

* Catch blank sub labels too
2023-01-03 19:29:25 -06:00
Nicolas Mowen
d547680116 Only replace topic (#4884) 2023-01-03 19:25:15 -06:00
Nicolas Mowen
bc5aa1141a Set host as blank by default (#4880) 2023-01-03 19:24:53 -06:00
Nicolas Mowen
ea7d1aabba Ability to set different codec for restream and use go2rtc hardware (#4876)
* Add video codec to restream config

* Add handling of encode engine and video codec

* Add test for video encoding

* Set in main configuration docs as well

* Add example to restream docs

* Put back patch
2023-01-03 19:24:34 -06:00
Nicolas Mowen
760d65b214 Don't fail to load when cameras stats are not available (#4877) 2023-01-03 19:23:56 -06:00
Nicolas Mowen
ceab294840 Catch case where args are a string but not preset (#4864)
* Catch case where args are a string but not preset

* Fix formatting
2023-01-02 18:32:12 -06:00
Nicolas Mowen
abc40f2581 only return stderr if return code is not 0 (#4863) 2023-01-02 17:31:59 -06:00
Felipe Santos
dc738e9be7 Upgrade go2rtc from v0.1-rc.5 to v0.1-rc.6 (#4860) 2023-01-02 17:31:18 -06:00
31 changed files with 492 additions and 321 deletions


@@ -51,4 +51,3 @@ jobs:
tags: |
ghcr.io/blakeblackshear/frigate:${{ github.ref_name }}-${{ env.SHORT_SHA }}-tensorrt
cache-from: type=gha
cache-to: type=gha,mode=max


@@ -18,57 +18,16 @@ WORKDIR /rootfs
FROM base AS nginx
ARG DEBIAN_FRONTEND
ARG NGINX_VERSION=1.22.1
ARG VOD_MODULE_VERSION=1.30
ARG SECURE_TOKEN_MODULE_VERSION=1.4
ARG RTMP_MODULE_VERSION=1.2.1
RUN cp /etc/apt/sources.list /etc/apt/sources.list.d/sources-src.list \
&& sed -i 's|deb http|deb-src http|g' /etc/apt/sources.list.d/sources-src.list \
&& apt-get update
RUN apt-get -yqq build-dep nginx
RUN apt-get -yqq install --no-install-recommends ca-certificates wget \
&& update-ca-certificates -f \
&& mkdir /tmp/nginx \
&& wget https://nginx.org/download/nginx-${NGINX_VERSION}.tar.gz \
&& tar -zxf nginx-${NGINX_VERSION}.tar.gz -C /tmp/nginx --strip-components=1 \
&& rm nginx-${NGINX_VERSION}.tar.gz \
&& mkdir /tmp/nginx-vod-module \
&& wget https://github.com/kaltura/nginx-vod-module/archive/refs/tags/${VOD_MODULE_VERSION}.tar.gz \
&& tar -zxf ${VOD_MODULE_VERSION}.tar.gz -C /tmp/nginx-vod-module --strip-components=1 \
&& rm ${VOD_MODULE_VERSION}.tar.gz \
# Patch MAX_CLIPS to allow more clips to be added than the default 128
&& sed -i 's/MAX_CLIPS (128)/MAX_CLIPS (1080)/g' /tmp/nginx-vod-module/vod/media_set.h \
&& mkdir /tmp/nginx-secure-token-module \
&& wget https://github.com/kaltura/nginx-secure-token-module/archive/refs/tags/${SECURE_TOKEN_MODULE_VERSION}.tar.gz \
&& tar -zxf ${SECURE_TOKEN_MODULE_VERSION}.tar.gz -C /tmp/nginx-secure-token-module --strip-components=1 \
&& rm ${SECURE_TOKEN_MODULE_VERSION}.tar.gz \
&& mkdir /tmp/nginx-rtmp-module \
&& wget https://github.com/arut/nginx-rtmp-module/archive/refs/tags/v${RTMP_MODULE_VERSION}.tar.gz \
&& tar -zxf v${RTMP_MODULE_VERSION}.tar.gz -C /tmp/nginx-rtmp-module --strip-components=1 \
&& rm v${RTMP_MODULE_VERSION}.tar.gz
WORKDIR /tmp/nginx
RUN ./configure --prefix=/usr/local/nginx \
--with-file-aio \
--with-http_sub_module \
--with-http_ssl_module \
--with-threads \
--add-module=../nginx-vod-module \
--add-module=../nginx-secure-token-module \
--add-module=../nginx-rtmp-module \
--with-cc-opt="-O3 -Wno-error=implicit-fallthrough"
RUN make && make install
RUN rm -rf /usr/local/nginx/html /usr/local/nginx/conf/*.default
# bind /var/cache/apt to tmpfs to speed up nginx build
RUN --mount=type=tmpfs,target=/tmp --mount=type=tmpfs,target=/var/cache/apt \
--mount=type=bind,source=docker/build_nginx.sh,target=/deps/build_nginx.sh \
/deps/build_nginx.sh
FROM wget AS go2rtc
ARG TARGETARCH
WORKDIR /rootfs/usr/local/go2rtc/bin
RUN wget -qO go2rtc "https://github.com/AlexxIT/go2rtc/releases/download/v0.1-rc.5/go2rtc_linux_${TARGETARCH}" \
RUN wget -qO go2rtc "https://github.com/AlexxIT/go2rtc/releases/download/v0.1-rc.6/go2rtc_linux_${TARGETARCH}" \
&& chmod +x go2rtc
@@ -132,7 +91,8 @@ RUN wget -qO cpu_model.tflite https://github.com/google-coral/test_data/raw/rele
COPY labelmap.txt .
# Copy OpenVino model
COPY --from=ov-converter /models/public/ssdlite_mobilenet_v2/FP16 openvino-model
RUN wget -q https://github.com/openvinotoolkit/open_model_zoo/raw/master/data/dataset_classes/coco_91cl_bkgr.txt -O openvino-model/coco_91cl_bkgr.txt
RUN wget -q https://github.com/openvinotoolkit/open_model_zoo/raw/master/data/dataset_classes/coco_91cl_bkgr.txt -O openvino-model/coco_91cl_bkgr.txt && \
sed -i 's/truck/car/g' openvino-model/coco_91cl_bkgr.txt
@@ -184,6 +144,11 @@ RUN pip3 install -r requirements.txt
COPY requirements-wheels.txt /requirements-wheels.txt
RUN pip3 wheel --wheel-dir=/wheels -r requirements-wheels.txt
# Make this a separate target so it can be built/cached optionally
FROM wheels as trt-wheels
ARG DEBIAN_FRONTEND
ARG TARGETARCH
# Add TensorRT wheels to another folder
COPY requirements-tensorrt.txt /requirements-tensorrt.txt
RUN mkdir -p /trt-wheels && pip3 wheel --wheel-dir=/trt-wheels -r requirements-tensorrt.txt
@@ -303,11 +268,11 @@ COPY --from=rootfs / /
# Frigate w/ TensorRT Support as separate image
FROM frigate AS frigate-tensorrt
RUN --mount=type=bind,from=wheels,source=/trt-wheels,target=/deps/trt-wheels \
RUN --mount=type=bind,from=trt-wheels,source=/trt-wheels,target=/deps/trt-wheels \
pip3 install -U /deps/trt-wheels/*.whl
# Dev Container w/ TRT
FROM devcontainer AS devcontainer-trt
RUN --mount=type=bind,from=wheels,source=/trt-wheels,target=/deps/trt-wheels \
RUN --mount=type=bind,from=trt-wheels,source=/trt-wheels,target=/deps/trt-wheels \
pip3 install -U /deps/trt-wheels/*.whl

docker/build_nginx.sh (new executable file, 50 lines)

@@ -0,0 +1,50 @@
#!/bin/bash
set -euxo pipefail
NGINX_VERSION="1.22.1"
VOD_MODULE_VERSION="1.30"
SECURE_TOKEN_MODULE_VERSION="1.4"
RTMP_MODULE_VERSION="1.2.1"
cp /etc/apt/sources.list /etc/apt/sources.list.d/sources-src.list
sed -i 's|deb http|deb-src http|g' /etc/apt/sources.list.d/sources-src.list
apt-get update
apt-get -yqq build-dep nginx
apt-get -yqq install --no-install-recommends ca-certificates wget
update-ca-certificates -f
mkdir /tmp/nginx
wget -nv https://nginx.org/download/nginx-${NGINX_VERSION}.tar.gz
tar -zxf nginx-${NGINX_VERSION}.tar.gz -C /tmp/nginx --strip-components=1
rm nginx-${NGINX_VERSION}.tar.gz
mkdir /tmp/nginx-vod-module
wget -nv https://github.com/kaltura/nginx-vod-module/archive/refs/tags/${VOD_MODULE_VERSION}.tar.gz
tar -zxf ${VOD_MODULE_VERSION}.tar.gz -C /tmp/nginx-vod-module --strip-components=1
rm ${VOD_MODULE_VERSION}.tar.gz
# Patch MAX_CLIPS to allow more clips to be added than the default 128
sed -i 's/MAX_CLIPS (128)/MAX_CLIPS (1080)/g' /tmp/nginx-vod-module/vod/media_set.h
mkdir /tmp/nginx-secure-token-module
wget https://github.com/kaltura/nginx-secure-token-module/archive/refs/tags/${SECURE_TOKEN_MODULE_VERSION}.tar.gz
tar -zxf ${SECURE_TOKEN_MODULE_VERSION}.tar.gz -C /tmp/nginx-secure-token-module --strip-components=1
rm ${SECURE_TOKEN_MODULE_VERSION}.tar.gz
mkdir /tmp/nginx-rtmp-module
wget -nv https://github.com/arut/nginx-rtmp-module/archive/refs/tags/v${RTMP_MODULE_VERSION}.tar.gz
tar -zxf v${RTMP_MODULE_VERSION}.tar.gz -C /tmp/nginx-rtmp-module --strip-components=1
rm v${RTMP_MODULE_VERSION}.tar.gz
cd /tmp/nginx
./configure --prefix=/usr/local/nginx \
--with-file-aio \
--with-http_sub_module \
--with-http_ssl_module \
--with-threads \
--add-module=../nginx-vod-module \
--add-module=../nginx-secure-token-module \
--add-module=../nginx-rtmp-module \
--with-cc-opt="-O3 -Wno-error=implicit-fallthrough"
make -j$(nproc) && make install
rm -rf /usr/local/nginx/html /usr/local/nginx/conf/*.default


@@ -32,6 +32,3 @@ do
python3 onnx_to_tensorrt.py -m ${model}
cp /tensorrt_demos/yolo/${model}.trt ${OUTPUT_FOLDER}/${model}.trt;
done
# Download Labelmap
wget -q https://github.com/openvinotoolkit/open_model_zoo/raw/master/data/dataset_classes/coco_91cl.txt -O ${OUTPUT_FOLDER}/coco_91cl.txt


@@ -11,18 +11,22 @@ This page makes use of presets of FFmpeg args. For more information on presets,
## MJPEG Cameras
The input and output parameters need to be adjusted for MJPEG cameras
Note that mjpeg cameras require encoding the video into h264 for the record and restream roles. This will use significantly more CPU than if the cameras supported h264 feeds directly. It is recommended to use the restream role to create an h264 restream and then use that as the source for ffmpeg.
```yaml
input_args: preset-http-mjpeg-generic
```
Note that mjpeg cameras require encoding the video into h264 for recording, and rtmp roles. This will use significantly more CPU than if the cameras supported h264 feeds directly.
```yaml
output_args:
record: preset-record-mjpeg
rtmp: preset-rtmp-mjpeg
mjpeg_cam:
ffmpeg:
inputs:
- path: rtsp://localhost:8554/mjpeg_cam
roles:
- detect
- record
- path: {your_mjpeg_stream_url}
roles:
- restream
restream:
enabled: true
video_encoding: h264
```
## JPEG Stream Cameras


@@ -159,6 +159,8 @@ The TensorRT detector uses the 11.x series of CUDA libraries which have minor ve
> **TODO:** NVidia claims support on compute 3.5 and 3.7, but marks it as deprecated. This would have some, but not all, Kepler GPUs as possibly working. This needs testing before making any claims of support.
To use the TensorRT detector, make sure your host system has the [nvidia-container-runtime](https://docs.docker.com/config/containers/resource_constraints/#access-an-nvidia-gpu) installed to pass the GPU through to the container, and that the host has a compatible driver installed for your GPU.
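As a point of reference, below is a minimal Docker Compose sketch for passing an NVIDIA GPU through to the container, following the Docker documentation linked above; the service name and image tag are illustrative placeholders, not taken from this page.

```yaml
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable-tensorrt # placeholder tag
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1 # or `count: all` to expose every GPU
              capabilities: [gpu]
```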
There are improved capabilities in newer GPU architectures that TensorRT can benefit from, such as INT8 operations and Tensor cores. The features compatible with your hardware will be optimized when the model is converted to a trt file. Currently, the script for generating the model provides a switch to enable/disable FP16 operations. If you wish to use newer features such as INT8 optimization, more work is required.
#### Compatibility References:
@@ -171,13 +173,13 @@ There are improved capabilities in newer GPU architectures that TensorRT can ben
### Generate Models
The models used for TensorRT must be preprocessed on the same hardware platform that they will run on. This means that each user must run additional setup to generate these model files for the TensorRT library. A script is provided that will build several common models.
The model used for TensorRT must be preprocessed on the same hardware platform that it will run on. This means that each user must run additional setup to generate a model file for the TensorRT library. A script is provided that will build several common models.
To generate the model files, create a new folder to save the models, download the script, and launch a docker container that will run the script.
To generate model files, create a new folder to save the models, download the script, and launch a docker container that will run the script.
```bash
mkdir trt-models
wget https://raw.githubusercontent.com/blakeblackshear/frigate/nvidia-detector/docker/tensorrt_models.sh
wget https://raw.githubusercontent.com/blakeblackshear/frigate/docker/tensorrt_models.sh
chmod +x tensorrt_models.sh
docker run --gpus=all --rm -it -v `pwd`/trt-models:/tensorrt_models -v `pwd`/tensorrt_models.sh:/tensorrt_models.sh nvcr.io/nvidia/tensorrt:22.07-py3 /tensorrt_models.sh
```
@@ -226,7 +228,6 @@ detectors:
model:
path: /trt-models/yolov7-tiny-416.trt
labelmap_path: /trt-models/coco_91cl.txt
input_tensor: nchw
input_pixel_format: rgb
width: 416


@@ -5,6 +5,71 @@ title: FFmpeg presets
Some presets of FFmpeg args are provided by default to make the configuration easier. All presets can be seen in [this file](https://github.com/blakeblackshear/frigate/blob/master/frigate/ffmpeg_presets.py).
<!--
TODO: Use [markdown-magic](https://github.com/DavidWells/markdown-magic) to generate this list from the source code.
-->
### Hwaccel Presets
It is highly recommended to use hwaccel presets in the config. These presets not only replace the longer args, but they also give frigate hints about what hardware is available, allowing it to make other GPU optimizations such as encoding the birdseye restream or scaling a stream whose size differs from the native stream size.
See [the hwaccel docs](/configuration/hardware_acceleration.md) for more info on how to set up hwaccel for your GPU / iGPU.
| Preset | Usage | Other Notes |
| --------------------- | ---------------------------- | ----------------------------------------------------- |
| preset-rpi-32-h264 | 32 bit Rpi with h264 stream | |
| preset-rpi-64-h264 | 64 bit Rpi with h264 stream | |
| preset-vaapi | Intel & AMD VAAPI | Check hwaccel docs to ensure correct driver is chosen |
| preset-intel-qsv-h264 | Intel QSV with h264 stream | If issues occur recommend using vaapi preset instead |
| preset-intel-qsv-h265 | Intel QSV with h265 stream | If issues occur recommend using vaapi preset instead |
| preset-nvidia-h264 | Nvidia GPU with h264 stream | |
| preset-nvidia-h265 | Nvidia GPU with h265 stream | |
| preset-nvidia-mjpeg | Nvidia GPU with mjpeg stream | Recommend restreaming mjpeg and using nvidia-h264 |
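For example, a hwaccel preset is set under the `ffmpeg` section of the config; a minimal sketch using the VAAPI preset from the table above:
```yaml
ffmpeg:
  hwaccel_args: preset-vaapi
```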
### Input Args Presets
Input args presets help make the config more readable and handle use cases for different types of streams to ensure maximum compatibility.
See [the camera specific docs](/configuration/camera_specific.md) for more info on non-standard cameras and recommendations for using them in frigate.
| Preset | Usage | Other Notes |
| ------------------------- | ----------------------- | --------------------------------------------------- |
| preset-http-jpeg-generic | HTTP Live Jpeg | Recommend restreaming live jpeg instead |
| preset-http-mjpeg-generic | HTTP Mjpeg Stream | Recommend restreaming mjpeg stream instead |
| preset-http-reolink | Reolink HTTP-FLV Stream | Only for reolink http, not when restreaming as rtsp |
| preset-rtmp-generic | RTMP Stream | |
| preset-rtsp-generic | RTSP Stream | This is the default when nothing is specified |
| preset-rtsp-udp | RTSP Stream via UDP | Use when camera is UDP only |
| preset-rtsp-blue-iris | Blue Iris RTSP Stream | Use when consuming a stream from Blue Iris |
:::caution
It is important to be mindful of input args when using restream because you can have a mix of protocols. `http` and `rtmp` presets cannot be used with `rtsp` streams. For example, when using a reolink cam with the rtsp restream as a source for record, `preset-http-reolink` will cause a crash. In this case presets will need to be set at the stream level. See the example below.
:::
```yaml
cameras:
reolink_cam:
ffmpeg:
inputs:
- path: http://192.168.0.139/flv?port=1935&app=bcs&stream=channel0_ext.bcs&user=admin&password={FRIGATE_CAM_PASSWORD}
input_args: preset-http-reolink
roles:
- detect
- path: rtsp://192.168.0.10:8554/garage
input_args: preset-rtsp-generic
roles:
- record
- path: http://192.168.0.139/flv?port=1935&app=bcs&stream=channel0_main.bcs&user=admin&password={FRIGATE_CAM_PASSWORD}
roles:
- restream
```
### Output Args Presets
Output args presets help make the config more readable and handle use cases for different types of streams to ensure consistent recordings.
| Preset | Usage | Other Notes |
| --------------------------- | --------------------------------- | --------------------------------------------- |
| preset-record-generic | Record WITHOUT audio | This is the default when nothing is specified |
| preset-record-generic-audio | Record WITH audio | Use this to enable audio in recordings |
| preset-record-mjpeg | Record an mjpeg stream | Recommend restreaming mjpeg stream instead |
| preset-record-jpeg | Record live jpeg | Recommend restreaming live jpeg instead |
| preset-record-ubiquiti | Record ubiquiti stream with audio | Recordings with ubiquiti non-standard audio |
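As an illustration, a sketch of selecting a record preset for a single camera (the camera name is a placeholder):
```yaml
cameras:
  front_door:
    ffmpeg:
      output_args:
        record: preset-record-generic-audio
```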


@@ -25,7 +25,7 @@ ffmpeg:
```yaml
ffmpeg:
hwaccel_args: preset-intel-vaapi
hwaccel_args: preset-vaapi
```
**NOTICE**: With some of the processors, like the J4125, the default driver `iHD` doesn't seem to work correctly for hardware acceleration. You may need to change the driver to `i965` by adding the following environment variable `LIBVA_DRIVER_NAME=i965` to your docker-compose file or [in the frigate.yml for HA OS users](advanced.md#environment_vars).
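For Docker Compose users, a minimal sketch of setting that variable (the service name `frigate` is assumed):
```yaml
services:
  frigate:
    environment:
      LIBVA_DRIVER_NAME: i965
```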
@@ -42,7 +42,7 @@ ffmpeg:
```yaml
ffmpeg:
hwaccel_args: preset-amd-vaapi
hwaccel_args: preset-vaapi
```
### NVIDIA GPU
@@ -93,9 +93,15 @@ ffmpeg:
```
If everything is working correctly, you should see a significant improvement in performance.
Verify that hardware decoding is working by running `docker exec -it frigate nvidia-smi`, which should show the ffmpeg
Verify that hardware decoding is working by running `nvidia-smi`, which should show the ffmpeg
processes:
:::note
nvidia-smi may not show ffmpeg processes when run inside the container [due to docker limitations](https://github.com/NVIDIA/nvidia-docker/issues/179#issuecomment-645579458)
:::
```
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 455.38 Driver Version: 455.38 CUDA Version: 11.1 |


@@ -356,6 +356,10 @@ restream:
enabled: True
# Optional: Force audio compatibility with browsers (default: shown below)
force_audio: True
# Optional: Video encoding to be used. By default the codec will be copied but
# it can be switched to another or an MJPEG stream can be encoded and restreamed
# as h264 (default: shown below)
video_encoding: "copy"
# Optional: Restream birdseye via RTSP (default: shown below)
# NOTE: Enabling this will set birdseye to run 24/7 which may increase CPU usage somewhat.
birdseye: False


@@ -9,11 +9,11 @@ Frigate has different live view options, some of which require [restream](restre
Live view options can be selected while viewing the live stream. The options are:
| Source | Latency | Frame Rate | Resolution | Audio | Requires Restream | Other Limitations |
| ------ | ------- | -------------------------------------- | -------------- | ---------------------------- | ----------------- | --------------------- |
| jsmpeg | low | same as `detect -> fps`, capped at 10 | same as detect | no | no | none |
| mse | low | native | native | yes (depends on audio codec) | yes | not supported on iOS |
| webrtc | lowest | native | native | yes (depends on audio codec) | yes | requires extra config |
| Source | Latency | Frame Rate | Resolution | Audio | Requires Restream | Other Limitations |
| ------ | ------- | -------------------------------------- | -------------- | ---------------------------- | ----------------- | -------------------------------- |
| jsmpeg | low | same as `detect -> fps`, capped at 10 | same as detect | no | no | none |
| mse | low | native | native | yes (depends on audio codec) | yes | not supported on iOS or Firefox |
| webrtc | lowest | native | native | yes (depends on audio codec) | yes | requires extra config |
### WebRTC extra configuration:
@@ -38,4 +38,4 @@ See https://github.com/AlexxIT/go2rtc#module-webrtc for more details
```yaml
volumes:
- /path/to/your/go2rtc.yaml:/config/frigate-go2rtc.yaml:ro
```
```


@@ -15,6 +15,22 @@ Different live view technologies (ex: MSE, WebRTC) support different audio codec
Birdseye RTSP restream can be enabled at `restream -> birdseye` and accessed at `rtsp://<frigate_host>:8554/birdseye`. Enabling the restream will cause birdseye to run 24/7 which may increase CPU usage somewhat.
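A minimal sketch of enabling it, using the option shown in the configuration reference:
```yaml
restream:
  birdseye: True
```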
#### Changing Restream Codec
Generally it is recommended to let the codec from the camera be copied, but there may be cases where h265 needs to be transcoded to h264, or an MJPEG stream needs to be encoded and restreamed as h264. In these cases the encoding will need to be set; if a hardware acceleration preset is configured, it will be used to encode the stream.
```yaml
ffmpeg:
hwaccel_args: your-hwaccel-preset # <- highly recommended so the GPU is used
cameras:
mjpeg_cam:
ffmpeg:
...
restream:
video_encoding: h264
```
### RTMP (Deprecated)
In previous Frigate versions RTMP was used for re-streaming. However, RTMP has disadvantages, including being incompatible with H.265, high bitrates, and certain audio codecs. RTMP is deprecated and it is recommended to move to the new restream role.


@@ -15,7 +15,7 @@ Use of a [Google Coral Accelerator](https://coral.ai/products/) is optional, but
- Object detection with TensorFlow runs in separate processes for maximum FPS
- Communicates over MQTT for easy integration into other systems
- Recording with retention based on detected objects
- Re-streaming via RTMP to reduce the number of connections to your camera
- Re-streaming via RTSP to reduce the number of connections to your camera
- A dynamic combined camera view of all tracked cameras.
## Screenshots


@@ -3,7 +3,7 @@ id: installation
title: Installation
---
Frigate is a Docker container that can be run on any Docker host including as a [HassOS Addon](https://www.home-assistant.io/addons/). Note that a Home Assistant Addon is **not** the same thing as the integration. The [integration](integrations/home-assistant) is required to integrate Frigate into Home Assistant.
Frigate is a Docker container that can be run on any Docker host including as a [HassOS Addon](https://www.home-assistant.io/addons/). Note that a Home Assistant Addon is **not** the same thing as the integration. The [integration](/integrations/home-assistant) is required to integrate Frigate into Home Assistant.
## Dependencies
@@ -38,7 +38,7 @@ services:
frigate:
...
volumes:
- /path/to/your/config.yml:/config/config.yml:ro
- /path/to/your/config.yml:/config/config.yml
- /path/to/your/storage:/media/frigate
- type: tmpfs # Optional: 1GB of memory, reduces SSD/SD Card wear
target: /tmp/cache
@@ -55,7 +55,7 @@ services:
frigate:
...
volumes:
- /path/to/your/config.yml:/config/config.yml:ro
- /path/to/your/config.yml:/config/config.yml
- /path/to/network/storage:/media/frigate
- /path/to/local/disk:/db
- type: tmpfs # Optional: 1GB of memory, reduces SSD/SD Card wear
@@ -111,7 +111,7 @@ services:
- /dev/dri/renderD128 # for intel hwaccel, needs to be updated for your hardware
volumes:
- /etc/localtime:/etc/localtime:ro
- /path/to/your/config.yml:/config/config.yml:ro
- /path/to/your/config.yml:/config/config.yml
- /path/to/your/storage:/media/frigate
- type: tmpfs # Optional: 1GB of memory, reduces SSD/SD Card wear
target: /tmp/cache
@@ -135,7 +135,7 @@ docker run -d \
--device /dev/dri/renderD128 \
--shm-size=64m \
-v /path/to/your/storage:/media/frigate \
-v /path/to/your/config.yml:/config/config.yml:ro \
-v /path/to/your/config.yml:/config/config.yml \
-v /etc/localtime:/etc/localtime:ro \
-e FRIGATE_RTSP_PASSWORD='password' \
-p 5000:5000 \


@@ -0,0 +1,48 @@
---
id: plus
title: Frigate+
---
:::info
Frigate+ is under active development and currently only offers the ability to submit your examples with annotations. Models will be available after enough examples are submitted to train a robust model. It is free to create an account and upload your examples.
:::
Frigate+ offers models trained from scratch and specifically designed for the way Frigate NVR analyzes video footage. They offer higher accuracy with fewer resources. By uploading your own labeled examples, your model can be uniquely tuned for accuracy in your specific conditions. After tuning, performance is evaluated against a broad dataset and real-world examples submitted by other Frigate+ users to prevent overfitting.
Custom models also include a more relevant set of objects for security cameras such as person, face, car, license plate, delivery truck, package, dog, cat, deer, and more. Interested in detecting an object unique to you? Upload examples to incorporate your own objects without worrying that you are reducing the accuracy of other object types in the model.
## Setup
### Create an account
Free accounts can be created at [https://plus.frigate.video](https://plus.frigate.video).
### Generate an API key
Once logged in, you can generate an API key for Frigate in Settings.
![API key](/img/plus-api-key-min.png)
### Set your API key
In Frigate, you can set the `PLUS_API_KEY` environment variable to enable the `SEND TO FRIGATE+` buttons on the events page. You can set it in your Docker Compose file or in your Docker run command. Home Assistant Addon users can set it under Settings > Addons > Frigate NVR > Configuration > Options (be sure to toggle the "Show unused optional configuration options" switch).
:::caution
You cannot use the `environment_vars` section of your configuration file to set this environment variable.
:::
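In Docker Compose, for example, this could look like the following sketch (the key value is a placeholder):
```yaml
services:
  frigate:
    environment:
      PLUS_API_KEY: "your-api-key"
```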
### Submit examples
Once your API key is configured, you can submit examples directly from the events page in Frigate using the `SEND TO FRIGATE+` button.
![Send To Plus](/img/send-to-plus.png)
### Annotate and verify
You can view all of your submitted images at [https://plus.frigate.video](https://plus.frigate.video). Annotations can be added by clicking an image.
![Annotate](/img/annotate.png)


@@ -33,6 +33,7 @@ module.exports = {
"configuration/ffmpeg_presets",
],
Integrations: [
"integrations/plus",
"integrations/home-assistant",
"integrations/api",
"integrations/mqtt",

docs/static/img/annotate.png (new binary file, 50 KiB)

docs/static/img/plus-api-key-min.png (new binary file, 20 KiB)

docs/static/img/send-to-plus.png (new binary file, 39 KiB)


@@ -85,7 +85,7 @@ class MqttClient(Communicator): # type: ignore[misc]
self, client: mqtt.Client, userdata: Any, message: mqtt.MQTTMessage
) -> None:
self._dispatcher(
message.topic.replace(f"{self.mqtt_config.topic_prefix}/", ""),
message.topic.replace(f"{self.mqtt_config.topic_prefix}/", "", 1),
message.payload.decode(),
)


@@ -66,7 +66,7 @@ class UIConfig(FrigateBaseModel):
class MqttConfig(FrigateBaseModel):
enabled: bool = Field(title="Enable MQTT Communication.", default=True)
host: str = Field(title="MQTT Host")
host: str = Field(default="", title="MQTT Host")
port: int = Field(default=1883, title="MQTT Port")
topic_prefix: str = Field(default="frigate", title="MQTT Topic Prefix")
client_id: str = Field(default="frigate", title="MQTT Client ID")
@@ -514,8 +514,17 @@ class JsmpegStreamConfig(FrigateBaseModel):
quality: int = Field(default=8, ge=1, le=31, title="Live camera view quality.")
class RestreamCodecEnum(str, Enum):
copy = "copy"
h264 = "h264"
h265 = "h265"
class RestreamConfig(FrigateBaseModel):
enabled: bool = Field(default=True, title="Restreaming enabled.")
video_encoding: RestreamCodecEnum = Field(
default=RestreamCodecEnum.copy, title="Method for encoding the restream."
)
force_audio: bool = Field(
default=True, title="Force audio compatibility with the browser."
)
@@ -953,6 +962,14 @@ class FrigateConfig(FrigateBaseModel):
camera_config.create_ffmpeg_cmds()
config.cameras[name] = camera_config
# get list of unique enabled labels for tracking
enabled_labels = set(config.objects.track)
for _, camera in config.cameras.items():
enabled_labels.update(camera.objects.track)
config.model.create_colormap(enabled_labels)
for key, detector in config.detectors.items():
detector_config: DetectorConfig = parse_obj_as(DetectorConfig, detector)
if detector_config.model is None:


@@ -55,11 +55,13 @@ class ModelConfig(BaseModel):
**load_labels(config.get("labelmap_path", "/labelmap.txt")),
**config.get("labelmap", {}),
}
cmap = plt.cm.get_cmap("tab10", len(self._merged_labelmap.keys()))
self._colormap = {}
for key, val in self._merged_labelmap.items():
def create_colormap(self, enabled_labels: set[str]) -> None:
"""Get a list of colors for enabled labels."""
cmap = plt.cm.get_cmap("tab10", len(enabled_labels))
for key, val in enumerate(enabled_labels):
self._colormap[val] = tuple(int(round(255 * c)) for c in cmap(key)[:3])
class Config:


@@ -12,7 +12,7 @@ _user_agent_args = [
PRESETS_HW_ACCEL_DECODE = {
"preset-rpi-32-h264": ["-c:v", "h264_v4l2m2m"],
"preset-rpi-64-h264": ["-c:v", "h264_v4l2m2m"],
"preset-intel-vaapi": [
"preset-vaapi": [
"-hwaccel_flags",
"allow_profile_mismatch",
"-hwaccel",
@@ -42,16 +42,6 @@ PRESETS_HW_ACCEL_DECODE = {
"-c:v",
"hevc_qsv",
],
"preset-amd-vaapi": [
"-hwaccel_flags",
"allow_profile_mismatch",
"-hwaccel",
"vaapi",
"-hwaccel_device",
"/dev/dri/renderD128",
"-hwaccel_output_format",
"vaapi",
],
"preset-nvidia-h264": [
"-hwaccel",
"cuda",
@@ -85,7 +75,7 @@ PRESETS_HW_ACCEL_DECODE = {
}
PRESETS_HW_ACCEL_SCALE = {
"preset-intel-vaapi": [
"preset-vaapi": [
"-vf",
"fps={},scale_vaapi=w={}:h={},hwdownload,format=yuv420p",
"-f",
@@ -103,12 +93,6 @@ PRESETS_HW_ACCEL_SCALE = {
"-f",
"rawvideo",
],
"preset-amd-vaapi": [
"-vf",
"fps={},scale_vaapi=w={}:h={},hwdownload,format=yuv420p",
"-f",
"rawvideo",
],
"preset-nvidia-h264": [
"-vf",
"fps={},scale_cuda=w={}:h={}:format=nv12,hwdownload,format=nv12,format=yuv420p",
@@ -130,104 +114,20 @@ PRESETS_HW_ACCEL_SCALE = {
}
PRESETS_HW_ACCEL_ENCODE = {
"preset-intel-vaapi": [
"-c:v",
"h264_vaapi",
"-g",
"50",
"-bf",
"0",
"-profile:v",
"high",
"-level:v",
"4.1",
"-sei:v",
"0",
],
"preset-intel-qsv-h264": [
"-c:v",
"h264_qsv",
"-g",
"50",
"-bf",
"0",
"-profile:v",
"high",
"-level:v",
"4.1",
"-async_depth:v",
"1",
],
"preset-intel-qsv-h265": [
"-c:v",
"h264_qsv",
"-g",
"50",
"-bf",
"0",
"-profile:v",
"high",
"-level:v",
"4.1",
"-async_depth:v",
"1",
],
"preset-amd-vaapi": [
"-c:v",
"h264_vaapi",
"-g",
"50",
"-bf",
"0",
"-profile:v",
"high",
"-level:v",
"4.1",
"-sei:v",
"0",
],
"preset-nvidia-h264": [
"-c:v",
"h264_nvenc",
"-g",
"50",
"-profile:v",
"high",
"-level:v",
"auto",
"-preset:v",
"p2",
"-tune:v",
"ll",
],
"preset-nvidia-h265": [
"-c:v",
"h264_nvenc",
"-g",
"50",
"-profile:v",
"high",
"-level:v",
"auto",
"-preset:v",
"p2",
"-tune:v",
"ll",
],
"default": [
"-c:v",
"libx264",
"-g",
"50",
"-profile:v",
"high",
"-level:v",
"4.1",
"-preset:v",
"superfast",
"-tune:v",
"zerolatency",
],
"preset-intel-qsv-h264": "ffmpeg -hide_banner {0} -c:v h264_qsv -g 50 -bf 0 -profile:v high -level:v 4.1 -async_depth:v 1 {1}",
"preset-intel-qsv-h265": "ffmpeg -hide_banner {0} -c:v h264_qsv -g 50 -bf 0 -profile:v high -level:v 4.1 -async_depth:v 1 {1}",
"preset-nvidia-h264": "ffmpeg -hide_banner {0} -c:v h264_nvenc -g 50 -profile:v high -level:v auto -preset:v p2 -tune:v ll {1}",
"preset-nvidia-h265": "ffmpeg -hide_banner {0} -c:v h264_nvenc -g 50 -profile:v high -level:v auto -preset:v p2 -tune:v ll {1}",
"default": "ffmpeg -hide_banner {0} -c:v libx264 -g 50 -profile:v high -level:v 4.1 -preset:v superfast -tune:v zerolatency {1}",
}
PRESETS_HW_ACCEL_GO2RTC_ENGINE = {
"preset-intel-vaapi": "vaapi",
"preset-intel-qsv-h264": "vaapi", # go2rtc doesn't support qsv
"preset-intel-qsv-h265": "vaapi",
"preset-amd-vaapi": "vaapi",
"preset-nvidia-h264": "cuda",
"preset-nvidia-h265": "cuda",
}
@@ -247,7 +147,7 @@ def parse_preset_hardware_acceleration_scale(
height: int,
) -> list[str]:
"""Return the correct scaling preset or default preset if none is set."""
if not isinstance(arg, str):
if not isinstance(arg, str) or " " in arg:
scale = PRESETS_HW_ACCEL_SCALE["default"].copy()
scale[1] = str(fps)
scale[3] = f"{width}x{height}"
@@ -259,12 +159,23 @@ def parse_preset_hardware_acceleration_scale(
return scale
def parse_preset_hardware_acceleration_encode(arg: Any) -> list[str]:
def parse_preset_hardware_acceleration_encode(arg: Any, input: str, output: str) -> str:
"""Return the correct scaling preset or default preset if none is set."""
if not isinstance(arg, str):
return PRESETS_HW_ACCEL_ENCODE["default"]
return PRESETS_HW_ACCEL_ENCODE["default"].format(input, output)
return PRESETS_HW_ACCEL_ENCODE.get(arg, PRESETS_HW_ACCEL_ENCODE["default"])
return PRESETS_HW_ACCEL_ENCODE.get(arg, PRESETS_HW_ACCEL_ENCODE["default"]).format(
input,
output,
)
def parse_preset_hardware_acceleration_go2rtc_engine(arg: Any) -> list[str]:
"""Return the correct engine for the preset otherwise returns None."""
if not isinstance(arg, str):
return None
return PRESETS_HW_ACCEL_GO2RTC_ENGINE.get(arg)
PRESETS_INPUT = {
@@ -392,7 +303,7 @@ def parse_preset_input(arg: Any, detect_fps: int) -> list[str]:
if not isinstance(arg, str):
return None
if arg == "preset-jpeg-generic":
if arg == "preset-http-jpeg-generic":
input = PRESETS_INPUT[arg].copy()
input[1] = str(detect_fps)
return input


@@ -76,6 +76,7 @@ def create_app(
app.storage_maintainer = storage_maintainer
app.plus_api = plus_api
app.camera_error_image = None
app.hwaccel_errors = []
app.register_blueprint(bp)
@@ -761,7 +762,11 @@ def version():
@bp.route("/stats")
def stats():
stats = stats_snapshot(current_app.frigate_config, current_app.stats_tracking)
stats = stats_snapshot(
current_app.frigate_config,
current_app.stats_tracking,
current_app.hwaccel_errors,
)
return jsonify(stats)
@@ -861,7 +866,9 @@ def latest_frame(camera_name):
@bp.route("/recordings/storage", methods=["GET"])
def get_recordings_storage_usage():
recording_stats = stats_snapshot(
current_app.frigate_config, current_app.stats_tracking
current_app.frigate_config,
current_app.stats_tracking,
current_app.hwaccel_errors,
)["service"]["storage"][RECORD_DIR]
total_mb = recording_stats["total"]
@@ -1250,10 +1257,10 @@ def vainfo():
{
"return_code": vainfo.returncode,
"stderr": vainfo.stderr.decode("unicode_escape").strip()
if vainfo.stderr.decode()
if vainfo.returncode != 0
else "",
"stdout": vainfo.stdout.decode("unicode_escape").strip()
if vainfo.stdout.decode()
if vainfo.returncode == 0
else "",
}
)


@@ -3,18 +3,33 @@
import logging
import requests
from frigate.util import escape_special_characters
from frigate.config import FrigateConfig
from typing import Optional
from frigate.config import FrigateConfig, RestreamCodecEnum
from frigate.const import BIRDSEYE_PIPE
from frigate.ffmpeg_presets import parse_preset_hardware_acceleration_encode
from frigate.ffmpeg_presets import (
parse_preset_hardware_acceleration_encode,
parse_preset_hardware_acceleration_go2rtc_engine,
)
from frigate.util import escape_special_characters
logger = logging.getLogger(__name__)
def get_manual_go2rtc_stream(camera_url: str) -> str:
def get_manual_go2rtc_stream(
camera_url: str, codec: RestreamCodecEnum, engine: Optional[str]
) -> str:
"""Get a manual stream for go2rtc."""
return f"ffmpeg:{camera_url}#video=copy#audio=aac#audio=opus"
if codec == RestreamCodecEnum.copy:
return f"ffmpeg:{camera_url}#video=copy#audio=aac#audio=opus"
if engine:
return (
f"ffmpeg:{camera_url}#video={codec}#hardware={engine}#audio=aac#audio=opus"
)
return f"ffmpeg:{camera_url}#video={codec}#audio=aac#audio=opus"
class RestreamApi:
@@ -41,13 +56,17 @@ class RestreamApi:
else:
# go2rtc only supports rtsp for direct relay, otherwise ffmpeg is used
self.relays[cam_name] = get_manual_go2rtc_stream(
escape_special_characters(input.path)
escape_special_characters(input.path),
camera.restream.video_encoding,
parse_preset_hardware_acceleration_go2rtc_engine(
self.config.ffmpeg.hwaccel_args
),
)
if self.config.restream.birdseye:
self.relays[
"birdseye"
] = f"exec:ffmpeg -hide_banner -f rawvideo -pix_fmt yuv420p -video_size {self.config.birdseye.width}x{self.config.birdseye.height} -r 10 -i {BIRDSEYE_PIPE} {' '.join(parse_preset_hardware_acceleration_encode(self.config.ffmpeg.hwaccel_args))} -rtsp_transport tcp -f rtsp {{output}}"
] = f"exec:{parse_preset_hardware_acceleration_encode(self.config.ffmpeg.hwaccel_args, f'-f rawvideo -pix_fmt yuv420p -video_size {self.config.birdseye.width}x{self.config.birdseye.height} -r 10 -i {BIRDSEYE_PIPE}', '-rtsp_transport tcp -f rtsp {output}')}"
for name, path in self.relays.items():
params = {"src": path, "name": name}


@@ -84,13 +84,15 @@ def get_temperatures() -> dict[str, float]:
return temps
def get_processing_stats(config: FrigateConfig, stats: dict[str, str]) -> None:
def get_processing_stats(
config: FrigateConfig, stats: dict[str, str], hwaccel_errors: list[str]
) -> None:
"""Get stats for cpu / gpu."""
async def run_tasks() -> None:
await asyncio.wait(
[
asyncio.create_task(set_gpu_stats(config, stats)),
asyncio.create_task(set_gpu_stats(config, stats, hwaccel_errors)),
asyncio.create_task(set_cpu_stats(stats)),
]
)
@@ -109,7 +111,9 @@ async def set_cpu_stats(all_stats: dict[str, Any]) -> None:
all_stats["cpu_usages"] = cpu_stats
async def set_gpu_stats(config: FrigateConfig, all_stats: dict[str, Any]) -> None:
async def set_gpu_stats(
config: FrigateConfig, all_stats: dict[str, Any], hwaccel_errors: list[str]
) -> None:
"""Parse GPUs from hwaccel args and use for stats."""
hwaccel_args = []
@@ -122,10 +126,22 @@ async def set_gpu_stats(config: FrigateConfig, all_stats: dict[str, Any]) -> Non
if args and args not in hwaccel_args:
hwaccel_args.append(args)
for stream_input in camera.ffmpeg.inputs:
args = stream_input.hwaccel_args
if isinstance(args, list):
args = " ".join(args)
if args and args not in hwaccel_args:
hwaccel_args.append(args)
stats: dict[str, dict] = {}
for args in hwaccel_args:
if "cuvid" in args or "nvidia" in args:
if args in hwaccel_errors:
# known erroring args should automatically return as error
stats["error-gpu"] = {"gpu": -1, "mem": -1}
elif "cuvid" in args or "nvidia" in args:
# nvidia GPU
nvidia_usage = get_nvidia_gpu_stats()
@@ -135,6 +151,7 @@ async def set_gpu_stats(config: FrigateConfig, all_stats: dict[str, Any]) -> Non
stats[name] = nvidia_usage
else:
stats["nvidia-gpu"] = {"gpu": -1, "mem": -1}
hwaccel_errors.append(args)
elif "qsv" in args:
# intel QSV GPU
intel_usage = get_intel_gpu_stats()
@@ -143,6 +160,7 @@ async def set_gpu_stats(config: FrigateConfig, all_stats: dict[str, Any]) -> Non
stats["intel-qsv"] = intel_usage
else:
stats["intel-qsv"] = {"gpu": -1, "mem": -1}
hwaccel_errors.append(args)
elif "vaapi" in args:
driver = os.environ.get(DRIVER_ENV_VAR)
@@ -154,6 +172,7 @@ async def set_gpu_stats(config: FrigateConfig, all_stats: dict[str, Any]) -> Non
stats["amd-vaapi"] = amd_usage
else:
stats["amd-vaapi"] = {"gpu": -1, "mem": -1}
hwaccel_errors.append(args)
else:
# intel VAAPI GPU
intel_usage = get_intel_gpu_stats()
@@ -162,6 +181,7 @@ async def set_gpu_stats(config: FrigateConfig, all_stats: dict[str, Any]) -> Non
stats["intel-vaapi"] = intel_usage
else:
stats["intel-vaapi"] = {"gpu": -1, "mem": -1}
hwaccel_errors.append(args)
elif "v4l2m2m" in args or "rpi" in args:
# RPi v4l2m2m is currently not able to get usage stats
stats["rpi-v4l2m2m"] = {"gpu": -1, "mem": -1}
@@ -171,7 +191,7 @@ async def set_gpu_stats(config: FrigateConfig, all_stats: dict[str, Any]) -> Non
def stats_snapshot(
config: FrigateConfig, stats_tracking: StatsTrackingTypes
config: FrigateConfig, stats_tracking: StatsTrackingTypes, hwaccel_errors: list[str]
) -> dict[str, Any]:
"""Get a snapshot of the current stats that are being tracked."""
camera_metrics = stats_tracking["camera_metrics"]
@@ -210,7 +230,7 @@ def stats_snapshot(
}
stats["detection_fps"] = round(total_detection_fps, 2)
get_processing_stats(config, stats)
get_processing_stats(config, stats, hwaccel_errors)
stats["service"] = {
"uptime": (int(time.time()) - stats_tracking["started"]),
@@ -246,10 +266,13 @@ class StatsEmitter(threading.Thread):
self.stats_tracking = stats_tracking
self.dispatcher = dispatcher
self.stop_event = stop_event
self.hwaccel_errors: list[str] = []
def run(self) -> None:
time.sleep(10)
while not self.stop_event.wait(self.config.mqtt.stats_interval):
stats = stats_snapshot(self.config, self.stats_tracking)
stats = stats_snapshot(
self.config, self.stats_tracking, self.hwaccel_errors
)
self.dispatcher.publish("stats", json.dumps(stats), retain=False)
logger.info(f"Exiting watchdog...")


@@ -45,7 +45,9 @@ class TestRestream(TestCase):
}
@patch("frigate.restream.requests")
def test_rtsp_stream(self, mock_requests) -> None:
def test_rtsp_stream(
self, mock_request
) -> None: # need to ensure restream doesn't try to call API
"""Test that the normal rtsp stream is sent plainly."""
frigate_config = FrigateConfig(**self.config)
restream = RestreamApi(frigate_config)
@@ -53,13 +55,28 @@ class TestRestream(TestCase):
assert restream.relays["back"].startswith("rtsp")
@patch("frigate.restream.requests")
def test_http_stream(self, mock_requests) -> None:
def test_http_stream(
self, mock_request
) -> None: # need to ensure restream doesn't try to call API
"""Test that the http stream is sent via ffmpeg."""
frigate_config = FrigateConfig(**self.config)
restream = RestreamApi(frigate_config)
restream.add_cameras()
assert not restream.relays["front"].startswith("rtsp")
@patch("frigate.restream.requests")
def test_restream_codec_change(
self, mock_request
) -> None: # need to ensure restream doesn't try to call API
"""Test that the http stream is sent via ffmpeg."""
self.config["cameras"]["front"]["restream"]["video_encoding"] = "h265"
self.config["ffmpeg"] = {"hwaccel_args": "preset-nvidia-h264"}
frigate_config = FrigateConfig(**self.config)
restream = RestreamApi(frigate_config)
restream.add_cameras()
assert "#hardware=cuda" in restream.relays["front"]
assert "#video=h265" in restream.relays["front"]
if __name__ == "__main__":
main(verbosity=2)


@@ -706,15 +706,17 @@ def load_labels(path, encoding="utf-8"):
Dictionary mapping indices to labels.
"""
with open(path, "r", encoding=encoding) as f:
labels = {index: "unknown" for index in range(91)}
lines = f.readlines()
if not lines:
return {}
if lines[0].split(" ", maxsplit=1)[0].isdigit():
pairs = [line.split(" ", maxsplit=1) for line in lines]
return {int(index): label.strip() for index, label in pairs}
labels.update({int(index): label.strip() for index, label in pairs})
else:
return {index: line.strip() for index, line in enumerate(lines)}
labels.update({index: line.strip() for index, line in enumerate(lines)})
return labels
def clean_camera_user_pass(line: str) -> str:


@@ -9,6 +9,7 @@
8 boat
9 traffic light
10 fire hydrant
11 street sign
12 stop sign
13 parking meter
14 bench
@@ -22,8 +23,11 @@
22 bear
23 zebra
24 giraffe
25 hat
26 backpack
27 umbrella
28 shoe
29 eye glasses
30 handbag
31 tie
32 suitcase
@@ -38,6 +42,7 @@
41 surfboard
42 tennis racket
43 bottle
44 plate
45 wine glass
46 cup
47 fork
@@ -58,8 +63,12 @@
62 couch
63 potted plant
64 bed
65 mirror
66 dining table
67 window
68 desk
69 toilet
70 door
71 tv
72 laptop
73 mouse
@@ -71,10 +80,12 @@
79 toaster
80 sink
81 refrigerator
82 blender
83 book
84 clock
85 vase
86 scissors
87 teddy bear
88 hair drier
89 toothbrush
89 toothbrush
90 hair brush


@@ -46,7 +46,7 @@ function Camera({ name }) {
const href = `/cameras/${name}`;
const buttons = useMemo(() => {
return [
{ name: 'Events', href: `/events?camera=${name}` },
{ name: 'Events', href: `/events?cameras=${name}` },
{ name: 'Recordings', href: `/recording/${name}` },
];
}, [name]);


@@ -107,19 +107,22 @@ export default function Events({ path, ...props }) {
const filterValues = useMemo(
() => ({
cameras: Object.keys(config?.cameras || {}),
zones: Object.values(config?.cameras || {})
.reduce((memo, camera) => {
memo = memo.concat(Object.keys(camera?.zones || {}));
return memo;
}, [])
.filter((value, i, self) => self.indexOf(value) === i),
zones: [
...Object.values(config?.cameras || {})
.reduce((memo, camera) => {
memo = memo.concat(Object.keys(camera?.zones || {}));
return memo;
}, [])
.filter((value, i, self) => self.indexOf(value) === i),
'None',
],
labels: Object.values(config?.cameras || {})
.reduce((memo, camera) => {
memo = memo.concat(camera?.objects?.track || []);
return memo;
}, config?.objects?.track || [])
.filter((value, i, self) => self.indexOf(value) === i),
sub_labels: Object.values(allSubLabels || []),
sub_labels: (allSubLabels || []).length > 0 ? [...Object.values(allSubLabels), "None"] : [],
}),
[config, allSubLabels]
);
@@ -159,12 +162,12 @@ export default function Events({ path, ...props }) {
// don't remove all if only one option
if (currentItems.length > 1) {
currentItems.splice(currentItems.indexOf(item), 1);
items = currentItems.join(",");
items = currentItems.join(',');
} else {
items = ["all"];
items = ['all'];
}
} else {
let currentItems = searchParams[name].length > 0 ? searchParams[name].split(",") : [];
let currentItems = searchParams[name].length > 0 ? searchParams[name].split(',') : [];
if (currentItems.includes(item)) {
// don't remove the last item in the filter list
@@ -172,12 +175,12 @@ export default function Events({ path, ...props }) {
currentItems.splice(currentItems.indexOf(item), 1);
}
items = currentItems.join(",");
} else if ((currentItems.length + 1) == filterValues[name].length) {
items = ["all"];
items = currentItems.join(',');
} else if (currentItems.length + 1 == filterValues[name].length) {
items = ['all'];
} else {
currentItems.push(item);
items = currentItems.join(",");
items = currentItems.join(',');
}
}
@@ -301,47 +304,46 @@ export default function Events({ path, ...props }) {
title="Cameras"
options={filterValues.cameras}
selection={searchParams.cameras}
onToggle={(item) => onToggleNamedFilter("cameras", item)}
onShowAll={() => onFilter("cameras", ["all"])}
onSelectSingle={(item) => onFilter("cameras", item)}
onToggle={(item) => onToggleNamedFilter('cameras', item)}
onShowAll={() => onFilter('cameras', ['all'])}
onSelectSingle={(item) => onFilter('cameras', item)}
/>
<MultiSelect
className="basis-1/5 cursor-pointer rounded dark:bg-slate-800"
title="Labels"
options={filterValues.labels}
selection={searchParams.labels}
onToggle={(item) => onToggleNamedFilter("labels", item) }
onShowAll={() => onFilter("labels", ["all"])}
onSelectSingle={(item) => onFilter("labels", item)}
onToggle={(item) => onToggleNamedFilter('labels', item)}
onShowAll={() => onFilter('labels', ['all'])}
onSelectSingle={(item) => onFilter('labels', item)}
/>
<MultiSelect
className="basis-1/5 cursor-pointer rounded dark:bg-slate-800"
title="Zones"
options={filterValues.zones}
selection={searchParams.zones}
onToggle={(item) => onToggleNamedFilter("zones", item) }
onShowAll={() => onFilter("zones", ["all"])}
onSelectSingle={(item) => onFilter("zones", item)}
onToggle={(item) => onToggleNamedFilter('zones', item)}
onShowAll={() => onFilter('zones', ['all'])}
onSelectSingle={(item) => onFilter('zones', item)}
/>
{
filterValues.sub_labels.length > 0 && (
<MultiSelect
className="basis-1/5 cursor-pointer rounded dark:bg-slate-800"
title="Sub Labels"
options={filterValues.sub_labels}
selection={searchParams.sub_labels}
onToggle={(item) => onToggleNamedFilter("sub_labels", item) }
onShowAll={() => onFilter("sub_labels", ["all"])}
onSelectSingle={(item) => onFilter("sub_labels", item)}
/>
)}
{filterValues.sub_labels.length > 0 && (
<MultiSelect
className="basis-1/5 cursor-pointer rounded dark:bg-slate-800"
title="Sub Labels"
options={filterValues.sub_labels}
selection={searchParams.sub_labels}
onToggle={(item) => onToggleNamedFilter('sub_labels', item)}
onShowAll={() => onFilter('sub_labels', ['all'])}
onSelectSingle={(item) => onFilter('sub_labels', item)}
/>
)}
<StarRecording
className="h-10 w-10 text-yellow-300 cursor-pointer ml-auto"
onClick={() => onFilter("favorites", searchParams.favorites ? 0 : 1)}
onClick={() => onFilter('favorites', searchParams.favorites ? 0 : 1)}
fill={searchParams.favorites == 1 ? 'currentColor' : 'none'}
/>
<div ref={datePicker} className="ml-right">
<CalendarIcon
className="h-8 w-8 cursor-pointer"


@@ -203,54 +203,58 @@ export default function System() {
)}
<Heading size="lg">Cameras</Heading>
<div data-testid="cameras" className="grid grid-cols-1 3xl:grid-cols-3 md:grid-cols-2 gap-4">
{cameraNames.map((camera) => (
<div key={camera} className="dark:bg-gray-800 shadow-md hover:shadow-lg rounded-lg transition-shadow">
<div className="capitalize text-lg flex justify-between p-4">
<Link href={`/cameras/${camera}`}>{camera.replaceAll('_', ' ')}</Link>
<Button onClick={(e) => onHandleFfprobe(camera, e)}>ffprobe</Button>
{!cameras ? (
<ActivityIndicator />
) : (
<div data-testid="cameras" className="grid grid-cols-1 3xl:grid-cols-3 md:grid-cols-2 gap-4">
{cameraNames.map((camera) => (
<div key={camera} className="dark:bg-gray-800 shadow-md hover:shadow-lg rounded-lg transition-shadow">
<div className="capitalize text-lg flex justify-between p-4">
<Link href={`/cameras/${camera}`}>{camera.replaceAll('_', ' ')}</Link>
<Button onClick={(e) => onHandleFfprobe(camera, e)}>ffprobe</Button>
</div>
<div className="p-2">
<Table className="w-full">
<Thead>
<Tr>
<Th>Process</Th>
<Th>P-ID</Th>
<Th>fps</Th>
<Th>Cpu %</Th>
<Th>Memory %</Th>
</Tr>
</Thead>
<Tbody>
<Tr key="capture" index="0">
<Td>Capture</Td>
<Td>{cameras[camera]['capture_pid'] || '- '}</Td>
<Td>{cameras[camera]['process_fps'] || '- '}</Td>
<Td>{cpu_usages[cameras[camera]['capture_pid']]?.['cpu'] || '- '}%</Td>
<Td>{cpu_usages[cameras[camera]['capture_pid']]?.['mem'] || '- '}%</Td>
</Tr>
<Tr key="detect" index="1">
<Td>Detect</Td>
<Td>{cameras[camera]['pid'] || '- '}</Td>
<Td>
{cameras[camera]['detection_fps']} ({cameras[camera]['skipped_fps']} skipped)
</Td>
<Td>{cpu_usages[cameras[camera]['pid']]?.['cpu'] || '- '}%</Td>
<Td>{cpu_usages[cameras[camera]['pid']]?.['mem'] || '- '}%</Td>
</Tr>
<Tr key="ffmpeg" index="2">
<Td>ffmpeg</Td>
<Td>{cameras[camera]['ffmpeg_pid'] || '- '}</Td>
<Td>{cameras[camera]['camera_fps'] || '- '}</Td>
<Td>{cpu_usages[cameras[camera]['ffmpeg_pid']]?.['cpu'] || '- '}%</Td>
<Td>{cpu_usages[cameras[camera]['ffmpeg_pid']]?.['mem'] || '- '}%</Td>
</Tr>
</Tbody>
</Table>
</div>
</div>
<div className="p-2">
<Table className="w-full">
<Thead>
<Tr>
<Th>Process</Th>
<Th>P-ID</Th>
<Th>fps</Th>
<Th>Cpu %</Th>
<Th>Memory %</Th>
</Tr>
</Thead>
<Tbody>
<Tr key="capture" index="0">
<Td>Capture</Td>
<Td>{cameras[camera]['capture_pid'] || "- "}</Td>
<Td>{cameras[camera]['process_fps'] || "- "}</Td>
<Td>{cpu_usages[cameras[camera]['capture_pid']]?.['cpu'] || "- "}%</Td>
<Td>{cpu_usages[cameras[camera]['capture_pid']]?.['mem'] || "- "}%</Td>
</Tr>
<Tr key="detect" index="1">
<Td>Detect</Td>
<Td>{cameras[camera]['pid'] || "- "}</Td>
<Td>
{cameras[camera]['detection_fps']} ({cameras[camera]['skipped_fps']} skipped)
</Td>
<Td>{cpu_usages[cameras[camera]['pid']]?.['cpu'] || "- "}%</Td>
<Td>{cpu_usages[cameras[camera]['pid']]?.['mem'] || "- "}%</Td>
</Tr>
<Tr key="ffmpeg" index="2">
<Td>ffmpeg</Td>
<Td>{cameras[camera]['ffmpeg_pid'] || "- "}</Td>
<Td>{cameras[camera]['camera_fps'] || "- "}</Td>
<Td>{cpu_usages[cameras[camera]['ffmpeg_pid']]?.['cpu'] || "- "}%</Td>
<Td>{cpu_usages[cameras[camera]['ffmpeg_pid']]?.['mem'] || "- "}%</Td>
</Tr>
</Tbody>
</Table>
</div>
</div>
))}
</div>
))}
</div>
)}
<p>System stats update automatically every {config.mqtt.stats_interval} seconds.</p>
</Fragment>