Compare commits

..

105 Commits

Author SHA1 Message Date
Blake Blackshear
7b4e510b95 fix initial switch state 2021-01-20 21:56:43 -06:00
Blake Blackshear
bb4f79cdfe handle exception when frame isn't in cache 2021-01-20 21:56:43 -06:00
Paul Armstrong
e32e69c2d0 feat(web): AutoUpdatingCameraImage to replace MJPEG feed 2021-01-20 21:15:25 -06:00
Paul Armstrong
a71ae053e4 fix(web): set default path to cameras view 2021-01-20 06:46:25 -06:00
Blake Blackshear
fcc9cd56cc update index.js to use baseUrl 2021-01-19 21:31:17 -06:00
Blake Blackshear
b981a3110b first pass at subfilter for ingress support 2021-01-19 19:58:42 -06:00
Paul Armstrong
2da50cc538 fix(web): dark mode text color fixes
fixes #544
2021-01-19 18:02:08 -06:00
Blake Blackshear
cb4a0aa594 ensure error message with missing config is printed 2021-01-19 18:00:26 -06:00
Blake Blackshear
52da1fddc7 update notification example 2021-01-19 07:41:45 -06:00
Blake Blackshear
14645ce4f8 fix mqtt switch handling 2021-01-19 07:41:17 -06:00
Blake Blackshear
97ce7f3028 initialize detection correctly from config 2021-01-19 07:40:51 -06:00
Blake Blackshear
3b5302f6ea update wheels version 2021-01-19 06:19:28 -06:00
Blake Blackshear
74eb16f213 pin numpy 2021-01-19 06:16:44 -06:00
Paul Armstrong
a3d6bf214c feat(web): layout & auto-update debug page 2021-01-18 12:57:09 -06:00
Paul Armstrong
16121ffd00 fix(web): ensure button bg colors show in prod builds 2021-01-18 11:39:42 -06:00
Blake Blackshear
91628bd5d8 fix zone config 2021-01-18 06:38:26 -06:00
Blake Blackshear
b10b64bf57 no longer need special aarch64 wheels build 2021-01-17 08:18:54 -06:00
Blake Blackshear
749c34be9f versioning wheels image 2021-01-16 20:03:42 -06:00
Blake Blackshear
8cfdfab985 move wheels to build container 2021-01-16 19:56:21 -06:00
Paul Armstrong
ef25f8a31e fix(web): mask zone editor to handle object filter masks
Includes additional handlers for adding/removing masks, as well as click to copy configs

fixes #523
2021-01-16 19:09:18 -06:00
Paul Armstrong
2a0551a08a feat(web): hash build files to avoid cache issues 2021-01-16 19:09:18 -06:00
Paul Armstrong
0b80419f15 fix(web): ensure mask editing works in firefox 2021-01-16 19:09:18 -06:00
Blake Blackshear
0dc81117aa docs updates for notification changes 2021-01-16 19:09:18 -06:00
Blake Blackshear
49b29d72a7 rename snapshot endpoint to thumbnail 2021-01-16 19:09:18 -06:00
Blake Blackshear
21ece238ff mqtt tweaks for switches 2021-01-16 19:09:18 -06:00
Blake Blackshear
f6ba3f2daa allow summary data to be filtered 2021-01-16 19:09:18 -06:00
Blake Blackshear
bb0d3cb59a update readme 2021-01-16 19:09:18 -06:00
Blake Blackshear
ca9b6d6c5c snapshots config typo 2021-01-16 19:09:18 -06:00
Blake Blackshear
3103ad2bfe update object filters to inherit like motion settings 2021-01-16 19:09:18 -06:00
Blake Blackshear
eab3998ad0 remove support for image masks 2021-01-16 19:09:18 -06:00
Blake Blackshear
a3dfd3a8e0 don't fallback to the CPU
fixes #381
2021-01-16 19:09:18 -06:00
Blake Blackshear
f1c3087775 add change type to events topic
#476
2021-01-16 19:09:18 -06:00
Blake Blackshear
1be91ed3f2 ensure each camera has a detect role set 2021-01-16 19:09:18 -06:00
Blake Blackshear
fd83c4f229 add detection enable to config
fixes #482
2021-01-16 19:09:18 -06:00
Blake Blackshear
de99221ad5 add env vars to config
fixes #509
2021-01-16 19:09:18 -06:00
Blake Blackshear
6892ce56ac enable and disable detection via mqtt 2021-01-16 19:09:18 -06:00
Blake Blackshear
41cea6f62e move setproctitle to prebuilt wheel location 2021-01-16 19:09:18 -06:00
Blake Blackshear
4bbffa97df switch to docker based web builds 2021-01-16 19:09:18 -06:00
Blake Blackshear
614f8abfef handle null thumbnail data 2021-01-16 19:09:18 -06:00
Blake Blackshear
14289b5fd1 add mask as object filter 2021-01-16 19:09:18 -06:00
Blake Blackshear
4164beff1c add object masks and move motion mask 2021-01-16 19:09:18 -06:00
Blake Blackshear
9b3ab486de add missing global snapshots config 2021-01-16 19:09:18 -06:00
Patrick Decat
232a49814a Add missing migrations in docker images 2021-01-16 19:09:18 -06:00
Paul Armstrong
6c61f0b135 fix(web): ensure postcss and postcss-cli are marked as deps 2021-01-16 19:09:18 -06:00
Patrick Decat
c572cec253 Fix Makefile to ignore gpg signatures in commits 2021-01-16 19:09:18 -06:00
Paul Armstrong
d4941f2a5f feat!: web user interface 2021-01-16 19:09:18 -06:00
Blake Blackshear
bf5ec2f65f try to cleanup some migration logging 2021-01-16 19:09:18 -06:00
Blake Blackshear
f8e21584b6 add retention settings for snapshots 2021-01-16 19:09:18 -06:00
Blake Blackshear
3cba83f84b init variables on camera state 2021-01-16 19:09:18 -06:00
Blake Blackshear
dcb4255d7e handle process exit exceptions 2021-01-16 19:09:18 -06:00
Blake Blackshear
9fc3c0dc2f store has_clip and has_snapshot on events 2021-01-16 19:09:18 -06:00
Blake Blackshear
a78830b48e add database migrations 2021-01-16 19:09:18 -06:00
Nat Morris
949fbadcdc Set titles for forked processes 2021-01-16 19:09:18 -06:00
Nat Morris
12c9e63b13 New stats module, refactor stats generation out of http module.
StatsEmitter thread to send stats to MQTT every 60 seconds by default, optional stats_interval config value.

New service stats attribute, containing uptime in seconds and version.
2021-01-16 19:09:18 -06:00
Blake Blackshear
157b230702 turn off snapshots via mqtt 2021-01-16 19:09:18 -06:00
Blake Blackshear
c69299d659 enable turning clips on and off via mqtt 2021-01-16 19:09:18 -06:00
Blake Blackshear
285d630770 cleanup save_Clips/clips inconsistency 2021-01-16 19:09:18 -06:00
Blake Blackshear
b9318092f4 add jpg snapshots to disk and clean up config 2021-01-16 19:09:18 -06:00
Paul Armstrong
905c361d52 fix: ensure timestamp is drawn above mask 2021-01-13 06:55:10 -06:00
Leonardo Merza
4443abbc49 add notes for Blue Iris RTSP support 2020-12-31 08:36:03 -06:00
yllar
dabb36ad93 Update README.md
change tmpfs size from 100MB to 1GB
2020-12-31 08:33:31 -06:00
kluszczyn
2bc8736fd9 Recordings - fix expire_file 2020-12-22 09:58:26 -05:00
Blake Blackshear
e9b3b09cc2 add clips endpoint to readme 2020-12-22 09:58:26 -05:00
Blake Blackshear
ca337c32b4 better mask error handling 2020-12-22 09:58:26 -05:00
Blake Blackshear
24b8bd7c85 fix tmpfs 2020-12-22 09:58:26 -05:00
Blake Blackshear
3ad75a441d remove redundant error output 2020-12-20 08:04:54 -06:00
Blake Blackshear
f006e9be8d use CACHE_DIR constant 2020-12-20 08:04:54 -06:00
Blake Blackshear
03f3ba8008 enable mounting tmpfs volume on start 2020-12-20 08:04:54 -06:00
Blake Blackshear
96a44eb7bf docs and issue template 2020-12-20 07:37:44 -06:00
Blake Blackshear
006782fe3d update process clip for latest changes 2020-12-20 07:37:44 -06:00
Blake Blackshear
ff3e95bbf7 publish event updates on zone change 2020-12-20 07:37:44 -06:00
Blake Blackshear
4b95a37e65 readme updates 2020-12-20 07:37:44 -06:00
Blake Blackshear
38c661b3a8 handle scenario with empty cache 2020-12-20 07:37:44 -06:00
Blake Blackshear
0d6e4f6a66 add qsv support to amd64 image 2020-12-20 07:37:44 -06:00
Blake Blackshear
1ad2219f1c add num_threads fixes #322 2020-12-20 07:37:44 -06:00
Blake Blackshear
dfcdd289c3 optimize clips fixes #299 2020-12-20 07:37:44 -06:00
Blake Blackshear
32f5f2cca9 add post_capture option 2020-12-20 07:37:44 -06:00
Blake Blackshear
24bfe9f3e8 re-crop to the object rather than the region 2020-12-20 07:37:44 -06:00
Blake Blackshear
004667dc99 allow runtime drawing settings for mjpeg and latest 2020-12-20 07:37:44 -06:00
Blake Blackshear
9d785dc781 allow the mask to be a list of masks 2020-12-20 07:37:44 -06:00
Blake Blackshear
cbba5a7af0 adding version endpoint 2020-12-20 07:37:44 -06:00
Blake Blackshear
29b29ee349 configurable motion and detect settings 2020-12-20 07:37:44 -06:00
Blake Blackshear
9ad53e09af update gitignore 2020-12-20 07:37:44 -06:00
Blake Blackshear
c9278991c9 fix test 2020-12-20 07:37:44 -06:00
Blake Blackshear
729de48934 switch default threshold to .7 2020-12-20 07:37:44 -06:00
Blake Blackshear
7476bff5fb allow process clips to output a csv of scores 2020-12-20 07:37:44 -06:00
Blake Blackshear
1e9eae8d9a allow db path to be customized 2020-12-20 07:37:44 -06:00
Blake Blackshear
8113a53381 add telegram example 2020-12-20 07:37:44 -06:00
Blake Blackshear
72833686f1 fix process clip 2020-12-20 07:37:44 -06:00
Blake Blackshear
096c21f105 handle empty string args 2020-12-20 07:37:44 -06:00
Blake Blackshear
181f66357b allow region to extend beyond the frame 2020-12-20 07:37:44 -06:00
tubalainen
a54fbc483c Updated file
ref: https://github.com/blakeblackshear/frigate/issues/373
2020-12-12 10:38:02 -06:00
Blake Blackshear
92d5a002d3 swap width and height to reduce confusion 2020-12-10 19:22:03 -06:00
Blake Blackshear
f9184903d7 updating compose example to reduce confusion 2020-12-10 19:02:08 -06:00
Blake Blackshear
91cde6ce7b allow defining model shape and switch to mobiledet as default model 2020-12-09 07:22:26 -06:00
Blake Blackshear
186a4587c7 add model dimensions to config 2020-12-09 07:22:26 -06:00
Patrick Decat
6049acb1f3 Document beta addon host 2020-12-08 07:25:13 -06:00
Blake Blackshear
2d2ebf313c make shm consistent with compose 2020-12-08 07:24:37 -06:00
tubalainen
3d329dcb52 Updated docker command line...
...to correspond with 0.8.0 feature set.
2020-12-08 07:24:37 -06:00
Blake Blackshear
06854fc34f readme cleanup fixes #332 2020-12-07 18:00:12 -06:00
Blake Blackshear
e01e14d866 handle and warn if roles don't match enabled features 2020-12-07 08:07:35 -06:00
Blake Blackshear
3dfd251ebb camera recommendations 2020-12-07 07:36:29 -06:00
Blake Blackshear
dcea807f77 catch all psutil errors 2020-12-07 07:16:48 -06:00
Blake Blackshear
87d83ff33a clarify height width and fps 2020-12-07 07:16:28 -06:00
Blake Blackshear
1d31cbdf0d readme updates 2020-12-06 14:25:28 -06:00
71 changed files with 12050 additions and 920 deletions


@@ -3,4 +3,5 @@ docs/
.gitignore
debug
config/
*.pyc
*.pyc
.git


@@ -1,6 +1,6 @@
---
name: Bug report
about: Create a report to help us improve
name: Bug report or Support request
about: ''
title: ''
labels: ''
assignees: ''
@@ -8,10 +8,10 @@ assignees: ''
---
**Describe the bug**
A clear and concise description of what the bug is.
A clear and concise description of what your issue is.
**Version of frigate**
What version are you using?
Output from `/version`
**Config file**
Include your full config file wrapped in triple back ticks.
@@ -19,14 +19,14 @@ Include your full config file wrapped in triple back ticks.
config here
```
**Logs**
**Frigate container logs**
```
Include relevant log output here
```
**Frigate debug stats**
```
Output from frigate's /debug/stats endpoint
**Frigate stats**
```json
Output from frigate's /stats endpoint
```
**FFprobe from your camera**
@@ -41,6 +41,7 @@ If applicable, add screenshots to help explain your problem.
**Computer Hardware**
- OS: [e.g. Ubuntu, Windows]
- Install method: [e.g. Addon, Docker Compose, Docker Command]
- Virtualization: [e.g. Proxmox, Virtualbox]
- Coral Version: [e.g. USB, PCIe, None]
- Network Setup: [e.g. Wired, WiFi]

.gitignore

@@ -1,4 +1,11 @@
*.pyc
.DS_Store
*.pyc
debug
.vscode
config/config.yml
config/config.yml
models
*.mp4
*.db
frigate/version.py
web/build
web/node_modules


@@ -1,49 +1,59 @@
default_target: amd64_frigate
COMMIT_HASH := $(shell git log -1 --pretty=format:"%h"|tail -1)
version:
echo "VERSION='0.8.0-$(COMMIT_HASH)'" > frigate/version.py
web:
docker build --tag frigate-web --file docker/Dockerfile.web web/
amd64_wheels:
docker build --tag blakeblackshear/frigate-wheels:amd64 --file docker/Dockerfile.wheels .
docker build --tag blakeblackshear/frigate-wheels:1.0.1-amd64 --file docker/Dockerfile.wheels .
amd64_ffmpeg:
docker build --tag blakeblackshear/frigate-ffmpeg:1.0.0-amd64 --file docker/Dockerfile.ffmpeg.amd64 .
docker build --tag blakeblackshear/frigate-ffmpeg:1.1.0-amd64 --file docker/Dockerfile.ffmpeg.amd64 .
amd64_frigate:
docker build --tag frigate-base --build-arg ARCH=amd64 --file docker/Dockerfile.base .
amd64_frigate: version web
docker build --tag frigate-base --build-arg ARCH=amd64 --build-arg FFMPEG_VERSION=1.1.0 --build-arg WHEELS_VERSION=1.0.1 --file docker/Dockerfile.base .
docker build --tag frigate --file docker/Dockerfile.amd64 .
amd64_all: amd64_wheels amd64_ffmpeg amd64_frigate
amd64nvidia_wheels:
docker build --tag blakeblackshear/frigate-wheels:amd64nvidia --file docker/Dockerfile.wheels .
docker build --tag blakeblackshear/frigate-wheels:1.0.1-amd64nvidia --file docker/Dockerfile.wheels .
amd64nvidia_ffmpeg:
docker build --tag blakeblackshear/frigate-ffmpeg:1.0.0-amd64nvidia --file docker/Dockerfile.ffmpeg.amd64nvidia .
amd64nvidia_frigate:
docker build --tag frigate-base --build-arg ARCH=amd64nvidia --file docker/Dockerfile.base .
amd64nvidia_frigate: version web
docker build --tag frigate-base --build-arg ARCH=amd64nvidia --build-arg FFMPEG_VERSION=1.0.0 --build-arg WHEELS_VERSION=1.0.1 --file docker/Dockerfile.base .
docker build --tag frigate --file docker/Dockerfile.amd64nvidia .
amd64nvidia_all: amd64nvidia_wheels amd64nvidia_ffmpeg amd64nvidia_frigate
aarch64_wheels:
docker build --tag blakeblackshear/frigate-wheels:aarch64 --file docker/Dockerfile.wheels.aarch64 .
docker build --tag blakeblackshear/frigate-wheels:1.0.1-aarch64 --file docker/Dockerfile.wheels .
aarch64_ffmpeg:
docker build --tag blakeblackshear/frigate-ffmpeg:1.0.0-aarch64 --file docker/Dockerfile.ffmpeg.aarch64 .
aarch64_frigate:
docker build --tag frigate-base --build-arg ARCH=aarch64 --file docker/Dockerfile.base .
aarch64_frigate: version web
docker build --tag frigate-base --build-arg ARCH=aarch64 --build-arg FFMPEG_VERSION=1.0.0 --build-arg WHEELS_VERSION=1.0.1 --file docker/Dockerfile.base .
docker build --tag frigate --file docker/Dockerfile.aarch64 .
armv7_all: armv7_wheels armv7_ffmpeg armv7_frigate
armv7_wheels:
docker build --tag blakeblackshear/frigate-wheels:armv7 --file docker/Dockerfile.wheels .
docker build --tag blakeblackshear/frigate-wheels:1.0.1-armv7 --file docker/Dockerfile.wheels .
armv7_ffmpeg:
docker build --tag blakeblackshear/frigate-ffmpeg:1.0.0-armv7 --file docker/Dockerfile.ffmpeg.armv7 .
armv7_frigate:
docker build --tag frigate-base --build-arg ARCH=armv7 --file docker/Dockerfile.base .
armv7_frigate: version web
docker build --tag frigate-base --build-arg ARCH=armv7 --build-arg FFMPEG_VERSION=1.0.0 --build-arg WHEELS_VERSION=1.0.1 --file docker/Dockerfile.base .
docker build --tag frigate --file docker/Dockerfile.armv7 .
armv7_all: armv7_wheels armv7_ffmpeg armv7_frigate
.PHONY: web

README.md

@@ -33,16 +33,23 @@ Use of a [Google Coral Accelerator](https://coral.ai/products/) is optional, but
- [Object Filters](#object-filters)
- [Masks](#masks)
- [Zones](#zones)
- [Recording Clips](#recording-clips)
- [24/7 Recordings](#247-recordings)
- [RTMP Streams](#rtmp-streams)
- [Recording Clips (clips)](#recording-clips)
- [Snapshots (snapshots)](#snapshots)
- [24/7 Recordings (record)](#247-recordings)
- [RTMP Streams (rtmp)](#rtmp-streams)
- [Integration with HomeAssistant](#integration-with-homeassistant)
- [Web UI](#web-ui)
- [MQTT Topics](#mqtt-topics)
- [HTTP Endpoints](#http-endpoints)
- [Custom Models](#custom-models)
- [Troubleshooting](#troubleshooting)
## Recommended Hardware
### Cameras
Cameras that output H.264 video and AAC audio will offer the most compatibility with all features of Frigate and HomeAssistant. It is also helpful if your camera supports multiple substreams to allow different resolutions to be used for detection, streaming, clips, and recordings without re-encoding.
### Computer
|Name|Inference Speed|Notes|
|----|---------------|-----|
|Atomic Pi|16ms|Good option for a dedicated low power board with a small number of cameras. Can leverage Intel QuickSync for stream decoding.|
@@ -56,9 +63,14 @@ Use of a [Google Coral Accelerator](https://coral.ai/products/) is optional, but
[Back to top](#documentation)
## Installing
Frigate is a Docker container that can be run on any Docker host including as a [HassOS Addon](https://www.home-assistant.io/addons/). See instructions below for installing the HassOS addon.
For HomeAssistant users, there is also a [custom component (aka integration)](https://github.com/blakeblackshear/frigate-hass-integration). This custom component adds tighter integration with HomeAssistant by automatically setting up camera entities, sensors, media browser for clips and recordings, and a public API to simplify notifications.
Note that HassOS Addons and custom components are different things. If you are already running Frigate with Docker directly, you do not need the Addon since the Addon would run another instance of Frigate.
### HassOS Addon
HassOS users can install via the addon repository. Frigate requires that an MQTT server be running.
HassOS users can install via the addon repository. Frigate requires an MQTT server.
1. Navigate to Supervisor > Add-on Store > Repositories
1. Add https://github.com/blakeblackshear/frigate-hass-addons
1. Setup your configuration in the `Configuration` tab
@@ -69,16 +81,19 @@ Make sure you choose the right image for your architecture:
|Arch|Image Name|
|-|-|
|amd64|blakeblackshear/frigate:stable-amd64|
|amd64nvidia|blakeblackshear/frigate:stable-amd64nvidia|
|armv7|blakeblackshear/frigate:stable-armv7|
|aarch64|blakeblackshear/frigate:stable-aarch64|
It is recommended to run with docker-compose:
```yaml
version: "3.6"
services:
frigate:
container_name: frigate
restart: unless-stopped
privileged: true
image: blakeblackshear/frigate:stable-amd64
image: blakeblackshear/frigate:0.8.0-beta2-amd64
volumes:
- /dev/bus/usb:/dev/bus/usb
- /etc/localtime:/etc/localtime:ro
@@ -88,18 +103,12 @@ It is recommended to run with docker-compose:
- type: tmpfs # Optional: 1GB of memory, reduces SSD/SD Card wear
target: /tmp/cache
tmpfs:
size: 100000000
size: 1000000000
ports:
- "5000:5000"
- "1935:1935" # RTMP feeds
environment:
FRIGATE_RTSP_PASSWORD: "password"
healthcheck:
test: ["CMD", "wget" , "-q", "-O-", "http://localhost:5000"]
interval: 30s
timeout: 10s
retries: 5
start_period: 3m
```
If you can't use docker compose, you can run the container with something similar to this:
@@ -107,12 +116,16 @@ If you can't use docker compose, you can run the container with something simila
docker run --rm \
--name frigate \
--privileged \
--mount type=tmpfs,target=/tmp/cache,tmpfs-size=1000000000 \
-v /dev/bus/usb:/dev/bus/usb \
-v <path_to_config_dir>:/config:ro \
-v <path_to_directory_for_clips>:/media/frigate/clips \
-v <path_to_directory_for_recordings>:/media/frigate/recordings \
-v <path_to_config>:/config:ro \
-v /etc/localtime:/etc/localtime:ro \
-p 5000:5000 \
-e FRIGATE_RTSP_PASSWORD='password' \
blakeblackshear/frigate:stable-amd64
-p 5000:5000 \
-p 1935:1935 \
blakeblackshear/frigate:0.8.0-beta2-amd64
```
### Kubernetes
@@ -165,11 +178,13 @@ cameras:
roles:
- detect
- rtmp
height: 720
width: 1280
height: 720
fps: 5
```
Here are all the configuration options:
Here are all configuration options.
**Please do not copy all of this as your starting configuration. Optional configuration options should not be included in your config unless you need to change from the default values.**
```yaml
# Optional: Logging configuration
logger:
@@ -179,8 +194,19 @@ logger:
logs:
frigate.mqtt: error
# Optional: Environment variables
# This section can be used to set environment variables for those unable to modify the environment
# of the container (ie. within Hass.io)
environment_vars:
EXAMPLE_VAR: value
# Optional: database configuration
database:
# Optional: database path
# This may need to be in a custom location if network storage is used for clips
path: /media/frigate/clips/frigate.db
# Optional: detectors configuration
# USB Coral devices will be auto detected with CPU fallback
detectors:
# Required: name of the detector
coral:
@@ -189,6 +215,16 @@ detectors:
type: edgetpu
# Optional: device name as defined here: https://coral.ai/docs/edgetpu/multiple-edgetpu/#using-the-tensorflow-lite-python-api
device: usb
# Optional: num_threads value passed to the tflite.Interpreter (default: shown below)
# This value is only used for CPU types
num_threads: 3
# Optional: model configuration
model:
# Required: height of the trained model
height: 320
# Required: width of the trained model
width: 320
# Required: mqtt configuration
mqtt:
@@ -208,13 +244,29 @@ mqtt:
# NOTE: Environment variables that begin with 'FRIGATE_' may be referenced in {}.
# eg. password: '{FRIGATE_MQTT_PASSWORD}'
password: password
# Optional: interval in seconds for publishing stats (default: shown below)
stats_interval: 60
# Optional: Global configuration for the jpg snapshots written to the clips directory for each event
snapshots:
# Optional: Retention settings (default: shown below)
retain:
# Required: Default retention days (default: shown below)
default: 10
# Optional: Per object retention days
objects:
person: 15
# Optional: Global configuration for saving clips
save_clips:
clips:
# Optional: Maximum length of time to retain video during long events. (default: shown below)
# NOTE: If an object is being tracked for longer than this amount of time, the cache
# will begin to expire and the resulting clip will be the last x seconds of the event.
max_seconds: 300
# Optional: size of tmpfs mount to create for cache files (default: not set)
# mount -t tmpfs -o size={tmpfs_cache_size} tmpfs /tmp/cache
# Notice: If you have mounted a tmpfs volume through docker, this value should not be set in your config
tmpfs_cache_size: 256m
# Optional: Retention settings for clips (default: shown below)
retain:
# Required: Default retention days (default: shown below)
@@ -261,7 +313,39 @@ objects:
# Optional: minimum score for the object to initiate tracking (default: shown below)
min_score: 0.5
# Optional: minimum decimal percentage for tracked object's computed score to be considered a true positive (default: shown below)
threshold: 0.85
threshold: 0.7
# Optional: Global motion detection config. These may also be defined at the camera level.
# ADVANCED: Most users will not need to set these values in their config
motion:
# Optional: The threshold passed to cv2.threshold to determine if a pixel is different enough to be counted as motion. (default: shown below)
# Increasing this value will make motion detection less sensitive and decreasing it will make motion detection more sensitive.
# The value should be between 1 and 255.
threshold: 25
# Optional: Minimum size in pixels in the resized motion image that counts as motion
# Increasing this value will prevent smaller areas of motion from being detected. Decreasing will make motion detection more sensitive to smaller
# moving objects.
contour_area: 100
# Optional: Alpha value passed to cv2.accumulateWeighted when averaging the motion delta across multiple frames (default: shown below)
# Higher values mean the current frame impacts the delta a lot, and a single raindrop may register as motion.
# Too low and a fast moving person won't be detected as motion.
delta_alpha: 0.2
# Optional: Alpha value passed to cv2.accumulateWeighted when averaging frames to determine the background (default: shown below)
# Higher values mean the current frame impacts the average a lot, and a new object will be averaged into the background faster.
# Low values will cause things like moving shadows to be detected as motion for longer.
# https://www.geeksforgeeks.org/background-subtraction-in-an-image-using-concept-of-running-average/
frame_alpha: 0.2
# Optional: Height of the resized motion frame (default: 1/6th of the original frame height)
# This operates as an efficient blur alternative. Higher values will result in more granular motion detection at the expense of higher CPU usage.
# Lower values result in less CPU, but small changes may not register as motion.
frame_height: 180
# Optional: Global detection settings. These may also be defined at the camera level.
# ADVANCED: Most users will not need to set these values in their config
detect:
# Optional: Number of frames without a detection before frigate considers an object to be gone. (default: double the frame rate)
max_disappeared: 10
# Required: configuration section for cameras
cameras:
@@ -275,6 +359,8 @@ cameras:
# NOTE: Environment variables that begin with 'FRIGATE_' may be referenced in {}
- path: rtsp://viewer:{FRIGATE_RTSP_PASSWORD}@10.0.10.10:554/cam/realmonitor?channel=1&subtype=2
# Required: list of roles for this stream. valid values are: detect,record,clips,rtmp
# NOTICE: In addition to assigning the record, clips, and rtmp roles,
# they must also be enabled in the camera config.
roles:
- detect
- rtmp
@@ -294,32 +380,25 @@ cameras:
# Optional: camera specific output args (default: inherit)
output_args:
# Required: height of the frame
# NOTE: Recommended to set this value, but frigate will attempt to autodetect.
height: 720
# Required: width of the frame
# NOTE: Recommended to set this value, but frigate will attempt to autodetect.
# Required: width of the frame for the input with the detect role
width: 1280
# Optional: desired fps for your camera
# Required: height of the frame for the input with the detect role
height: 720
# Optional: desired fps for your camera for the input with the detect role
# NOTE: Recommended value of 5. Ideally, try and reduce your FPS on the camera.
# Frigate will attempt to autodetect if not specified.
fps: 5
# Optional: motion mask
# NOTE: see docs for more detailed info on creating masks
mask: poly,0,900,1080,900,1080,1920,0,1920
# Optional: camera level motion config
motion:
# Optional: motion mask
# NOTE: see docs for more detailed info on creating masks
mask: 0,900,1080,900,1080,1920,0,1920
# Optional: timeout for highest scoring image before allowing it
# to be replaced by a newer image. (default: shown below)
best_image_timeout: 60
# Optional: camera specific mqtt settings
mqtt:
# Optional: crop the camera frame to the detection region of the object (default: False)
crop_to_region: True
# Optional: resize the image before publishing over mqtt
snapshot_height: 175
# Optional: zones for this camera
zones:
# Required: name of the zone
@@ -335,17 +414,28 @@ cameras:
person:
min_area: 5000
max_area: 100000
threshold: 0.8
threshold: 0.7
# Optional: Camera level detect settings
detect:
# Optional: enables detection for the camera (default: True)
# This value can be set via MQTT and will be updated in startup based on retained value
enabled: True
# Optional: Number of frames without a detection before frigate considers an object to be gone. (default: double the frame rate)
max_disappeared: 10
# Optional: save clips configuration
# NOTE: This feature does not work if you have added "-vsync drop" in your input params.
# This will only work for camera feeds that can be copied into the mp4 container format without
# encoding such as h264. It may not work for some types of streams.
save_clips:
clips:
# Required: enables clips for the camera (default: shown below)
# This value can be set via MQTT and will be updated in startup based on retained value
enabled: False
# Optional: Number of seconds before the event to include in the clips (default: shown below)
pre_capture: 30
pre_capture: 5
# Optional: Number of seconds after the event to include in the clips (default: shown below)
post_capture: 5
# Optional: Objects to save clips for. (default: all tracked objects)
objects:
- person
@@ -369,21 +459,43 @@ cameras:
# Required: Enable the live stream (default: True)
enabled: True
# Optional: Configuration for the snapshots in the debug view and mqtt
# Optional: Configuration for the jpg snapshots written to the clips directory for each event
snapshots:
# Optional: Enable writing jpg snapshot to /media/frigate/clips (default: shown below)
# This value can be set via MQTT and will be updated in startup based on retained value
enabled: False
# Optional: print a timestamp on the snapshots (default: shown below)
show_timestamp: True
# Optional: draw zones on the debug mjpeg feed (default: shown below)
draw_zones: False
# Optional: draw bounding boxes on the mqtt snapshots (default: shown below)
draw_bounding_boxes: True
# Optional: crop the snapshot to the detection region (default: shown below)
crop_to_region: True
# Optional: height to resize the snapshot to (default: shown below)
# NOTE: 175px is optimized for thumbnails in the homeassistant media browser
timestamp: False
# Optional: draw bounding box on the snapshots (default: shown below)
bounding_box: False
# Optional: crop the snapshot (default: shown below)
crop: False
# Optional: height to resize the snapshot to (default: original size)
height: 175
# Optional: Camera override for retention settings (default: global values)
retain:
# Required: Default retention days (default: shown below)
default: 10
# Optional: Per object retention days
objects:
person: 15
# Optional: Camera level object filters config. If defined, this is used instead of the global config.
# Optional: Configuration for the jpg snapshots published via MQTT
mqtt:
# Optional: Enable publishing snapshot via mqtt for camera (default: shown below)
# NOTE: Only applies to publishing image data to MQTT via 'frigate/<camera_name>/<object_name>/snapshot'.
# All other messages will still be published.
enabled: True
# Optional: print a timestamp on the snapshots (default: shown below)
timestamp: True
# Optional: draw bounding box on the snapshots (default: shown below)
bounding_box: True
# Optional: crop the snapshot (default: shown below)
crop: True
# Optional: height to resize the snapshot to (default: shown below)
height: 270
# Optional: Camera level object filters config.
objects:
track:
- person
@@ -393,7 +505,10 @@ cameras:
min_area: 5000
max_area: 100000
min_score: 0.5
threshold: 0.85
threshold: 0.7
# Optional: mask to prevent this object type from being detected in certain areas (default: no mask)
# Checks based on the bottom center of the bounding box of the object
mask: 0,0,1000,0,1000,200,0,200
```
[Back to top](#documentation)
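The motion options above map onto a standard OpenCV running-average pipeline. Below is a minimal sketch of how `threshold`, `contour_area`, `frame_alpha`, and `frame_height` interact; this is an illustration, not Frigate's actual implementation, and the `delta_alpha` smoothing of the delta image is omitted for brevity:

```python
import cv2
import numpy as np

avg_frame = None  # running average of the background


def motion_boxes(frame, threshold=25, contour_area=100,
                 frame_alpha=0.2, frame_height=180):
    global avg_frame
    # resizing acts as an efficient blur alternative
    scale = frame_height / frame.shape[0]
    gray = cv2.cvtColor(
        cv2.resize(frame, (int(frame.shape[1] * scale), frame_height)),
        cv2.COLOR_BGR2GRAY)
    if avg_frame is None:
        avg_frame = gray.astype(np.float32)
    # pixels differing from the averaged background by more than `threshold`
    delta = cv2.absdiff(gray, cv2.convertScaleAbs(avg_frame))
    thresh = cv2.threshold(delta, threshold, 255, cv2.THRESH_BINARY)[1]
    # fold the current frame into the background; higher alpha adapts faster
    cv2.accumulateWeighted(gray, avg_frame, frame_alpha)
    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= contour_area]
```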
@@ -424,8 +539,8 @@ cameras:
roles:
- clips
- record
height: 720
width: 1280
height: 720
fps: 5
```
@@ -433,7 +548,7 @@ cameras:
[Back to top](#documentation)
## Optimizing Performance
- **Google Coral**: It is strongly recommended to use a Google Coral, but Frigate will fall back to CPU in the event one is not found. Offloading TensorFlow to the Google Coral is an order of magnitude faster and will reduce your CPU load dramatically. A $60 device will outperform a $2000 CPU.
- **Google Coral**: It is strongly recommended to use a Google Coral, but Frigate will fall back to CPU in the event one is not found. Offloading TensorFlow to the Google Coral is an order of magnitude faster and will reduce your CPU load dramatically. A $60 device will outperform a $2000 CPU. Frigate should work with any supported Coral device from https://coral.ai
- **Resolution**: For the `detect` input, choose a camera resolution where the smallest object you want to detect barely fits inside a 300x300px square. The model used by Frigate is trained on 300x300px images, so you will get worse performance and no improvement in accuracy by using a larger resolution since Frigate resizes the area where it is looking for objects to 300x300 anyway.
- **FPS**: 5 frames per second should be adequate. Higher frame rates will require more CPU usage without improving detections or accuracy. Reducing the frame rate on your camera will have the greatest improvement on system resources.
- **Hardware Acceleration**: Make sure you configure the `hwaccel_args` for your hardware. They provide a significant reduction in CPU usage if they are available.
@@ -442,7 +557,8 @@ cameras:
### FFmpeg Hardware Acceleration
Frigate works on Raspberry Pi 3b/4 and x86 machines. It is recommended to update your configuration to enable hardware accelerated decoding in ffmpeg. Depending on your system, these parameters may not be compatible.
Raspberry Pi 3/4 (32-bit OS):
Raspberry Pi 3/4 (32-bit OS)
**NOTICE**: If you are using the addon, ensure you turn off `Protection mode` for hardware acceleration.
```yaml
ffmpeg:
hwaccel_args:
@@ -451,6 +567,7 @@ ffmpeg:
```
Raspberry Pi 3/4 (64-bit OS)
**NOTICE**: If you are using the addon, ensure you turn off `Protection mode` for hardware acceleration.
```yaml
ffmpeg:
hwaccel_args:
@@ -471,7 +588,17 @@ ffmpeg:
```
Intel-based CPUs (>=10th Generation) via Quicksync (https://trac.ffmpeg.org/wiki/Hardware/QuickSync)
**Note:** You also need to set `LIBVA_DRIVER_NAME=iHD` as an environment variable on the container.
```yaml
ffmpeg:
hwaccel_args:
- -hwaccel
- qsv
- -qsv_device
- /dev/dri/renderD128
```
AMD/ATI GPUs (Radeon HD 2000 and newer GPUs) via libva-mesa-driver (https://trac.ffmpeg.org/wiki/Hardware/QuickSync)
**Note:** You also need to set `LIBVA_DRIVER_NAME=radeonsi` as an environment variable on the container.
```yaml
ffmpeg:
hwaccel_args:
@@ -486,10 +613,12 @@ Nvidia GPU based decoding via NVDEC is supported, but requires special configura
[Back to top](#documentation)
## Detectors
By default Frigate will look for a USB Coral device and fall back to the CPU if it cannot be found. If you have PCI or multiple Coral devices, you need to configure your detector devices in the config file. When using multiple detectors, they run in dedicated processes, but pull from a common queue of requested detections across all cameras.
The default config will look for a USB Coral device. If you do not have a Coral, you will need to configure a CPU detector. If you have PCI or multiple Coral devices, you need to configure your detector devices in the config file. When using multiple detectors, they run in dedicated processes, but pull from a common queue of requested detections across all cameras.
Frigate supports `edgetpu` and `cpu` as detector types. The device value should be specified according to the [Documentation for the TensorFlow Lite Python API](https://coral.ai/docs/edgetpu/multiple-edgetpu/#using-the-tensorflow-lite-python-api).
**Note**: There is no support for Nvidia GPUs to perform object detection with tensorflow. It can be used for ffmpeg decoding, but not object detection.
Single USB Coral:
```yaml
detectors:
@@ -597,13 +726,20 @@ Frigate can save video clips without any CPU overhead for encoding by simply cop
### Database
Event and clip information is managed in a sqlite database at `/media/frigate/clips/frigate.db`. If that database is deleted, clips will be orphaned and will need to be cleaned up manually. They also won't show up in the Media Browser within HomeAssistant.
If you are storing your clips on a network share (SMB, NFS, etc), you may get a `database is locked` error message on startup. You can customize the location of the database in the config if necessary.
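For a quick look at what has been recorded, the database can be opened with Python's standard `sqlite3` module. A sketch assuming the default path above; the table and column names here are educated guesses based on the `Event` model and API fields, not a documented schema:

```python
import sqlite3

# "event" is peewee's default table name for an Event model -- verify
# the actual schema with ".tables" and ".schema" before relying on this.
conn = sqlite3.connect("/media/frigate/clips/frigate.db")
for row in conn.execute(
    "SELECT id, camera, label, has_clip, has_snapshot FROM event "
    "ORDER BY start_time DESC LIMIT 10"
):
    print(row)
conn.close()
```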
### Global Configuration Options
- `max_seconds`: This limits the size of the cache when an object is being tracked. If an object is stationary and being tracked for a long time, the cache files will expire and this value will be the maximum clip length for the *end* of the event. For example, if this is set to 300 seconds and an object is being tracked for 600 seconds, the clip will end up being the last 300 seconds. Defaults to 300 seconds.
### Per-camera Configuration Options
- `pre_capture`: Defines how much time should be included in the clip prior to the beginning of the event. Defaults to 30 seconds.
- `pre_capture`: Defines how much time should be included in the clip prior to the beginning of the event. Defaults to 5 seconds.
- `post_capture`: Defines how much time should be included in the clip after the end of the event. Defaults to 5 seconds.
- `objects`: List of object types to save clips for. Object types here must be listed for tracking at the camera or global configuration. Defaults to all tracked objects.
[Back to top](#documentation)
## Snapshots
Frigate can save a snapshot image to `/media/frigate/clips` for each event named as `<camera>-<id>.jpg`.
[Back to top](#documentation)
@@ -615,12 +751,14 @@ Event and clip information is managed in a sqlite database at `/media/frigate/cl
[Back to top](#documentation)
## RTMP Streams
Frigate can re-stream your video feed as an RTMP feed for other applications such as HomeAssistant to utilize it. This allows you to use a video feed for detection in frigate and HomeAssistant live view at the same time without having to make two separate connections to the camera. The video feed is copied from the original video feed directly to avoid re-encoding. This feed does not include any annotation by Frigate.
Frigate can re-stream your video feed as an RTMP feed for other applications such as HomeAssistant to utilize it at `rtmp://<frigate_host>/live/<camera_name>`. Port 1935 must be open. This allows you to use a video feed for detection in frigate and HomeAssistant live view at the same time without having to make two separate connections to the camera. The video feed is copied from the original video feed directly to avoid re-encoding. This feed does not include any annotation by Frigate.
Some video feeds are not compatible with RTMP. If you are experiencing issues, check to make sure your camera feed is h264 with AAC audio. If your camera doesn't support a compatible format for RTMP, you can use the ffmpeg args to re-encode it on the fly at the expense of increased CPU utilization.
[Back to top](#documentation)
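A one-frame sanity check of the RTMP feed from Python, assuming OpenCV was built with ffmpeg support; `frigate.local` and `back` are placeholder host and camera names:

```python
import cv2

# read a single frame from the re-streamed feed
cap = cv2.VideoCapture("rtmp://frigate.local/live/back")
ok, frame = cap.read()
print("frame received:", ok, frame.shape if ok else None)
cap.release()
```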
## Integration with HomeAssistant
The best way to integrate with HomeAssistant is to use the [official integration](https://github.com/blakeblackshear/frigate-hass-integration). When configuring the integration, you will be asked for the `Host` of your frigate instance. This value should be the url you use to access Frigate in the browser and will look like `http://<host>:5000/`. If you are using HassOS with the addon, the host should be `http://ccab4aaf-frigate:5000`. HomeAssistant needs access to port 5000 (api) and 1935 (rtmp) for all features. The integration will setup the following entities within HomeAssistant:
The best way to integrate with HomeAssistant is to use the [official integration](https://github.com/blakeblackshear/frigate-hass-integration). When configuring the integration, you will be asked for the `Host` of your frigate instance. This value should be the url you use to access Frigate in the browser and will look like `http://<host>:5000/`. If you are using HassOS with the addon, the host should be `http://ccab4aaf-frigate:5000` (or `http://ccab4aaf-frigate-beta:5000` if you are using the beta version of the addon). HomeAssistant needs access to port 5000 (api) and 1935 (rtmp) for all features. The integration will setup the following entities within HomeAssistant:
Sensors:
- Stats to monitor frigate performance
@@ -652,7 +790,7 @@ automation:
data_template:
message: 'A {{trigger.payload_json["after"]["label"]}} was detected.'
data:
image: 'https://your.public.hass.address.com/api/frigate/notifications/{{trigger.payload_json["after"]["id"]}}.jpg?format=android'
image: 'https://your.public.hass.address.com/api/frigate/notifications/{{trigger.payload_json["after"]["id"]}}/thumbnail.jpg?format=android'
tag: '{{trigger.payload_json["after"]["id"]}}'
```
Note that the image url has `?format=android`. This adjusts the aspect ratio to be ideal for android notifications. For iOS optimized snapshots, no format parameter needs to be passed.
@@ -661,28 +799,57 @@ You can find some additional examples for notifications [here](docs/notification
[Back to top](#documentation)
## Web UI
Frigate comes bundled with a simple web ui that supports the following:
- Show cameras
- Browse events
- Mask helper
## HTTP Endpoints
A web server is available on port 5000 with the following endpoints.
### `/<camera_name>`
### `/api/<camera_name>`
An mjpeg stream for debugging. Keep in mind the mjpeg endpoint is for debugging only and will put additional load on the system when in use.
You can access a higher resolution mjpeg stream by appending `h=height-in-pixels` to the endpoint. For example `http://localhost:5000/back?h=1080`. You can also increase the FPS by appending `fps=frame-rate` to the URL such as `http://localhost:5000/back?fps=10` or both with `?fps=10&h=1000`
Accepts the following query string parameters:
|param|Type|Description|
|----|-----|--|
|`fps`|int|Frame rate|
|`h`|int|Height in pixels|
|`bbox`|int|Show bounding boxes for detected objects (0 or 1)|
|`timestamp`|int|Print the timestamp in the upper left (0 or 1)|
|`zones`|int|Draw the zones on the image (0 or 1)|
|`mask`|int|Overlay the mask on the image (0 or 1)|
|`motion`|int|Draw blue boxes for areas with detected motion (0 or 1)|
|`regions`|int|Draw green boxes for areas where object detection was run (0 or 1)|
### `/<camera_name>/<object_name>/best.jpg[?h=300&crop=1]`
You can access a higher resolution mjpeg stream by appending `h=height-in-pixels` to the endpoint. For example `http://localhost:5000/back?h=1080`. You can also increase the FPS by appending `fps=frame-rate` to the URL such as `http://localhost:5000/back?fps=10` or both with `?fps=10&h=1000`.
### `/api/<camera_name>/<object_name>/best.jpg[?h=300&crop=1]`
The best snapshot for any object type. It is a full resolution image by default.
Example parameters:
- `h=300`: resizes the image to 300 pixels tall
- `crop=1`: crops the image to the region of the detection rather than returning the entire image
### `/<camera_name>/latest.jpg[?h=300]`
### `/api/<camera_name>/latest.jpg[?h=300]`
The most recent frame that frigate has finished processing. It is a full resolution image by default.
Accepts the following query string parameters:
|param|Type|Description|
|----|-----|--|
|`h`|int|Height in pixels|
|`bbox`|int|Show bounding boxes for detected objects (0 or 1)|
|`timestamp`|int|Print the timestamp in the upper left (0 or 1)|
|`zones`|int|Draw the zones on the image (0 or 1)|
|`mask`|int|Overlay the mask on the image (0 or 1)|
|`motion`|int|Draw blue boxes for areas with detected motion (0 or 1)|
|`regions`|int|Draw green boxes for areas where object detection was run (0 or 1)|
Example parameters:
- `h=300`: resizes the image to 300 pixels tall
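A short sketch of pulling an annotated frame from this endpoint with the `requests` library; the host and camera name are placeholders, and the parameters come from the table above:

```python
import requests

# fetch the latest frame with bounding boxes and motion areas drawn
r = requests.get(
    "http://frigate.local:5000/api/back/latest.jpg",
    params={"h": 300, "bbox": 1, "motion": 1},
    timeout=10,
)
r.raise_for_status()
open("latest.jpg", "wb").write(r.content)
```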
### `/stats`
### `/api/stats`
Contains some granular debug info that can be used for sensors in HomeAssistant.
Sample response:
@@ -741,14 +908,22 @@ Sample response:
***************/
"pid": 25321
}
},
"service": {
/* Uptime in seconds */
"uptime": 10,
"version": "0.8.0-8883709"
}
}
```
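A minimal sketch of reading the new `service` block from this endpoint; the host is a placeholder:

```python
import requests

# the "service" keys match the sample response above
stats = requests.get("http://frigate.local:5000/api/stats", timeout=10).json()
service = stats["service"]
print(f"frigate {service['version']}, up {service['uptime']}s")
```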
### `/config`
### `/api/config`
A json representation of your configuration
### `/events`
### `/api/version`
Version info
### `/api/events`
Events from the database. Accepts the following query string parameters:
|param|Type|Description|
|----|-----|--|
@@ -758,14 +933,22 @@ Events from the database. Accepts the following query string parameters:
|`label`|str|Label name|
|`zone`|str|Zone name|
|`limit`|int|Limit the number of events returned|
|`has_snapshot`|int|Filter to events that have snapshots (0 or 1)|
|`has_clip`|int|Filter to events that have clips (0 or 1)|
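A sketch of querying this endpoint for recent person events that have clips, assuming the response is a JSON list of event objects with the same fields as the MQTT payloads below; the host is a placeholder:

```python
import requests

# parameters come straight from the table above
events = requests.get(
    "http://frigate.local:5000/api/events",
    params={"label": "person", "has_clip": 1, "limit": 5},
    timeout=10,
).json()
for event in events:
    print(event["id"], event["camera"], event["label"])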
### `/events/summary`
### `/api/events/summary`
Returns summary data for events in the database. Used by the HomeAssistant integration.
### `/events/<id>`
### `/api/events/<id>`
Returns data for a single event.
### `/events/<id>/snapshot.jpg`
Returns a snapshot for the event id optimized for notifications. Works while the event is in progress and after completion. Passing `?format=android` will convert the thumbnail to 2:1 aspect ratio.
### `/api/events/<id>/thumbnail.jpg`
Returns a thumbnail for the event id optimized for notifications. Works while the event is in progress and after completion. Passing `?format=android` will convert the thumbnail to 2:1 aspect ratio.
### `/clips/<camera>-<id>.mp4`
Video clip for the given camera and event id.
### `/clips/<camera>-<id>.jpg`
JPG snapshot for the given camera and event id.
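A sketch of downloading both artifacts for a single event using the endpoint patterns above; the host, camera name, and event id are placeholders:

```python
import requests

base = "http://frigate.local:5000"  # placeholder host
camera, event_id = "front_door", "1607123955.475377-mxklsc"  # placeholders

# video clip, then jpg snapshot, named as <camera>-<id>
clip = requests.get(f"{base}/clips/{camera}-{event_id}.mp4", timeout=30)
open(f"{camera}-{event_id}.mp4", "wb").write(clip.content)
snap = requests.get(f"{base}/clips/{camera}-{event_id}.jpg", timeout=10)
open(f"{camera}-{event_id}.jpg", "wb").write(snap.content)
```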
[Back to top](#documentation)
@@ -790,9 +973,11 @@ is published again.
The height and crop of snapshots can be configured in the config.
### `frigate/events`
Message published for each changed event:
```json
Message published for each changed event. The first message is published when the tracked object is no longer marked as a false_positive. When frigate finds a better snapshot of the tracked object or when a zone change occurs, it will publish a message with the same id. When the event ends, a final message is published with `end_time` set.
```jsonc
{
"type": "update", // new, update, or end
"before": {
"id": "1607123955.475377-mxklsc",
"camera": "front_door",
@@ -861,6 +1046,26 @@ Message published for each changed event:
}
```
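A minimal subscriber for this topic using `paho-mqtt` (already one of Frigate's Python dependencies); the broker address is a placeholder:

```python
import json
import paho.mqtt.client as mqtt


def on_message(client, userdata, msg):
    # payload shape matches the sample above: type plus before/after objects
    payload = json.loads(msg.payload)
    after = payload["after"]
    print(payload["type"], after["id"], after["camera"], after["label"])


client = mqtt.Client()
client.on_message = on_message
client.connect("mqtt.local", 1883)  # placeholder broker
client.subscribe("frigate/events")
client.loop_forever()
```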
### `frigate/stats`
Same data available at `/api/stats` published at a configurable interval.
### `frigate/<camera_name>/detect/set`
Topic to turn detection for a camera on and off. Expected values are `ON` and `OFF`.
### `frigate/<camera_name>/detect/state`
Topic with current state of detection for a camera. Published values are `ON` and `OFF`.
### `frigate/<camera_name>/clips/set`
Topic to turn clips for a camera on and off. Expected values are `ON` and `OFF`.
### `frigate/<camera_name>/clips/state`
Topic with current state of clips for a camera. Published values are `ON` and `OFF`.
### `frigate/<camera_name>/snapshots/set`
Topic to turn snapshots for a camera on and off. Expected values are `ON` and `OFF`.
### `frigate/<camera_name>/snapshots/state`
Topic with current state of snapshots for a camera. Published values are `ON` and `OFF`.
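A sketch of flipping these switches from Python with `paho-mqtt`; the broker address and camera name are placeholders:

```python
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("mqtt.local", 1883)  # placeholder broker
client.loop_start()
for feature in ("detect", "clips", "snapshots"):
    # "front_door" is a placeholder camera name; payloads are ON/OFF
    client.publish(f"frigate/front_door/{feature}/set", "OFF").wait_for_publish()
client.loop_stop()
client.disconnect()
```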
[Back to top](#documentation)
## Custom Models
@@ -869,6 +1074,8 @@ Models for both CPU and EdgeTPU (Coral) are bundled in the image. You can use yo
- EdgeTPU Model: `/edgetpu_model.tflite`
- Labels: `/labelmap.txt`
You also need to update the model width/height in the config if they differ from the defaults.
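One way to confirm the dimensions of a model before updating the config is to load it with `tflite_runtime` (bundled in the image) and print its input shape; this sketch uses the bundled CPU model path, so substitute your own mounted model:

```python
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="/cpu_model.tflite")
interpreter.allocate_tensors()
# shape is [1, height, width, 3]; height/width must match the model config
print(interpreter.get_input_details()[0]["shape"])
```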
### Customizing the Labelmap
The labelmap can be customized to your needs. A common reason to do this is to combine multiple object types that are easily confused when you don't need to be as granular such as car/truck. You must retain the same number of labels, but you can change the names. To change:
@@ -895,14 +1102,14 @@ Examples of available modules are:
## Troubleshooting
### "[mov,mp4,m4a,3gp,3g2,mj2 @ 0x5639eeb6e140] moov atom not found"
These messages in the logs are expected in certain situations. Frigate checks the integrity of the video cache before assembling clips. Occasionally these cached files will be invalid and cleaned up automatically.
### "ffmpeg didnt return a frame. something is wrong"
Turn on logging for the camera by overriding the global_args and setting the log level to `info`:
Turn on logging for the ffmpeg process by overriding the global_args and setting the log level to `info` (the default is `fatal`). Note that all ffmpeg logs show up in the Frigate logs as `ERROR` level. This does not mean they are actually errors.
```yaml
ffmpeg:
global_args:
- -hide_banner
- -loglevel
- info
global_args: -hide_banner -loglevel info
```
### "On connect called"


@@ -9,7 +9,7 @@ RUN apt-get -qq update \
# ffmpeg dependencies
libgomp1 \
# VAAPI drivers for Intel hardware accel
libva-drm2 libva2 i965-va-driver vainfo intel-media-va-driver mesa-va-drivers \
libva-drm2 libva2 libmfx1 i965-va-driver vainfo intel-media-va-driver mesa-va-drivers \
## Tensorflow lite
&& wget -q https://github.com/google-coral/pycoral/releases/download/release-frogfish/tflite_runtime-2.5.0-cp38-cp38-linux_x86_64.whl \
&& python3.8 -m pip install tflite_runtime-2.5.0-cp38-cp38-linux_x86_64.whl \


@@ -1,6 +1,9 @@
ARG ARCH=amd64
FROM blakeblackshear/frigate-wheels:${ARCH} as wheels
FROM blakeblackshear/frigate-ffmpeg:1.0.0-${ARCH} as ffmpeg
ARG WHEELS_VERSION
ARG FFMPEG_VERSION
FROM blakeblackshear/frigate-wheels:${WHEELS_VERSION}-${ARCH} as wheels
FROM blakeblackshear/frigate-ffmpeg:${FFMPEG_VERSION}-${ARCH} as ffmpeg
FROM frigate-web as web
FROM ubuntu:20.04
LABEL maintainer "blakeb@blakeshome.com"
@@ -29,20 +32,22 @@ RUN apt-get -qq update \
&& (apt-get autoremove -y; apt-get autoclean -y)
RUN pip3 install \
peewee \
peewee_migrate \
zeroconf \
voluptuous
COPY nginx/nginx.conf /etc/nginx/nginx.conf
# get model and labels
ARG MODEL_REFS=7064b94dd5b996189242320359dbab8b52c94a84
COPY labelmap.txt /labelmap.txt
RUN wget -q https://github.com/google-coral/edgetpu/raw/$MODEL_REFS/test_data/ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite -O /edgetpu_model.tflite
RUN wget -q https://github.com/google-coral/edgetpu/raw/$MODEL_REFS/test_data/ssd_mobilenet_v2_coco_quant_postprocess.tflite -O /cpu_model.tflite
RUN wget -q https://github.com/google-coral/test_data/raw/master/ssdlite_mobiledet_coco_qat_postprocess_edgetpu.tflite -O /edgetpu_model.tflite
RUN wget -q https://github.com/google-coral/test_data/raw/master/ssdlite_mobiledet_coco_qat_postprocess.tflite -O /cpu_model.tflite
WORKDIR /opt/frigate/
ADD frigate frigate/
ADD migrations migrations/
COPY --from=web /opt/frigate/build web/
COPY run.sh /run.sh
RUN chmod +x /run.sh


@@ -79,6 +79,7 @@ RUN buildDeps="autoconf \
libssl-dev \
yasm \
libva-dev \
libmfx-dev \
zlib1g-dev" && \
apt-get -yqq update && \
apt-get install -yq --no-install-recommends ${buildDeps}
@@ -404,6 +405,7 @@ RUN \
--enable-gpl \
--enable-libfreetype \
--enable-libvidstab \
--enable-libmfx \
--enable-libmp3lame \
--enable-libopus \
--enable-libtheora \

docker/Dockerfile.web

@@ -0,0 +1,9 @@
ARG NODE_VERSION=14.0
FROM node:${NODE_VERSION}
WORKDIR /opt/frigate
COPY . .
RUN npm install && npm run build


@@ -18,13 +18,14 @@ RUN apt-get -qq update \
gcc gfortran libopenblas-dev liblapack-dev cython
RUN wget -q https://bootstrap.pypa.io/get-pip.py -O get-pip.py \
&& python3 get-pip.py
&& python3 get-pip.py "pip==20.2.4"
RUN pip3 install scikit-build
RUN pip3 wheel --wheel-dir=/wheels \
opencv-python-headless \
numpy \
# pinning due to issue in 1.19.5 https://github.com/numpy/numpy/issues/18131
numpy==1.19.4 \
imutils \
scipy \
psutil \
@@ -32,7 +33,9 @@ RUN pip3 wheel --wheel-dir=/wheels \
paho-mqtt \
PyYAML \
matplotlib \
click
click \
setproctitle \
peewee
FROM scratch


@@ -1,49 +0,0 @@
FROM ubuntu:20.04 as build
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get -qq update \
&& apt-get -qq install -y \
python3 \
python3-dev \
wget \
# opencv dependencies
build-essential cmake git pkg-config libgtk-3-dev \
libavcodec-dev libavformat-dev libswscale-dev libv4l-dev \
libxvidcore-dev libx264-dev libjpeg-dev libpng-dev libtiff-dev \
gfortran openexr libatlas-base-dev libssl-dev\
libtbb2 libtbb-dev libdc1394-22-dev libopenexr-dev \
libgstreamer-plugins-base1.0-dev libgstreamer1.0-dev \
# scipy dependencies
gcc gfortran libopenblas-dev liblapack-dev cython
RUN wget -q https://bootstrap.pypa.io/get-pip.py -O get-pip.py \
&& python3 get-pip.py
# need to build cmake from source because binary distribution is broken for arm64
# https://github.com/scikit-build/cmake-python-distributions/issues/115
# https://github.com/skvark/opencv-python/issues/366
# https://github.com/scikit-build/cmake-python-distributions/issues/96#issuecomment-663062358
RUN pip3 install scikit-build
RUN git clone https://github.com/scikit-build/cmake-python-distributions.git \
&& cd cmake-python-distributions/ \
&& python3 setup.py bdist_wheel
RUN pip3 install cmake-python-distributions/dist/*.whl
RUN pip3 wheel --wheel-dir=/wheels \
opencv-python-headless \
numpy \
imutils \
scipy \
psutil \
Flask \
paho-mqtt \
PyYAML \
matplotlib \
click
FROM scratch
COPY --from=build /wheels /wheels


@@ -5,7 +5,7 @@ Frigate should work with most RTSP cameras and h264 feeds such as Dahua.
The input parameters need to be adjusted for RTMP cameras
```yaml
ffmpeg:
input_args:
input_args:
- -avoid_negative_ts
- make_zero
- -fflags
@@ -18,4 +18,25 @@ input_args:
- +genpts+discardcorrupt
- -use_wallclock_as_timestamps
- '1'
```
```
## Blue Iris RTSP Cameras
You will need to remove the `nobuffer` flag for Blue Iris RTSP cameras
```yaml
ffmpeg:
input_args:
- -avoid_negative_ts
- make_zero
- -flags
- low_delay
- -strict
- experimental
- -fflags
- +genpts+discardcorrupt
- -rtsp_transport
- tcp
- -stimeout
- "5000000"
- -use_wallclock_as_timestamps
- "1"
```


@@ -1,5 +1,6 @@
# Notification examples
Here are some examples of notifications for the HomeAssistant android companion app:
```yaml
automation:
@@ -8,45 +9,63 @@ automation:
platform: mqtt
topic: frigate/events
conditions:
- "{{ trigger.payload_json["after"]["label"] == 'person' }}"
- "{{ 'yard' in trigger.payload_json["after"]["entered_zones"] }}"
- "{{ trigger.payload_json['after']['label'] == 'person' }}"
- "{{ 'yard' in trigger.payload_json['after']['entered_zones'] }}"
action:
- service: notify.mobile_app_pixel_3
data_template:
message: 'A {{trigger.payload_json["after"]["label"]}} has entered the yard.'
message: "A {{trigger.payload_json['after']['label']}} has entered the yard."
data:
image: 'https://url.com/api/frigate/notifications/{{trigger.payload_json["after"]["id"]}}.jpg'
tag: '{{trigger.payload_json["after"]["id"]}}'
image: "https://url.com/api/frigate/notifications/{{trigger.payload_json['after']['id']}}/thumbnail.jpg"
tag: "{{trigger.payload_json['after']['id']}}"
- alias: When a person leaves a zone named yard
trigger:
platform: mqtt
topic: frigate/events
conditions:
- "{{ trigger.payload_json["after"]["label"] == 'person' }}"
- "{{ 'yard' in trigger.payload_json["before"]["current_zones"] }}"
- "{{ not 'yard' in trigger.payload_json["after"]["current_zones"] }}"
- "{{ trigger.payload_json['after']['label'] == 'person' }}"
- "{{ 'yard' in trigger.payload_json['before']['current_zones'] }}"
- "{{ not 'yard' in trigger.payload_json['after']['current_zones'] }}"
action:
- service: notify.mobile_app_pixel_3
data_template:
message: 'A {{trigger.payload_json["after"]["label"]}} has left the yard.'
message: "A {{trigger.payload_json['after']['label']}} has left the yard."
data:
image: 'https://url.com/api/frigate/notifications/{{trigger.payload_json["after"]["id"]}}.jpg'
tag: '{{trigger.payload_json["after"]["id"]}}'
image: "https://url.com/api/frigate/notifications/{{trigger.payload_json['after']['id']}}/thumbnail.jpg"
tag: "{{trigger.payload_json['after']['id']}}"
- alias: Notify for dogs in the front with a high top score
trigger:
platform: mqtt
topic: frigate/events
conditions:
- "{{ trigger.payload_json["after"]["label"] == 'dog' }}"
- "{{ trigger.payload_json["after"]["camera"] == 'front' }}"
- "{{ trigger.payload_json["after"]["top_score"] > 0.98 }}"
- "{{ trigger.payload_json['after']['label'] == 'dog' }}"
- "{{ trigger.payload_json['after']['camera'] == 'front' }}"
- "{{ trigger.payload_json['after']['top_score'] > 0.98 }}"
action:
- service: notify.mobile_app_pixel_3
data_template:
message: 'High confidence dog detection.'
data:
image: 'https://url.com/api/frigate/notifications/{{trigger.payload_json["after"]["id"]}}.jpg'
tag: '{{trigger.payload_json["after"]["id"]}}'
```
image: "https://url.com/api/frigate/notifications/{{trigger.payload_json['after']['id']}}/thumbnail.jpg"
tag: "{{trigger.payload_json['after']['id']}}"
```
If you are using telegram, you can fetch the image directly from Frigate:
```yaml
automation:
- alias: Notify of events
trigger:
platform: mqtt
topic: frigate/events
action:
- service: notify.telegram_full
data_template:
message: 'A {{trigger.payload_json["after"]["label"]}} was detected.'
data:
photo:
# this url should work for addon users
- url: 'http://ccab4aaf-frigate:5000/api/events/{{trigger.payload_json["after"]["id"]}}/thumbnail.jpg'
caption : 'A {{trigger.payload_json["after"]["label"]}} was detected on {{ trigger.payload_json["after"]["camera"] }} camera'
```


@@ -8,6 +8,7 @@ import sys
import signal
import yaml
from peewee_migrate import Router
from playhouse.sqlite_ext import SqliteExtDatabase
from frigate.config import FrigateConfig
@@ -20,6 +21,7 @@ from frigate.models import Event
from frigate.mqtt import create_mqtt_client
from frigate.object_processing import TrackedObjectProcessor
from frigate.record import RecordingMaintainer
from frigate.stats import StatsEmitter, stats_init
from frigate.video import capture_camera, track_camera
from frigate.watchdog import FrigateWatchdog
from frigate.zeroconf import broadcast_zeroconf
@@ -37,6 +39,10 @@ class FrigateApp():
self.log_queue = mp.Queue()
self.camera_metrics = {}
def set_environment_vars(self):
for key, value in self.config.environment_vars.items():
os.environ[key] = value
def ensure_dirs(self):
for d in [RECORD_DIR, CLIPS_DIR, CACHE_DIR]:
if not os.path.exists(d) and not os.path.islink(d):
@@ -44,6 +50,13 @@ class FrigateApp():
os.makedirs(d)
else:
logger.debug(f"Skipping directory: {d}")
tmpfs_size = self.config.clips.tmpfs_cache_size
if tmpfs_size:
logger.info(f"Creating tmpfs of size {tmpfs_size}")
rc = os.system(f"mount -t tmpfs -o size={tmpfs_size} tmpfs {CACHE_DIR}")
if rc != 0:
logger.error(f"Failed to create tmpfs, error code: {rc}")
def init_logger(self):
self.log_process = mp.Process(target=log_process, args=(self.log_queue,), name='log_process')
@@ -61,12 +74,31 @@ class FrigateApp():
'camera_fps': mp.Value('d', 0.0),
'skipped_fps': mp.Value('d', 0.0),
'process_fps': mp.Value('d', 0.0),
'detection_enabled': mp.Value('i', self.config.cameras[camera_name].detect.enabled),
'detection_fps': mp.Value('d', 0.0),
'detection_frame': mp.Value('d', 0.0),
'read_start': mp.Value('d', 0.0),
'ffmpeg_pid': mp.Value('i', 0),
'frame_queue': mp.Queue(maxsize=2),
}
def check_config(self):
for name, camera in self.config.cameras.items():
assigned_roles = list(set([r for i in camera.ffmpeg.inputs for r in i.roles]))
if not camera.clips.enabled and 'clips' in assigned_roles:
logger.warning(f"Camera {name} has clips assigned to an input, but clips is not enabled.")
elif camera.clips.enabled and not 'clips' in assigned_roles:
logger.warning(f"Camera {name} has clips enabled, but clips is not assigned to an input.")
if not camera.record.enabled and 'record' in assigned_roles:
logger.warning(f"Camera {name} has record assigned to an input, but record is not enabled.")
elif camera.record.enabled and not 'record' in assigned_roles:
logger.warning(f"Camera {name} has record enabled, but record is not assigned to an input.")
if not camera.rtmp.enabled and 'rtmp' in assigned_roles:
logger.warning(f"Camera {name} has rtmp assigned to an input, but rtmp is not enabled.")
elif camera.rtmp.enabled and not 'rtmp' in assigned_roles:
logger.warning(f"Camera {name} has rtmp enabled, but rtmp is not assigned to an input.")
def set_log_levels(self):
logging.getLogger().setLevel(self.config.logger.default)
@@ -85,30 +117,39 @@ class FrigateApp():
self.detected_frames_queue = mp.Queue(maxsize=len(self.config.cameras.keys())*2)
def init_database(self):
self.db = SqliteExtDatabase(f"/{os.path.join(CLIPS_DIR, 'frigate.db')}")
self.db = SqliteExtDatabase(self.config.database.path)
# Run migrations
del(logging.getLogger('peewee_migrate').handlers[:])
router = Router(self.db)
router.run()
models = [Event]
self.db.bind(models)
self.db.create_tables(models, safe=True)
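The database file is no longer pinned under the clips directory; its location now comes from a new config section, along these lines (the path is an example):
```yaml
database:
  # hypothetical location; any writable path should work
  path: /media/frigate/frigate.db
```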
def init_stats(self):
self.stats_tracking = stats_init(self.camera_metrics, self.detectors)
def init_web_server(self):
self.flask_app = create_app(self.config, self.db, self.stats_tracking, self.detected_frames_processor)
def init_mqtt(self):
self.mqtt_client = create_mqtt_client(self.config, self.camera_metrics)
def start_detectors(self):
model_shape = (self.config.model.height, self.config.model.width)
for name in self.config.cameras.keys():
self.detection_out_events[name] = mp.Event()
shm_in = mp.shared_memory.SharedMemory(name=name, create=True, size=self.config.model.height*self.config.model.width*3)
shm_out = mp.shared_memory.SharedMemory(name=f"out-{name}", create=True, size=20*6*4)
self.detection_shms.append(shm_in)
self.detection_shms.append(shm_out)
for name, detector in self.config.detectors.items():
if detector.type == 'cpu':
self.detectors[name] = EdgeTPUProcess(name, self.detection_queue, self.detection_out_events, model_shape, 'cpu', detector.num_threads)
if detector.type == 'edgetpu':
self.detectors[name] = EdgeTPUProcess(name, self.detection_queue, self.detection_out_events, model_shape, detector.device, detector.num_threads)
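Both detector branches now receive `model_shape` and `num_threads` from config. A hedged sketch of the corresponding `detectors` section (names are arbitrary):
```yaml
detectors:
  coral:
    type: edgetpu
    device: usb        # passed through as tf_device
  fallback:
    type: cpu
    num_threads: 3     # forwarded to the CPU tflite Interpreter
```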
def start_detected_frames_processor(self):
self.detected_frames_processor = TrackedObjectProcessor(self.config, self.mqtt_client, self.config.mqtt.topic_prefix,
@@ -116,8 +157,9 @@ class FrigateApp():
self.detected_frames_processor.start()
def start_camera_processors(self):
model_shape = (self.config.model.height, self.config.model.width)
for name, config in self.config.cameras.items():
camera_process = mp.Process(target=track_camera, name=f"camera_processor:{name}", args=(name, config, model_shape,
self.detection_queue, self.detection_out_events[name], self.detected_frames_queue,
self.camera_metrics[name]))
camera_process.daemon = True
@@ -146,6 +188,10 @@ class FrigateApp():
self.recording_maintainer = RecordingMaintainer(self.config, self.stop_event)
self.recording_maintainer.start()
def start_stats_emitter(self):
self.stats_emitter = StatsEmitter(self.config, self.stats_tracking, self.mqtt_client, self.config.mqtt.topic_prefix, self.stop_event)
self.stats_emitter.start()
def start_watchdog(self):
self.frigate_watchdog = FrigateWatchdog(self.detectors, self.stop_event)
self.frigate_watchdog.start()
@@ -153,29 +199,33 @@ class FrigateApp():
def start(self):
self.init_logger()
try:
try:
self.init_config()
except Exception as e:
logger.error(f"Error parsing config: {e}")
print(f"Error parsing config: {e}")
self.log_process.terminate()
sys.exit(1)
self.set_environment_vars()
self.ensure_dirs()
self.check_config()
self.set_log_levels()
self.init_queues()
self.init_database()
self.init_mqtt()
except Exception as e:
print(e)
self.log_process.terminate()
sys.exit(1)
self.start_detectors()
self.start_detected_frames_processor()
self.start_camera_processors()
self.start_camera_capture_processes()
self.init_stats()
self.init_web_server()
self.start_event_processor()
self.start_event_cleanup()
self.start_recording_maintainer()
self.start_stats_emitter()
self.start_watchdog()
# self.zeroconf = broadcast_zeroconf(self.config.mqtt.client_id)
@@ -196,6 +246,7 @@ class FrigateApp():
self.event_processor.join()
self.event_cleanup.join()
self.recording_maintainer.join()
self.stats_emitter.join()
self.frigate_watchdog.join()
for detector in self.detectors.values():

File diff suppressed because it is too large

View File

@@ -8,6 +8,7 @@ import threading
import signal
from abc import ABC, abstractmethod
from multiprocessing.connection import Connection
from setproctitle import setproctitle
from typing import Dict
import numpy as np
@@ -43,7 +44,7 @@ class ObjectDetector(ABC):
pass
class LocalObjectDetector(ObjectDetector):
def __init__(self, tf_device=None, num_threads=3, labels=None):
self.fps = EventsPerSecond()
if labels is None:
self.labels = {}
@@ -61,16 +62,15 @@ class LocalObjectDetector(ObjectDetector):
logger.info(f"Attempting to load TPU as {device_config['device']}")
edge_tpu_delegate = load_delegate('libedgetpu.so.1.0', device_config)
logger.info("TPU found")
self.interpreter = tflite.Interpreter(
model_path='/edgetpu_model.tflite',
experimental_delegates=[edge_tpu_delegate])
except ValueError:
logger.info("No EdgeTPU detected. Falling back to CPU.")
if edge_tpu_delegate is None:
self.interpreter = tflite.Interpreter(
model_path='/cpu_model.tflite')
logger.info("No EdgeTPU detected.")
raise
else:
self.interpreter = tflite.Interpreter(
model_path='/cpu_model.tflite', num_threads=num_threads)
self.interpreter.allocate_tensors()
@@ -106,10 +106,11 @@ class LocalObjectDetector(ObjectDetector):
return detections
def run_detector(name: str, detection_queue: mp.Queue, out_events: Dict[str, mp.Event], avg_speed, start, model_shape, tf_device, num_threads):
threading.current_thread().name = f"detector:{name}"
logger = logging.getLogger(f"detector.{name}")
logger.info(f"Starting detection process: {os.getpid()}")
setproctitle(f"frigate.detector.{name}")
listen()
stop_event = mp.Event()
@@ -120,7 +121,7 @@ def run_detector(name: str, detection_queue: mp.Queue, out_events: Dict[str, mp.
signal.signal(signal.SIGINT, receiveSignal)
frame_manager = SharedMemoryFrameManager()
object_detector = LocalObjectDetector(tf_device=tf_device, num_threads=num_threads)
outputs = {}
for name in out_events.keys():
@@ -139,7 +140,7 @@ def run_detector(name: str, detection_queue: mp.Queue, out_events: Dict[str, mp.
connection_id = detection_queue.get(timeout=5)
except queue.Empty:
continue
input_frame = frame_manager.get(connection_id, (1,model_shape[0],model_shape[1],3))
if input_frame is None:
continue
@@ -155,14 +156,16 @@ def run_detector(name: str, detection_queue: mp.Queue, out_events: Dict[str, mp.
avg_speed.value = (avg_speed.value*9 + duration)/10
class EdgeTPUProcess():
def __init__(self, name, detection_queue, out_events, model_shape, tf_device=None, num_threads=3):
self.name = name
self.out_events = out_events
self.detection_queue = detection_queue
self.avg_inference_speed = mp.Value('d', 0.01)
self.detection_start = mp.Value('d', 0.0)
self.detect_process = None
self.model_shape = model_shape
self.tf_device = tf_device
self.num_threads = num_threads
self.start_or_restart()
def stop(self):
@@ -178,19 +181,19 @@ class EdgeTPUProcess():
self.detection_start.value = 0.0
if (not self.detect_process is None) and self.detect_process.is_alive():
self.stop()
self.detect_process = mp.Process(target=run_detector, name=f"detector:{self.name}", args=(self.name, self.detection_queue, self.out_events, self.avg_inference_speed, self.detection_start, self.model_shape, self.tf_device, self.num_threads))
self.detect_process.daemon = True
self.detect_process.start()
class RemoteObjectDetector():
def __init__(self, name, labels, detection_queue, event, model_shape):
self.labels = load_labels(labels)
self.name = name
self.fps = EventsPerSecond()
self.detection_queue = detection_queue
self.event = event
self.shm = mp.shared_memory.SharedMemory(name=self.name, create=False)
self.np_shm = np.ndarray((1,model_shape[0],model_shape[1],3), dtype=np.uint8, buffer=self.shm.buf)
self.out_shm = mp.shared_memory.SharedMemory(name=f"out-{self.name}", create=False)
self.out_np_shm = np.ndarray((20,6), dtype=np.float32, buffer=self.out_shm.buf)

View File

@@ -36,9 +36,10 @@ class EventProcessor(threading.Thread):
files_in_use = []
for process in psutil.process_iter():
try:
if process.name() != 'ffmpeg':
continue
flist = process.open_files()
if flist:
for nt in flist:
@@ -87,7 +88,7 @@ class EventProcessor(threading.Thread):
earliest_event = datetime.datetime.now().timestamp()
# if the earliest event exceeds the max seconds, cap it
max_seconds = self.config.clips.max_seconds
if datetime.datetime.now().timestamp()-earliest_event > max_seconds:
earliest_event = datetime.datetime.now().timestamp()-max_seconds
@@ -96,18 +97,19 @@ class EventProcessor(threading.Thread):
del self.cached_clips[f]
os.remove(os.path.join(CACHE_DIR,f))
def create_clip(self, camera, event_data, pre_capture, post_capture):
# get all clips from the camera with the event sorted
sorted_clips = sorted([c for c in self.cached_clips.values() if c['camera'] == camera], key = lambda i: i['start_time'])
while len(sorted_clips) == 0 or sorted_clips[-1]['start_time'] + sorted_clips[-1]['duration'] < event_data['end_time']+post_capture:
logger.debug(f"No cache clips for {camera}. Waiting...")
time.sleep(5)
self.refresh_cache()
# get all clips from the camera with the event sorted
sorted_clips = sorted([c for c in self.cached_clips.values() if c['camera'] == camera], key = lambda i: i['start_time'])
playlist_start = event_data['start_time']-pre_capture
playlist_end = event_data['end_time']+post_capture
playlist_lines = []
for clip in sorted_clips:
# clip ends before playlist start time, skip
@@ -138,13 +140,16 @@ class EventProcessor(threading.Thread):
'-',
'-c',
'copy',
'-movflags',
'+faststart',
f"{os.path.join(CLIPS_DIR, clip_name)}.mp4"
]
p = sp.run(ffmpeg_cmd, input="\n".join(playlist_lines), encoding='ascii', capture_output=True)
if p.returncode != 0:
logger.error(p.stderr)
return False
return True
def run(self):
while True:
@@ -159,28 +164,20 @@ class EventProcessor(threading.Thread):
self.refresh_cache()
continue
logger.debug(f"Event received: {event_type} {camera} {event_data['id']}")
self.refresh_cache()
if event_type == 'start':
self.events_in_process[event_data['id']] = event_data
if event_type == 'end':
clips_config = self.config.cameras[camera].clips
if not event_data['false_positive']:
clip_created = False
if clips_config.enabled and (clips_config.objects is None or event_data['label'] in clips_config.objects):
clip_created = self.create_clip(camera, event_data, clips_config.pre_capture, clips_config.post_capture)
Event.create(
id=event_data['id'],
label=event_data['label'],
@@ -190,7 +187,9 @@ class EventProcessor(threading.Thread):
top_score=event_data['top_score'],
false_positive=event_data['false_positive'],
zones=list(event_data['entered_zones']),
thumbnail=event_data['thumbnail'],
has_clip=clip_created,
has_snapshot=event_data['has_snapshot'],
)
del self.events_in_process[event_data['id']]
self.event_processed_queue.put((event_data['id'], camera))
@@ -201,7 +200,86 @@ class EventCleanup(threading.Thread):
self.name = 'event_cleanup'
self.config = config
self.stop_event = stop_event
self.camera_keys = list(self.config.cameras.keys())
def expire(self, media):
## Expire events from unlisted cameras based on the global config
if media == 'clips':
retain_config = self.config.clips.retain
file_extension = 'mp4'
update_params = {'has_clip': False}
else:
retain_config = self.config.snapshots.retain
file_extension = 'jpg'
update_params = {'has_snapshot': False}
distinct_labels = (Event.select(Event.label)
.where(Event.camera.not_in(self.camera_keys))
.distinct())
# loop over object types in db
for l in distinct_labels:
# get expiration time for this label
expire_days = retain_config.objects.get(l.label, retain_config.default)
expire_after = (datetime.datetime.now() - datetime.timedelta(days=expire_days)).timestamp()
# grab all events after specific time
expired_events = (
Event.select()
.where(Event.camera.not_in(self.camera_keys),
Event.start_time < expire_after,
Event.label == l.label)
)
# delete the media from disk
for event in expired_events:
media_name = f"{event.camera}-{event.id}"
media = Path(f"{os.path.join(CLIPS_DIR, media_name)}.{file_extension}")
media.unlink(missing_ok=True)
# update the clips attribute for the db entry
update_query = (
Event.update(update_params)
.where(Event.camera.not_in(self.camera_keys),
Event.start_time < expire_after,
Event.label == l.label)
)
update_query.execute()
## Expire events from cameras based on the camera config
for name, camera in self.config.cameras.items():
if media == 'clips':
retain_config = camera.clips.retain
else:
retain_config = camera.snapshots.retain
# get distinct objects in database for this camera
distinct_labels = (Event.select(Event.label)
.where(Event.camera == name)
.distinct())
# loop over object types in db
for l in distinct_labels:
# get expiration time for this label
expire_days = retain_config.objects.get(l.label, retain_config.default)
expire_after = (datetime.datetime.now() - datetime.timedelta(days=expire_days)).timestamp()
# grab all events after specific time
expired_events = (
Event.select()
.where(Event.camera == name,
Event.start_time < expire_after,
Event.label == l.label)
)
# delete the grabbed clips from disk
for event in expired_events:
media_name = f"{event.camera}-{event.id}"
media = Path(f"{os.path.join(CLIPS_DIR, media_name)}.{file_extension}")
media.unlink(missing_ok=True)
# update the clips attribute for the db entry
update_query = (
Event.update(update_params)
.where( Event.camera == name,
Event.start_time < expire_after,
Event.label == l.label)
)
update_query.execute()
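`expire` looks up per-label retention with `retain_config.objects.get(l.label, retain_config.default)`, so the global and per-camera sections share this shape (day counts below are examples):
```yaml
clips:
  retain:
    default: 10      # days, used when a label has no override
    objects:
      person: 30     # hypothetical per-label override
snapshots:
  retain:
    default: 10
```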
def run(self):
counter = 0
while(True):
@@ -216,71 +294,13 @@ class EventCleanup(threading.Thread):
continue
counter = 0
self.expire('clips')
self.expire('snapshots')
# drop events from db where has_clip and has_snapshot are false
delete_query = (
Event.delete()
.where( Event.has_clip == False,
Event.has_snapshot == False)
)
delete_query.execute()

View File

@@ -13,12 +13,15 @@ from peewee import SqliteDatabase, operator, fn, DoesNotExist
from playhouse.shortcuts import model_to_dict
from frigate.models import Event
from frigate.stats import stats_snapshot
from frigate.util import calculate_region
from frigate.version import VERSION
logger = logging.getLogger(__name__)
bp = Blueprint('frigate', __name__)
def create_app(frigate_config, database: SqliteDatabase, stats_tracking, detected_frames_processor):
app = Flask(__name__)
@app.before_request
@@ -31,10 +34,9 @@ def create_app(frigate_config, database: SqliteDatabase, camera_metrics, detecto
database.close()
app.frigate_config = frigate_config
app.stats_tracking = stats_tracking
app.detected_frames_processor = detected_frames_processor
app.register_blueprint(bp)
return app
@@ -45,18 +47,33 @@ def is_healthy():
@bp.route('/events/summary')
def events_summary():
has_clip = request.args.get('has_clip', type=int)
has_snapshot = request.args.get('has_snapshot', type=int)
clauses = []
if not has_clip is None:
clauses.append((Event.has_clip == has_clip))
if not has_snapshot is None:
clauses.append((Event.has_snapshot == has_snapshot))
if len(clauses) == 0:
clauses.append((1 == 1))
groups = (
Event
.select(
Event.camera,
Event.label,
fn.strftime('%Y-%m-%d', fn.datetime(Event.start_time, 'unixepoch', 'localtime')).alias('day'),
Event.zones,
fn.COUNT(Event.id).alias('count')
)
.where(reduce(operator.and_, clauses))
.group_by(
Event.camera,
Event.label,
fn.strftime('%Y-%m-%d', fn.datetime(Event.start_time, 'unixepoch', 'localtime')),
Event.zones
)
@@ -71,7 +88,7 @@ def event(id):
except DoesNotExist:
return "Event not found", 404
@bp.route('/events/<id>/thumbnail.jpg')
def event_snapshot(id):
format = request.args.get('format', 'ios')
thumbnail_bytes = None
@@ -88,18 +105,18 @@ def event_snapshot(id):
thumbnail_bytes = tracked_obj.get_jpg_bytes()
except:
return "Event not found", 404
if thumbnail_bytes is None:
return "Event not found", 404
# android notifications prefer a 2:1 ratio
if format == 'android':
jpg_as_np = np.frombuffer(thumbnail_bytes, dtype=np.uint8)
img = cv2.imdecode(jpg_as_np, flags=1)
thumbnail = cv2.copyMakeBorder(img, 0, 0, int(img.shape[1]*0.5), int(img.shape[1]*0.5), cv2.BORDER_CONSTANT, (0,0,0))
ret, jpg = cv2.imencode('.jpg', thumbnail)
thumbnail_bytes = jpg.tobytes()
response = make_response(thumbnail_bytes)
response.headers['Content-Type'] = 'image/jpg'
return response
@@ -112,24 +129,32 @@ def events():
zone = request.args.get('zone')
after = request.args.get('after', type=int)
before = request.args.get('before', type=int)
has_clip = request.args.get('has_clip', type=int)
has_snapshot = request.args.get('has_snapshot', type=int)
clauses = []
if camera:
clauses.append((Event.camera == camera))
if label:
clauses.append((Event.label == label))
if zone:
clauses.append((Event.zones.cast('text') % f"*\"{zone}\"*"))
if after:
clauses.append((Event.start_time >= after))
if before:
clauses.append((Event.start_time <= before))
if not has_clip is None:
clauses.append((Event.has_clip == has_clip))
if not has_snapshot is None:
clauses.append((Event.has_snapshot == has_snapshot))
if len(clauses) == 0:
clauses.append((1 == 1))
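The new `has_clip`/`has_snapshot` query args map one-to-one onto the clauses built above, which makes the endpoint easy to consume elsewhere. As a hedged Home Assistant sketch (host and camera name assumed), a REST sensor counting person events that produced a clip:
```yaml
sensor:
  - platform: rest
    name: front_person_clips
    # hypothetical host; the response is a JSON list of matching events
    resource: http://frigate:5000/api/events?camera=front&label=person&has_clip=1
    value_template: '{{ value_json | length }}'
```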
@@ -144,33 +169,13 @@ def events():
def config():
return jsonify(current_app.frigate_config.to_dict())
@bp.route('/version')
def version():
return VERSION
@bp.route('/stats')
def stats():
stats = stats_snapshot(current_app.stats_tracking)
return jsonify(stats)
@bp.route('/<camera_name>/<label>/best.jpg')
@@ -182,12 +187,13 @@ def best(camera_name, label):
best_frame = np.zeros((720,1280,3), np.uint8)
else:
best_frame = cv2.cvtColor(best_frame, cv2.COLOR_YUV2BGR_I420)
crop = bool(request.args.get('crop', 0, type=int))
if crop:
box = best_object.get('box', (0,0,300,300))
region = calculate_region(best_frame.shape, box[0], box[1], box[2], box[3], 1.1)
best_frame = best_frame[region[1]:region[3], region[0]:region[2]]
height = int(request.args.get('h', str(best_frame.shape[0])))
width = int(height*best_frame.shape[1]/best_frame.shape[0])
@@ -203,18 +209,34 @@ def best(camera_name, label):
def mjpeg_feed(camera_name):
fps = int(request.args.get('fps', '3'))
height = int(request.args.get('h', '360'))
draw_options = {
'bounding_boxes': request.args.get('bbox', type=int),
'timestamp': request.args.get('timestamp', type=int),
'zones': request.args.get('zones', type=int),
'mask': request.args.get('mask', type=int),
'motion_boxes': request.args.get('motion', type=int),
'regions': request.args.get('regions', type=int),
}
if camera_name in current_app.frigate_config.cameras:
# return a multipart response
return Response(imagestream(current_app.detected_frames_processor, camera_name, fps, height, draw_options),
mimetype='multipart/x-mixed-replace; boundary=frame')
else:
return "Camera named {} not found".format(camera_name), 404
@bp.route('/<camera_name>/latest.jpg')
def latest_frame(camera_name):
draw_options = {
'bounding_boxes': request.args.get('bbox', type=int),
'timestamp': request.args.get('timestamp', type=int),
'zones': request.args.get('zones', type=int),
'mask': request.args.get('mask', type=int),
'motion_boxes': request.args.get('motion', type=int),
'regions': request.args.get('regions', type=int),
}
if camera_name in current_app.frigate_config.cameras:
# max out at specified FPS
frame = current_app.detected_frames_processor.get_current_frame(camera_name, draw_options)
if frame is None:
frame = np.zeros((720,1280,3), np.uint8)
@@ -229,12 +251,12 @@ def latest_frame(camera_name):
return response
else:
return "Camera named {} not found".format(camera_name), 404
def imagestream(detected_frames_processor, camera_name, fps, height, draw_options):
while True:
# max out at specified FPS
time.sleep(1/fps)
frame = detected_frames_processor.get_current_frame(camera_name, draw_options)
if frame is None:
frame = np.zeros((height,int(height*16/9),3), np.uint8)

View File

@@ -6,6 +6,7 @@ import signal
import queue
import multiprocessing as mp
from logging import handlers
from setproctitle import setproctitle
def listener_configurer():
@@ -31,6 +32,7 @@ def log_process(log_queue):
signal.signal(signal.SIGINT, receiveSignal)
threading.current_thread().name = f"logger"
setproctitle("frigate.logger")
listener_configurer()
while True:
if stop_event.is_set() and log_queue.empty():
@@ -72,4 +74,4 @@ class LogPipe(threading.Thread):
def close(self):
"""Close the write end of the pipe.
"""
os.close(self.fdWrite)

View File

@@ -12,3 +12,5 @@ class Event(Model):
false_positive = BooleanField()
zones = JSONField()
thumbnail = TextField()
has_clip = BooleanField(default=True)
has_snapshot = BooleanField(default=True)

View File

@@ -1,18 +1,20 @@
import cv2
import imutils
import numpy as np
from frigate.config import MotionConfig
class MotionDetector():
def __init__(self, frame_shape, config: MotionConfig):
self.config = config
self.frame_shape = frame_shape
self.resize_factor = frame_shape[0]/config.frame_height
self.motion_frame_size = (config.frame_height, config.frame_height*frame_shape[1]//frame_shape[0])
self.avg_frame = np.zeros(self.motion_frame_size, np.float)
self.avg_delta = np.zeros(self.motion_frame_size, np.float)
self.motion_frame_count = 0
self.frame_counter = 0
resized_mask = cv2.resize(config.mask, dsize=(self.motion_frame_size[1], self.motion_frame_size[0]), interpolation=cv2.INTER_LINEAR)
self.mask = np.where(resized_mask==[0])
def detect(self, frame):
@@ -23,6 +25,8 @@ class MotionDetector():
# resize frame
resized_frame = cv2.resize(gray, dsize=(self.motion_frame_size[1], self.motion_frame_size[0]), interpolation=cv2.INTER_LINEAR)
# TODO: can I improve the contrast of the grayscale image here?
# convert to grayscale
# resized_frame = cv2.cvtColor(resized_frame, cv2.COLOR_BGR2GRAY)
@@ -38,14 +42,13 @@ class MotionDetector():
frameDelta = cv2.absdiff(resized_frame, cv2.convertScaleAbs(self.avg_frame))
# compute the average delta over the past few frames
# the alpha value can be modified to configure how sensitive the motion detection is.
# higher values mean the current frame impacts the delta a lot, and a single raindrop may
# register as motion, too low and a fast moving person wont be detected as motion
# this also assumes that a person is in the same location across more than a single frame
cv2.accumulateWeighted(frameDelta, self.avg_delta, self.config.delta_alpha)
# compute the threshold image for the current frame
current_thresh = cv2.threshold(frameDelta, self.config.threshold, 255, cv2.THRESH_BINARY)[1]
# black out everything in the avg_delta where there isnt motion in the current frame
avg_delta_image = cv2.convertScaleAbs(self.avg_delta)
@@ -53,7 +56,7 @@ class MotionDetector():
# then look for deltas above the threshold, but only in areas where there is a delta
# in the current frame. this prevents deltas from previous frames from being included
thresh = cv2.threshold(avg_delta_image, self.config.threshold, 255, cv2.THRESH_BINARY)[1]
# dilate the thresholded image to fill in holes, then find contours
# on thresholded image
@@ -65,19 +68,18 @@ class MotionDetector():
for c in cnts:
# if the contour is big enough, count it as motion
contour_area = cv2.contourArea(c)
if contour_area > self.config.contour_area:
x, y, w, h = cv2.boundingRect(c)
motion_boxes.append((int(x*self.resize_factor), int(y*self.resize_factor), int((x+w)*self.resize_factor), int((y+h)*self.resize_factor)))
if len(motion_boxes) > 0:
self.motion_frame_count += 1
if self.motion_frame_count >= 10:
# only average in the current frame if the difference persists for a bit
cv2.accumulateWeighted(resized_frame, self.avg_frame, self.config.frame_alpha)
else:
# when no motion, just keep averaging the frames together
cv2.accumulateWeighted(resized_frame, self.avg_frame, self.config.frame_alpha)
self.motion_frame_count = 0
return motion_boxes
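Every constant replaced above now comes from `MotionConfig`. A sketch of the matching camera section, using the previously hard-coded values as defaults (`frame_height` is an assumption):
```yaml
motion:
  threshold: 25      # binary threshold applied to the delta images
  contour_area: 100  # minimum contour size counted as motion
  delta_alpha: 0.2   # weight of the current delta in the running average
  frame_alpha: 0.2   # weight of the current frame in the averaged background
  frame_height: 180  # hypothetical downscale height used to derive resize_factor
```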

View File

@@ -3,12 +3,81 @@ import threading
import paho.mqtt.client as mqtt
from frigate.config import FrigateConfig
logger = logging.getLogger(__name__)
def create_mqtt_client(config: FrigateConfig, camera_metrics):
mqtt_config = config.mqtt
def on_clips_command(client, userdata, message):
payload = message.payload.decode()
logger.debug(f"on_clips_toggle: {message.topic} {payload}")
camera_name = message.topic.split('/')[-3]
clips_settings = config.cameras[camera_name].clips
if payload == 'ON':
if not clips_settings.enabled:
logger.info(f"Turning on clips for {camera_name} via mqtt")
clips_settings._enabled = True
elif payload == 'OFF':
if clips_settings.enabled:
logger.info(f"Turning off clips for {camera_name} via mqtt")
clips_settings._enabled = False
else:
logger.warning(f"Received unsupported value at {message.topic}: {payload}")
state_topic = f"{message.topic[:-4]}/state"
client.publish(state_topic, payload, retain=True)
def on_snapshots_command(client, userdata, message):
payload = message.payload.decode()
logger.debug(f"on_snapshots_toggle: {message.topic} {payload}")
camera_name = message.topic.split('/')[-3]
snapshots_settings = config.cameras[camera_name].snapshots
if payload == 'ON':
if not snapshots_settings.enabled:
logger.info(f"Turning on snapshots for {camera_name} via mqtt")
snapshots_settings._enabled = True
elif payload == 'OFF':
if snapshots_settings.enabled:
logger.info(f"Turning off snapshots for {camera_name} via mqtt")
snapshots_settings._enabled = False
else:
logger.warning(f"Received unsupported value at {message.topic}: {payload}")
state_topic = f"{message.topic[:-4]}/state"
client.publish(state_topic, payload, retain=True)
def on_detect_command(client, userdata, message):
payload = message.payload.decode()
logger.debug(f"on_detect_toggle: {message.topic} {payload}")
camera_name = message.topic.split('/')[-3]
detect_settings = config.cameras[camera_name].detect
if payload == 'ON':
if not camera_metrics[camera_name]["detection_enabled"].value:
logger.info(f"Turning on detection for {camera_name} via mqtt")
camera_metrics[camera_name]["detection_enabled"].value = True
detect_settings._enabled = True
elif payload == 'OFF':
if camera_metrics[camera_name]["detection_enabled"].value:
logger.info(f"Turning off detection for {camera_name} via mqtt")
camera_metrics[camera_name]["detection_enabled"].value = False
detect_settings._enabled = False
else:
logger.warning(f"Received unsupported value at {message.topic}: {payload}")
state_topic = f"{message.topic[:-4]}/state"
client.publish(state_topic, payload, retain=True)
def on_connect(client, userdata, flags, rc):
threading.current_thread().name = "mqtt"
if rc != 0:
@@ -22,15 +91,35 @@ def create_mqtt_client(config: MqttConfig):
logger.error("Unable to connect to MQTT: Connection refused. Error code: " + str(rc))
logger.info("MQTT connected")
client.publish(mqtt_config.topic_prefix+'/available', 'online', retain=True)
client = mqtt.Client(client_id=mqtt_config.client_id)
client.on_connect = on_connect
client.will_set(mqtt_config.topic_prefix+'/available', payload='offline', qos=1, retain=True)
# register callbacks
for name in config.cameras.keys():
client.message_callback_add(f"{mqtt_config.topic_prefix}/{name}/clips/set", on_clips_command)
client.message_callback_add(f"{mqtt_config.topic_prefix}/{name}/snapshots/set", on_snapshots_command)
client.message_callback_add(f"{mqtt_config.topic_prefix}/{name}/detect/set", on_detect_command)
if not mqtt_config.user is None:
client.username_pw_set(mqtt_config.user, password=mqtt_config.password)
try:
client.connect(mqtt_config.host, mqtt_config.port, 60)
except Exception as e:
logger.error(f"Unable to connect to MQTT server: {e}")
raise
client.loop_start()
for name in config.cameras.keys():
client.publish(f"{mqtt_config.topic_prefix}/{name}/clips/state", 'ON' if config.cameras[name].clips.enabled else 'OFF', retain=True)
client.publish(f"{mqtt_config.topic_prefix}/{name}/snapshots/state", 'ON' if config.cameras[name].snapshots.enabled else 'OFF', retain=True)
client.publish(f"{mqtt_config.topic_prefix}/{name}/detect/state", 'ON' if config.cameras[name].detect.enabled else 'OFF', retain=True)
client.subscribe(f"{mqtt_config.topic_prefix}/+/clips/set")
client.subscribe(f"{mqtt_config.topic_prefix}/+/snapshots/set")
client.subscribe(f"{mqtt_config.topic_prefix}/+/detect/set")
return client
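The new `/set` command topics can be exercised from any MQTT client. A hedged Home Assistant service-call sketch, assuming the default `frigate` topic prefix and a camera named `front`:
```yaml
service: mqtt.publish
data:
  # Frigate echoes the new state on frigate/front/detect/state (retained)
  topic: frigate/front/detect/set
  payload: 'OFF'
```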

View File

@@ -20,7 +20,7 @@ import numpy as np
from frigate.config import FrigateConfig, CameraConfig
from frigate.const import RECORD_DIR, CLIPS_DIR, CACHE_DIR
from frigate.edgetpu import load_labels
from frigate.util import SharedMemoryFrameManager, draw_box_with_label, calculate_region
logger = logging.getLogger(__name__)
@@ -54,11 +54,11 @@ def is_better_thumbnail(current_thumb, new_obj, frame_shape) -> bool:
# if the score is better by more than 5%
if new_obj['score'] > current_thumb['score']+.05:
return True
# if the area is 10% larger
if new_obj['area'] > current_thumb['area']*1.1:
return True
return False
class TrackedObject():
@@ -73,10 +73,7 @@ class TrackedObject():
self.top_score = self.computed_score = 0.0
self.thumbnail_data = None
self.frame = None
self.previous = self.to_dict()
# start the score history
self.score_history = [self.obj_data['score']]
@@ -97,9 +94,9 @@ class TrackedObject():
if len(scores) < 3:
scores += [0.0]*(3 - len(scores))
return median(scores)
def update(self, current_frame_time, obj_data):
previous = self.to_dict()
significant_update = False
self.obj_data.update(obj_data)
# if the object is not in the current frame, add a 0.0 to the score history
if self.obj_data['frame_time'] != current_frame_time:
@@ -119,7 +116,7 @@ class TrackedObject():
if not self.false_positive:
# determine if this frame is a better thumbnail
if (
self.thumbnail_data is None
or is_better_thumbnail(self.thumbnail_data, self.obj_data, self.camera_config.frame_shape)
):
self.thumbnail_data = {
@@ -129,8 +126,8 @@ class TrackedObject():
'region': self.obj_data['region'],
'score': self.obj_data['score']
}
self.previous = previous
significant_update = True
# check zones
current_zones = []
bottom_center = (self.obj_data['centroid'][0], self.obj_data['box'][3])
@@ -143,9 +140,14 @@ class TrackedObject():
if name in self.current_zones or not zone_filtered(self, zone.filters):
current_zones.append(name)
self.entered_zones.add(name)
# if the zones changed, signal an update
if not self.false_positive and set(self.current_zones) != set(current_zones):
significant_update = True
self.current_zones = current_zones
return significant_update
def to_dict(self, include_thumbnail: bool = False):
return {
'id': self.obj_data['id'],
@@ -162,53 +164,62 @@ class TrackedObject():
'region': self.obj_data['region'],
'current_zones': self.current_zones.copy(),
'entered_zones': list(self.entered_zones).copy(),
'thumbnail': base64.b64encode(self.get_thumbnail()).decode('utf-8') if include_thumbnail else None
}
def get_thumbnail(self):
if self.thumbnail_data is None or not self.thumbnail_data['frame_time'] in self.frame_cache:
ret, jpg = cv2.imencode('.jpg', np.zeros((175,175,3), np.uint8))
jpg_bytes = self.get_jpg_bytes(timestamp=False, bounding_box=False, crop=True, height=175)
if jpg_bytes:
return jpg_bytes
else:
ret, jpg = cv2.imencode('.jpg', np.zeros((175,175,3), np.uint8))
return jpg.tobytes()
def get_jpg_bytes(self, timestamp=False, bounding_box=False, crop=False, height=None):
if self.thumbnail_data is None:
return None
try:
best_frame = cv2.cvtColor(self.frame_cache[self.thumbnail_data['frame_time']], cv2.COLOR_YUV2BGR_I420)
except KeyError:
logger.warning(f"Unable to create jpg because frame {self.thumbnail_data['frame_time']} is not in the cache")
return None
if bounding_box:
thickness = 2
color = COLOR_MAP[self.obj_data['label']]
# draw the bounding boxes on the frame
box = self.thumbnail_data['box']
draw_box_with_label(best_frame, box[0], box[1], box[2], box[3], self.obj_data['label'], f"{int(self.thumbnail_data['score']*100)}% {int(self.thumbnail_data['area'])}", thickness=thickness, color=color)
if crop:
box = self.thumbnail_data['box']
region = calculate_region(best_frame.shape, box[0], box[1], box[2], box[3], 1.1)
best_frame = best_frame[region[1]:region[3], region[0]:region[2]]
if height:
width = int(height*best_frame.shape[1]/best_frame.shape[0])
best_frame = cv2.resize(best_frame, dsize=(width, height), interpolation=cv2.INTER_AREA)
if timestamp:
time_to_show = datetime.datetime.fromtimestamp(self.thumbnail_data['frame_time']).strftime("%m/%d/%Y %H:%M:%S")
size = cv2.getTextSize(time_to_show, cv2.FONT_HERSHEY_SIMPLEX, fontScale=1, thickness=2)
text_width = size[0][0]
desired_size = max(150, 0.33*best_frame.shape[1])
font_scale = desired_size/text_width
cv2.putText(best_frame, time_to_show, (5, best_frame.shape[0]-7), cv2.FONT_HERSHEY_SIMPLEX,
fontScale=font_scale, color=(255, 255, 255), thickness=2)
ret, jpg = cv2.imencode('.jpg', best_frame)
if ret:
return jpg.tobytes()
else:
return None
def zone_filtered(obj: TrackedObject, object_config):
object_name = obj.obj_data['label']
@@ -220,7 +231,7 @@ def zone_filtered(obj: TrackedObject, object_config):
# detected object, don't add it to detected objects
if obj_settings.min_area > obj.obj_data['area']:
return True
# if the detected object is larger than the
# max area, don't add it to detected objects
if obj_settings.max_area < obj.obj_data['area']:
@@ -229,7 +240,7 @@ def zone_filtered(obj: TrackedObject, object_config):
# if the score is lower than the threshold, skip
if obj_settings.threshold > obj.computed_score:
return True
return False
# Maintains the state of a camera
@@ -247,23 +258,27 @@ class CameraState():
self._current_frame = np.zeros(self.camera_config.frame_shape_yuv, np.uint8)
self.current_frame_lock = threading.Lock()
self.current_frame_time = 0.0
self.motion_boxes = []
self.regions = []
self.previous_frame_id = None
self.callbacks = defaultdict(lambda: [])
def get_current_frame(self, draw_options={}):
with self.current_frame_lock:
frame_copy = np.copy(self._current_frame)
frame_time = self.current_frame_time
tracked_objects = {k: v.to_dict() for k,v in self.tracked_objects.items()}
motion_boxes = self.motion_boxes.copy()
regions = self.regions.copy()
frame_copy = cv2.cvtColor(frame_copy, cv2.COLOR_YUV2BGR_I420)
# draw on the frame
if draw_options.get('bounding_boxes'):
# draw the bounding boxes on the frame
for obj in tracked_objects.values():
thickness = 2
color = COLOR_MAP[obj['label']]
if obj['frame_time'] != frame_time:
thickness = 1
color = (255,0,0)
@@ -271,19 +286,28 @@ class CameraState():
# draw the bounding boxes on the frame
box = obj['box']
draw_box_with_label(frame_copy, box[0], box[1], box[2], box[3], obj['label'], f"{int(obj['score']*100)}% {int(obj['area'])}", thickness=thickness, color=color)
if draw_options.get('regions'):
for region in regions:
cv2.rectangle(frame_copy, (region[0], region[1]), (region[2], region[3]), (0,255,0), 2)
if draw_options.get('zones'):
for name, zone in self.camera_config.zones.items():
thickness = 8 if any([name in obj['current_zones'] for obj in tracked_objects.values()]) else 2
cv2.drawContours(frame_copy, [zone.contour], -1, zone.color, thickness)
if draw_options.get('mask'):
mask_overlay = np.where(self.camera_config.motion.mask==[0])
frame_copy[mask_overlay] = [0,0,0]
if draw_options.get('motion_boxes'):
for m_box in motion_boxes:
cv2.rectangle(frame_copy, (m_box[0], m_box[1]), (m_box[2], m_box[3]), (0,0,255), 2)
if draw_options.get('timestamp'):
time_to_show = datetime.datetime.fromtimestamp(frame_time).strftime("%m/%d/%Y %H:%M:%S")
cv2.putText(frame_copy, time_to_show, (10, 30), cv2.FONT_HERSHEY_SIMPLEX, fontScale=.8, color=(255, 255, 255), thickness=2)
return frame_copy
def finished(self, obj_id):
@@ -292,8 +316,10 @@ class CameraState():
def on(self, event_type: str, callback: Callable[[Dict], None]):
self.callbacks[event_type].append(callback)
def update(self, frame_time, current_detections):
def update(self, frame_time, current_detections, motion_boxes, regions):
self.current_frame_time = frame_time
self.motion_boxes = motion_boxes
self.regions = regions
# get the new frame
frame_id = f"{self.name}{frame_time}"
current_frame = self.frame_manager.get(frame_id, self.camera_config.frame_shape_yuv)
@@ -310,20 +336,20 @@ class CameraState():
# call event handlers
for c in self.callbacks['start']:
c(self.name, new_obj, frame_time)
for id in updated_ids:
updated_obj = self.tracked_objects[id]
significant_update = updated_obj.update(frame_time, current_detections[id])
if significant_update:
# ensure this frame is stored in the cache
if updated_obj.thumbnail_data['frame_time'] == frame_time and frame_time not in self.frame_cache:
self.frame_cache[frame_time] = np.copy(current_frame)
# call event handlers
for c in self.callbacks['update']:
c(self.name, updated_obj, frame_time)
for id in removed_ids:
# publish events to mqtt
removed_obj = self.tracked_objects[id]
@@ -342,9 +368,9 @@ class CameraState():
if object_type in self.best_objects:
current_best = self.best_objects[object_type]
now = datetime.datetime.now().timestamp()
# if the object is a higher score than the current best score
# or the current object is older than desired, use the new object
if (is_better_thumbnail(current_best.thumbnail_data, obj.thumbnail_data, self.camera_config.frame_shape)
or (now - current_best.thumbnail_data['frame_time']) > self.camera_config.best_image_timeout):
self.best_objects[object_type] = obj
for c in self.callbacks['snapshot']:
@@ -353,13 +379,13 @@ class CameraState():
self.best_objects[object_type] = obj
for c in self.callbacks['snapshot']:
c(self.name, self.best_objects[object_type], frame_time)
# update overall camera state for each object type
obj_counter = Counter()
for obj in self.tracked_objects.values():
if not obj.false_positive:
obj_counter[obj.obj_data['label']] += 1
# report on detected objects
for obj_name, count in obj_counter.items():
if count != self.object_counts[obj_name]:
@@ -375,14 +401,14 @@ class CameraState():
c(self.name, obj_name, 0)
for c in self.callbacks['snapshot']:
c(self.name, self.best_objects[obj_name], frame_time)
# cleanup thumbnail frame cache
current_thumb_frames = set([obj.thumbnail_data['frame_time'] for obj in self.tracked_objects.values() if not obj.false_positive])
current_best_frames = set([obj.thumbnail_data['frame_time'] for obj in self.best_objects.values()])
thumb_frames_to_delete = [t for t in self.frame_cache.keys() if not t in current_thumb_frames and not t in current_best_frames]
for t in thumb_frames_to_delete:
del self.frame_cache[t]
with self.current_frame_lock:
self._current_frame = current_frame
if not self.previous_frame_id is None:
@@ -407,18 +433,41 @@ class TrackedObjectProcessor(threading.Thread):
self.event_queue.put(('start', camera, obj.to_dict()))
def update(camera, obj: TrackedObject, current_frame_time):
after = obj.to_dict()
message = { 'before': obj.previous, 'after': after, 'type': 'new' if obj.previous['false_positive'] else 'update' }
self.client.publish(f"{self.topic_prefix}/events", json.dumps(message), retain=False)
obj.previous = after
def end(camera, obj: TrackedObject, current_frame_time):
snapshot_config = self.config.cameras[camera].snapshots
event_data = obj.to_dict(include_thumbnail=True)
event_data['has_snapshot'] = False
if not obj.false_positive:
message = { 'before': obj.previous, 'after': obj.to_dict(), 'type': 'end' }
self.client.publish(f"{self.topic_prefix}/events", json.dumps(message), retain=False)
# write snapshot to disk if enabled
if snapshot_config.enabled:
jpg_bytes = obj.get_jpg_bytes(
timestamp=snapshot_config.timestamp,
bounding_box=snapshot_config.bounding_box,
crop=snapshot_config.crop,
height=snapshot_config.height
)
with open(os.path.join(CLIPS_DIR, f"{camera}-{obj.obj_data['id']}.jpg"), 'wb') as j:
j.write(jpg_bytes)
event_data['has_snapshot'] = True
self.event_queue.put(('end', camera, event_data))
def snapshot(camera, obj: TrackedObject, current_frame_time):
self.client.publish(f"{self.topic_prefix}/{camera}/{obj.obj_data['label']}/snapshot", obj.get_jpg_bytes(), retain=True)
mqtt_config = self.config.cameras[camera].mqtt
if mqtt_config.enabled:
jpg_bytes = obj.get_jpg_bytes(
timestamp=mqtt_config.timestamp,
bounding_box=mqtt_config.bounding_box,
crop=mqtt_config.crop,
height=mqtt_config.height
)
self.client.publish(f"{self.topic_prefix}/{camera}/{obj.obj_data['label']}/snapshot", jpg_bytes, retain=True)
def object_status(camera, object_name, status):
self.client.publish(f"{self.topic_prefix}/{camera}/{object_name}", status, retain=False)
@@ -441,20 +490,20 @@ class TrackedObjectProcessor(threading.Thread):
# }
# }
self.zone_data = defaultdict(lambda: defaultdict(lambda: {}))
def get_best(self, camera, label):
# TODO: need a lock here
camera_state = self.camera_states[camera]
if label in camera_state.best_objects:
best_obj = camera_state.best_objects[label]
best = best_obj.thumbnail_data.copy()
best['frame'] = camera_state.frame_cache.get(best_obj.thumbnail_data['frame_time'])
return best
else:
return {}
def get_current_frame(self, camera, draw_options={}):
return self.camera_states[camera].get_current_frame(draw_options)
def run(self):
while True:
@@ -463,13 +512,13 @@ class TrackedObjectProcessor(threading.Thread):
break
try:
camera, frame_time, current_tracked_objects, motion_boxes, regions = self.tracked_objects_queue.get(True, 10)
except queue.Empty:
continue
camera_state = self.camera_states[camera]
camera_state.update(frame_time, current_tracked_objects, motion_boxes, regions)
# update zone counts for each label
# for each zone in the current camera
@@ -479,7 +528,7 @@ class TrackedObjectProcessor(threading.Thread):
for obj in camera_state.tracked_objects.values():
if zone in obj.current_zones and not obj.false_positive:
obj_counter[obj.obj_data['label']] += 1
# update counts and publish status
for label in set(list(self.zone_data[zone].keys()) + list(obj_counter.keys())):
# if we have previously published a count for this zone/label

View File

@@ -12,14 +12,15 @@ import cv2
import numpy as np
from scipy.spatial import distance as dist
from frigate.config import DetectConfig
from frigate.util import draw_box_with_label
class ObjectTracker():
def __init__(self, config: DetectConfig):
self.tracked_objects = {}
self.disappeared = {}
self.max_disappeared = config.max_disappeared
def register(self, index, obj):
rand_id = ''.join(random.choices(string.ascii_lowercase + string.digits, k=6))

frigate/process_clip.py Normal file
View File

@@ -0,0 +1,208 @@
import datetime
import json
import logging
import multiprocessing as mp
import os
import subprocess as sp
import sys
from unittest import TestCase, main
import click
import cv2
import numpy as np
from frigate.config import FRIGATE_CONFIG_SCHEMA, FrigateConfig
from frigate.edgetpu import LocalObjectDetector
from frigate.motion import MotionDetector
from frigate.object_processing import COLOR_MAP, CameraState
from frigate.objects import ObjectTracker
from frigate.util import (DictFrameManager, EventsPerSecond,
SharedMemoryFrameManager, draw_box_with_label)
from frigate.video import (capture_frames, process_frames,
start_or_restart_ffmpeg)
logging.basicConfig()
logging.root.setLevel(logging.DEBUG)
logger = logging.getLogger(__name__)
def get_frame_shape(source):
ffprobe_cmd = " ".join([
'ffprobe',
'-v',
'panic',
'-show_error',
'-show_streams',
'-of',
'json',
'"'+source+'"'
])
p = sp.Popen(ffprobe_cmd, stdout=sp.PIPE, shell=True)
(output, err) = p.communicate()
p_status = p.wait()
info = json.loads(output)
video_info = [s for s in info['streams'] if s['codec_type'] == 'video'][0]
if video_info['height'] != 0 and video_info['width'] != 0:
return (video_info['height'], video_info['width'], 3)
# fallback to using opencv if ffprobe didnt succeed
video = cv2.VideoCapture(source)
ret, frame = video.read()
frame_shape = frame.shape
video.release()
return frame_shape
class ProcessClip():
def __init__(self, clip_path, frame_shape, config: FrigateConfig):
self.clip_path = clip_path
self.camera_name = 'camera'
self.config = config
self.camera_config = self.config.cameras['camera']
self.frame_shape = self.camera_config.frame_shape
self.ffmpeg_cmd = [c['cmd'] for c in self.camera_config.ffmpeg_cmds if 'detect' in c['roles']][0]
self.frame_manager = SharedMemoryFrameManager()
self.frame_queue = mp.Queue()
self.detected_objects_queue = mp.Queue()
self.camera_state = CameraState(self.camera_name, config, self.frame_manager)
def load_frames(self):
fps = EventsPerSecond()
skipped_fps = EventsPerSecond()
current_frame = mp.Value('d', 0.0)
frame_size = self.camera_config.frame_shape_yuv[0] * self.camera_config.frame_shape_yuv[1]
ffmpeg_process = start_or_restart_ffmpeg(self.ffmpeg_cmd, logger, sp.DEVNULL, frame_size)
capture_frames(ffmpeg_process, self.camera_name, self.camera_config.frame_shape_yuv, self.frame_manager,
self.frame_queue, fps, skipped_fps, current_frame)
ffmpeg_process.wait()
ffmpeg_process.communicate()
    def process_frames(self, objects_to_track=['person'], object_filters={}):
        motion_detector = MotionDetector(self.frame_shape, self.camera_config.motion)
        object_detector = LocalObjectDetector(labels='/labelmap.txt')
        object_tracker = ObjectTracker(self.camera_config.detect)
process_info = {
'process_fps': mp.Value('d', 0.0),
'detection_fps': mp.Value('d', 0.0),
'detection_frame': mp.Value('d', 0.0)
}
        stop_event = mp.Event()
        # detection is always enabled when processing clips offline
        detection_enabled = mp.Value('d', 1)
        model_shape = (self.config.model.height, self.config.model.width)
        process_frames(self.camera_name, self.frame_queue, self.frame_shape, model_shape,
            self.frame_manager, motion_detector, object_detector, object_tracker,
            self.detected_objects_queue, process_info,
            objects_to_track, object_filters, detection_enabled, stop_event, exit_on_empty=True)
def top_object(self, debug_path=None):
obj_detected = False
top_computed_score = 0.0
def handle_event(name, obj, frame_time):
nonlocal obj_detected
nonlocal top_computed_score
if obj.computed_score > top_computed_score:
top_computed_score = obj.computed_score
if not obj.false_positive:
obj_detected = True
self.camera_state.on('new', handle_event)
self.camera_state.on('update', handle_event)
while(not self.detected_objects_queue.empty()):
camera_name, frame_time, current_tracked_objects, motion_boxes, regions = self.detected_objects_queue.get()
if not debug_path is None:
self.save_debug_frame(debug_path, frame_time, current_tracked_objects.values())
self.camera_state.update(frame_time, current_tracked_objects, motion_boxes, regions)
self.frame_manager.delete(self.camera_state.previous_frame_id)
return {
'object_detected': obj_detected,
'top_score': top_computed_score
}
def save_debug_frame(self, debug_path, frame_time, tracked_objects):
current_frame = cv2.cvtColor(self.frame_manager.get(f"{self.camera_name}{frame_time}", self.camera_config.frame_shape_yuv), cv2.COLOR_YUV2BGR_I420)
# draw the bounding boxes on the frame
for obj in tracked_objects:
thickness = 2
color = (0,0,175)
if obj['frame_time'] != frame_time:
thickness = 1
color = (255,0,0)
else:
color = (255,255,0)
# draw the bounding boxes on the frame
box = obj['box']
draw_box_with_label(current_frame, box[0], box[1], box[2], box[3], obj['id'], f"{int(obj['score']*100)}% {int(obj['area'])}", thickness=thickness, color=color)
# draw the regions on the frame
region = obj['region']
draw_box_with_label(current_frame, region[0], region[1], region[2], region[3], 'region', "", thickness=1, color=(0,255,0))
cv2.imwrite(f"{os.path.join(debug_path, os.path.basename(self.clip_path))}.{int(frame_time*1000000)}.jpg", current_frame)
@click.command()
@click.option("-p", "--path", required=True, help="Path to clip or directory to test.")
@click.option("-l", "--label", default='person', help="Label name to detect.")
@click.option("-t", "--threshold", default=0.85, help="Threshold value for objects.")
@click.option("-s", "--scores", default=None, help="File to save csv of top scores")
@click.option("--debug-path", default=None, help="Path to output frames for debugging.")
def process(path, label, threshold, scores, debug_path):
clips = []
if os.path.isdir(path):
files = os.listdir(path)
files.sort()
clips = [os.path.join(path, file) for file in files]
elif os.path.isfile(path):
clips.append(path)
json_config = {
'mqtt': {
'host': 'mqtt'
},
'cameras': {
'camera': {
'ffmpeg': {
'inputs': [
{ 'path': 'path.mp4', 'global_args': '', 'input_args': '', 'roles': ['detect'] }
]
},
'height': 1920,
'width': 1080
}
}
}
results = []
for c in clips:
logger.info(c)
frame_shape = get_frame_shape(c)
json_config['cameras']['camera']['height'] = frame_shape[0]
json_config['cameras']['camera']['width'] = frame_shape[1]
json_config['cameras']['camera']['ffmpeg']['inputs'][0]['path'] = c
config = FrigateConfig(config=FRIGATE_CONFIG_SCHEMA(json_config))
process_clip = ProcessClip(c, frame_shape, config)
process_clip.load_frames()
process_clip.process_frames(objects_to_track=[label])
results.append((c, process_clip.top_object(debug_path)))
if not scores is None:
with open(scores, 'w') as writer:
for result in results:
writer.write(f"{result[0]},{result[1]['top_score']}\n")
positive_count = sum(1 for result in results if result[1]['object_detected'])
print(f"Objects were detected in {positive_count}/{len(results)}({positive_count/len(results)*100:.2f}%) clip(s).")
if __name__ == '__main__':
process()
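For reference, here is a minimal sketch of driving ProcessClip programmatically instead of through the click command above. It assumes the Frigate container environment (where /labelmap.txt exists) and an illustrative clip path:

from frigate.config import FRIGATE_CONFIG_SCHEMA, FrigateConfig
from frigate.process_clip import ProcessClip, get_frame_shape

clip = '/media/frigate/clips/back.mp4'  # illustrative path
shape = get_frame_shape(clip)
config = FrigateConfig(config=FRIGATE_CONFIG_SCHEMA({
    'mqtt': {'host': 'mqtt'},
    'cameras': {'camera': {
        'ffmpeg': {'inputs': [{'path': clip, 'global_args': '', 'input_args': '', 'roles': ['detect']}]},
        'height': shape[0], 'width': shape[1]
    }}
}))
clip_processor = ProcessClip(clip, shape, config)
clip_processor.load_frames()
clip_processor.process_frames(objects_to_track=['person'])
print(clip_processor.top_object())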


@@ -45,9 +45,9 @@ class RecordingMaintainer(threading.Thread):
files_in_use = []
for process in psutil.process_iter():
if process.name() != 'ffmpeg':
continue
try:
if process.name() != 'ffmpeg':
continue
flist = process.open_files()
if flist:
for nt in flist:
@@ -98,9 +98,9 @@ class RecordingMaintainer(threading.Thread):
delete_before[name] = datetime.datetime.now().timestamp() - SECONDS_IN_DAY*camera.record.retain_days
for p in Path('/media/frigate/recordings').rglob("*.mp4"):
if not p.parent in delete_before:
if not p.parent.name in delete_before:
continue
if p.stat().st_mtime < delete_before[p.parent]:
if p.stat().st_mtime < delete_before[p.parent.name]:
p.unlink(missing_ok=True)
def run(self):
@@ -122,4 +122,4 @@ class RecordingMaintainer(threading.Thread):
self.move_files()

70
frigate/stats.py Normal file

@@ -0,0 +1,70 @@
import json
import logging
import threading
import time
from frigate.config import FrigateConfig
from frigate.version import VERSION
logger = logging.getLogger(__name__)
def stats_init(camera_metrics, detectors):
stats_tracking = {
'camera_metrics': camera_metrics,
'detectors': detectors,
'started': int(time.time())
}
return stats_tracking
def stats_snapshot(stats_tracking):
camera_metrics = stats_tracking['camera_metrics']
stats = {}
total_detection_fps = 0
for name, camera_stats in camera_metrics.items():
total_detection_fps += camera_stats['detection_fps'].value
stats[name] = {
'camera_fps': round(camera_stats['camera_fps'].value, 2),
'process_fps': round(camera_stats['process_fps'].value, 2),
'skipped_fps': round(camera_stats['skipped_fps'].value, 2),
'detection_fps': round(camera_stats['detection_fps'].value, 2),
'pid': camera_stats['process'].pid,
'capture_pid': camera_stats['capture_process'].pid
}
stats['detectors'] = {}
for name, detector in stats_tracking["detectors"].items():
stats['detectors'][name] = {
'inference_speed': round(detector.avg_inference_speed.value * 1000, 2),
'detection_start': detector.detection_start.value,
'pid': detector.detect_process.pid
}
stats['detection_fps'] = round(total_detection_fps, 2)
stats['service'] = {
'uptime': (int(time.time()) - stats_tracking['started']),
'version': VERSION
}
return stats
class StatsEmitter(threading.Thread):
def __init__(self, config: FrigateConfig, stats_tracking, mqtt_client, topic_prefix, stop_event):
threading.Thread.__init__(self)
self.name = 'frigate_stats_emitter'
self.config = config
self.stats_tracking = stats_tracking
self.mqtt_client = mqtt_client
self.topic_prefix = topic_prefix
self.stop_event = stop_event
def run(self):
time.sleep(10)
while True:
if self.stop_event.is_set():
logger.info(f"Exiting watchdog...")
break
stats = stats_snapshot(self.stats_tracking)
self.mqtt_client.publish(f"{self.topic_prefix}/stats", json.dumps(stats), retain=False)
time.sleep(self.config.mqtt.stats_interval)
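For reference, the published stats can be watched from any MQTT client. A minimal consumer sketch, assuming the third-party paho-mqtt package, the default `frigate` topic prefix, and a broker reachable at `mqtt`:

import json
import paho.mqtt.client as mqtt

def on_message(client, userdata, message):
    stats = json.loads(message.payload)
    print(f"uptime: {stats['service']['uptime']}s detection fps: {stats['detection_fps']}")

client = mqtt.Client()
client.on_message = on_message
client.connect('mqtt')  # broker host is an assumption
client.subscribe('frigate/stats')
client.loop_forever()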


@@ -191,12 +191,12 @@ class TestConfig(TestCase):
frigate_config = FrigateConfig(config=config)
assert('-re' in frigate_config.cameras['back'].ffmpeg_cmds[0]['cmd'])
def test_inherit_save_clips_retention(self):
def test_inherit_clips_retention(self):
config = {
'mqtt': {
'host': 'mqtt'
},
'save_clips': {
'clips': {
'retain': {
'default': 20,
'objects': {
@@ -217,14 +217,14 @@ class TestConfig(TestCase):
}
}
frigate_config = FrigateConfig(config=config)
assert(frigate_config.cameras['back'].save_clips.retain.objects['person'] == 30)
assert(frigate_config.cameras['back'].clips.retain.objects['person'] == 30)
def test_roles_listed_twice_throws_error(self):
config = {
'mqtt': {
'host': 'mqtt'
},
'save_clips': {
'clips': {
'retain': {
'default': 20,
'objects': {
@@ -252,7 +252,7 @@ class TestConfig(TestCase):
'mqtt': {
'host': 'mqtt'
},
'save_clips': {
'clips': {
'retain': {
'default': 20,
'objects': {
@@ -279,12 +279,12 @@ class TestConfig(TestCase):
}
self.assertRaises(vol.MultipleInvalid, lambda: FrigateConfig(config=config))
def test_save_clips_should_default_to_global_objects(self):
def test_clips_should_default_to_global_objects(self):
config = {
'mqtt': {
'host': 'mqtt'
},
'save_clips': {
'clips': {
'retain': {
'default': 20,
'objects': {
@@ -304,16 +304,39 @@ class TestConfig(TestCase):
},
'height': 1080,
'width': 1920,
'save_clips': {
'clips': {
'enabled': True
}
}
}
}
config = FrigateConfig(config=config)
assert(len(config.cameras['back'].save_clips.objects) == 2)
assert('dog' in config.cameras['back'].save_clips.objects)
assert('person' in config.cameras['back'].save_clips.objects)
assert(config.cameras['back'].clips.objects is None)
def test_role_assigned_but_not_enabled(self):
json_config = {
'mqtt': {
'host': 'mqtt'
},
'cameras': {
'back': {
'ffmpeg': {
'inputs': [
{ 'path': 'rtsp://10.0.0.1:554/video', 'roles': ['detect', 'rtmp'] },
{ 'path': 'rtsp://10.0.0.1:554/record', 'roles': ['record'] }
]
},
'height': 1080,
'width': 1920
}
}
}
config = FrigateConfig(config=json_config)
ffmpeg_cmds = config.cameras['back'].ffmpeg_cmds
assert(len(ffmpeg_cmds) == 1)
assert(not 'clips' in ffmpeg_cmds[0]['roles'])
if __name__ == '__main__':
main(verbosity=2)


@@ -0,0 +1,39 @@
import cv2
import numpy as np
from unittest import TestCase, main
from frigate.util import yuv_region_2_rgb
class TestYuvRegion2RGB(TestCase):
def setUp(self):
self.bgr_frame = np.zeros((100, 200, 3), np.uint8)
self.bgr_frame[:] = (0, 0, 255)
self.bgr_frame[5:55, 5:55] = (255,0,0)
# cv2.imwrite(f"bgr_frame.jpg", self.bgr_frame)
self.yuv_frame = cv2.cvtColor(self.bgr_frame, cv2.COLOR_BGR2YUV_I420)
def test_crop_yuv(self):
cropped = yuv_region_2_rgb(self.yuv_frame, (10,10,50,50))
# ensure the upper left pixel is blue
assert(np.all(cropped[0, 0] == [0, 0, 255]))
def test_crop_yuv_out_of_bounds(self):
cropped = yuv_region_2_rgb(self.yuv_frame, (0,0,200,200))
# cv2.imwrite(f"cropped.jpg", cv2.cvtColor(cropped, cv2.COLOR_RGB2BGR))
# ensure the upper left pixel is red
# the yuv conversion has some noise
assert(np.all(cropped[0, 0] == [255, 1, 0]))
# ensure the bottom right is black
assert(np.all(cropped[199, 199] == [0, 0, 0]))
def test_crop_yuv_portrait(self):
bgr_frame = np.zeros((1920, 1080, 3), np.uint8)
bgr_frame[:] = (0, 0, 255)
bgr_frame[5:55, 5:55] = (255,0,0)
# cv2.imwrite(f"bgr_frame.jpg", self.bgr_frame)
yuv_frame = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2YUV_I420)
cropped = yuv_region_2_rgb(yuv_frame, (0, 852, 648, 1500))
# cv2.imwrite(f"cropped.jpg", cv2.cvtColor(cropped, cv2.COLOR_RGB2BGR))
if __name__ == '__main__':
main(verbosity=2)


@@ -2,6 +2,7 @@ import collections
import datetime
import hashlib
import json
import logging
import signal
import subprocess as sp
import threading
@@ -15,6 +16,8 @@ import cv2
import matplotlib.pyplot as plt
import numpy as np
logger = logging.getLogger(__name__)
def draw_box_with_label(frame, x_min, y_min, x_max, y_max, label, info, thickness=2, color=None, position='ul'):
if color is None:
@@ -47,14 +50,11 @@ def draw_box_with_label(frame, x_min, y_min, x_max, y_max, label, info, thicknes
cv2.putText(frame, display_text, (text_offset_x, text_offset_y + line_height - 3), font, fontScale=font_scale, color=(0, 0, 0), thickness=2)
def calculate_region(frame_shape, xmin, ymin, xmax, ymax, multiplier=2):
# size is larger than longest edge
size = int(max(xmax-xmin, ymax-ymin)*multiplier)
# size is the longest edge and divisible by 4
size = int(max(xmax-xmin, ymax-ymin)//4*4*multiplier)
    # don't go any smaller than 300
if size < 300:
size = 300
# if the size is too big to fit in the frame
if size > min(frame_shape[0], frame_shape[1]):
size = min(frame_shape[0], frame_shape[1])
# x_offset is midpoint of bounding box minus half the size
x_offset = int((xmax-xmin)/2.0+xmin-size/2.0)
@@ -62,48 +62,156 @@ def calculate_region(frame_shape, xmin, ymin, xmax, ymax, multiplier=2):
if x_offset < 0:
x_offset = 0
elif x_offset > (frame_shape[1]-size):
x_offset = (frame_shape[1]-size)
x_offset = max(0, (frame_shape[1]-size))
# y_offset is midpoint of bounding box minus half the size
y_offset = int((ymax-ymin)/2.0+ymin-size/2.0)
# if outside the image
if y_offset < 0:
y_offset = 0
elif y_offset > (frame_shape[0]-size):
y_offset = (frame_shape[0]-size)
y_offset = max(0, (frame_shape[0]-size))
return (x_offset, y_offset, x_offset+size, y_offset+size)
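# illustrative usage (not part of the original file): an 80x60 box near the
# top-left of a 640x480 frame gets the 300px size floor, and both offsets
# clamp to 0
example_region = calculate_region((480, 640), 100, 100, 180, 160)
assert example_region == (0, 0, 300, 300)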
def get_yuv_crop(frame_shape, crop):
# crop should be (x1,y1,x2,y2)
frame_height = frame_shape[0]//3*2
frame_width = frame_shape[1]
# compute the width/height of the uv channels
uv_width = frame_width//2 # width of the uv channels
uv_height = frame_height//4 # height of the uv channels
# compute the offset for upper left corner of the uv channels
uv_x_offset = crop[0]//2 # x offset of the uv channels
uv_y_offset = crop[1]//4 # y offset of the uv channels
# compute the width/height of the uv crops
uv_crop_width = (crop[2] - crop[0])//2 # width of the cropped uv channels
uv_crop_height = (crop[3] - crop[1])//4 # height of the cropped uv channels
# ensure crop dimensions are multiples of 2 and 4
y = (
crop[0],
crop[1],
crop[0] + uv_crop_width*2,
crop[1] + uv_crop_height*4
)
u1 = (
0 + uv_x_offset,
frame_height + uv_y_offset,
0 + uv_x_offset + uv_crop_width,
frame_height + uv_y_offset + uv_crop_height
)
u2 = (
uv_width + uv_x_offset,
frame_height + uv_y_offset,
uv_width + uv_x_offset + uv_crop_width,
frame_height + uv_y_offset + uv_crop_height
)
v1 = (
0 + uv_x_offset,
frame_height + uv_height + uv_y_offset,
0 + uv_x_offset + uv_crop_width,
frame_height + uv_height + uv_y_offset + uv_crop_height
)
v2 = (
uv_width + uv_x_offset,
frame_height + uv_height + uv_y_offset,
uv_width + uv_x_offset + uv_crop_width,
frame_height + uv_height + uv_y_offset + uv_crop_height
)
return y, u1, u2, v1, v2
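# illustrative usage (not part of the original file): for a 640x480 camera
# the I420 buffer is a (720, 640) array; 480 rows of Y, then 120 rows holding
# the two half-width U crops side by side, then 120 rows for V
example_planes = get_yuv_crop((720, 640), (0, 0, 200, 100))
assert example_planes == ((0, 0, 200, 100), (0, 480, 100, 505), (320, 480, 420, 505), (0, 600, 100, 625), (320, 600, 420, 625))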
def yuv_region_2_rgb(frame, region):
height = frame.shape[0]//3*2
width = frame.shape[1]
# make sure the size is a multiple of 4
size = (region[3] - region[1])//4*4
try:
height = frame.shape[0]//3*2
width = frame.shape[1]
x1 = region[0]
y1 = region[1]
# get the crop box if the region extends beyond the frame
crop_x1 = max(0, region[0])
crop_y1 = max(0, region[1])
# ensure these are a multiple of 4
crop_x2 = min(width, region[2])
crop_y2 = min(height, region[3])
crop_box = (crop_x1, crop_y1, crop_x2, crop_y2)
uv_x1 = x1//2
uv_y1 = y1//4
y, u1, u2, v1, v2 = get_yuv_crop(frame.shape, crop_box)
uv_width = size//2
uv_height = size//4
# if the region starts outside the frame, indent the start point in the cropped frame
y_channel_x_offset = abs(min(0, region[0]))
y_channel_y_offset = abs(min(0, region[1]))
u_y_start = height
v_y_start = height + height//4
two_x_offset = width//2
uv_channel_x_offset = y_channel_x_offset//2
uv_channel_y_offset = y_channel_y_offset//4
yuv_cropped_frame = np.zeros((size+size//2, size), np.uint8)
# y channel
yuv_cropped_frame[0:size, 0:size] = frame[y1:y1+size, x1:x1+size]
# u channel
yuv_cropped_frame[size:size+uv_height, 0:uv_width] = frame[uv_y1+u_y_start:uv_y1+u_y_start+uv_height, uv_x1:uv_x1+uv_width]
yuv_cropped_frame[size:size+uv_height, uv_width:size] = frame[uv_y1+u_y_start:uv_y1+u_y_start+uv_height, uv_x1+two_x_offset:uv_x1+two_x_offset+uv_width]
# v channel
yuv_cropped_frame[size+uv_height:size+uv_height*2, 0:uv_width] = frame[uv_y1+v_y_start:uv_y1+v_y_start+uv_height, uv_x1:uv_x1+uv_width]
yuv_cropped_frame[size+uv_height:size+uv_height*2, uv_width:size] = frame[uv_y1+v_y_start:uv_y1+v_y_start+uv_height, uv_x1+two_x_offset:uv_x1+two_x_offset+uv_width]
# create the yuv region frame
# make sure the size is a multiple of 4
size = (region[3] - region[1])//4*4
yuv_cropped_frame = np.zeros((size+size//2, size), np.uint8)
# fill in black
yuv_cropped_frame[:] = 128
yuv_cropped_frame[0:size,0:size] = 16
return cv2.cvtColor(yuv_cropped_frame, cv2.COLOR_YUV2RGB_I420)
# copy the y channel
yuv_cropped_frame[
y_channel_y_offset:y_channel_y_offset + y[3] - y[1],
y_channel_x_offset:y_channel_x_offset + y[2] - y[0]
] = frame[
y[1]:y[3],
y[0]:y[2]
]
uv_crop_width = u1[2] - u1[0]
uv_crop_height = u1[3] - u1[1]
# copy u1
yuv_cropped_frame[
size + uv_channel_y_offset:size + uv_channel_y_offset + uv_crop_height,
0 + uv_channel_x_offset:0 + uv_channel_x_offset + uv_crop_width
] = frame[
u1[1]:u1[3],
u1[0]:u1[2]
]
# copy u2
yuv_cropped_frame[
size + uv_channel_y_offset:size + uv_channel_y_offset + uv_crop_height,
size//2 + uv_channel_x_offset:size//2 + uv_channel_x_offset + uv_crop_width
] = frame[
u2[1]:u2[3],
u2[0]:u2[2]
]
# copy v1
yuv_cropped_frame[
size+size//4 + uv_channel_y_offset:size+size//4 + uv_channel_y_offset + uv_crop_height,
0 + uv_channel_x_offset:0 + uv_channel_x_offset + uv_crop_width
] = frame[
v1[1]:v1[3],
v1[0]:v1[2]
]
# copy v2
yuv_cropped_frame[
size+size//4 + uv_channel_y_offset:size+size//4 + uv_channel_y_offset + uv_crop_height,
size//2 + uv_channel_x_offset:size//2 + uv_channel_x_offset + uv_crop_width
] = frame[
v2[1]:v2[3],
v2[0]:v2[2]
]
return cv2.cvtColor(yuv_cropped_frame, cv2.COLOR_YUV2RGB_I420)
except:
print(f"frame.shape: {frame.shape}")
print(f"region: {region}")
raise
def intersection(box_a, box_b):
return (
@@ -183,6 +291,24 @@ def print_stack(sig, frame):
def listen():
signal.signal(signal.SIGUSR1, print_stack)
def create_mask(frame_shape, mask):
mask_img = np.zeros(frame_shape, np.uint8)
mask_img[:] = 255
if isinstance(mask, list):
for m in mask:
add_mask(m, mask_img)
elif isinstance(mask, str):
add_mask(mask, mask_img)
return mask_img
def add_mask(mask, mask_img):
points = mask.split(',')
contour = np.array([[int(points[i]), int(points[i+1])] for i in range(0, len(points), 2)])
cv2.fillPoly(mask_img, pts=[contour], color=(0))
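# illustrative usage (not part of the original file): masked polygons are
# filled with 0 and everything else stays 255
example_mask = create_mask((480, 640), ['0,0,200,0,200,100,0,100'])
assert example_mask[50, 50] == 0
assert example_mask[300, 300] == 255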
class FrameManager(ABC):
@abstractmethod
def create(self, name, size) -> AnyStr:


@@ -13,6 +13,7 @@ import signal
import threading
import time
from collections import defaultdict
from setproctitle import setproctitle
from typing import Dict, List
import cv2
@@ -30,7 +31,7 @@ from frigate.util import (EventsPerSecond, FrameManager,
logger = logging.getLogger(__name__)
def filtered(obj, objects_to_track, object_filters, mask=None):
def filtered(obj, objects_to_track, object_filters):
object_name = obj[0]
if not object_name in objects_to_track:
@@ -53,25 +54,26 @@ def filtered(obj, objects_to_track, object_filters, mask=None):
if obj_settings.min_score > obj[1]:
return True
# compute the coordinates of the object and make sure
    # the location isn't outside the bounds of the image (can happen from rounding)
y_location = min(int(obj[2][3]), len(mask)-1)
x_location = min(int((obj[2][2]-obj[2][0])/2.0)+obj[2][0], len(mask[0])-1)
if not obj_settings.mask is None:
# compute the coordinates of the object and make sure
        # the location isn't outside the bounds of the image (can happen from rounding)
y_location = min(int(obj[2][3]), len(obj_settings.mask)-1)
x_location = min(int((obj[2][2]-obj[2][0])/2.0)+obj[2][0], len(obj_settings.mask[0])-1)
# if the object is in a masked location, don't add it to detected objects
if (not mask is None) and (mask[y_location][x_location] == 0):
return True
# if the object is in a masked location, don't add it to detected objects
if obj_settings.mask[y_location][x_location] == 0:
return True
return False
def create_tensor_input(frame, region):
def create_tensor_input(frame, model_shape, region):
cropped_frame = yuv_region_2_rgb(frame, region)
# Resize to 300x300 if needed
if cropped_frame.shape != (300, 300, 3):
cropped_frame = cv2.resize(cropped_frame, dsize=(300, 300), interpolation=cv2.INTER_LINEAR)
if cropped_frame.shape != (model_shape[0], model_shape[1], 3):
cropped_frame = cv2.resize(cropped_frame, dsize=model_shape, interpolation=cv2.INTER_LINEAR)
# Expand dimensions since the model expects images to have shape: [1, 300, 300, 3]
# Expand dimensions since the model expects images to have shape: [1, height, width, 3]
return np.expand_dims(cropped_frame, axis=0)
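# illustrative usage (not part of the original file): a 300px region from an
# I420 buffer for a 640x480 camera is resized to the model input size and
# given a batch dimension
example_frame = np.zeros((720, 640), dtype=np.uint8)
example_tensor = create_tensor_input(example_frame, (320, 320), (0, 0, 300, 300))
assert example_tensor.shape == (1, 320, 320, 3)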
def stop_ffmpeg(ffmpeg_process, logger):
@@ -112,16 +114,15 @@ def capture_frames(ffmpeg_process, camera_name, frame_shape, frame_manager: Fram
frame_name = f"{camera_name}{current_frame.value}"
frame_buffer = frame_manager.create(frame_name, frame_size)
try:
frame_buffer[:] = ffmpeg_process.stdout.read(frame_size)
except:
logger.info(f"{camera_name}: ffmpeg sent a broken frame. something is wrong.")
frame_buffer[:] = ffmpeg_process.stdout.read(frame_size)
except Exception as e:
logger.info(f"{camera_name}: ffmpeg sent a broken frame. {e}")
if ffmpeg_process.poll() != None:
logger.info(f"{camera_name}: ffmpeg process is not running. exiting capture thread...")
frame_manager.delete(frame_name)
break
continue
if ffmpeg_process.poll() != None:
logger.info(f"{camera_name}: ffmpeg process is not running. exiting capture thread...")
frame_manager.delete(frame_name)
break
continue
frame_rate.update()
@@ -241,7 +242,7 @@ def capture_camera(name, config: CameraConfig, process_info):
camera_watchdog.start()
camera_watchdog.join()
def track_camera(name, config: CameraConfig, detection_queue, result_connection, detected_objects_queue, process_info):
def track_camera(name, config: CameraConfig, model_shape, detection_queue, result_connection, detected_objects_queue, process_info):
stop_event = mp.Event()
def receiveSignal(signalNumber, frame):
stop_event.set()
@@ -250,24 +251,25 @@ def track_camera(name, config: CameraConfig, detection_queue, result_connection,
signal.signal(signal.SIGINT, receiveSignal)
threading.current_thread().name = f"process:{name}"
setproctitle(f"frigate.process:{name}")
listen()
frame_queue = process_info['frame_queue']
detection_enabled = process_info['detection_enabled']
frame_shape = config.frame_shape
objects_to_track = config.objects.track
object_filters = config.objects.filters
mask = config.mask
motion_detector = MotionDetector(frame_shape, mask, resize_factor=6)
object_detector = RemoteObjectDetector(name, '/labelmap.txt', detection_queue, result_connection)
motion_detector = MotionDetector(frame_shape, config.motion)
object_detector = RemoteObjectDetector(name, '/labelmap.txt', detection_queue, result_connection, model_shape)
object_tracker = ObjectTracker(10)
object_tracker = ObjectTracker(config.detect)
frame_manager = SharedMemoryFrameManager()
process_frames(name, frame_queue, frame_shape, frame_manager, motion_detector, object_detector,
object_tracker, detected_objects_queue, process_info, objects_to_track, object_filters, mask, stop_event)
process_frames(name, frame_queue, frame_shape, model_shape, frame_manager, motion_detector, object_detector,
object_tracker, detected_objects_queue, process_info, objects_to_track, object_filters, detection_enabled, stop_event)
logger.info(f"{name}: exiting subprocess")
@@ -277,8 +279,8 @@ def reduce_boxes(boxes):
reduced_boxes = cv2.groupRectangles([list(b) for b in itertools.chain(boxes, boxes)], 1, 0.2)[0]
return [tuple(b) for b in reduced_boxes]
def detect(object_detector, frame, region, objects_to_track, object_filters, mask):
tensor_input = create_tensor_input(frame, region)
def detect(object_detector, frame, model_shape, region, objects_to_track, object_filters):
tensor_input = create_tensor_input(frame, model_shape, region)
detections = []
region_detections = object_detector.detect(tensor_input)
@@ -295,16 +297,16 @@ def detect(object_detector, frame, region, objects_to_track, object_filters, mas
(x_max-x_min)*(y_max-y_min),
region)
# apply object filters
if filtered(det, objects_to_track, object_filters, mask):
if filtered(det, objects_to_track, object_filters):
continue
detections.append(det)
return detections
def process_frames(camera_name: str, frame_queue: mp.Queue, frame_shape,
def process_frames(camera_name: str, frame_queue: mp.Queue, frame_shape, model_shape,
frame_manager: FrameManager, motion_detector: MotionDetector,
object_detector: RemoteObjectDetector, object_tracker: ObjectTracker,
detected_objects_queue: mp.Queue, process_info: Dict,
objects_to_track: List[str], object_filters, mask, stop_event,
objects_to_track: List[str], object_filters, detection_enabled: mp.Value, stop_event,
exit_on_empty: bool = False):
fps = process_info['process_fps']
@@ -335,6 +337,14 @@ def process_frames(camera_name: str, frame_queue: mp.Queue, frame_shape,
logger.info(f"{camera_name}: frame {frame_time} is not in memory store.")
continue
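        # when detection is disabled, keep the pipeline moving: age out
        # tracked objects with an empty update and still publish the frame so
        # downstream consumers stay in sync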
if not detection_enabled.value:
fps.value = fps_tracker.eps()
object_tracker.match_and_update(frame_time, [])
detected_objects_queue.put((camera_name, frame_time, object_tracker.tracked_objects, [], []))
detection_fps.value = object_detector.fps.eps()
frame_manager.close(f"{camera_name}{frame_time}")
continue
# look for motion
motion_boxes = motion_detector.detect(frame)
@@ -357,7 +367,7 @@ def process_frames(camera_name: str, frame_queue: mp.Queue, frame_shape,
# resize regions and detect
detections = []
for region in regions:
detections.extend(detect(object_detector, frame, region, objects_to_track, object_filters, mask))
detections.extend(detect(object_detector, frame, model_shape, region, objects_to_track, object_filters))
#########
# merge objects, check for clipped objects and look again up to 4 times
@@ -389,8 +399,10 @@ def process_frames(camera_name: str, frame_queue: mp.Queue, frame_shape,
region = calculate_region(frame_shape,
box[0], box[1],
box[2], box[3])
regions.append(region)
selected_objects.extend(detect(object_detector, frame, region, objects_to_track, object_filters, mask))
selected_objects.extend(detect(object_detector, frame, model_shape, region, objects_to_track, object_filters))
refining = True
else:
@@ -407,11 +419,11 @@ def process_frames(camera_name: str, frame_queue: mp.Queue, frame_shape,
# add to the queue if not full
if(detected_objects_queue.full()):
frame_manager.delete(f"{camera_name}{frame_time}")
continue
frame_manager.delete(f"{camera_name}{frame_time}")
continue
else:
fps_tracker.update()
fps.value = fps_tracker.eps()
detected_objects_queue.put((camera_name, frame_time, object_tracker.tracked_objects))
detection_fps.value = object_detector.fps.eps()
frame_manager.close(f"{camera_name}{frame_time}")
fps_tracker.update()
fps.value = fps_tracker.eps()
detected_objects_queue.put((camera_name, frame_time, object_tracker.tracked_objects, motion_boxes, regions))
detection_fps.value = object_detector.fps.eps()
frame_manager.close(f"{camera_name}{frame_time}")


@@ -0,0 +1,41 @@
"""Peewee migrations -- 001_create_events_table.py.
Some examples (model - class or model name)::
> Model = migrator.orm['model_name'] # Return model in current state by name
> migrator.sql(sql) # Run custom SQL
> migrator.python(func, *args, **kwargs) # Run python code
> migrator.create_model(Model) # Create a model (could be used as decorator)
> migrator.remove_model(model, cascade=True) # Remove a model
> migrator.add_fields(model, **fields) # Add fields to a model
> migrator.change_fields(model, **fields) # Change fields
> migrator.remove_fields(model, *field_names, cascade=True)
> migrator.rename_field(model, old_field_name, new_field_name)
> migrator.rename_table(model, new_table_name)
> migrator.add_index(model, *col_names, unique=False)
> migrator.drop_index(model, *col_names)
> migrator.add_not_null(model, *field_names)
> migrator.drop_not_null(model, *field_names)
> migrator.add_default(model, field_name, default)
"""
import datetime as dt
import peewee as pw
from decimal import ROUND_HALF_EVEN
try:
import playhouse.postgres_ext as pw_pext
except ImportError:
pass
SQL = pw.SQL
def migrate(migrator, database, fake=False, **kwargs):
migrator.sql('CREATE TABLE IF NOT EXISTS "event" ("id" VARCHAR(30) NOT NULL PRIMARY KEY, "label" VARCHAR(20) NOT NULL, "camera" VARCHAR(20) NOT NULL, "start_time" DATETIME NOT NULL, "end_time" DATETIME NOT NULL, "top_score" REAL NOT NULL, "false_positive" INTEGER NOT NULL, "zones" JSON NOT NULL, "thumbnail" TEXT NOT NULL)')
migrator.sql('CREATE INDEX IF NOT EXISTS "event_label" ON "event" ("label")')
migrator.sql('CREATE INDEX IF NOT EXISTS "event_camera" ON "event" ("camera")')
def rollback(migrator, database, fake=False, **kwargs):
pass


@@ -0,0 +1,41 @@
"""Peewee migrations -- 002_add_clip_snapshot.py.
Some examples (model - class or model name)::
> Model = migrator.orm['model_name'] # Return model in current state by name
> migrator.sql(sql) # Run custom SQL
> migrator.python(func, *args, **kwargs) # Run python code
> migrator.create_model(Model) # Create a model (could be used as decorator)
> migrator.remove_model(model, cascade=True) # Remove a model
> migrator.add_fields(model, **fields) # Add fields to a model
> migrator.change_fields(model, **fields) # Change fields
> migrator.remove_fields(model, *field_names, cascade=True)
> migrator.rename_field(model, old_field_name, new_field_name)
> migrator.rename_table(model, new_table_name)
> migrator.add_index(model, *col_names, unique=False)
> migrator.drop_index(model, *col_names)
> migrator.add_not_null(model, *field_names)
> migrator.drop_not_null(model, *field_names)
> migrator.add_default(model, field_name, default)
"""
import datetime as dt
import peewee as pw
from decimal import ROUND_HALF_EVEN
from frigate.models import Event
try:
import playhouse.postgres_ext as pw_pext
except ImportError:
pass
SQL = pw.SQL
def migrate(migrator, database, fake=False, **kwargs):
migrator.add_fields(Event, has_clip=pw.BooleanField(default=True), has_snapshot=pw.BooleanField(default=True))
def rollback(migrator, database, fake=False, **kwargs):
migrator.remove_fields(Event, ['has_clip', 'has_snapshot'])
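These files follow the peewee_migrate format; for reference, a minimal sketch of applying them by hand with its Router (the database path and migrations directory are assumptions):

from peewee import SqliteDatabase
from peewee_migrate import Router

db = SqliteDatabase('/media/frigate/frigate.db')  # path is an assumption
router = Router(db, migrate_dir='migrations')
router.run()  # applies 001 and 002 in order, recorded in the migratehistory table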


@@ -96,13 +96,25 @@ http {
root /media/frigate;
}
location / {
location /api/ {
add_header 'Access-Control-Allow-Origin' '*';
proxy_pass http://frigate_api/;
proxy_pass_request_headers on;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
location / {
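            # rewrite absolute paths in the served assets so the UI works behind
            # an ingress/reverse proxy; the X-Ingress-Path request header supplies
            # the external prefix, exposed to the app as window.baseUrl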
sub_filter 'href="/' 'href="$http_x_ingress_path/';
sub_filter 'url(/' 'url($http_x_ingress_path/';
sub_filter '"/js/' '"$http_x_ingress_path/js/';
sub_filter '<body>' '<body><script>window.baseUrl="$http_x_ingress_path";</script>';
sub_filter_types text/css application/javascript;
sub_filter_once off;
root /opt/frigate/web;
try_files $uri $uri/ /index.html;
}
}
}
@@ -119,4 +131,4 @@ rtmp {
meta copy;
}
}
}


@@ -1,152 +0,0 @@
import sys
import click
import os
import datetime
from unittest import TestCase, main
from frigate.video import process_frames, start_or_restart_ffmpeg, capture_frames, get_frame_shape
from frigate.util import DictFrameManager, SharedMemoryFrameManager, EventsPerSecond, draw_box_with_label
from frigate.motion import MotionDetector
from frigate.edgetpu import LocalObjectDetector
from frigate.objects import ObjectTracker
import multiprocessing as mp
import numpy as np
import cv2
from frigate.object_processing import COLOR_MAP, CameraState
class ProcessClip():
def __init__(self, clip_path, frame_shape, config):
self.clip_path = clip_path
self.frame_shape = frame_shape
self.camera_name = 'camera'
self.frame_manager = DictFrameManager()
# self.frame_manager = SharedMemoryFrameManager()
self.frame_queue = mp.Queue()
self.detected_objects_queue = mp.Queue()
self.camera_state = CameraState(self.camera_name, config, self.frame_manager)
def load_frames(self):
fps = EventsPerSecond()
skipped_fps = EventsPerSecond()
stop_event = mp.Event()
detection_frame = mp.Value('d', datetime.datetime.now().timestamp()+100000)
current_frame = mp.Value('d', 0.0)
ffmpeg_cmd = f"ffmpeg -hide_banner -loglevel panic -i {self.clip_path} -f rawvideo -pix_fmt rgb24 pipe:".split(" ")
ffmpeg_process = start_or_restart_ffmpeg(ffmpeg_cmd, self.frame_shape[0]*self.frame_shape[1]*self.frame_shape[2])
capture_frames(ffmpeg_process, self.camera_name, self.frame_shape, self.frame_manager, self.frame_queue, 1, fps, skipped_fps, stop_event, detection_frame, current_frame)
ffmpeg_process.wait()
ffmpeg_process.communicate()
def process_frames(self, objects_to_track=['person'], object_filters={}):
mask = np.zeros((self.frame_shape[0], self.frame_shape[1], 1), np.uint8)
mask[:] = 255
motion_detector = MotionDetector(self.frame_shape, mask)
object_detector = LocalObjectDetector(labels='/labelmap.txt')
object_tracker = ObjectTracker(10)
process_fps = mp.Value('d', 0.0)
detection_fps = mp.Value('d', 0.0)
current_frame = mp.Value('d', 0.0)
stop_event = mp.Event()
process_frames(self.camera_name, self.frame_queue, self.frame_shape, self.frame_manager, motion_detector, object_detector, object_tracker, self.detected_objects_queue,
process_fps, detection_fps, current_frame, objects_to_track, object_filters, mask, stop_event, exit_on_empty=True)
def objects_found(self, debug_path=None):
obj_detected = False
top_computed_score = 0.0
def handle_event(name, obj):
nonlocal obj_detected
nonlocal top_computed_score
if obj['computed_score'] > top_computed_score:
top_computed_score = obj['computed_score']
if not obj['false_positive']:
obj_detected = True
self.camera_state.on('new', handle_event)
self.camera_state.on('update', handle_event)
while(not self.detected_objects_queue.empty()):
camera_name, frame_time, current_tracked_objects = self.detected_objects_queue.get()
if not debug_path is None:
self.save_debug_frame(debug_path, frame_time, current_tracked_objects.values())
self.camera_state.update(frame_time, current_tracked_objects)
for obj in self.camera_state.tracked_objects.values():
print(f"{frame_time}: {obj['id']} - {obj['computed_score']} - {obj['score_history']}")
self.frame_manager.delete(self.camera_state.previous_frame_id)
return {
'object_detected': obj_detected,
'top_score': top_computed_score
}
def save_debug_frame(self, debug_path, frame_time, tracked_objects):
current_frame = self.frame_manager.get(f"{self.camera_name}{frame_time}", self.frame_shape)
# draw the bounding boxes on the frame
for obj in tracked_objects:
thickness = 2
color = (0,0,175)
if obj['frame_time'] != frame_time:
thickness = 1
color = (255,0,0)
else:
color = (255,255,0)
# draw the bounding boxes on the frame
box = obj['box']
draw_box_with_label(current_frame, box[0], box[1], box[2], box[3], obj['label'], f"{int(obj['score']*100)}% {int(obj['area'])}", thickness=thickness, color=color)
# draw the regions on the frame
region = obj['region']
draw_box_with_label(current_frame, region[0], region[1], region[2], region[3], 'region', "", thickness=1, color=(0,255,0))
cv2.imwrite(f"{os.path.join(debug_path, os.path.basename(self.clip_path))}.{int(frame_time*1000000)}.jpg", cv2.cvtColor(current_frame, cv2.COLOR_RGB2BGR))
@click.command()
@click.option("-p", "--path", required=True, help="Path to clip or directory to test.")
@click.option("-l", "--label", default='person', help="Label name to detect.")
@click.option("-t", "--threshold", default=0.85, help="Threshold value for objects.")
@click.option("--debug-path", default=None, help="Path to output frames for debugging.")
def process(path, label, threshold, debug_path):
clips = []
if os.path.isdir(path):
files = os.listdir(path)
files.sort()
clips = [os.path.join(path, file) for file in files]
elif os.path.isfile(path):
clips.append(path)
config = {
'snapshots': {
'show_timestamp': False,
'draw_zones': False
},
'zones': {},
'objects': {
'track': [label],
'filters': {
'person': {
'threshold': threshold
}
}
}
}
results = []
for c in clips:
frame_shape = get_frame_shape(c)
config['frame_shape'] = frame_shape
process_clip = ProcessClip(c, frame_shape, config)
process_clip.load_frames()
process_clip.process_frames(objects_to_track=config['objects']['track'])
results.append((c, process_clip.objects_found(debug_path)))
for result in results:
print(f"{result[0]}: {result[1]}")
positive_count = sum(1 for result in results if result[1]['object_detected'])
print(f"Objects were detected in {positive_count}/{len(results)}({positive_count/len(results)*100:.2f}%) clip(s).")
if __name__ == '__main__':
process()

1
web/.dockerignore Normal file

@@ -0,0 +1 @@
node_modules

8
web/README.md Normal file

@@ -0,0 +1,8 @@
# Frigate Web UI
## Development
1. Build the docker images in the root of the repository with `make amd64_all` (or the appropriate target for your system)
2. Create a config file in `config/`
3. Run the container: `docker run --rm --name frigate --privileged -v $PWD/config:/config:ro -v /etc/localtime:/etc/localtime:ro -p 5000:5000 frigate`
4. Run the dev UI: `cd web && npm run start`

8497
web/package-lock.json generated Normal file

File diff suppressed because it is too large.

24
web/package.json Normal file

@@ -0,0 +1,24 @@
{
"name": "frigate",
"private": true,
"scripts": {
"start": "cross-env SNOWPACK_PUBLIC_API_HOST=http://localhost:5000 snowpack dev",
"prebuild": "rimraf build",
"build": "snowpack build"
},
"dependencies": {
"@prefresh/snowpack": "^3.0.1",
"@snowpack/plugin-optimize": "^0.2.13",
"@snowpack/plugin-postcss": "^1.1.0",
"@snowpack/plugin-webpack": "^2.3.0",
"autoprefixer": "^10.2.1",
"cross-env": "^7.0.3",
"postcss": "^8.2.2",
"postcss-cli": "^8.3.1",
"preact": "^10.5.9",
"preact-router": "^3.2.1",
"rimraf": "^3.0.2",
"snowpack": "^3.0.0",
"tailwindcss": "^2.0.2"
}
}

8
web/postcss.config.js Normal file

@@ -0,0 +1,8 @@
'use strict';
module.exports = {
plugins: [
require('tailwindcss'),
require('autoprefixer'),
],
};

Binary file not shown (3.1 KiB).

Binary file not shown (6.9 KiB).

Binary file not shown (3.3 KiB).

Binary file not shown (558 B).

Binary file not shown (800 B).

BIN
web/public/favicon.ico Normal file
Binary file not shown (15 KiB).

BIN
web/public/favicon.png Normal file
Binary file not shown (12 KiB).

21
web/public/index.html Normal file

@@ -0,0 +1,21 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<link rel="icon" href="/favicon.ico" />
<title>Frigate</title>
<link rel="apple-touch-icon" sizes="180x180" href="/apple-touch-icon.png" />
<link rel="icon" type="image/png" sizes="32x32" href="/favicon-32x32.png" />
<link rel="icon" type="image/png" sizes="16x16" href="/favicon-16x16.png" />
<link rel="manifest" href="/site.webmanifest" />
<link rel="mask-icon" href="/safari-pinned-tab.svg" color="#3b82f7" />
<meta name="msapplication-TileColor" content="#3b82f7" />
<meta name="theme-color" content="#ff0000" />
</head>
<body>
<div id="root"></div>
<noscript>You need to enable JavaScript to run this app.</noscript>
<script type="module" src="/dist/index.js"></script>
</body>
</html>

Binary file not shown (2.6 KiB).


@@ -0,0 +1,46 @@
<?xml version="1.0" standalone="no"?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 20010904//EN"
"http://www.w3.org/TR/2001/REC-SVG-20010904/DTD/svg10.dtd">
<svg version="1.0" xmlns="http://www.w3.org/2000/svg"
width="888.000000pt" height="888.000000pt" viewBox="0 0 888.000000 888.000000"
preserveAspectRatio="xMidYMid meet">
<metadata>
Created by potrace 1.11, written by Peter Selinger 2001-2013
</metadata>
<g transform="translate(0.000000,888.000000) scale(0.100000,-0.100000)"
fill="#000000" stroke="none">
<path d="M8228 8865 c-2 -2 -25 -6 -53 -9 -38 -5 -278 -56 -425 -91 -33 -7
-381 -98 -465 -121 -49 -14 -124 -34 -165 -45 -67 -18 -485 -138 -615 -176
-50 -14 -106 -30 -135 -37 -8 -2 -35 -11 -60 -19 -25 -8 -85 -27 -135 -42 -49
-14 -101 -31 -115 -36 -14 -5 -34 -11 -45 -13 -11 -3 -65 -19 -120 -36 -55
-18 -127 -40 -160 -50 -175 -53 -247 -77 -550 -178 -364 -121 -578 -200 -820
-299 -88 -36 -214 -88 -280 -115 -66 -27 -129 -53 -140 -58 -11 -5 -67 -29
-125 -54 -342 -144 -535 -259 -579 -343 -34 -66 7 -145 156 -299 229 -238 293
-316 340 -413 38 -80 41 -152 10 -281 -57 -234 -175 -543 -281 -732 -98 -174
-172 -239 -341 -297 -116 -40 -147 -52 -210 -80 -107 -49 -179 -107 -290 -236
-51 -59 -179 -105 -365 -131 -19 -2 -48 -7 -65 -9 -16 -3 -50 -8 -75 -11 -69
-9 -130 -39 -130 -63 0 -24 31 -46 78 -56 18 -4 139 -8 270 -10 250 -4 302
-11 335 -44 19 -18 19 -23 7 -46 -19 -36 -198 -121 -490 -233 -850 -328 -914
-354 -1159 -473 -185 -90 -337 -186 -395 -249 -60 -65 -67 -107 -62 -350 3
-113 7 -216 10 -230 3 -14 7 -52 10 -85 7 -70 14 -128 21 -170 2 -16 7 -48 10
-70 3 -22 11 -64 16 -94 6 -30 12 -64 14 -75 1 -12 5 -34 9 -51 3 -16 8 -39
10 -50 12 -57 58 -258 71 -310 9 -33 18 -69 20 -79 25 -110 138 -416 216 -582
21 -47 39 -87 39 -90 0 -7 217 -438 261 -521 109 -201 293 -501 347 -564 11
-13 37 -44 56 -68 69 -82 126 -109 160 -75 26 25 14 65 -48 164 -138 218 -142
245 -138 800 2 206 4 488 5 625 1 138 -1 293 -6 345 -28 345 -28 594 -1 760
12 69 54 187 86 235 33 52 188 212 293 302 98 84 108 93 144 121 19 15 52 42
75 61 78 64 302 229 426 313 248 169 483 297 600 326 53 14 205 6 365 -17 33
-5 155 -8 270 -6 179 3 226 7 316 28 58 13 140 25 182 26 82 2 120 6 217 22
73 12 97 16 122 18 12 1 23 21 38 70 l20 68 74 -17 c81 -20 155 -30 331 -45
69 -6 132 -8 715 -20 484 -11 620 -8 729 16 85 19 131 63 98 96 -25 26 -104
34 -302 32 -373 -2 -408 -1 -471 26 -90 37 2 102 171 120 33 3 76 8 95 10 19
2 71 7 115 10 243 17 267 20 338 37 145 36 47 102 -203 137 -136 19 -262 25
-490 22 -124 -2 -362 -4 -530 -4 l-305 -1 -56 26 c-65 31 -171 109 -238 176
-52 51 -141 173 -141 191 0 6 -6 22 -14 34 -18 27 -54 165 -64 244 -12 98 -6
322 12 414 9 47 29 127 45 176 26 80 58 218 66 278 1 11 6 47 10 80 3 33 8 70
10 83 2 13 7 53 11 90 3 37 8 74 9 83 22 118 22 279 -1 464 -20 172 -20 172
70 238 108 79 426 248 666 355 25 11 77 34 115 52 92 42 443 191 570 242 55
22 109 44 120 48 24 11 130 52 390 150 199 75 449 173 500 195 17 7 118 50
225 95 237 100 333 143 490 220 229 113 348 191 337 223 -3 10 -70 20 -79 12z"/>
</g>
</svg>



@@ -0,0 +1,19 @@
{
"name": "",
"short_name": "",
"icons": [
{
"src": "/android-chrome-192x192.png",
"sizes": "192x192",
"type": "image/png"
},
{
"src": "/android-chrome-512x512.png",
"sizes": "512x512",
"type": "image/png"
}
],
"theme_color": "#ff0000",
"background_color": "#ff0000",
"display": "standalone"
}

31
web/snowpack.config.js Normal file

@@ -0,0 +1,31 @@
'use strict';
module.exports = {
mount: {
public: { url: '/', static: true },
src: { url: '/dist' },
},
plugins: [
'@snowpack/plugin-postcss',
'@prefresh/snowpack',
[
'@snowpack/plugin-optimize',
{
preloadModules: true,
},
],
[
'@snowpack/plugin-webpack',
{
sourceMap: true,
},
],
],
routes: [{ match: 'routes', src: '.*', dest: '/index.html' }],
packageOptions: {
sourcemap: false,
},
buildOptions: {
sourcemap: true,
},
};

43
web/src/App.jsx Normal file

@@ -0,0 +1,43 @@
import { h } from 'preact';
import Camera from './Camera';
import CameraMap from './CameraMap';
import Cameras from './Cameras';
import Debug from './Debug';
import Event from './Event';
import Events from './Events';
import { Router } from 'preact-router';
import Sidebar from './Sidebar';
import { ApiHost, Config } from './context';
import { useContext, useEffect, useState } from 'preact/hooks';
export default function App() {
const apiHost = useContext(ApiHost);
const [config, setConfig] = useState(null);
  useEffect(() => {
    // an async function can't be passed directly to useEffect because its
    // returned promise would be treated as a cleanup function
    async function fetchConfig() {
      const response = await fetch(`${apiHost}/api/config`);
      const data = response.ok ? await response.json() : {};
      setConfig(data);
    }
    fetchConfig();
  }, []);
return !config ? (
<div />
) : (
<Config.Provider value={config}>
<div className="md:flex flex-col md:flex-row md:min-h-screen w-full bg-gray-100 dark:bg-gray-800 text-gray-900 dark:text-white">
<Sidebar />
<div className="p-4 min-w-0">
<Router>
<CameraMap path="/cameras/:camera/editor" />
<Camera path="/cameras/:camera" />
<Event path="/events/:eventId" />
<Events path="/events" />
<Debug path="/debug" />
<Cameras default path="/" />
</Router>
</div>
</div>
</Config.Provider>
);
}

68
web/src/Camera.jsx Normal file

@@ -0,0 +1,68 @@
import { h } from 'preact';
import AutoUpdatingCameraImage from './components/AutoUpdatingCameraImage';
import Box from './components/Box';
import Heading from './components/Heading';
import Link from './components/Link';
import Switch from './components/Switch';
import { route } from 'preact-router';
import { useCallback, useContext } from 'preact/hooks';
import { ApiHost, Config } from './context';
export default function Camera({ camera, url }) {
const config = useContext(Config);
const apiHost = useContext(ApiHost);
if (!(camera in config.cameras)) {
return <div>{`No camera named ${camera}`}</div>;
}
const cameraConfig = config.cameras[camera];
const { pathname, searchParams } = new URL(`${window.location.protocol}//${window.location.host}${url}`);
const searchParamsString = searchParams.toString();
const handleSetOption = useCallback(
(id, value) => {
searchParams.set(id, value ? 1 : 0);
route(`${pathname}?${searchParams.toString()}`, true);
},
[searchParams]
);
function getBoolean(id) {
return Boolean(parseInt(searchParams.get(id), 10));
}
return (
<div className="space-y-4">
<Heading size="2xl">{camera}</Heading>
<Box>
<AutoUpdatingCameraImage camera={camera} searchParams={searchParamsString} />
</Box>
<Box className="grid grid-cols-2 md:grid-cols-3 lg:grid-cols-4 gap-4 p-4">
<Switch checked={getBoolean('bbox')} id="bbox" label="Bounding box" onChange={handleSetOption} />
<Switch checked={getBoolean('timestamp')} id="timestamp" label="Timestamp" onChange={handleSetOption} />
<Switch checked={getBoolean('zones')} id="zones" label="Zones" onChange={handleSetOption} />
<Switch checked={getBoolean('mask')} id="mask" label="Masks" onChange={handleSetOption} />
<Switch checked={getBoolean('motion')} id="motion" label="Motion boxes" onChange={handleSetOption} />
<Switch checked={getBoolean('regions')} id="regions" label="Regions" onChange={handleSetOption} />
<Link href={`/cameras/${camera}/editor`}>Mask & Zone creator</Link>
</Box>
<div className="space-y-4">
<Heading size="sm">Tracked objects</Heading>
<div className="grid grid-cols-3 md:grid-cols-4 gap-4">
{cameraConfig.objects.track.map((objectType) => {
return (
<Box key={objectType} hover href={`/events?camera=${camera}&label=${objectType}`}>
<Heading size="sm">{objectType}</Heading>
<img src={`${apiHost}/api/${camera}/${objectType}/best.jpg?crop=1&h=150`} />
</Box>
);
})}
</div>
</div>
</div>
);
}

598
web/src/CameraMap.jsx Normal file

@@ -0,0 +1,598 @@
import { h } from 'preact';
import Box from './components/Box';
import Button from './components/Button';
import Heading from './components/Heading';
import Switch from './components/Switch';
import { route } from 'preact-router';
import { useCallback, useContext, useEffect, useMemo, useRef, useState } from 'preact/hooks';
import { ApiHost, Config } from './context';
export default function CameraMasks({ camera, url }) {
const config = useContext(Config);
const apiHost = useContext(ApiHost);
const imageRef = useRef(null);
const [imageScale, setImageScale] = useState(1);
const [snap, setSnap] = useState(true);
if (!(camera in config.cameras)) {
return <div>{`No camera named ${camera}`}</div>;
}
const cameraConfig = config.cameras[camera];
const {
width,
height,
motion: { mask: motionMask },
objects: { filters: objectFilters },
zones,
} = cameraConfig;
useEffect(() => {
if (!imageRef.current) {
return;
}
const scaledWidth = imageRef.current.width;
const scale = scaledWidth / width;
setImageScale(scale);
}, [imageRef.current, setImageScale]);
const [motionMaskPoints, setMotionMaskPoints] = useState(
Array.isArray(motionMask)
? motionMask.map((mask) => getPolylinePoints(mask))
: motionMask
? [getPolylinePoints(motionMask)]
: []
);
const [zonePoints, setZonePoints] = useState(
Object.keys(zones).reduce((memo, zone) => ({ ...memo, [zone]: getPolylinePoints(zones[zone].coordinates) }), {})
);
const [objectMaskPoints, setObjectMaskPoints] = useState(
Object.keys(objectFilters).reduce(
(memo, name) => ({
...memo,
[name]: Array.isArray(objectFilters[name].mask)
? objectFilters[name].mask.map((mask) => getPolylinePoints(mask))
: objectFilters[name].mask
? [getPolylinePoints(objectFilters[name].mask)]
: [],
}),
{}
)
);
const [editing, setEditing] = useState({ set: motionMaskPoints, key: 0, fn: setMotionMaskPoints });
const handleUpdateEditable = useCallback(
(newPoints) => {
let newSet;
if (Array.isArray(editing.set)) {
newSet = [...editing.set];
newSet[editing.key] = newPoints;
} else if (editing.subkey !== undefined) {
newSet = { ...editing.set };
newSet[editing.key][editing.subkey] = newPoints;
} else {
newSet = { ...editing.set, [editing.key]: newPoints };
}
editing.set = newSet;
editing.fn(newSet);
},
[editing]
);
const handleSelectEditable = useCallback(
(name) => {
setEditing(name);
},
[setEditing]
);
const handleRemoveEditable = useCallback(
(name) => {
const filteredZonePoints = Object.keys(zonePoints)
.filter((zoneName) => zoneName !== name)
.reduce((memo, name) => {
memo[name] = zonePoints[name];
return memo;
}, {});
setZonePoints(filteredZonePoints);
},
[zonePoints, setZonePoints]
);
// Motion mask methods
const handleAddMask = useCallback(() => {
const newMotionMaskPoints = [...motionMaskPoints, []];
setMotionMaskPoints(newMotionMaskPoints);
setEditing({ set: newMotionMaskPoints, key: newMotionMaskPoints.length - 1, fn: setMotionMaskPoints });
}, [motionMaskPoints, setMotionMaskPoints]);
const handleEditMask = useCallback(
(key) => {
setEditing({ set: motionMaskPoints, key, fn: setMotionMaskPoints });
},
[setEditing, motionMaskPoints, setMotionMaskPoints]
);
const handleRemoveMask = useCallback(
(key) => {
const newMotionMaskPoints = [...motionMaskPoints];
newMotionMaskPoints.splice(key, 1);
setMotionMaskPoints(newMotionMaskPoints);
},
[motionMaskPoints, setMotionMaskPoints]
);
const handleCopyMotionMasks = useCallback(async () => {
await window.navigator.clipboard.writeText(` motion:
mask:
${motionMaskPoints.map((mask, i) => ` - ${polylinePointsToPolyline(mask)}`).join('\n')}`);
}, [motionMaskPoints]);
// Zone methods
const handleEditZone = useCallback(
(key) => {
setEditing({ set: zonePoints, key, fn: setZonePoints });
},
[setEditing, zonePoints, setZonePoints]
);
const handleAddZone = useCallback(() => {
const n = Object.keys(zonePoints).filter((name) => name.startsWith('zone_')).length;
const zoneName = `zone_${n}`;
const newZonePoints = { ...zonePoints, [zoneName]: [] };
setZonePoints(newZonePoints);
setEditing({ set: newZonePoints, key: zoneName, fn: setZonePoints });
}, [zonePoints, setZonePoints]);
const handleRemoveZone = useCallback(
(key) => {
const newZonePoints = { ...zonePoints };
delete newZonePoints[key];
setZonePoints(newZonePoints);
},
[zonePoints, setZonePoints]
);
const handleCopyZones = useCallback(async () => {
await window.navigator.clipboard.writeText(` zones:
${Object.keys(zonePoints)
.map(
(zoneName) => ` ${zoneName}:
coordinates: ${polylinePointsToPolyline(zonePoints[zoneName])}`
)
.join('\n')}`);
}, [zonePoints]);
// Object methods
const handleEditObjectMask = useCallback(
(key, subkey) => {
setEditing({ set: objectMaskPoints, key, subkey, fn: setObjectMaskPoints });
},
[setEditing, objectMaskPoints, setObjectMaskPoints]
);
const handleAddObjectMask = useCallback(() => {
const n = Object.keys(objectMaskPoints).filter((name) => name.startsWith('object_')).length;
const newObjectName = `object_${n}`;
const newObjectMaskPoints = { ...objectMaskPoints, [newObjectName]: [] };
setObjectMaskPoints(newObjectMaskPoints);
setEditing({ set: newObjectMaskPoints, key: newObjectName, subkey: 0, fn: setObjectMaskPoints });
}, [objectMaskPoints, setObjectMaskPoints, setEditing]);
const handleRemoveObjectMask = useCallback(
(key, subkey) => {
const newObjectMaskPoints = { ...objectMaskPoints };
delete newObjectMaskPoints[key];
setObjectMaskPoints(newObjectMaskPoints);
},
[objectMaskPoints, setObjectMaskPoints]
);
const handleCopyObjectMasks = useCallback(async () => {
await window.navigator.clipboard.writeText(` objects:
filters:
${Object.keys(objectMaskPoints)
.map((objectName) =>
objectMaskPoints[objectName].length
? ` ${objectName}:
mask: ${polylinePointsToPolyline(objectMaskPoints[objectName])}`
: ''
)
.filter(Boolean)
.join('\n')}`);
}, [objectMaskPoints]);
const handleChangeSnap = useCallback(
(id, value) => {
setSnap(value);
},
[setSnap]
);
return (
<div class="flex-col space-y-4">
<Heading size="2xl">{camera} mask & zone creator</Heading>
<Box>
<p>
This tool can help you create masks & zones for your {camera} camera. When done, copy each mask configuration
          into your <code className="font-mono">config.yml</code> file, then restart your Frigate instance to save your
          changes.
</p>
</Box>
<Box className="space-y-4">
<div className="relative">
<img ref={imageRef} className="w-full" src={`${apiHost}/api/${camera}/latest.jpg`} />
<EditableMask
onChange={handleUpdateEditable}
points={editing.subkey ? editing.set[editing.key][editing.subkey] : editing.set[editing.key]}
scale={imageScale}
snap={snap}
width={width}
height={height}
/>
</div>
<Switch checked={snap} label="Snap to edges" onChange={handleChangeSnap} />
</Box>
<div class="flex-col space-y-4">
<MaskValues
editing={editing}
title="Motion masks"
onCopy={handleCopyMotionMasks}
onCreate={handleAddMask}
onEdit={handleEditMask}
onRemove={handleRemoveMask}
points={motionMaskPoints}
yamlPrefix={'motion:\n mask:'}
yamlKeyPrefix={maskYamlKeyPrefix}
/>
<MaskValues
editing={editing}
title="Zones"
onCopy={handleCopyZones}
onCreate={handleAddZone}
onEdit={handleEditZone}
onRemove={handleRemoveZone}
points={zonePoints}
yamlPrefix="zones:"
yamlKeyPrefix={zoneYamlKeyPrefix}
/>
<MaskValues
isMulti
editing={editing}
title="Object masks"
onCopy={handleCopyObjectMasks}
onCreate={handleAddObjectMask}
onEdit={handleEditObjectMask}
onRemove={handleRemoveObjectMask}
points={objectMaskPoints}
yamlPrefix={'objects:\n filters:'}
yamlKeyPrefix={objectYamlKeyPrefix}
/>
</div>
</div>
);
}
function maskYamlKeyPrefix(points) {
return ` - `;
}
function zoneYamlKeyPrefix(points, key) {
return ` ${key}:
coordinates: `;
}
function objectYamlKeyPrefix(points, key, subkey) {
return ` - `;
}
const MaskInset = 20;
function EditableMask({ onChange, points, scale, snap, width, height }) {
if (!points) {
return null;
}
const boundingRef = useRef(null);
function boundedSize(value, maxValue) {
const newValue = Math.min(Math.max(0, Math.round(value)), maxValue);
if (snap) {
if (newValue <= MaskInset) {
return 0;
} else if (maxValue - newValue <= MaskInset) {
return maxValue;
}
}
return newValue;
}
const handleMovePoint = useCallback(
(index, newX, newY) => {
if (newX < 0 && newY < 0) {
return;
}
let x = boundedSize(newX / scale, width, snap);
let y = boundedSize(newY / scale, height, snap);
const newPoints = [...points];
newPoints[index] = [x, y];
onChange(newPoints);
},
[scale, points, snap]
);
// Add a new point between the closest two other points
const handleAddPoint = useCallback(
(event) => {
const { offsetX, offsetY } = event;
const scaledX = boundedSize((offsetX - MaskInset) / scale, width, snap);
const scaledY = boundedSize((offsetY - MaskInset) / scale, height, snap);
const newPoint = [scaledX, scaledY];
const { index } = points.reduce(
(result, point, i) => {
const nextPoint = points.length === i + 1 ? points[0] : points[i + 1];
const distance0 = Math.sqrt(Math.pow(point[0] - newPoint[0], 2) + Math.pow(point[1] - newPoint[1], 2));
const distance1 = Math.sqrt(Math.pow(point[0] - nextPoint[0], 2) + Math.pow(point[1] - nextPoint[1], 2));
const distance = distance0 + distance1;
return distance < result.distance ? { distance, index: i } : result;
},
{ distance: Infinity, index: -1 }
);
const newPoints = [...points];
newPoints.splice(index, 0, newPoint);
onChange(newPoints);
},
[scale, points, onChange, snap]
);
const handleRemovePoint = useCallback(
(index) => {
const newPoints = [...points];
newPoints.splice(index, 1);
onChange(newPoints);
},
[points, onChange]
);
const scaledPoints = useMemo(() => scalePolylinePoints(points, scale), [points, scale]);
return (
<div className="absolute" style={`inset: -${MaskInset}px`}>
{!scaledPoints
? null
: scaledPoints.map(([x, y], i) => (
<PolyPoint
boundingRef={boundingRef}
index={i}
onMove={handleMovePoint}
onRemove={handleRemovePoint}
x={x + MaskInset}
y={y + MaskInset}
/>
))}
<div className="absolute inset-0 right-0 bottom-0" onclick={handleAddPoint} ref={boundingRef} />
<svg width="100%" height="100%" className="absolute pointer-events-none" style={`inset: ${MaskInset}px`}>
{!scaledPoints ? null : (
<g>
<polyline points={polylinePointsToPolyline(scaledPoints)} fill="rgba(244,0,0,0.5)" />
</g>
)}
</svg>
</div>
);
}
function MaskValues({
isMulti = false,
editing,
title,
onCopy,
onCreate,
onEdit,
onRemove,
points,
yamlPrefix,
yamlKeyPrefix,
}) {
const [showButtons, setShowButtons] = useState(false);
const handleMousein = useCallback(() => {
setShowButtons(true);
}, [setShowButtons]);
const handleMouseout = useCallback(
(event) => {
const el = event.toElement || event.relatedTarget;
if (!el || el.parentNode === event.target) {
return;
}
setShowButtons(false);
},
[setShowButtons]
);
const handleEdit = useCallback(
(event) => {
const { key, subkey } = event.target.dataset;
onEdit(key, subkey);
},
[onEdit]
);
const handleRemove = useCallback(
(event) => {
const { key, subkey } = event.target.dataset;
onRemove(key, subkey);
},
[onRemove]
);
return (
<Box className="overflow-hidden" onmouseover={handleMousein} onmouseout={handleMouseout}>
<div class="flex space-x-4">
<Heading className="flex-grow self-center" size="base">
{title}
</Heading>
<Button onClick={onCopy}>Copy</Button>
<Button onClick={onCreate}>Add</Button>
</div>
<pre class="relative overflow-auto font-mono text-gray-900 dark:text-gray-100 rounded bg-gray-100 dark:bg-gray-800 p-2">
{yamlPrefix}
{Object.keys(points).map((mainkey) => {
if (isMulti) {
return (
<div key={mainkey}>
{` ${mainkey}:\n mask:\n`}
{points[mainkey].map((item, subkey) => (
<Item
key={`${mainkey}-${subkey}`}
mainkey={mainkey}
subkey={subkey}
editing={editing}
handleEdit={handleEdit}
points={item}
showButtons={showButtons}
handleRemove={handleRemove}
yamlKeyPrefix={yamlKeyPrefix}
/>
))}
</div>
);
} else {
return (
<Item
key={mainkey}
mainkey={mainkey}
editing={editing}
handleEdit={handleEdit}
points={points[mainkey]}
showButtons={showButtons}
handleRemove={handleRemove}
yamlKeyPrefix={yamlKeyPrefix}
/>
);
}
})}
</pre>
</Box>
);
}
function Item({ mainkey, subkey, editing, handleEdit, points, showButtons, handleRemove, yamlKeyPrefix }) {
return (
<span
data-key={mainkey}
data-subkey={subkey}
className={`block hover:text-blue-400 cursor-pointer relative ${
editing.key === mainkey && editing.subkey === subkey ? 'text-blue-800 dark:text-blue-600' : ''
}`}
onClick={handleEdit}
title="Click to edit"
>
{`${yamlKeyPrefix(points, mainkey, subkey)}${polylinePointsToPolyline(points)}`}
{showButtons ? (
<Button
className="absolute top-0 right-0"
color="red"
data-key={mainkey}
data-subkey={subkey}
onClick={handleRemove}
>
Remove
</Button>
) : null}
</span>
);
}
function getPolylinePoints(polyline) {
if (!polyline) {
return;
}
return polyline.split(',').reduce((memo, point, i) => {
if (i % 2) {
memo[memo.length - 1].push(parseInt(point, 10));
} else {
memo.push([parseInt(point, 10)]);
}
return memo;
}, []);
}
function scalePolylinePoints(polylinePoints, scale) {
if (!polylinePoints) {
return;
}
return polylinePoints.map(([x, y]) => [Math.round(x * scale), Math.round(y * scale)]);
}
function polylinePointsToPolyline(polylinePoints) {
if (!polylinePoints) {
return;
}
return polylinePoints.reduce((memo, [x, y]) => `${memo}${x},${y},`, '').replace(/,$/, '');
}
const PolyPointRadius = 10;
function PolyPoint({ boundingRef, index, x, y, onMove, onRemove }) {
const [hidden, setHidden] = useState(false);
const handleDragOver = useCallback(
(event) => {
if (
!boundingRef.current ||
(event.target !== boundingRef.current && !boundingRef.current.contains(event.target))
) {
return;
}
onMove(index, event.layerX - PolyPointRadius * 2, event.layerY - PolyPointRadius * 2);
},
[onMove, index, boundingRef.current]
);
const handleDragStart = useCallback(() => {
boundingRef.current && boundingRef.current.addEventListener('dragover', handleDragOver, false);
setHidden(true);
}, [setHidden, boundingRef.current, handleDragOver]);
const handleDragEnd = useCallback(() => {
boundingRef.current && boundingRef.current.removeEventListener('dragover', handleDragOver);
setHidden(false);
}, [setHidden, boundingRef.current, handleDragOver]);
const handleRightClick = useCallback(
(event) => {
event.preventDefault();
onRemove(index);
},
[onRemove, index]
);
const handleClick = useCallback((event) => {
event.stopPropagation();
event.preventDefault();
}, []);
return (
<div
className={`${hidden ? 'opacity-0' : ''} bg-gray-900 rounded-full absolute z-20`}
style={`top: ${y - PolyPointRadius}px; left: ${x - PolyPointRadius}px; width: 20px; height: 20px;`}
draggable
onclick={handleClick}
oncontextmenu={handleRightClick}
ondragstart={handleDragStart}
ondragend={handleDragEnd}
/>
);
}

38
web/src/Cameras.jsx Normal file

@@ -0,0 +1,38 @@
import { h } from 'preact';
import Box from './components/Box';
import Heading from './components/Heading';
import { useContext } from 'preact/hooks';
import { ApiHost, Config } from './context';
export default function Cameras() {
const config = useContext(Config);
if (!config.cameras) {
return <p>loading</p>;
}
return (
<div className="grid lg:grid-cols-2 md:grid-cols-1 gap-4">
{Object.keys(config.cameras).map((camera) => (
<Camera key={camera} name={camera} />
))}
</div>
);
}
function Camera({ name }) {
const apiHost = useContext(ApiHost);
const href = `/cameras/${name}`;
return (
<Box hover href={href}>
<Heading size="base">{name}</Heading>
<img className="w-full" src={`${apiHost}/api/${name}/latest.jpg`} />
</Box>
);
}

97
web/src/Debug.jsx Normal file

@@ -0,0 +1,97 @@
import { h } from 'preact';
import Heading from './components/Heading';
import Link from './components/Link';
import { ApiHost, Config } from './context';
import { Table, Tbody, Thead, Tr, Th, Td } from './components/Table';
import { useCallback, useContext, useEffect, useState } from 'preact/hooks';
export default function Debug() {
const apiHost = useContext(ApiHost);
const config = useContext(Config);
const [stats, setStats] = useState({});
const [timeoutId, setTimeoutId] = useState(null);
const fetchStats = useCallback(async () => {
const statsResponse = await fetch(`${apiHost}/api/stats`);
const data = statsResponse.ok ? await statsResponse.json() : {};
setStats(data);
setTimeoutId(setTimeout(fetchStats, 1000));
}, [apiHost]);
useEffect(() => {
fetchStats();
}, []);
useEffect(() => {
return () => {
clearTimeout(timeoutId);
};
}, [timeoutId]);
const { detectors, detection_fps, service, ...cameras } = stats;
if (!service) {
return 'loading…';
}
const detectorNames = Object.keys(detectors);
const detectorDataKeys = Object.keys(detectors[detectorNames[0]]);
const cameraNames = Object.keys(cameras);
const cameraDataKeys = Object.keys(cameras[cameraNames[0]]);
return (
<div>
<Heading>
Debug <span className="text-sm">{service.version}</span>
</Heading>
<Table className="w-full">
<Thead>
<Tr>
<Th>detector</Th>
{detectorDataKeys.map((name) => (
<Th key={name}>{name.replace(/_/g, ' ')}</Th>
))}
</Tr>
</Thead>
<Tbody>
{detectorNames.map((detector, i) => (
<Tr key={detector} index={i}>
<Td>{detector}</Td>
{detectorDataKeys.map((name) => (
<Td key={`${name}-${detector}`}>{detectors[detector][name]}</Td>
))}
</Tr>
))}
</Tbody>
</Table>
<Table className="w-full">
<Thead>
<Tr>
<Th>camera</Th>
{cameraDataKeys.map((name) => (
<Th key={name}>{name.replace(/_/g, ' ')}</Th>
))}
</Tr>
</Thead>
<Tbody>
{cameraNames.map((camera, i) => (
<Tr key={camera} index={i}>
<Td>
<Link href={`/cameras/${camera}`}>{camera}</Link>
</Td>
{cameraDataKeys.map((name) => (
<Td key={`${name}-${camera}`}>{cameras[camera][name]}</Td>
))}
</Tr>
))}
</Tbody>
</Table>
<Heading size="sm">Config</Heading>
<pre className="font-mono overflow-y-scroll overflow-x-scroll max-h-96 rounded bg-white dark:bg-gray-900">
{JSON.stringify(config, null, 2)}
</pre>
</div>
);
}

90
web/src/Event.jsx Normal file

@@ -0,0 +1,90 @@
import { h, Fragment } from 'preact';
import { ApiHost } from './context';
import Box from './components/Box';
import Heading from './components/Heading';
import Link from './components/Link';
import { Table, Thead, Tbody, Th, Tr, Td } from './components/Table';
import { useContext, useEffect, useState } from 'preact/hooks';
export default function Event({ eventId }) {
const apiHost = useContext(ApiHost);
const [data, setData] = useState(null);
useEffect(() => {
// useEffect callbacks must return a cleanup function (or nothing), never a promise.
async function fetchData() {
const response = await fetch(`${apiHost}/api/events/${eventId}`);
setData(response.ok ? await response.json() : null);
}
fetchData();
}, [apiHost, eventId]);
if (!data) {
return (
<div>
<Heading>{eventId}</Heading>
<p>loading</p>
</div>
);
}
const startTime = new Date(data.start_time * 1000);
const endTime = new Date(data.end_time * 1000);
return (
<div className="space-y-4">
<Heading>
{data.camera} {data.label} <span className="text-sm">{startTime.toLocaleString()}</span>
</Heading>
<Box>
{data.has_clip ? (
<Fragment>
<Heading size="sm">Clip</Heading>
<video className="w-100" src={`${apiHost}/clips/${data.camera}-${eventId}.mp4`} controls />
</Fragment>
) : (
<p>No clip available</p>
)}
</Box>
<Box>
<Heading size="sm">{data.has_snapshot ? 'Best image' : 'Thumbnail'}</Heading>
<img
src={
data.has_snapshot
? `${apiHost}/clips/${data.camera}-${eventId}.jpg`
: `data:image/jpeg;base64,${data.thumbnail}`
}
alt={`${data.label} at ${(data.top_score * 100).toFixed(1)}% confidence`}
/>
</Box>
<Table>
<Thead>
<Tr>
<Th>Key</Th>
<Th>Value</Th>
</Tr>
</Thead>
<Tbody>
<Tr>
<Td>Camera</Td>
<Td>
<Link href={`/cameras/${data.camera}`}>{data.camera}</Link>
</Td>
</Tr>
<Tr index={1}>
<Td>Timeframe</Td>
<Td>
{startTime.toLocaleString()} to {endTime.toLocaleString()}
</Td>
</Tr>
<Tr>
<Td>Score</Td>
<Td>{(data.top_score * 100).toFixed(2)}%</Td>
</Tr>
<Tr index={1}>
<Td>Zones</Td>
<Td>{data.zones.join(', ')}</Td>
</Tr>
</Tbody>
</Table>
</div>
);
}

120
web/src/Events.jsx Normal file

@@ -0,0 +1,120 @@
import { h } from 'preact';
import { ApiHost } from './context';
import Box from './components/Box';
import Heading from './components/Heading';
import Link from './components/Link';
import { Table, Thead, Tbody, Th, Tr, Td } from './components/Table';
import { useContext, useEffect, useState } from 'preact/hooks';
export default function Events({ url } = {}) {
const apiHost = useContext(ApiHost);
const [events, setEvents] = useState([]);
const searchParams = new URL(`${window.location.protocol}//${window.location.host}${url || '/events'}`).searchParams;
const searchParamsString = searchParams.toString();
useEffect(() => {
async function fetchEvents() {
const response = await fetch(`${apiHost}/api/events?${searchParamsString}`);
// Default to an empty array: the render below maps over events.
setEvents(response.ok ? await response.json() : []);
}
fetchEvents();
}, [apiHost, searchParamsString]);
const searchKeys = Array.from(searchParams.keys());
return (
<div className="space-y-4">
<Heading>Events</Heading>
{searchKeys.length ? (
<Box>
<Heading size="sm">Filters</Heading>
<div className="flex flex-wrap space-x-2">
{searchKeys.map((filterKey) => (
<UnFilterable
key={filterKey}
paramName={filterKey}
searchParams={searchParamsString}
name={`${filterKey}: ${searchParams.get(filterKey)}`}
/>
))}
</div>
</Box>
) : null}
<Box className="min-w-0 overflow-auto">
<Table>
<Thead>
<Tr>
<Th></Th>
<Th>Camera</Th>
<Th>Label</Th>
<Th>Score</Th>
<Th>Zones</Th>
<Th>Date</Th>
<Th>Start</Th>
<Th>End</Th>
</Tr>
</Thead>
<Tbody>
{events.map(
(
{ camera, id, label, start_time: startTime, end_time: endTime, thumbnail, top_score: score, zones },
i
) => {
const start = new Date(startTime * 1000);
const end = new Date(endTime * 1000);
return (
<Tr key={id} index={i}>
<Td>
<a href={`/events/${id}`}>
<img className="w-32 max-w-none" src={`data:image/jpeg;base64,${thumbnail}`} />
</a>
</Td>
<Td>
<Filterable searchParams={searchParamsString} paramName="camera" name={camera} />
</Td>
<Td>
<Filterable searchParams={searchParamsString} paramName="label" name={label} />
</Td>
<Td>{(score * 100).toFixed(2)}%</Td>
<Td>
<ul>
{zones.map((zone) => (
<li key={zone}>
<Filterable searchParams={searchParamsString} paramName="zone" name={zone} />
</li>
))}
</ul>
</Td>
<Td>{start.toLocaleDateString()}</Td>
<Td>{start.toLocaleTimeString()}</Td>
<Td>{end.toLocaleTimeString()}</Td>
</Tr>
);
}
)}
</Tbody>
</Table>
</Box>
</div>
);
}
function Filterable({ searchParams, paramName, name }) {
const params = new URLSearchParams(searchParams);
params.set(paramName, name);
return <Link href={`?${params.toString()}`}>{name}</Link>;
}
function UnFilterable({ searchParams, paramName, name }) {
const params = new URLSearchParams(searchParams);
params.delete(paramName);
return (
<a
className="bg-gray-700 text-white px-3 py-1 rounded-md hover:bg-gray-300 hover:text-gray-900 dark:bg-gray-300 dark:text-gray-900 dark:hover:bg-gray-700 dark:hover:text-white"
href={`?${params.toString()}`}
>
{name}
</a>
);
}

87
web/src/Sidebar.jsx Normal file

@@ -0,0 +1,87 @@
import { h } from 'preact';
import Link from './components/Link';
import { Link as RouterLink } from 'preact-router/match';
import { useCallback, useState } from 'preact/hooks';
function HamburgerIcon() {
return (
<svg fill="currentColor" viewBox="0 0 20 20" className="w-6 h-6">
<path
fill-rule="evenodd"
d="M3 5a1 1 0 011-1h12a1 1 0 110 2H4a1 1 0 01-1-1zM3 10a1 1 0 011-1h12a1 1 0 110 2H4a1 1 0 01-1-1zM9 15a1 1 0 011-1h6a1 1 0 110 2h-6a1 1 0 01-1-1z"
clip-rule="evenodd"
></path>
</svg>
);
}
function CloseIcon() {
return (
<svg fill="currentColor" viewBox="0 0 20 20" className="w-6 h-6">
<path
fill-rule="evenodd"
d="M4.293 4.293a1 1 0 011.414 0L10 8.586l4.293-4.293a1 1 0 111.414 1.414L11.414 10l4.293 4.293a1 1 0 01-1.414 1.414L10 11.414l-4.293 4.293a1 1 0 01-1.414-1.414L8.586 10 4.293 5.707a1 1 0 010-1.414z"
clip-rule="evenodd"
></path>
</svg>
);
}
function NavLink({ className = '', href, text }) {
const external = href.startsWith('http');
const El = external ? Link : RouterLink;
const props = external ? { rel: 'noopener nofollow', target: '_blank' } : {};
return (
<El
activeClassName="bg-gray-200 dark:bg-gray-700 dark:hover:bg-gray-600 dark:focus:bg-gray-600 dark:focus:text-white dark:hover:text-white dark:text-gray-200"
className={`block px-4 py-2 mt-2 text-sm font-semibold text-gray-900 bg-transparent rounded-lg dark:bg-transparent dark:hover:bg-gray-600 dark:focus:bg-gray-600 dark:focus:text-white dark:hover:text-white dark:text-gray-200 hover:text-gray-900 focus:text-gray-900 hover:bg-gray-200 focus:bg-gray-200 focus:outline-none focus:shadow-outline self-end ${className}`}
href={href}
{...props}
>
{text}
</El>
);
}
export default function Sidebar() {
const [open, setOpen] = useState(false);
const handleToggle = useCallback(() => {
setOpen(!open);
}, [open, setOpen]);
return (
<div className="flex flex-col w-full md:w-64 text-gray-700 bg-white dark:text-gray-200 dark:bg-gray-700 flex-shrink-0">
<div className="flex-shrink-0 px-8 py-4 flex flex-row items-center justify-between">
<a
href="#"
className="text-lg font-semibold tracking-widest text-gray-900 uppercase rounded-lg dark:text-white focus:outline-none focus:shadow-outline"
>
Frigate
</a>
<button
className="rounded-lg md:hidden rounded-lg focus:outline-none focus:shadow-outline"
onClick={handleToggle}
>
{open ? <CloseIcon /> : <HamburgerIcon />}
</button>
</div>
<nav
className={`flex-col flex-grow md:block overflow-hidden px-4 pb-4 md:pb-0 md:overflow-y-auto ${
!open ? 'md:h-0 hidden' : ''
}`}
>
<NavLink href="/" text="Cameras" />
<NavLink href="/events" text="Events" />
<NavLink href="/debug" text="Debug" />
<hr className="border-solid border-gray-500 mt-2" />
<NavLink
className="self-end"
href="https://github.com/blakeblackshear/frigate/blob/master/README.md"
text="Documentation"
/>
<NavLink className="self-end" href="https://github.com/blakeblackshear/frigate" text="GitHub" />
</nav>
</div>
);
}

27
web/src/components/AutoUpdatingCameraImage.jsx Normal file

@@ -0,0 +1,27 @@
import { h } from 'preact';
import { ApiHost } from '../context';
import { useEffect, useContext, useState } from 'preact/hooks';
export default function AutoUpdatingCameraImage({ camera, searchParams }) {
const apiHost = useContext(ApiHost);
const [key, setKey] = useState(Date.now());
useEffect(() => {
const timeoutId = setTimeout(() => {
setKey(Date.now());
}, 500);
return () => {
clearTimeout(timeoutId);
};
}, [key, searchParams]);
return (
<img
className="w-full"
src={`${apiHost}/api/${camera}/latest.jpg?cache=${key}&${searchParams}`}
alt={`Auto-updating ${camera} image`}
/>
);
}

16
web/src/components/Box.jsx Normal file

@@ -0,0 +1,16 @@
import { h } from 'preact';
export default function Box({ children, className = '', hover = false, href, ...props }) {
const Element = href ? 'a' : 'div';
return (
<Element
className={`bg-white dark:bg-gray-700 shadow-lg rounded-lg p-4 ${
hover ? 'hover:bg-gray-300 hover:dark:bg-gray-500 dark:hover:text-gray-900' : ''
} ${className}`}
href={href}
{...props}
>
{children}
</Element>
);
}

23
web/src/components/Button.jsx Normal file

@@ -0,0 +1,23 @@
import { h } from 'preact';
const noop = () => {};
const BUTTON_COLORS = {
blue: { normal: 'bg-blue-500', hover: 'hover:bg-blue-400' },
red: { normal: 'bg-red-500', hover: 'hover:bg-red-400' },
green: { normal: 'bg-green-500', hover: 'hover:bg-green-400' },
};
export default function Button({ children, className = '', color = 'blue', onClick, size, ...attrs }) {
return (
<div
role="button"
tabindex="0"
className={`rounded ${BUTTON_COLORS[color].normal} text-white pl-4 pr-4 pt-2 pb-2 font-bold shadow ${BUTTON_COLORS[color].hover} hover:shadow-lg cursor-pointer ${className}`}
onClick={onClick || noop}
{...attrs}
>
{children}
</div>
);
}

5
web/src/components/Heading.jsx Normal file

@@ -0,0 +1,5 @@
import { h } from 'preact';
export default function Heading({ children, className = '', size = '2xl' }) {
// text-${size} is assembled at runtime, so every size used must survive the Tailwind purge step.
return <h1 className={`font-semibold tracking-widest uppercase text-${size} ${className}`}>{children}</h1>;
}

9
web/src/components/Link.jsx Normal file

@@ -0,0 +1,9 @@
import { h } from 'preact';
export default function Link({ className = '', children, href, ...props }) {
return (
<a className={`text-blue-500 dark:text-blue-400 hover:underline ${className}`} href={href} {...props}>
{children}
</a>
);
}

30
web/src/components/Switch.jsx Normal file

@@ -0,0 +1,30 @@
import { h } from 'preact';
import { useCallback, useState } from 'preact/hooks';
export default function Switch({ checked, label, id, onChange }) {
const handleChange = useCallback(() => {
onChange(id, !checked);
}, [id, onChange, checked]);
return (
<label for={id} className="flex items-center cursor-pointer">
<div className="relative">
<input id={id} type="checkbox" className="hidden" onChange={handleChange} checked={checked} />
<div
className={`transition-colors toggle__line w-12 h-6 ${
!checked ? 'bg-gray-400' : 'bg-blue-400'
} rounded-full shadow-inner`}
/>
<div
className="transition-transform absolute w-6 h-6 bg-white rounded-full shadow-md inset-y-0 left-0"
style={checked ? 'transform: translateX(100%);' : 'transform: translateX(0%);'}
/>
</div>
<div className="ml-3 text-gray-700 font-medium dark:text-gray-200">{label}</div>
</label>
);
}

31
web/src/components/Table.jsx Normal file

@@ -0,0 +1,31 @@
import { h } from 'preact';
export function Table({ children, className = '' }) {
return (
<table className={`table-auto border-collapse text-gray-900 dark:text-gray-200 ${className}`}>{children}</table>
);
}
export function Thead({ children, className = '' }) {
return <thead className={`${className}`}>{children}</thead>;
}
export function Tbody({ children, className = '' }) {
return <tbody className={`${className}`}>{children}</tbody>;
}
export function Tfoot({ children, className = '' }) {
return <tfoot className={`${className}`}>{children}</tfoot>;
}
export function Tr({ children, className = '', index }) {
return <tr className={`${index % 2 ? 'bg-gray-200 dark:bg-gray-700' : ''} ${className}`}>{children}</tr>;
}
export function Th({ children, className = '' }) {
return <th className={`border-b-2 border-gray-400 p-4 text-left ${className}`}>{children}</th>;
}
export function Td({ children, className = '' }) {
return <td className={`p-4 ${className}`}>{children}</td>;
}

5
web/src/context/index.js Normal file

@@ -0,0 +1,5 @@
import { createContext } from 'preact';
export const Config = createContext({});
export const ApiHost = createContext(import.meta.env.SNOWPACK_PUBLIC_API_HOST || window.baseUrl || '');

3
web/src/index.css Normal file

@@ -0,0 +1,3 @@
@tailwind base;
@tailwind components;
@tailwind utilities;

9
web/src/index.jsx Normal file

@@ -0,0 +1,9 @@
import App from './App';
import { h, render } from 'preact';
import 'preact/devtools';
import './index.css';
render(
<App />,
document.getElementById('root')
);

13
web/tailwind.config.js Normal file

@@ -0,0 +1,13 @@
'use strict';
module.exports = {
purge: ['./public/**/*.html', './src/**/*.jsx'],
darkMode: 'media',
theme: {
extend: {},
},
variants: {
extend: {},
},
plugins: [],
};