forked from Github/frigate
Compare commits
24 Commits
v0.9.0-bet...v0.9.0-rc1
| Author | SHA1 | Date |
| --- | --- | --- |
|  | 4efc584816 |  |
|  | 10ab70080a |  |
|  | 29de723267 |  |
|  | 354a9240f0 |  |
|  | 5ae4f47e96 |  |
|  | 26424488a5 |  |
|  | 334095252c |  |
|  | 1c85f774eb |  |
|  | bbf0fc8324 |  |
|  | b143e11e0e |  |
|  | 927f56ab9f |  |
|  | 2181379475 |  |
|  | 45798d6d14 |  |
|  | f6d5e96dbf |  |
|  | e18aa56427 |  |
|  | f3a1c1de0a |  |
|  | 0ccf543ec1 |  |
|  | 1f1a708388 |  |
|  | 58c0d97b5f |  |
|  | abef002af8 |  |
|  | adf2bc078c |  |
|  | 3bc75ae931 |  |
|  | 03e756dd27 |  |
|  | 5d0984998d |  |
@@ -14,7 +14,7 @@ Use of a [Google Coral Accelerator](https://coral.ai/products/) is optional, but
  - Uses a very low overhead motion detection to determine where to run object detection
  - Object detection with TensorFlow runs in separate processes for maximum FPS
  - Communicates over MQTT for easy integration into other systems
- - Records video clips of detected objects
+ - Records video with retention settings based on detected objects
  - 24/7 recording
  - Re-streaming via RTMP to reduce the number of connections to your camera
@@ -23,16 +23,20 @@ Use of a [Google Coral Accelerator](https://coral.ai/products/) is optional, but

View the documentation at https://blakeblackshear.github.io/frigate

## Donations

If you would like to make a donation to support development, please use [Github Sponsors](https://github.com/sponsors/blakeblackshear).

## Screenshots

Integration into Home Assistant

<div>
<a href="docs/static/img/media_browser.png"><img src="docs/static/img/media_browser.png" height=400></a>
<a href="docs/static/img/notification.png"><img src="docs/static/img/notification.png" height=400></a>
</div>

Also comes with a builtin UI:

<div>
<a href="docs/static/img/home-ui.png"><img src="docs/static/img/home-ui.png" height=400></a>
<a href="docs/static/img/camera-ui.png"><img src="docs/static/img/camera-ui.png" height=400></a>
</div>
@@ -81,15 +81,15 @@ environment_vars:

### `database`

- Event and clip information is managed in a sqlite database at `/media/frigate/clips/frigate.db`. If that database is deleted, clips will be orphaned and will need to be cleaned up manually. They also won't show up in the Media Browser within Home Assistant.
+ Event and recording information is managed in a sqlite database at `/media/frigate/frigate.db`. If that database is deleted, recordings will be orphaned and will need to be cleaned up manually. They also won't show up in the Media Browser within Home Assistant.

- If you are storing your clips on a network share (SMB, NFS, etc), you may get a `database is locked` error message on startup. You can customize the location of the database in the config if necessary.
+ If you are storing your database on a network share (SMB, NFS, etc), you may get a `database is locked` error message on startup. You can customize the location of the database in the config if necessary.

- This may need to be in a custom location if network storage is used for clips.
+ This may need to be in a custom location if network storage is used for the media folder.

```yaml
database:
-   path: /media/frigate/clips/frigate.db
+   path: /media/frigate/frigate.db
```
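When the media folder lives on a network share, one way to avoid the `database is locked` error described above is to point the database at local storage instead. A minimal sketch, assuming `/db` is a locally mounted volume (the path is illustrative, not a Frigate default):

```yaml
database:
  # assumed local, non-network volume mounted into the container
  path: /db/frigate.db
```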
### `detectors`
@@ -5,7 +5,7 @@ title: Cameras

## Setting Up Camera Inputs

- Up to 4 inputs can be configured for each camera and the role of each input can be mixed and matched based on your needs. This allows you to use a lower resolution stream for object detection, but create clips from a higher resolution stream, or vice versa.
+ Up to 4 inputs can be configured for each camera and the role of each input can be mixed and matched based on your needs. This allows you to use a lower resolution stream for object detection, but create recordings from a higher resolution stream, or vice versa.

Each role can only be assigned to one input per camera. The options for roles are as follows:
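The example hunk below picks up partway through a camera definition; for orientation, here is a minimal complete sketch of a camera with mixed input roles (the stream paths are assumptions):

```yaml
cameras:
  back:
    ffmpeg:
      inputs:
        # assumed low-resolution sub stream used for detection
        - path: rtsp://viewer:{FRIGATE_RTSP_PASSWORD}@10.0.10.10:554/sub
          roles:
            - detect
            - rtmp
        # assumed high-resolution main stream used for recordings
        - path: rtsp://viewer:{FRIGATE_RTSP_PASSWORD}@10.0.10.10:554/live
          roles:
            - record
    detect:
      width: 1280
      height: 720
      fps: 5
```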
@@ -30,13 +30,15 @@ cameras:
          - rtmp
      - path: rtsp://viewer:{FRIGATE_RTSP_PASSWORD}@10.0.10.10:554/live
        roles:
-         - clips
          - record
-   width: 1280
-   height: 720
-   fps: 5
+   detect:
+     width: 1280
+     height: 720
+     fps: 5
```

`width`, `height`, and `fps` are only used for the `detect` role. Other streams are passed through, so there is no need to specify the resolution.

## Masks & Zones

### Masks
@@ -133,7 +135,7 @@ objects:

24/7 recordings can be enabled and are stored at `/media/frigate/recordings`. The folder structure for the recordings is `YYYY-MM/DD/HH/<camera_name>/MM.SS.mp4`. These recordings are written directly from your camera stream without re-encoding and are available in Home Assistant's media browser. Each camera supports a configurable retention policy in the config.

- Clips are also created off of these recordings. Frigate chooses the largest matching retention value between the recording retention and the event retention when determining if a recording should be removed.
+ Exported clips are also created off of these recordings. Frigate chooses the largest matching retention value between the recording retention and the event retention when determining if a recording should be removed.

These recordings will not be playable in the web UI or in Home Assistant's media browser unless your camera sends video as h264.
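A minimal sketch of the two retention values the paragraph above compares, using keys from the config listing that follows; a segment matching both policies is kept for the larger window:

```yaml
record:
  enabled: True
  # 24/7 segments kept for 30 days
  retain_days: 30
  events:
    retain:
      # event recordings kept for 10 days; the larger of the two
      # matching values decides when a recording is removed
      default: 10
```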
@@ -155,16 +157,16 @@ record:
  # NOTE: If an object is being tracked for longer than this amount of time, the cache
  # will begin to expire and the resulting clip will be the last x seconds of the event unless retain_days under record is > 0.
  max_seconds: 300
- # Optional: Number of seconds before the event to include in the clips (default: shown below)
+ # Optional: Number of seconds before the event to include (default: shown below)
  pre_capture: 5
- # Optional: Number of seconds after the event to include in the clips (default: shown below)
+ # Optional: Number of seconds after the event to include (default: shown below)
  post_capture: 5
- # Optional: Objects to save clips for. (default: all tracked objects)
+ # Optional: Objects to save events for. (default: all tracked objects)
  objects:
    - person
- # Optional: Restrict clips to objects that entered any of the listed zones (default: no required zones)
+ # Optional: Restrict events to objects that entered any of the listed zones (default: no required zones)
  required_zones: []
- # Optional: Retention settings for clips
+ # Optional: Retention settings for events
  retain:
    # Required: Default retention days (default: shown below)
    default: 10
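Elsewhere in this diff, the shim that silently merged a deprecated top-level `clips:` block into `record -> events` is removed, so old configs need migrating by hand. A hedged before/after sketch:

```yaml
# deprecated pre-0.9 form
clips:
  enabled: True
  pre_capture: 5
  objects:
    - person

# equivalent 0.9 form (retain_days: 0 keeps only event recordings,
# mirroring what the removed shim used to set)
record:
  enabled: True
  retain_days: 0
  events:
    enabled: True
    pre_capture: 5
    objects:
      - person
```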
@@ -259,8 +261,8 @@ cameras:
      # Required: the path to the stream
      # NOTE: Environment variables that begin with 'FRIGATE_' may be referenced in {}
      - path: rtsp://viewer:{FRIGATE_RTSP_PASSWORD}@10.0.10.10:554/cam/realmonitor?channel=1&subtype=2
-       # Required: list of roles for this stream. valid values are: detect,record,clips,rtmp
-       # NOTICE: In addition to assigning the record, clips, and rtmp roles,
+       # Required: list of roles for this stream. valid values are: detect,record,rtmp
+       # NOTICE: In addition to assigning the record and rtmp roles,
        # they must also be enabled in the camera config.
        roles:
          - detect
@@ -280,14 +282,20 @@ cameras:
    # Optional: camera specific output args (default: inherit)
    output_args:

-   # Required: width of the frame for the input with the detect role
-   width: 1280
-   # Required: height of the frame for the input with the detect role
-   height: 720
-   # Optional: desired fps for your camera for the input with the detect role
-   # NOTE: Recommended value of 5. Ideally, try and reduce your FPS on the camera.
-   # Frigate will attempt to autodetect if not specified.
-   fps: 5
+   # Required: Camera level detect settings
+   detect:
+     # Required: width of the frame for the input with the detect role
+     width: 1280
+     # Required: height of the frame for the input with the detect role
+     height: 720
+     # Required: desired fps for your camera for the input with the detect role
+     # NOTE: Recommended value of 5. Ideally, try and reduce your FPS on the camera.
+     fps: 5
+     # Optional: enables detection for the camera (default: True)
+     # This value can be set via MQTT and will be updated in startup based on retained value
+     enabled: True
+     # Optional: Number of frames without a detection before frigate considers an object to be gone. (default: 5x the frame rate)
+     max_disappeared: 25

    # Optional: camera level motion config
    motion:
@@ -319,42 +327,33 @@ cameras:
      max_area: 100000
      threshold: 0.7

-   # Optional: Camera level detect settings
-   detect:
-     # Optional: enables detection for the camera (default: True)
-     # This value can be set via MQTT and will be updated in startup based on retained value
-     enabled: True
-     # Optional: Number of frames without a detection before frigate considers an object to be gone. (default: 5x the frame rate)
-     max_disappeared: 25
-
-   # Optional: save clips configuration
-   clips:
-     # Required: enables clips for the camera (default: shown below)
-     # This value can be set via MQTT and will be updated in startup based on retained value
-     enabled: False
-     # Optional: Number of seconds before the event to include in the clips (default: shown below)
-     pre_capture: 5
-     # Optional: Number of seconds after the event to include in the clips (default: shown below)
-     post_capture: 5
-     # Optional: Objects to save clips for. (default: all tracked objects)
-     objects:
-       - person
-     # Optional: Restrict clips to objects that entered any of the listed zones (default: no required zones)
-     required_zones: []
-     # Optional: Camera override for retention settings (default: global values)
-     retain:
-       # Required: Default retention days (default: shown below)
-       default: 10
-       # Optional: Per object retention days
-       objects:
-         person: 15
-
    # Optional: 24/7 recording configuration
    record:
      # Optional: Enable recording (default: global setting)
      enabled: False
      # Optional: Number of days to retain (default: global setting)
      retain_days: 30
+     # Optional: Event recording settings
+     events:
+       # Required: enables event recordings for the camera (default: shown below)
+       # This value can be set via MQTT and will be updated in startup based on retained value
+       enabled: False
+       # Optional: Number of seconds before the event to include (default: shown below)
+       pre_capture: 5
+       # Optional: Number of seconds after the event to include (default: shown below)
+       post_capture: 5
+       # Optional: Objects to save events for. (default: all tracked objects)
+       objects:
+         - person
+       # Optional: Restrict events to objects that entered any of the listed zones (default: no required zones)
+       required_zones: []
+       # Optional: Camera override for retention settings (default: global values)
+       retain:
+         # Required: Default retention days (default: shown below)
+         default: 10
+         # Optional: Per object retention days
+         objects:
+           person: 15

    # Optional: RTMP re-stream configuration
    rtmp:
@@ -482,12 +481,11 @@ input_args:
  - "1"
```

- Note that mjpeg cameras require encoding the video into h264 for clips, recording, and rtmp roles. This will use significantly more CPU than if the cameras supported h264 feeds directly.
+ Note that mjpeg cameras require encoding the video into h264 for the record and rtmp roles. This will use significantly more CPU than if the cameras supported h264 feeds directly.

```yaml
output_args:
  record: -f segment -segment_time 60 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c:v libx264 -an
-   clips: -f segment -segment_time 10 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c:v libx264 -an
  rtmp: -c:v libx264 -an -f flv
```
@@ -30,6 +30,15 @@ detectors:
    device: usb:1
```

+ Native Coral (Dev Board):
+
+ ```yaml
+ detectors:
+   coral:
+     type: edgetpu
+     device: ''
+ ```
+
Multiple PCIE/M.2 Corals:

```yaml
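The hunk above ends just as the multi-Coral example opens. Separately, because this release also switches the default detector to CPU (see the `DEFAULT_DETECTORS` change later in this diff), here is a hedged sketch of configuring a CPU detector explicitly; the detector name `cpu1` is hypothetical, the field names come from `DetectorConfig` below:

```yaml
detectors:
  cpu1:
    type: cpu
    # number of detection threads (default 3 per DetectorConfig)
    num_threads: 3
```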
@@ -20,9 +20,10 @@ cameras:
        roles:
          - detect
          - rtmp
-   width: 1280
-   height: 720
-   fps: 5
+   detect:
+     width: 1280
+     height: 720
+     fps: 5
```

## Required

@@ -76,9 +77,10 @@ cameras:
        roles:
          - detect
          - rtmp
-   width: 1280
-   height: 720
-   fps: 5
+   detect:
+     width: 1280
+     height: 720
+     fps: 5
```

## Optional
@@ -125,7 +127,7 @@ logger:

Can be overridden at the camera level. 24/7 recordings can be enabled and are stored at `/media/frigate/recordings`. The folder structure for the recordings is `YYYY-MM/DD/HH/<camera_name>/MM.SS.mp4`. These recordings are written directly from your camera stream without re-encoding and are available in Home Assistant's media browser. Each camera supports a configurable retention policy in the config.

- Clips are also created off of these recordings. Frigate chooses the largest matching retention value between the recording retention and the event retention when determining if a recording should be removed.
+ Exported clips are also created off of these recordings. Frigate chooses the largest matching retention value between the recording retention and the event retention when determining if a recording should be removed.

These recordings will not be playable in the web UI or in Home Assistant's media browser unless your camera sends video as h264.

@@ -147,16 +149,16 @@ record:
  # NOTE: If an object is being tracked for longer than this amount of time, the cache
  # will begin to expire and the resulting clip will be the last x seconds of the event unless retain_days under record is > 0.
  max_seconds: 300
- # Optional: Number of seconds before the event to include in the clips (default: shown below)
+ # Optional: Number of seconds before the event to include (default: shown below)
  pre_capture: 5
- # Optional: Number of seconds after the event to include in the clips (default: shown below)
+ # Optional: Number of seconds after the event to include (default: shown below)
  post_capture: 5
- # Optional: Objects to save clips for. (default: all tracked objects)
+ # Optional: Objects to save recordings for. (default: all tracked objects)
  objects:
    - person
- # Optional: Restrict clips to objects that entered any of the listed zones (default: no required zones)
+ # Optional: Restrict recordings to objects that entered any of the listed zones (default: no required zones)
  required_zones: []
- # Optional: Retention settings for clips
+ # Optional: Retention settings for events
  retain:
    # Required: Default retention days (default: shown below)
    default: 10
@@ -199,8 +201,6 @@ ffmpeg:
    detect: -f rawvideo -pix_fmt yuv420p
    # Optional: output args for record streams (default: shown below)
    record: -f segment -segment_time 60 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c copy -an
-   # Optional: output args for clips streams (default: shown below)
-   clips: -f segment -segment_time 10 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c copy -an
    # Optional: output args for rtmp streams (default: shown below)
    rtmp: -c copy -f flv
```
@@ -63,17 +63,17 @@ cameras:
        roles:
          - detect
          - rtmp
-         - clips
-   height: 1080
-   width: 1920
-   fps: 5
+   detect:
+     height: 1080
+     width: 1920
+     fps: 5
```

These input args tell ffmpeg to read the mp4 file in an infinite loop. You can use any valid ffmpeg input here.

#### 3. Gather some mp4 files for testing

- Create and place these files in a `debug` folder in the root of the repo. This is also where clips and recordings will be created if you enable them in your test config. Update your config from step 2 above to point at the right file. You can check the `docker-compose.yml` file in the repo to see how the volumes are mapped.
+ Create and place these files in a `debug` folder in the root of the repo. This is also where recordings will be created if you enable them in your test config. Update your config from step 2 above to point at the right file. You can check the `docker-compose.yml` file in the repo to see how the volumes are mapped.

#### 4. Open the repo with Visual Studio Code
@@ -5,7 +5,7 @@ title: Recommended hardware

## Cameras

- Cameras that output H.264 video and AAC audio will offer the most compatibility with all features of Frigate and Home Assistant. It is also helpful if your camera supports multiple substreams to allow different resolutions to be used for detection, streaming, clips, and recordings without re-encoding.
+ Cameras that output H.264 video and AAC audio will offer the most compatibility with all features of Frigate and Home Assistant. It is also helpful if your camera supports multiple substreams to allow different resolutions to be used for detection, streaming, and recordings without re-encoding.

## Computer
@@ -5,7 +5,7 @@ title: Installation

Frigate is a Docker container that can be run on any Docker host including as a [HassOS Addon](https://www.home-assistant.io/addons/). See instructions below for installing the HassOS addon.

- For Home Assistant users, there is also a [custom component (aka integration)](https://github.com/blakeblackshear/frigate-hass-integration). This custom component adds tighter integration with Home Assistant by automatically setting up camera entities, sensors, media browser for clips and recordings, and a public API to simplify notifications.
+ For Home Assistant users, there is also a [custom component (aka integration)](https://github.com/blakeblackshear/frigate-hass-integration). This custom component adds tighter integration with Home Assistant by automatically setting up camera entities, sensors, media browser for recordings, and a public API to simplify notifications.

Note that HassOS Addons and custom components are different things. If you are already running Frigate with Docker directly, you do not need the Addon since the Addon would run another instance of Frigate.
@@ -3,25 +3,27 @@ id: troubleshooting
title: Troubleshooting and FAQ
---

- ### How can I get sound or audio in my clips and recordings?
-
- By default, Frigate removes audio from clips and recordings to reduce the likelihood of failing for invalid data. If you would like to include audio, you need to override the output args to remove `-an` where you want to include audio. The recommended audio codec is `aac`. Not all audio codecs are supported by RTMP, so you may need to re-encode your audio with `-c:a aac`. The default ffmpeg args are shown [here](/frigate/configuration/index#ffmpeg).
-
### I am seeing a solid green image for my camera.

A solid green image means that frigate has not received any frames from ffmpeg. Check the logs to see why ffmpeg is exiting and adjust your ffmpeg args accordingly.

+ ### How can I get sound or audio in my recordings?
+
+ By default, Frigate removes audio from recordings to reduce the likelihood of failing for invalid data. If you would like to include audio, you need to override the output args to remove `-an` where you want to include audio. The recommended audio codec is `aac`. Not all audio codecs are supported by RTMP, so you may need to re-encode your audio with `-c:a aac`. The default ffmpeg args are shown [here](/frigate/configuration/index#ffmpeg).
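A hedged sketch of that override, starting from the default record args shown earlier in this diff and swapping `-an` for AAC audio (`-c:v copy` keeps video passthrough; exact args may vary by camera):

```yaml
ffmpeg:
  output_args:
    # default record args with audio kept and re-encoded to aac
    record: -f segment -segment_time 60 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c:v copy -c:a aac
```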
### My mjpeg stream or snapshots look green and crazy

This almost always means that the width/height defined for your camera are not correct. Double check the resolution with vlc or another player. Also make sure you don't have the width and height values backwards.

![mismatched resolution](/img/mismatched-resolution.jpg)

- ### I have clips and snapshots in my clips folder, but I can't view them in the Web UI.
-
- This is usually caused by one of two things:
-
- - The permissions on the parent folder don't have execute and nginx returns a 403 error you can see in the browser logs
-   - In this case, try mounting a volume to `/media/frigate` inside the container instead of `/media/frigate/clips`.
- - Your cameras do not send h264 encoded video and the mp4 files are not playable in the browser
+ ### I can't view events or recordings in the Web UI.
+
+ Ensure your cameras send h264 encoded video.

### "[mov,mp4,m4a,3gp,3g2,mj2 @ 0x5639eeb6e140] moov atom not found"

- These messages in the logs are expected in certain situations. Frigate checks the integrity of the video cache before assembling clips. Occasionally these cached files will be invalid and cleaned up automatically.
+ These messages in the logs are expected in certain situations. Frigate checks the integrity of the recordings before storing them. Occasionally these cached files will be invalid and cleaned up automatically.

### "On connect called"
@@ -206,10 +206,6 @@ Accepts the following query string parameters, but they are only applied when an
| `crop`    | int | Crop the snapshot to the (0 or 1)              |
| `quality` | int | Jpeg encoding quality (0-100). Defaults to 70. |

- ### `/clips/<camera>-<id>.mp4`
-
- Video clip for the given camera and event id.
-
### `/clips/<camera>-<id>.jpg`

JPG snapshot for the given camera and event id.
@@ -4,31 +4,93 @@ title: Integration with Home Assistant
sidebar_label: Home Assistant
---

- The best way to integrate with Home Assistant is to use the [official integration](https://github.com/blakeblackshear/frigate-hass-integration). When configuring the integration, you will be asked for the `Host` of your frigate instance. This value should be the URL you use to access Frigate in the browser and will look like `http://<host>:5000/`. If you are using HassOS with the addon, the host should be `http://ccab4aaf-frigate:5000` (or `http://ccab4aaf-frigate-beta:5000` if you are using the beta version of the addon). Home Assistant needs access to port 5000 (api) and 1935 (rtmp) for all features. The integration will set up the following entities within Home Assistant:
+ The best way to integrate with Home Assistant is to use the [official integration](https://github.com/blakeblackshear/frigate-hass-integration).

- ## Sensors:
+ ## Installation

- - Stats to monitor frigate performance
- - Object counts for all zones and cameras
+ Available via HACS as a [custom repository](https://hacs.xyz/docs/faq/custom_repositories). To install:

- ## Cameras:
+ - Add the custom repository:

- - Cameras for image of the last detected object for each camera
- - Camera entities with stream support (requires RTMP)
+   ```
+   Home Assistant > HACS > Integrations > [...] > Custom Repositories
+   ```

- ## Media Browser:
+   | Key            | Value                                                        |
+   | -------------- | ------------------------------------------------------------ |
+   | Repository URL | https://github.com/blakeblackshear/frigate-hass-integration  |
+   | Category       | Integration                                                   |

- - Rich UI with thumbnails for browsing event clips
+ - Use [HACS](https://hacs.xyz/) to install the integration:
+
+   ```
+   Home Assistant > HACS > Integrations > "Explore & Add Integrations" > Frigate
+   ```
+
+ - Restart Home Assistant.
+ - Then add/configure the integration:
+
+   ```
+   Home Assistant > Configuration > Integrations > Add Integration > Frigate
+   ```
+
+ Note: You will also need [media_source](https://www.home-assistant.io/integrations/media_source/) enabled in your Home Assistant configuration for the Media Browser to appear.
+
+ ## Configuration
+
+ When configuring the integration, you will be asked for the following parameters:
+
+ | Variable | Description |
+ | -------- | ----------- |
+ | URL      | The `URL` of your frigate instance, the URL you use to access Frigate in the browser. This may look like `http://<host>:5000/`. If you are using HassOS with the addon, the URL should be `http://ccab4aaf-frigate:5000` (or `http://ccab4aaf-frigate-beta:5000` if you are using the beta version of the addon). Live streams require port 1935, see [RTMP streams](#streams) |
+
+ <a name="options"></a>
+
+ ## Options
+
+ ```
+ Home Assistant > Configuration > Integrations > Frigate > Options
+ ```
+
+ | Option | Description |
+ | ------ | ----------- |
+ | RTMP URL Template | A [jinja2](https://jinja.palletsprojects.com/) template that is used to override the standard RTMP stream URL (e.g. for use with reverse proxies). This option is only shown to users who have [advanced mode](https://www.home-assistant.io/blog/2019/07/17/release-96/#advanced-mode) enabled. See [RTMP streams](#streams) below. |
+
+ ## Entities Provided
+
+ | Platform        | Description                                                                        |
+ | --------------- | ---------------------------------------------------------------------------------- |
+ | `camera`        | Live camera stream (requires RTMP), camera for image of the last detected object.  |
+ | `sensor`        | States to monitor Frigate performance, object counts for all zones and cameras.    |
+ | `switch`        | Switch entities to toggle detection, recordings and snapshots.                      |
+ | `binary_sensor` | A "motion" binary sensor entity per camera/zone/object.                             |
+
+ ## Media Browser Support
+
+ The integration provides:
+
+ - Rich UI with thumbnails for browsing event recordings
+ - Rich UI for browsing 24/7 recordings by month, day, camera, time

- ## API:
+ This is accessible via "Media Browser" on the left menu panel in Home Assistant.

- - Notification API with public facing endpoints for images in notifications
+ <a name="api"></a>
+
+ ## API

### Notifications

- Frigate publishes event information in the form of a change feed via MQTT. This allows lots of customization for notifications to meet your needs. Event changes are published with `before` and `after` information as shown [here](#frigateevents).
- Note that some people may not want to expose frigate to the web, so you can leverage the HA API that frigate custom_integration ties into (which is exposed to the web, and thus can be used for mobile notifications etc):
+ Frigate publishes event information in the form of a change feed via MQTT. This allows lots of customization for notifications to meet your needs. Event changes are published with `before` and `after` information as shown [here](#frigateevents). Note that some people may not want to expose frigate to the web, so you can leverage the HA API that the frigate custom_integration ties into (which is exposed to the web, and thus can be used for mobile notifications etc):

To load an image taken by frigate from Home Assistant's API see below:
@@ -57,6 +119,7 @@ automation:
          data:
            image: 'https://your.public.hass.address.com/api/frigate/notifications/{{trigger.payload_json["after"]["id"]}}/thumbnail.jpg?format=android'
            tag: '{{trigger.payload_json["after"]["id"]}}'
+           when: '{{trigger.payload_json["after"]["start_time"]|int}}'
```

```yaml

@@ -75,6 +138,7 @@ automation:
          data:
            image: "https://url.com/api/frigate/notifications/{{trigger.payload_json['after']['id']}}/thumbnail.jpg"
            tag: "{{trigger.payload_json['after']['id']}}"
+           when: '{{trigger.payload_json["after"]["start_time"]|int}}'
```

```yaml

@@ -93,6 +157,7 @@ automation:
          data:
            image: "https://url.com/api/frigate/notifications/{{trigger.payload_json['after']['id']}}/thumbnail.jpg"
            tag: "{{trigger.payload_json['after']['id']}}"
+           when: '{{trigger.payload_json["after"]["start_time"]|int}}'
```

```yaml

@@ -111,6 +176,7 @@ automation:
          data:
            image: "https://url.com/api/frigate/notifications/{{trigger.payload_json['after']['id']}}/thumbnail.jpg"
            tag: "{{trigger.payload_json['after']['id']}}"
+           when: '{{trigger.payload_json["after"]["start_time"]|int}}'
```
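For orientation, a minimal complete automation wrapping the fragments above; the MQTT trigger topic assumes the default `frigate` topic_prefix, and the notify service name is hypothetical:

```yaml
automation:
  - alias: Notify on frigate event
    trigger:
      platform: mqtt
      topic: frigate/events
    action:
      - service: notify.mobile_app_your_phone  # hypothetical notify target
        data_template:
          message: 'A {{trigger.payload_json["after"]["label"]}} was detected.'
          data:
            image: "https://url.com/api/frigate/notifications/{{trigger.payload_json['after']['id']}}/thumbnail.jpg"
            tag: "{{trigger.payload_json['after']['id']}}"
            when: '{{trigger.payload_json["after"]["start_time"]|int}}'
```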
If you are using telegram, you can fetch the image directly from Frigate:

@@ -131,3 +197,85 @@ automation:
          - url: 'http://ccab4aaf-frigate:5000/api/events/{{trigger.payload_json["after"]["id"]}}/thumbnail.jpg'
            caption: 'A {{trigger.payload_json["after"]["label"]}} was detected on {{ trigger.payload_json["after"]["camera"] }} camera'
```

<a name="streams"></a>

## RTMP stream

In order for the live streams to function they need to be accessible on the RTMP port (default: `1935`) at `<frigatehost>:1935`. Home Assistant will directly connect to that streaming port when the live camera is viewed.

#### RTMP URL Template

For advanced use cases, this behavior can be changed with the [RTMP URL template](#options) option. When set, this string will override the default stream address that is derived from the default behavior described above. This option supports [jinja2 templates](https://jinja.palletsprojects.com/) and has the `camera` dict variables from the [Frigate API](https://blakeblackshear.github.io/frigate/usage/api#apiconfig) available for the template. Note that no Home Assistant state is available to the template, only the camera dict from Frigate.

This is potentially useful when Frigate is behind a reverse proxy, and/or when the default stream port is otherwise not accessible to Home Assistant (e.g. firewall rules).

###### RTMP URL Template Examples

Use a different port number:

```
rtmp://<frigate_host>:2000/live/front_door
```

Use the camera name in the stream URL:

```
rtmp://<frigate_host>:2000/live/{{ name }}
```

Use the camera name in the stream URL, converting it to lowercase first:

```
rtmp://<frigate_host>:2000/live/{{ name|lower }}
```

## Multiple Instance Support

The Frigate integration seamlessly supports the use of multiple Frigate servers.

### Requirements for Multiple Instances

In order for multiple Frigate instances to function correctly, the `topic_prefix` and `client_id` parameters must be set differently per server. See [MQTT configuration](https://blakeblackshear.github.io/frigate/configuration/index#mqtt) for how to set these, and the sketch below.
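A hedged sketch of two per-server configs that satisfy this requirement; the broker host and prefixes are illustrative:

```yaml
# config file for instance 1
mqtt:
  host: mqtt.server.com  # assumed shared broker
  topic_prefix: frigate_house
  client_id: frigate_house
```

```yaml
# config file for instance 2
mqtt:
  host: mqtt.server.com
  topic_prefix: frigate_garage
  client_id: frigate_garage
```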
#### API URLs

When multiple Frigate instances are configured, [API](#api) URLs should include an identifier to tell Home Assistant which Frigate instance to refer to. The identifier used is the MQTT `client_id` parameter included in the configuration, and is used like so:

```
https://HA_URL/api/frigate/<client-id>/notifications/<event-id>/thumbnail.jpg
```

```
https://HA_URL/api/frigate/<client-id>/clips/front_door-1624599978.427826-976jaa.mp4
```

#### Default Treatment

When a single Frigate instance is configured, the `client-id` parameter need not be specified in URLs/identifiers -- that single instance is assumed. When multiple Frigate instances are configured, the user **must** explicitly specify which server they are referring to.

## FAQ

### If I am detecting multiple objects, how do I assign the correct `binary_sensor` to the camera in HomeKit?

The [HomeKit integration](https://www.home-assistant.io/integrations/homekit/) randomly links one of the binary sensors (motion sensor entities) grouped with the camera device in Home Assistant. You can specify a `linked_motion_sensor` in the Home Assistant [HomeKit configuration](https://www.home-assistant.io/integrations/homekit/#linked_motion_sensor) for each camera.
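A hedged sketch of that HomeKit override in Home Assistant's configuration; the entity names are hypothetical:

```yaml
homekit:
  entity_config:
    camera.back:  # hypothetical camera entity
      linked_motion_sensor: binary_sensor.back_person_motion  # hypothetical sensor entity
```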
@@ -90,15 +90,6 @@ class FrigateApp:
            assigned_roles = list(
                set([r for i in camera.ffmpeg.inputs for r in i.roles])
            )
-           if not camera.clips.enabled and "clips" in assigned_roles:
-               logger.warning(
-                   f"Camera {name} has clips assigned to an input, but clips is not enabled."
-               )
-           elif camera.clips.enabled and not "clips" in assigned_roles:
-               logger.warning(
-                   f"Camera {name} has clips enabled, but clips is not assigned to an input."
-               )

            if not camera.record.enabled and "record" in assigned_roles:
                logger.warning(
                    f"Camera {name} has record assigned to an input, but record is not enabled."
@@ -1,18 +1,18 @@
from __future__ import annotations

+ from enum import Enum
import json
import logging
import os
- from enum import Enum
from typing import Dict, List, Optional, Tuple, Union

import matplotlib.pyplot as plt
import numpy as np
+ import yaml
from pydantic import BaseModel, Field, validator
from pydantic.fields import PrivateAttr
- import yaml

- from frigate.const import BASE_DIR, RECORD_DIR, CACHE_DIR
+ from frigate.const import BASE_DIR, CACHE_DIR, RECORD_DIR
from frigate.edgetpu import load_labels
from frigate.util import create_mask, deep_merge
@@ -26,7 +26,7 @@ DEFAULT_TIME_FORMAT = "%m/%d/%Y %H:%M:%S"
FRIGATE_ENV_VARS = {k: v for k, v in os.environ.items() if k.startswith("FRIGATE_")}

DEFAULT_TRACKED_OBJECTS = ["person"]
- DEFAULT_DETECTORS = {"coral": {"type": "edgetpu", "device": "usb"}}
+ DEFAULT_DETECTORS = {"cpu": {"type": "cpu"}}


class DetectorTypeEnum(str, Enum):

@@ -35,9 +35,7 @@

class DetectorConfig(BaseModel):
-   type: DetectorTypeEnum = Field(
-       default=DetectorTypeEnum.edgetpu, title="Detector Type"
-   )
+   type: DetectorTypeEnum = Field(default=DetectorTypeEnum.cpu, title="Detector Type")
    device: str = Field(default="usb", title="Device Type")
    num_threads: int = Field(default=3, title="Number of detection threads")
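For existing Coral users upgrading across this change, the previous default must now be spelled out explicitly; a sketch using the exact values removed above:

```yaml
detectors:
  coral:
    type: edgetpu
    device: usb
```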
@@ -151,6 +149,9 @@ class RuntimeMotionConfig(MotionConfig):


class DetectConfig(BaseModel):
+   height: int = Field(title="Height of the stream for the detect role.")
+   width: int = Field(title="Width of the stream for the detect role.")
+   fps: int = Field(title="Number of frames per second to process through detection.")
    enabled: bool = Field(default=True, title="Detection Enabled.")
    max_disappeared: Optional[int] = Field(
        title="Maximum number of frames the object can disappear before detection ends."
@@ -435,11 +436,6 @@ class CameraLiveConfig(BaseModel):
class CameraConfig(BaseModel):
    name: Optional[str] = Field(title="Camera name.")
    ffmpeg: CameraFfmpegConfig = Field(title="FFmpeg configuration for the camera.")
-   height: int = Field(title="Height of the stream for the detect role.")
-   width: int = Field(title="Width of the stream for the detect role.")
-   fps: Optional[int] = Field(
-       title="Number of frames per second to process through Frigate."
-   )
    best_image_timeout: int = Field(
        default=60,
        title="How long to wait for the image with the highest confidence score.",
@@ -447,7 +443,6 @@ class CameraConfig(BaseModel):
    zones: Dict[str, ZoneConfig] = Field(
        default_factory=dict, title="Zone configuration."
    )
-   clips: ClipsConfig = Field(default_factory=ClipsConfig, title="Clip configuration.")
    record: RecordConfig = Field(
        default_factory=RecordConfig, title="Record configuration."
    )
@@ -465,7 +460,7 @@ class CameraConfig(BaseModel):
        default_factory=ObjectConfig, title="Object configuration."
    )
    motion: Optional[MotionConfig] = Field(title="Motion detection configuration.")
-   detect: Optional[DetectConfig] = Field(title="Object detection configuration.")
+   detect: DetectConfig = Field(title="Object detection configuration.")
    timestamp_style: TimestampStyleConfig = Field(
        default_factory=TimestampStyleConfig, title="Timestamp style configuration."
    )
@@ -483,11 +478,11 @@ class CameraConfig(BaseModel):

    @property
    def frame_shape(self) -> Tuple[int, int]:
-       return self.height, self.width
+       return self.detect.height, self.detect.width

    @property
    def frame_shape_yuv(self) -> Tuple[int, int]:
-       return self.height * 3 // 2, self.width
+       return self.detect.height * 3 // 2, self.detect.width

    @property
    def ffmpeg_cmds(self) -> List[Dict[str, List[str]]]:
@@ -508,9 +503,17 @@ class CameraConfig(BaseModel):
            if isinstance(self.ffmpeg.output_args.detect, list)
            else self.ffmpeg.output_args.detect.split(" ")
        )
-       ffmpeg_output_args = detect_args + ffmpeg_output_args + ["pipe:"]
-       if self.fps:
-           ffmpeg_output_args = ["-r", str(self.fps)] + ffmpeg_output_args
+       ffmpeg_output_args = (
+           [
+               "-r",
+               str(self.detect.fps),
+               "-s",
+               f"{self.detect.width}x{self.detect.height}",
+           ]
+           + detect_args
+           + ffmpeg_output_args
+           + ["pipe:"]
+       )
        if "rtmp" in ffmpeg_input.roles and self.rtmp.enabled:
            rtmp_args = (
                self.ffmpeg.output_args.rtmp
@@ -520,9 +523,7 @@ class CameraConfig(BaseModel):
        ffmpeg_output_args = (
            rtmp_args + [f"rtmp://127.0.0.1/live/{self.name}"] + ffmpeg_output_args
        )
-       if any(role in ["clips", "record"] for role in ffmpeg_input.roles) and (
-           self.record.enabled or self.clips.enabled
-       ):
+       if "record" in ffmpeg_input.roles and self.record.enabled:
            record_args = (
                self.ffmpeg.output_args.record
                if isinstance(self.ffmpeg.output_args.record, list)
@@ -577,11 +578,16 @@ class ModelConfig(BaseModel):
        default_factory=dict, title="Labelmap customization."
    )
    _merged_labelmap: Optional[Dict[int, str]] = PrivateAttr()
+   _colormap: Dict[int, Tuple[int, int, int]] = PrivateAttr()

    @property
    def merged_labelmap(self) -> Dict[int, str]:
        return self._merged_labelmap

+   @property
+   def colormap(self) -> Dict[int, tuple[int, int, int]]:
+       return self._colormap
+
    def __init__(self, **config):
        super().__init__(**config)
@@ -590,6 +596,12 @@ class ModelConfig(BaseModel):
            **config.get("labelmap", {}),
        }

+       cmap = plt.cm.get_cmap("tab10", len(self._merged_labelmap.keys()))
+
+       self._colormap = {}
+       for key, val in self._merged_labelmap.items():
+           self._colormap[val] = tuple(int(round(255 * c)) for c in cmap(key)[:3])


class LogLevelEnum(str, Enum):
    debug = "debug"
@@ -632,9 +644,6 @@ class FrigateConfig(BaseModel):
    logger: LoggerConfig = Field(
        default_factory=LoggerConfig, title="Logging configuration."
    )
-   clips: ClipsConfig = Field(
-       default_factory=ClipsConfig, title="Global clips configuration."
-   )
    record: RecordConfig = Field(
        default_factory=RecordConfig, title="Global record configuration."
    )
@@ -670,7 +679,6 @@ class FrigateConfig(BaseModel):
        # Global config to propagate down to camera level
        global_config = config.dict(
            include={
-               "clips": ...,
                "record": ...,
                "snapshots": ...,
                "objects": ...,
@@ -735,12 +743,9 @@ class FrigateConfig(BaseModel):
            )

            # Default detect configuration
-           max_disappeared = (camera_config.fps or 5) * 5
-           if camera_config.detect:
-               if camera_config.detect.max_disappeared is None:
-                   camera_config.detect.max_disappeared = max_disappeared
-           else:
-               camera_config.detect = DetectConfig(max_disappeared=max_disappeared)
+           max_disappeared = camera_config.detect.fps * 5
+           if camera_config.detect.max_disappeared is None:
+               camera_config.detect.max_disappeared = max_disappeared

            # Default live configuration
            if camera_config.live is None:
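In config terms, the change above means `max_disappeared` is now derived from `detect.fps` whenever it is left unset; a sketch:

```yaml
detect:
  width: 1280
  height: 720
  fps: 5
  # max_disappeared left unset -> defaults to 5 * fps = 25
```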
@@ -748,21 +753,6 @@ class FrigateConfig(BaseModel):

            config.cameras[name] = camera_config

-           # Merge Clips configuration for backward compatibility
-           if camera_config.clips.enabled:
-               logger.warn(
-                   "Clips configuration is deprecated. Configure clip settings under record -> events."
-               )
-               if not camera_config.record.enabled:
-                   camera_config.record.enabled = True
-                   camera_config.record.retain_days = 0
-               camera_config.record.events = ClipsConfig.parse_obj(
-                   deep_merge(
-                       camera_config.clips.dict(exclude_unset=True),
-                       camera_config.record.events.dict(exclude_unset=True),
-                   )
-               )

        return config

    @validator("cameras")
@@ -9,7 +9,6 @@ from abc import ABC, abstractmethod
from typing import Dict

import numpy as np
- from pycoral.adapters import detect
import tflite_runtime.interpreter as tflite
from setproctitle import setproctitle
from tflite_runtime.interpreter import load_delegate
@@ -69,9 +68,14 @@ class LocalObjectDetector(ObjectDetector):
                    experimental_delegates=[edge_tpu_delegate],
                )
            except ValueError:
-               logger.info("No EdgeTPU detected.")
+               logger.error(
+                   "No EdgeTPU was detected. If you do not have a Coral device yet, you must configure CPU detectors."
+               )
                raise
        else:
+           logger.warning(
+               "CPU detectors are not recommended and should only be used for testing or for trial purposes."
+           )
            self.interpreter = tflite.Interpreter(
                model_path="/cpu_model.tflite", num_threads=num_threads
            )
@@ -99,19 +103,25 @@ class LocalObjectDetector(ObjectDetector):
        self.interpreter.set_tensor(self.tensor_input_details[0]["index"], tensor_input)
        self.interpreter.invoke()

-       objects = detect.get_objects(self.interpreter, 0.4)
+       boxes = self.interpreter.tensor(self.tensor_output_details[0]["index"])()[0]
+       class_ids = self.interpreter.tensor(self.tensor_output_details[1]["index"])()[0]
+       scores = self.interpreter.tensor(self.tensor_output_details[2]["index"])()[0]
+       count = int(
+           self.interpreter.tensor(self.tensor_output_details[3]["index"])()[0]
+       )

        detections = np.zeros((20, 6), np.float32)
-       for i, obj in enumerate(objects):
-           if i == 20:
+
+       for i in range(count):
+           if scores[i] < 0.4 or i == 20:
                break
            detections[i] = [
-               obj.id,
-               obj.score,
-               obj.bbox.ymin,
-               obj.bbox.xmin,
-               obj.bbox.ymax,
-               obj.bbox.xmax,
+               class_ids[i],
+               float(scores[i]),
+               boxes[i][0],
+               boxes[i][1],
+               boxes[i][2],
+               boxes[i][3],
            ]

        return detections
@@ -112,7 +112,7 @@ class EventCleanup(threading.Thread):
    def expire(self, media_type):
        ## Expire events from unlisted cameras based on the global config
        if media_type == "clips":
-           retain_config = self.config.clips.retain
+           retain_config = self.config.record.events.retain
            file_extension = "mp4"
            update_params = {"has_clip": False}
        else:

@@ -163,7 +163,7 @@ class EventCleanup(threading.Thread):
        ## Expire events from cameras based on the camera config
        for name, camera in self.config.cameras.items():
            if media_type == "clips":
-               retain_config = camera.clips.retain
+               retain_config = camera.record.events.retain
            else:
                retain_config = camera.snapshots.retain
            # get distinct objects in database for this camera
@@ -96,14 +96,14 @@ def create_mqtt_client(config: FrigateConfig, camera_metrics):
        threading.current_thread().name = "mqtt"
        if rc != 0:
            if rc == 3:
-               logger.error("MQTT Server unavailable")
+               logger.error("Unable to connect to MQTT server: MQTT Server unavailable")
            elif rc == 4:
-               logger.error("MQTT Bad username or password")
+               logger.error("Unable to connect to MQTT server: MQTT Bad username or password")
            elif rc == 5:
-               logger.error("MQTT Not authorized")
+               logger.error("Unable to connect to MQTT server: MQTT Not authorized")
            else:
                logger.error(
-                   "Unable to connect to MQTT: Connection refused. Error code: "
+                   "Unable to connect to MQTT server: Connection refused. Error code: "
                    + str(rc)
                )
@@ -1,5 +1,5 @@
- import copy
import base64
+ import copy
import datetime
import hashlib
import itertools

@@ -14,30 +14,20 @@ from statistics import mean, median
from typing import Callable, Dict

import cv2
- import matplotlib.pyplot as plt
import numpy as np

- from frigate.config import FrigateConfig, CameraConfig
- from frigate.const import RECORD_DIR, CLIPS_DIR, CACHE_DIR
+ from frigate.config import CameraConfig, FrigateConfig
+ from frigate.const import CACHE_DIR, CLIPS_DIR, RECORD_DIR
from frigate.edgetpu import load_labels
from frigate.util import (
    SharedMemoryFrameManager,
+   calculate_region,
    draw_box_with_label,
    draw_timestamp,
-   calculate_region,
)

logger = logging.getLogger(__name__)

- PATH_TO_LABELS = "/labelmap.txt"
-
- LABELS = load_labels(PATH_TO_LABELS)
- cmap = plt.cm.get_cmap("tab10", len(LABELS.keys()))
-
- COLOR_MAP = {}
- for key, val in LABELS.items():
-     COLOR_MAP[val] = tuple(int(round(255 * c)) for c in cmap(key)[:3])


def on_edge(box, frame_shape):
    if (
@@ -72,9 +62,12 @@ def is_better_thumbnail(current_thumb, new_obj, frame_shape) -> bool:


class TrackedObject:
-   def __init__(self, camera, camera_config: CameraConfig, frame_cache, obj_data):
+   def __init__(
+       self, camera, colormap, camera_config: CameraConfig, frame_cache, obj_data
+   ):
        self.obj_data = obj_data
        self.camera = camera
+       self.colormap = colormap
        self.camera_config = camera_config
        self.frame_cache = frame_cache
        self.current_zones = []
@@ -247,7 +240,7 @@ class TrackedObject:

        if bounding_box:
            thickness = 2
-           color = COLOR_MAP[self.obj_data["label"]]
+           color = self.colormap[self.obj_data["label"]]

            # draw the bounding boxes on the frame
            box = self.thumbnail_data["box"]
@@ -357,7 +350,7 @@ class CameraState:
        for obj in tracked_objects.values():
            if obj["frame_time"] == frame_time:
                thickness = 2
-               color = COLOR_MAP[obj["label"]]
+               color = self.config.model.colormap[obj["label"]]
            else:
                thickness = 1
                color = (255, 0, 0)
@@ -448,7 +441,11 @@ class CameraState:

        for id in new_ids:
            new_obj = tracked_objects[id] = TrackedObject(
-               self.name, self.camera_config, self.frame_cache, current_detections[id]
+               self.name,
+               self.config.model.colormap,
+               self.camera_config,
+               self.frame_cache,
+               current_detections[id],
            )

        # call event handlers
@@ -14,7 +14,7 @@ import numpy as np
from frigate.config import FRIGATE_CONFIG_SCHEMA, FrigateConfig
from frigate.edgetpu import LocalObjectDetector
from frigate.motion import MotionDetector
- from frigate.object_processing import COLOR_MAP, CameraState
+ from frigate.object_processing import CameraState
from frigate.objects import ObjectTracker
from frigate.util import (
    DictFrameManager,
@@ -111,7 +111,9 @@ class RecordingMaintainer(threading.Thread):
                file_name = f"{start_time.strftime('%M.%S.mp4')}"
                file_path = os.path.join(directory, file_name)

-               shutil.move(cache_path, file_path)
+               # copy then delete is required when recordings are stored on some network drives
+               shutil.copyfile(cache_path, file_path)
+               os.remove(cache_path)

                rand_id = "".join(
                    random.choices(string.ascii_lowercase + string.digits, k=6)
@@ -242,29 +244,48 @@ class RecordingCleanup(threading.Thread):

    def expire_files(self):
        logger.debug("Start expire files (legacy).")

        default_expire = (
            datetime.datetime.now().timestamp()
            - SECONDS_IN_DAY * self.config.record.retain_days
        )
        delete_before = {}

        for name, camera in self.config.cameras.items():
            delete_before[name] = (
                datetime.datetime.now().timestamp()
                - SECONDS_IN_DAY * camera.record.retain_days
            )

-       for p in Path("/media/frigate/recordings").rglob("*.mp4"):
-           # Ignore files that have a record in the recordings DB
-           if Recordings.select().where(Recordings.path == str(p)).count():
-               continue
+       # find all the recordings older than the oldest recording in the db
+       oldest_recording = (
+           Recordings.select().order_by(Recordings.start_time.desc()).get()
+       )
+
+       oldest_timestamp = (
+           oldest_recording.start_time
+           if oldest_recording
+           else datetime.datetime.now().timestamp()
+       )
+
+       logger.debug(f"Oldest recording in the db: {oldest_timestamp}")
+       process = sp.run(
+           ["find", RECORD_DIR, "-type", "f", "-newermt", f"@{oldest_timestamp}"],
+           capture_output=True,
+           text=True,
+       )
+       files_to_check = process.stdout.splitlines()
+
+       for f in files_to_check:
+           p = Path(f)
            if p.stat().st_mtime < delete_before.get(p.parent.name, default_expire):
                p.unlink(missing_ok=True)

        logger.debug("End expire files (legacy).")

    def run(self):
-       # Expire recordings every minute, clean directories every 5 minutes.
-       for counter in itertools.cycle(range(5)):
+       # Expire recordings every minute, clean directories every hour.
+       for counter in itertools.cycle(range(60)):
            if self.stop_event.wait(60):
                logger.info(f"Exiting recording cleanup...")
                break
@@ -18,8 +18,11 @@ class TestConfig(unittest.TestCase):
|
||||
{"path": "rtsp://10.0.0.1:554/video", "roles": ["detect"]}
|
||||
]
|
||||
},
|
||||
"height": 1080,
|
||||
"width": 1920,
|
||||
"detect": {
|
||||
"height": 1080,
|
||||
"width": 1920,
|
||||
"fps": 5,
|
||||
},
|
||||
}
|
||||
},
|
||||
}
|
||||
@@ -42,8 +45,11 @@ class TestConfig(unittest.TestCase):
                        {"path": "rtsp://10.0.0.1:554/video", "roles": ["detect"]}
                    ]
                },
                "height": 1080,
                "width": 1920,
                "detect": {
                    "height": 1080,
                    "width": 1920,
                    "fps": 5,
                },
            }
        },
    }
@@ -60,8 +66,11 @@ class TestConfig(unittest.TestCase):
                        {"path": "rtsp://10.0.0.1:554/video", "roles": ["detect"]}
                    ]
                },
                "height": 1080,
                "width": 1920,
                "detect": {
                    "height": 1080,
                    "width": 1920,
                    "fps": 5,
                },
            }
        },
    }
@@ -82,8 +91,11 @@ class TestConfig(unittest.TestCase):
                        {"path": "rtsp://10.0.0.1:554/video", "roles": ["detect"]}
                    ]
                },
                "height": 1080,
                "width": 1920,
                "detect": {
                    "height": 1080,
                    "width": 1920,
                    "fps": 5,
                },
                "objects": {"track": ["cat"]},
            }
        },
@@ -105,8 +117,11 @@ class TestConfig(unittest.TestCase):
                        {"path": "rtsp://10.0.0.1:554/video", "roles": ["detect"]}
                    ]
                },
                "height": 1080,
                "width": 1920,
                "detect": {
                    "height": 1080,
                    "width": 1920,
                    "fps": 5,
                },
            }
        },
    }
@@ -130,8 +145,11 @@ class TestConfig(unittest.TestCase):
                        {"path": "rtsp://10.0.0.1:554/video", "roles": ["detect"]}
                    ]
                },
                "height": 1080,
                "width": 1920,
                "detect": {
                    "height": 1080,
                    "width": 1920,
                    "fps": 5,
                },
            }
        },
    }
@@ -152,8 +170,11 @@ class TestConfig(unittest.TestCase):
                        {"path": "rtsp://10.0.0.1:554/video", "roles": ["detect"]}
                    ]
                },
                "height": 1080,
                "width": 1920,
                "detect": {
                    "height": 1080,
                    "width": 1920,
                    "fps": 5,
                },
                "objects": {
                    "track": ["person", "dog"],
                    "filters": {"dog": {"threshold": 0.7}},
@@ -179,8 +200,11 @@ class TestConfig(unittest.TestCase):
                        {"path": "rtsp://10.0.0.1:554/video", "roles": ["detect"]}
                    ]
                },
                "height": 1080,
                "width": 1920,
                "detect": {
                    "height": 1080,
                    "width": 1920,
                    "fps": 5,
                },
                "objects": {
                    "mask": "0,0,1,1,0,1",
                    "filters": {"dog": {"mask": "1,1,1,1,1,1"}},
@@ -210,8 +234,11 @@ class TestConfig(unittest.TestCase):
                        },
                    ]
                },
                "height": 1080,
                "width": 1920,
                "detect": {
                    "height": 1080,
                    "width": 1920,
                    "fps": 5,
                },
            }
        },
    }
@@ -233,8 +260,11 @@ class TestConfig(unittest.TestCase):
                        {"path": "rtsp://10.0.0.1:554/video", "roles": ["detect"]}
                    ]
                },
                "height": 1080,
                "width": 1920,
                "detect": {
                    "height": 1080,
                    "width": 1920,
                    "fps": 5,
                },
                "objects": {
                    "track": ["person", "dog"],
                    "filters": {"dog": {"threshold": 0.7}},
@@ -260,8 +290,11 @@ class TestConfig(unittest.TestCase):
                    ],
                    "input_args": ["-re"],
                },
                "height": 1080,
                "width": 1920,
                "detect": {
                    "height": 1080,
                    "width": 1920,
                    "fps": 5,
                },
                "objects": {
                    "track": ["person", "dog"],
                    "filters": {"dog": {"threshold": 0.7}},
@@ -292,8 +325,11 @@ class TestConfig(unittest.TestCase):
                    ],
                    "input_args": "test3",
                },
                "height": 1080,
                "width": 1920,
                "detect": {
                    "height": 1080,
                    "width": 1920,
                    "fps": 5,
                },
                "objects": {
                    "track": ["person", "dog"],
                    "filters": {"dog": {"threshold": 0.7}},
@@ -313,7 +349,9 @@ class TestConfig(unittest.TestCase):
    def test_inherit_clips_retention(self):
        config = {
            "mqtt": {"host": "mqtt"},
            "clips": {"retain": {"default": 20, "objects": {"person": 30}}},
            "record": {
                "events": {"retain": {"default": 20, "objects": {"person": 30}}}
            },
            "cameras": {
                "back": {
                    "ffmpeg": {
@@ -321,8 +359,11 @@ class TestConfig(unittest.TestCase):
                        {"path": "rtsp://10.0.0.1:554/video", "roles": ["detect"]}
                    ]
                },
                "height": 1080,
                "width": 1920,
                "detect": {
                    "height": 1080,
                    "width": 1920,
                    "fps": 5,
                },
            }
        },
    }
@@ -330,12 +371,16 @@ class TestConfig(unittest.TestCase):
        assert config == frigate_config.dict(exclude_unset=True)

        runtime_config = frigate_config.runtime_config
        assert runtime_config.cameras["back"].clips.retain.objects["person"] == 30
        assert (
            runtime_config.cameras["back"].record.events.retain.objects["person"] == 30
        )
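The assertions above verify that globally configured retention flows through to each camera's runtime config. A small sketch of the lookup rule under test — `effective_retention` is an illustrative helper, not Frigate code:

```python
# Per-object retention overrides the default; a camera without its own
# retain settings inherits the global ones (here: default 20, person 30).
GLOBAL_RETAIN = {"default": 20, "objects": {"person": 30}}

def effective_retention(label, camera_retain=None):
    retain = camera_retain or GLOBAL_RETAIN  # inherit when camera sets nothing
    return retain["objects"].get(label, retain["default"])

assert effective_retention("person") == 30  # per-object override wins
assert effective_retention("dog") == 20     # falls back to the default
```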
    def test_roles_listed_twice_throws_error(self):
        config = {
            "mqtt": {"host": "mqtt"},
            "clips": {"retain": {"default": 20, "objects": {"person": 30}}},
            "record": {
                "events": {"retain": {"default": 20, "objects": {"person": 30}}}
            },
            "cameras": {
                "back": {
                    "ffmpeg": {
@@ -344,8 +389,11 @@ class TestConfig(unittest.TestCase):
                        {"path": "rtsp://10.0.0.1:554/video2", "roles": ["detect"]},
                    ]
                },
                "height": 1080,
                "width": 1920,
                "detect": {
                    "height": 1080,
                    "width": 1920,
                    "fps": 5,
                },
            }
        },
    }
@@ -354,7 +402,9 @@ class TestConfig(unittest.TestCase):
    def test_zone_matching_camera_name_throws_error(self):
        config = {
            "mqtt": {"host": "mqtt"},
            "clips": {"retain": {"default": 20, "objects": {"person": 30}}},
            "record": {
                "events": {"retain": {"default": 20, "objects": {"person": 30}}}
            },
            "cameras": {
                "back": {
                    "ffmpeg": {
@@ -362,8 +412,11 @@ class TestConfig(unittest.TestCase):
                        {"path": "rtsp://10.0.0.1:554/video", "roles": ["detect"]}
                    ]
                },
                "height": 1080,
                "width": 1920,
                "detect": {
                    "height": 1080,
                    "width": 1920,
                    "fps": 5,
                },
                "zones": {"back": {"coordinates": "1,1,1,1,1,1"}},
            }
        },
@@ -373,7 +426,9 @@ class TestConfig(unittest.TestCase):
    def test_zone_assigns_color_and_contour(self):
        config = {
            "mqtt": {"host": "mqtt"},
            "clips": {"retain": {"default": 20, "objects": {"person": 30}}},
            "record": {
                "events": {"retain": {"default": 20, "objects": {"person": 30}}}
            },
            "cameras": {
                "back": {
                    "ffmpeg": {
@@ -381,8 +436,11 @@ class TestConfig(unittest.TestCase):
                        {"path": "rtsp://10.0.0.1:554/video", "roles": ["detect"]}
                    ]
                },
                "height": 1080,
                "width": 1920,
                "detect": {
                    "height": 1080,
                    "width": 1920,
                    "fps": 5,
                },
                "zones": {"test": {"coordinates": "1,1,1,1,1,1"}},
            }
        },
@@ -399,7 +457,9 @@ class TestConfig(unittest.TestCase):
    def test_clips_should_default_to_global_objects(self):
        config = {
            "mqtt": {"host": "mqtt"},
            "clips": {"retain": {"default": 20, "objects": {"person": 30}}},
            "record": {
                "events": {"retain": {"default": 20, "objects": {"person": 30}}}
            },
            "objects": {"track": ["person", "dog"]},
            "cameras": {
                "back": {
@@ -408,9 +468,12 @@ class TestConfig(unittest.TestCase):
                        {"path": "rtsp://10.0.0.1:554/video", "roles": ["detect"]}
                    ]
                },
                "height": 1080,
                "width": 1920,
                "clips": {"enabled": True},
                "detect": {
                    "height": 1080,
                    "width": 1920,
                    "fps": 5,
                },
                "record": {"events": {"enabled": True}},
            }
        },
    }
@@ -419,8 +482,8 @@ class TestConfig(unittest.TestCase):

        runtime_config = frigate_config.runtime_config
        back_camera = runtime_config.cameras["back"]
        assert back_camera.clips.objects is None
        assert back_camera.clips.retain.objects["person"] == 30
        assert back_camera.record.events.objects is None
        assert back_camera.record.events.retain.objects["person"] == 30

    def test_role_assigned_but_not_enabled(self):
        config = {
@@ -436,8 +499,11 @@ class TestConfig(unittest.TestCase):
                        {"path": "rtsp://10.0.0.1:554/record", "roles": ["record"]},
                    ]
                },
                "height": 1080,
                "width": 1920,
                "detect": {
                    "height": 1080,
                    "width": 1920,
                    "fps": 5,
                },
            }
        },
    }
@@ -463,9 +529,12 @@ class TestConfig(unittest.TestCase):
                        },
                    ]
                },
                "height": 1080,
                "width": 1920,
                "detect": {"enabled": True},
                "detect": {
                    "enabled": True,
                    "height": 1080,
                    "width": 1920,
                    "fps": 5,
                },
            }
        },
    }
@@ -490,8 +559,11 @@ class TestConfig(unittest.TestCase):
                        },
                    ]
                },
                "height": 480,
                "width": 640,
                "detect": {
                    "height": 1080,
                    "width": 1920,
                    "fps": 5,
                },
            }
        },
    }
@@ -516,8 +588,11 @@ class TestConfig(unittest.TestCase):
                        },
                    ]
                },
                "height": 1080,
                "width": 1920,
                "detect": {
                    "height": 1080,
                    "width": 1920,
                    "fps": 5,
                },
            }
        },
    }
@@ -543,8 +618,11 @@ class TestConfig(unittest.TestCase):
                        },
                    ]
                },
                "height": 1080,
                "width": 1920,
                "detect": {
                    "height": 1080,
                    "width": 1920,
                    "fps": 5,
                },
            }
        },
    }
@@ -569,8 +647,11 @@ class TestConfig(unittest.TestCase):
                        },
                    ]
                },
                "height": 1080,
                "width": 1920,
                "detect": {
                    "height": 1080,
                    "width": 1920,
                    "fps": 5,
                },
            }
        },
    }
@@ -596,8 +677,11 @@ class TestConfig(unittest.TestCase):
                        },
                    ]
                },
                "height": 1080,
                "width": 1920,
                "detect": {
                    "height": 1080,
                    "width": 1920,
                    "fps": 5,
                },
            }
        },
    }

@@ -216,6 +216,13 @@ class CameraWatchdog(threading.Thread):
        now = datetime.datetime.now().timestamp()

        if not self.capture_thread.is_alive():
            self.logger.error(
                f"FFMPEG process crashed unexpectedly for {self.camera_name}."
            )
            self.logger.error(
                "The following ffmpeg logs include the last 100 lines prior to exit."
            )
            self.logger.error("You may have invalid args defined for this camera.")
            self.logpipe.dump()
            self.start_ffmpeg_detect()
        elif now - self.capture_thread.current_frame.value > 20:
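`self.logpipe.dump()` above flushes recently captured ffmpeg output after a crash; the `LogPipe` class itself is not part of this diff. A minimal sketch of the general technique (a bounded buffer of recent stderr lines, dumped on failure) — the class name and structure here are illustrative, not Frigate's actual implementation:

```python
import collections
import logging

class RecentLogBuffer:
    """Retain the last N lines of a child process's output so they can
    be logged if the process dies (mirrors the "last 100 lines" idea)."""

    def __init__(self, logger, maxlen=100):
        self.logger = logger
        self.lines = collections.deque(maxlen=maxlen)  # oldest lines drop off

    def write(self, line):
        self.lines.append(line.rstrip())

    def dump(self):
        # Emit everything retained, oldest first, then reset the buffer.
        for line in self.lines:
            self.logger.error(line)
        self.lines.clear()
```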
@@ -398,17 +405,16 @@ def detect(
    object_detector, frame, model_shape, region, objects_to_track, object_filters
):
    tensor_input = create_tensor_input(frame, model_shape, region)
    scale = float(region[2] - region[0]) / model_shape[0]

    detections = []
    region_detections = object_detector.detect(tensor_input)
    for d in region_detections:
        box = d[2]
        size = region[2] - region[0]
        x_min = int(max(0, box[1]) * scale + region[0])
        y_min = int(max(0, box[0]) * scale + region[1])
        x_max = int(min(frame.shape[1], box[3]) * scale + region[0])
        y_max = int(min(frame.shape[0], box[2]) * scale + region[1])
        x_min = int((box[1] * size) + region[0])
        y_min = int((box[0] * size) + region[1])
        x_max = int((box[3] * size) + region[0])
        y_max = int((box[2] * size) + region[1])
        det = (
            d[0],
            d[1],
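The replacement arithmetic above maps detector output back to frame coordinates using the region's own size: `box` values are normalized to the square crop, so multiplying by `size` and offsetting by the region origin recovers absolute pixels. A worked example with made-up numbers:

```python
# region = (x_min, y_min, x_max, y_max) of the square crop, in frame pixels.
# box    = (y_min, x_min, y_max, x_max), normalized to [0, 1] within the crop.
region = (100, 200, 420, 520)   # a 320x320 crop
box = (0.25, 0.10, 0.75, 0.60)  # sample detector output

size = region[2] - region[0]              # 320
x_min = int((box[1] * size) + region[0])  # 0.10 * 320 + 100 = 132
y_min = int((box[0] * size) + region[1])  # 0.25 * 320 + 200 = 280
x_max = int((box[3] * size) + region[0])  # 0.60 * 320 + 100 = 292
y_max = int((box[2] * size) + region[1])  # 0.75 * 320 + 200 = 440
print((x_min, y_min, x_max, y_max))       # (132, 280, 292, 440)
```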
@@ -23,12 +23,11 @@ export default function App() {
  ) : (
    <div className="flex flex-row min-h-screen w-full bg-white dark:bg-gray-900 text-gray-900 dark:text-white">
      <Sidebar />
      <div className="w-full flex-auto p-2 mt-24 px-4 min-w-0">
      <div className="w-full flex-auto p-2 mt-16 px-4 min-w-0">
        <Router>
          <AsyncRoute path="/cameras/:camera/editor" getComponent={Routes.getCameraMap} />
          <AsyncRoute path="/cameras/:camera" getComponent={Routes.getCamera} />
          <AsyncRoute path="/birdseye" getComponent={Routes.getBirdseye} />
          <AsyncRoute path="/events/:eventId" getComponent={Routes.getEvent} />
          <AsyncRoute path="/events" getComponent={Routes.getEvents} />
          <AsyncRoute path="/recording/:camera/:date?/:hour?/:seconds?" getComponent={Routes.getRecording} />
          <AsyncRoute path="/debug" getComponent={Routes.getDebug} />
@@ -63,7 +63,7 @@ export default function AppBar() {
          <MenuSeparator />
          <MenuItem icon={FrigateRestartIcon} label="Restart Frigate" onSelect={handleRestart} />
        </Menu>
      ) : null},
      ) : null}
      {showDialog ? (
        <Dialog
          onDismiss={handleDismissRestartDialog}
@@ -74,7 +74,7 @@ export default function AppBar() {
            { text: 'Cancel', onClick: handleDismissRestartDialog },
          ]}
        />
      ) : null},
      ) : null}
      {showDialogWait ? (
        <Dialog
          title="Restart in progress"
@@ -18,7 +18,7 @@ const initialState = Object.freeze({

const Api = createContext(initialState);

function reducer(state, { type, payload, meta }) {
function reducer(state, { type, payload }) {
  switch (type) {
    case 'REQUEST': {
      const { url, fetchId } = payload;
@@ -36,22 +36,9 @@ function reducer(state, { type, payload, meta }) {
    }
    case 'DELETE': {
      const { eventId } = payload;

      return produce(state, (draftState) => {
        Object.keys(draftState.queries).map((url, index) => {
          // If data has no array length then just return state.
          if (!('data' in draftState.queries[url]) || !draftState.queries[url].data.length) return state;

          //Find the index to remove
          const removeIndex = draftState.queries[url].data.map((event) => event.id).indexOf(eventId);
          if (removeIndex === -1) return state;

          // We need to keep track of deleted items, This will be used to re-calculate "ReachEnd" for auto load new events. Events.jsx
          const totDeleted = state.queries[url].deleted || 0;

          // Splice the deleted index.
          draftState.queries[url].data.splice(removeIndex, 1);
          draftState.queries[url].deleted = totDeleted + 1;
        Object.keys(draftState.queries).map((url) => {
          draftState.queries[url].deletedId = eventId;
        });
      });
    }
@@ -111,9 +98,9 @@ export function useFetch(url, fetchId) {

  const data = state.queries[url].data || null;
  const status = state.queries[url].status;
  const deleted = state.queries[url].deleted || 0;
  const deletedId = state.queries[url].deletedId || 0;

  return { data, status, deleted };
  return { data, status, deletedId };
}

export function useDelete() {
@@ -37,13 +37,13 @@ export default function AppBar({ title: Title, overflowRef, onOverflowClick }) {

  return (
    <div
      className={`w-full border-b border-gray-200 dark:border-gray-700 flex items-center align-middle p-4 space-x-2 fixed left-0 right-0 z-10 bg-white dark:bg-gray-900 transform transition-all duration-200 ${
      className={`w-full border-b border-gray-200 dark:border-gray-700 flex items-center align-middle p-2 fixed left-0 right-0 z-10 bg-white dark:bg-gray-900 transform transition-all duration-200 ${
        !show ? '-translate-y-full' : 'translate-y-0'
      } ${!atZero ? 'shadow-sm' : ''}`}
      data-testid="appbar"
    >
      <div className="lg:hidden">
        <Button color="black" className="rounded-full w-12 h-12" onClick={handleShowDrawer} type="text">
        <Button color="black" className="rounded-full w-10 h-10" onClick={handleShowDrawer} type="text">
          <MenuIcon className="w-10 h-10" />
        </Button>
      </div>
@@ -54,7 +54,7 @@ export default function AppBar({ title: Title, overflowRef, onOverflowClick }) {
        <Button
          aria-label="More options"
          color="black"
          className="rounded-full w-12 h-12"
          className="rounded-full w-9 h-9"
          onClick={onOverflowClick}
          type="text"
        >
@@ -12,7 +12,8 @@ export default function CameraImage({ camera, onload, searchParams = '', stretch
  const canvasRef = useRef(null);
  const [{ width: availableWidth }] = useResizeObserver(containerRef);

  const { name, width, height } = config.cameras[camera];
  const { name } = config.cameras[camera];
  const { width, height } = config.cameras[camera].detect;
  const aspectRatio = width / height;

  const scaledHeight = useMemo(() => {
@@ -19,7 +19,7 @@ export default function Dialog({ actions = [], portalRootID = 'dialogs', title,
    <div
      data-testid="scrim"
      key="scrim"
      className="absolute inset-0 z-10 flex justify-center items-center bg-black bg-opacity-40"
      className="fixed bg-fixed inset-0 z-10 flex justify-center items-center bg-black bg-opacity-40"
    >
      <div
        role="modal"
@@ -22,7 +22,7 @@ export default function NavigationDrawer({ children, header }) {
      onClick={handleDismiss}
    >
      {header ? (
        <div className="flex-shrink-0 p-5 flex flex-row items-center justify-between border-b border-gray-200 dark:border-gray-700">
        <div className="flex-shrink-0 p-2 flex flex-row items-center justify-between border-b border-gray-200 dark:border-gray-700">
          {header}
        </div>
      ) : null}
@@ -79,7 +79,7 @@ export default function RelativeModal({
  }
  // too close to bottom
  if (top + menuHeight > windowHeight - WINDOW_PADDING + window.scrollY) {
    newTop = relativeToY - menuHeight;
    newTop = WINDOW_PADDING;
  }

  if (top <= WINDOW_PADDING + window.scrollY) {
@@ -7,7 +7,7 @@ import { render, screen } from '@testing-library/preact';
describe('CameraImage', () => {
  beforeEach(() => {
    jest.spyOn(Api, 'useConfig').mockImplementation(() => {
      return { data: { cameras: { front: { name: 'front', width: 1280, height: 720 } } } };
      return { data: { cameras: { front: { name: 'front', detect: { width: 1280, height: 720 } } } } };
    });
    jest.spyOn(Api, 'useApiHost').mockReturnValue('http://base-url.local:5000');
    jest.spyOn(Hooks, 'useResizeObserver').mockImplementation(() => [{ width: 0 }]);
web/src/icons/Close.jsx (new file, +13)
@@ -0,0 +1,13 @@
import { h } from 'preact';
import { memo } from 'preact/compat';

export function Close({ className = '' }) {
  return (
    <svg className={`fill-current ${className}`} viewBox="0 0 24 24">
      <path d="M0 0h24v24H0z" fill="none" />
      <path d="M19 6.41L17.59 5 12 10.59 6.41 5 5 6.41 10.59 12 5 17.59 6.41 19 12 13.41 17.59 19 19 17.59 13.41 12 19 6.41z" />
    </svg>
  );
}

export default memo(Close);
@@ -29,3 +29,12 @@
.jsmpeg canvas {
  position: static !important;
}

/*
  Event.js
  Maintain aspect ratio and scale down the video container
  Could not find a proper tailwind css.
*/
.outer-max-width {
  max-width: 60%;
}
@@ -15,13 +15,16 @@ export default function CameraMasks({ camera, url }) {

  const cameraConfig = config.cameras[camera];
  const {
    width,
    height,
    motion: { mask: motionMask },
    objects: { filters: objectFilters },
    zones,
  } = cameraConfig;

  const {
    width,
    height,
  } = cameraConfig.detect;

  const [{ width: scaledWidth }] = useResizeObserver(imageRef);
  const imageScale = scaledWidth / width;
@@ -1,25 +1,32 @@
import { h, Fragment } from 'preact';
import { useCallback, useState } from 'preact/hooks';
import { route } from 'preact-router';
import { useCallback, useState, useEffect } from 'preact/hooks';
import ActivityIndicator from '../components/ActivityIndicator';
import Button from '../components/Button';
import Clip from '../icons/Clip';
import Close from '../icons/Close';
import Delete from '../icons/Delete';
import Snapshot from '../icons/Snapshot';
import Dialog from '../components/Dialog';
import Heading from '../components/Heading';
import Link from '../components/Link';
import VideoPlayer from '../components/VideoPlayer';
import { FetchStatus, useApiHost, useEvent, useDelete } from '../api';
import { Table, Thead, Tbody, Th, Tr, Td } from '../components/Table';

export default function Event({ eventId }) {
export default function Event({ eventId, close, scrollRef }) {
  const apiHost = useApiHost();
  const { data, status } = useEvent(eventId);
  const [showDialog, setShowDialog] = useState(false);
  const [shouldScroll, setShouldScroll] = useState(true);
  const [deleteStatus, setDeleteStatus] = useState(FetchStatus.NONE);
  const setDeleteEvent = useDelete();

  useEffect(() => {
    // Scroll event into view when component has been mounted.
    if (shouldScroll && scrollRef && scrollRef[eventId]) {
      scrollRef[eventId].scrollIntoView();
      setShouldScroll(false);
    }
  }, [data, scrollRef, eventId, shouldScroll]);

  const handleClickDelete = () => {
    setShowDialog(true);
  };
@@ -40,7 +47,6 @@ export default function Event({ eventId }) {
    if (success) {
      setDeleteStatus(FetchStatus.LOADED);
      setShowDialog(false);
      route('/events', true);
    }
  }, [eventId, setShowDialog, setDeleteEvent]);
@@ -48,18 +54,25 @@ export default function Event({ eventId }) {
    return <ActivityIndicator />;
  }

  const startime = new Date(data.start_time * 1000);
  const endtime = new Date(data.end_time * 1000);

  return (
    <div className="space-y-4">
      <div className="flex">
        <Heading className="flex-grow">
          {data.camera} {data.label} <span className="text-sm">{startime.toLocaleString()}</span>
        </Heading>
        <Button className="self-start" color="red" onClick={handleClickDelete}>
          <Delete className="w-6" /> Delete event
        </Button>
      <div className="grid grid-cols-6 gap-4">
        <div class="col-start-1 col-end-8 md:space-x-4">
          <Button color="blue" href={`${apiHost}/api/events/${eventId}/clip.mp4?download=true`} download>
            <Clip className="w-6" /> Download Clip
          </Button>
          <Button color="blue" href={`${apiHost}/api/events/${eventId}/snapshot.jpg?download=true`} download>
            <Snapshot className="w-6" /> Download Snapshot
          </Button>
        </div>
        <div class="col-end-10 col-span-2 space-x-4">
          <Button className="self-start" color="red" onClick={handleClickDelete}>
            <Delete className="w-6" /> Delete event
          </Button>
          <Button color="gray" className="self-start" onClick={() => close()}>
            <Close className="w-6" /> Close
          </Button>
        </div>
      {showDialog ? (
        <Dialog
          onDismiss={handleDismissDeleteDialog}
@@ -78,86 +91,42 @@ export default function Event({ eventId }) {
        />
      ) : null}
    </div>

    <Table class="w-full">
      <Thead>
        <Th>Key</Th>
        <Th>Value</Th>
      </Thead>
      <Tbody>
        <Tr>
          <Td>Camera</Td>
          <Td>
            <Link href={`/cameras/${data.camera}`}>{data.camera}</Link>
          </Td>
        </Tr>
        <Tr index={1}>
          <Td>Timeframe</Td>
          <Td>
            {startime.toLocaleString()} – {endtime.toLocaleString()}
          </Td>
        </Tr>
        <Tr>
          <Td>Score</Td>
          <Td>{(data.top_score * 100).toFixed(2)}%</Td>
        </Tr>
        <Tr index={1}>
          <Td>Zones</Td>
          <Td>{data.zones.join(', ')}</Td>
        </Tr>
      </Tbody>
    </Table>

    {data.has_clip ? (
      <Fragment>
        <Heading size="lg">Clip</Heading>
        <VideoPlayer
          options={{
            sources: [
              {
                src: `${apiHost}/vod/event/${eventId}/index.m3u8`,
                type: 'application/vnd.apple.mpegurl',
              },
            ],
            poster: data.has_snapshot
              ? `${apiHost}/clips/${data.camera}-${eventId}.jpg`
              : `data:image/jpeg;base64,${data.thumbnail}`,
          }}
          seekOptions={{ forward: 10, back: 5 }}
          onReady={(player) => {}}
        />
        <div className="text-center">
          <Button
            className="mx-2"
            color="blue"
            href={`${apiHost}/api/events/${eventId}/clip.mp4?download=true`}
            download
          >
            <Clip className="w-6" /> Download Clip
          </Button>
          <Button
            className="mx-2"
            color="blue"
            href={`${apiHost}/api/events/${eventId}/snapshot.jpg?download=true`}
            download
          >
            <Snapshot className="w-6" /> Download Snapshot
          </Button>
        </div>
      </Fragment>
    ) : (
      <Fragment>
        <Heading size="sm">{data.has_snapshot ? 'Best Image' : 'Thumbnail'}</Heading>
        <img
          src={
            data.has_snapshot
              ? `${apiHost}/clips/${data.camera}-${eventId}.jpg`
              : `data:image/jpeg;base64,${data.thumbnail}`
          }
          alt={`${data.label} at ${(data.top_score * 100).toFixed(1)}% confidence`}
        />
      </Fragment>
    )}
    <div className="outer-max-width m-auto">
      <div className="w-full pt-5 relative pb-20">
        {data.has_clip ? (
          <Fragment>
            <Heading size="lg">Clip</Heading>
            <VideoPlayer
              options={{
                sources: [
                  {
                    src: `${apiHost}/vod/event/${eventId}/index.m3u8`,
                    type: 'application/vnd.apple.mpegurl',
                  },
                ],
                poster: data.has_snapshot
                  ? `${apiHost}/clips/${data.camera}-${eventId}.jpg`
                  : `data:image/jpeg;base64,${data.thumbnail}`,
              }}
              seekOptions={{ forward: 10, back: 5 }}
              onReady={() => {}}
            />
          </Fragment>
        ) : (
          <Fragment>
            <Heading size="sm">{data.has_snapshot ? 'Best Image' : 'Thumbnail'}</Heading>
            <img
              src={
                data.has_snapshot
                  ? `${apiHost}/clips/${data.camera}-${eventId}.jpg`
                  : `data:image/jpeg;base64,${data.thumbnail}`
              }
              alt={`${data.label} at ${(data.top_score * 100).toFixed(1)}% confidence`}
            />
          </Fragment>
        )}
      </div>
    </div>
  </div>
);
}
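The download buttons above use plain GET endpoints, so the same URLs work outside the UI. A minimal sketch with `requests` (host and event id are placeholders):

```python
import requests

API_HOST = "http://frigate.local:5000"  # placeholder Frigate host
EVENT_ID = "1234567890.123456-abcdef"   # placeholder event id

for filename, path in [
    ("clip.mp4", f"/api/events/{EVENT_ID}/clip.mp4?download=true"),
    ("snapshot.jpg", f"/api/events/{EVENT_ID}/snapshot.jpg?download=true"),
]:
    resp = requests.get(f"{API_HOST}{path}", timeout=30)
    resp.raise_for_status()
    with open(filename, "wb") as f:
        f.write(resp.content)  # save the downloaded media locally
```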
@@ -1,10 +1,11 @@
import { h } from 'preact';
import { h, Fragment } from 'preact';
import ActivityIndicator from '../components/ActivityIndicator';
import Heading from '../components/Heading';
import Link from '../components/Link';
import Select from '../components/Select';
import produce from 'immer';
import { route } from 'preact-router';
import Event from './Event';
import { useIntersectionObserver } from '../hooks';
import { FetchStatus, useApiHost, useConfig, useEvents } from '../api';
import { Table, Thead, Tbody, Tfoot, Th, Tr, Td } from '../components/Table';
@@ -12,9 +13,20 @@ import { useCallback, useEffect, useMemo, useReducer, useState } from 'preact/ho

const API_LIMIT = 25;

const initialState = Object.freeze({ events: [], reachedEnd: false, searchStrings: {} });
const initialState = Object.freeze({ events: [], reachedEnd: false, searchStrings: {}, deleted: 0 });
const reducer = (state = initialState, action) => {
  switch (action.type) {
    case 'DELETE_EVENT': {
      const { deletedId } = action;

      return produce(state, (draftState) => {
        const idx = draftState.events.findIndex((e) => e.id === deletedId);
        if (idx === -1) return state;

        draftState.events.splice(idx, 1);
        draftState.deleted++;
      });
    }
    case 'APPEND_EVENTS': {
      const {
        meta: { searchString },
@@ -24,6 +36,7 @@ const reducer = (state = initialState, action) => {
      return produce(state, (draftState) => {
        draftState.searchStrings[searchString] = true;
        draftState.events.push(...payload);
        draftState.deleted = 0;
      });
    }
@@ -54,11 +67,13 @@ function removeDefaultSearchKeys(searchParams) {

export default function Events({ path: pathname, limit = API_LIMIT } = {}) {
  const apiHost = useApiHost();
  const [{ events, reachedEnd, searchStrings }, dispatch] = useReducer(reducer, initialState);
  const [{ events, reachedEnd, searchStrings, deleted }, dispatch] = useReducer(reducer, initialState);
  const { searchParams: initialSearchParams } = new URL(window.location);
  const [viewEvent, setViewEvent] = useState(null);
  const [searchString, setSearchString] = useState(`${defaultSearchString(limit)}&${initialSearchParams.toString()}`);
  const { data, status, deleted } = useEvents(searchString);
  const { data, status, deletedId } = useEvents(searchString);

  const scrollToRef = {};
  useEffect(() => {
    if (data && !(searchString in searchStrings)) {
      dispatch({ type: 'APPEND_EVENTS', payload: data, meta: { searchString } });
@@ -67,7 +82,11 @@ export default function Events({ path: pathname, limit = API_LIMIT } = {}) {
    if (data && Array.isArray(data) && data.length + deleted < limit) {
      dispatch({ type: 'REACHED_END', meta: { searchString } });
    }
  }, [data, limit, searchString, searchStrings, deleted]);

    if (deletedId) {
      dispatch({ type: 'DELETE_EVENT', deletedId });
    }
  }, [data, limit, searchString, searchStrings, deleted, deletedId]);

  const [entry, setIntersectNode] = useIntersectionObserver();
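The `deleted` counter above exists so the reached-end check stays correct after rows are removed: a page that originally came back full shouldn't be treated as the last page just because some of its events were deleted client-side. The same condition in Python, as a language-neutral sketch of the logic in the JS above:

```python
API_LIMIT = 25  # mirrors the JS constant above

def reached_end(page_len, deleted_since_fetch, limit=API_LIMIT):
    # Deletions shrink page_len without meaning the server ran out of
    # events, so add them back before comparing against the page limit.
    return page_len + deleted_since_fetch < limit

assert reached_end(10, 0) is True   # short page: no more events
assert reached_end(23, 2) is False  # was a full page before deletions
assert reached_end(22, 2) is True   # genuinely short even counting deletions
```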
@@ -100,7 +119,16 @@ export default function Events({ path: pathname, limit = API_LIMIT } = {}) {
    [limit, pathname, setSearchString]
  );

  const viewEventHandler = (id) => {
    //Toggle event view
    if (viewEvent === id) return setViewEvent(null);

    //Set event id to be rendered.
    setViewEvent(id);
  };

  const searchParams = useMemo(() => new URLSearchParams(searchString), [searchString]);

  return (
    <div className="space-y-4 w-full">
      <Heading>Events</Heading>
@@ -123,70 +151,83 @@ export default function Events({ path: pathname, limit = API_LIMIT } = {}) {
        </Thead>
        <Tbody>
          {events.map(
            (
              { camera, id, label, start_time: startTime, end_time: endTime, thumbnail, top_score: score, zones },
              i
            ) => {
            ({ camera, id, label, start_time: startTime, end_time: endTime, top_score: score, zones }, i) => {
              const start = new Date(parseInt(startTime * 1000, 10));
              const end = new Date(parseInt(endTime * 1000, 10));
              const ref = i === events.length - 1 ? lastCellRef : undefined;
              return (
                <Tr data-testid={`event-${id}`} key={id}>
                  <Td className="w-40">
                    <a href={`/events/${id}`} ref={ref} data-start-time={startTime} data-reached-end={reachedEnd}>
                      <img
                        width="150"
                        height="150"
                        style="min-height: 48px; min-width: 48px;"
                        src={`${apiHost}/api/events/${id}/thumbnail.jpg`}
                <Fragment key={id}>
                  <Tr data-testid={`event-${id}`} className={`${viewEvent === id ? 'border-none' : ''}`}>
                    <Td className="w-40">
                      <a
                        onClick={() => viewEventHandler(id)}
                        ref={ref}
                        data-start-time={startTime}
                        data-reached-end={reachedEnd}
                      >
                        <img
                          ref={(el) => (scrollToRef[id] = el)}
                          width="150"
                          height="150"
                          className="cursor-pointer"
                          style="min-height: 48px; min-width: 48px;"
                          src={`${apiHost}/api/events/${id}/thumbnail.jpg`}
                        />
                      </a>
                    </Td>
                    <Td>
                      <Filterable
                        onFilter={handleFilter}
                        pathname={pathname}
                        searchParams={searchParams}
                        paramName="camera"
                        name={camera}
                      />
                    </a>
                  </Td>
                  <Td>
                    <Filterable
                      onFilter={handleFilter}
                      pathname={pathname}
                      searchParams={searchParams}
                      paramName="camera"
                      name={camera}
                    />
                  </Td>
                  <Td>
                    <Filterable
                      onFilter={handleFilter}
                      pathname={pathname}
                      searchParams={searchParams}
                      paramName="label"
                      name={label}
                    />
                  </Td>
                  <Td>{(score * 100).toFixed(2)}%</Td>
                  <Td>
                    <ul>
                      {zones.map((zone) => (
                        <li>
                          <Filterable
                            onFilter={handleFilter}
                            pathname={pathname}
                            searchParams={searchString}
                            paramName="zone"
                            name={zone}
                          />
                        </li>
                      ))}
                    </ul>
                  </Td>
                  <Td>{start.toLocaleDateString()}</Td>
                  <Td>{start.toLocaleTimeString()}</Td>
                  <Td>{end.toLocaleTimeString()}</Td>
                </Tr>
                    </Td>
                    <Td>
                      <Filterable
                        onFilter={handleFilter}
                        pathname={pathname}
                        searchParams={searchParams}
                        paramName="label"
                        name={label}
                      />
                    </Td>
                    <Td>{(score * 100).toFixed(2)}%</Td>
                    <Td>
                      <ul>
                        {zones.map((zone) => (
                          <li>
                            <Filterable
                              onFilter={handleFilter}
                              pathname={pathname}
                              searchParams={searchString}
                              paramName="zone"
                              name={zone}
                            />
                          </li>
                        ))}
                      </ul>
                    </Td>
                    <Td>{start.toLocaleDateString()}</Td>
                    <Td>{start.toLocaleTimeString()}</Td>
                    <Td>{end.toLocaleTimeString()}</Td>
                  </Tr>
                  {viewEvent === id ? (
                    <Tr className="border-b-1">
                      <Td colSpan="8">
                        <Event eventId={id} close={() => setViewEvent(null)} scrollRef={scrollToRef} />
                      </Td>
                    </Tr>
                  ) : null}
                </Fragment>
              );
            }
          )}
        </Tbody>
        <Tfoot>
          <Tr>
            <Td className="text-center p-4" colspan="8">
            <Td className="text-center p-4" colSpan="8">
              {status === FetchStatus.LOADING ? <ActivityIndicator /> : reachedEnd ? 'No more events' : null}
            </Td>
          </Tr>