Compare commits

...

78 Commits

Author SHA1 Message Date
dependabot[bot]
8d4b1bc675 Update opencv-python-headless requirement in /docker/main
---
updated-dependencies:
- dependency-name: opencv-python-headless
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-01-17 12:00:47 +00:00
Nicolas Mowen
91ab1071d2 Update docs to make note of go2rtc port requirement (#16013) 2025-01-16 16:14:40 -07:00
Nicolas Mowen
409e911752 Update integration docs (#15967) 2025-01-13 08:50:44 -06:00
tpjanssen
9983bd8d92 Fix API latest image quality and API MIME types (#15964)
* Fix API latest image quality

* Fix mime types

* Code formatting + media_type fix
2025-01-13 07:46:46 -06:00
Nicolas Mowen
32c71c4108 Clean up handling of ffmpeg specific params (#15956) 2025-01-12 17:47:24 -06:00
Josh Hawkins
ef6952e3ea Fix display of save button in tracked object details pane (#15946) 2025-01-11 15:23:52 -06:00
Nicolas Mowen
173b7aa308 Handle case where user has multiple manual events on same camera (#15943) 2025-01-11 07:47:45 -07:00
Blake Blackshear
c4727f19e1 Simplify plus submit (#15941)
* remove unused annotate file

* improve plus error messages

* formatting
2025-01-11 07:04:11 -07:00
Josh Hawkins
b8a74793ca Clarify motion recording (#15917)
* Clarify motion recording

* move to troubleshooting
2025-01-09 09:55:08 -07:00
Josh Hawkins
c1dede9369 Clarify reolink doorbell two way talk requirements (#15915)
* Clarify reolink doorbell two way talk requirements

* relative paths

* move to live section

* fix link
2025-01-09 09:31:16 -07:00
Nicolas Mowen
0c4ea504d8 Update proxmox docs to align with proxmox recommendation of running in VM. (#15904) 2025-01-08 17:19:04 -06:00
Nicolas Mowen
b265b6b190 Catch case where user has multiple of the same kind of GPU (#15903) 2025-01-08 17:17:57 -06:00
Nicolas Mowen
d57a61b50f Simplify model config (#15881)
* Add migration to migrate to model_path

* Simplify model config

* Cleanup docs

* Set config version

* Formatting

* Fix tests
2025-01-07 20:59:37 -07:00
Nicolas Mowen
4fc9106c17 Update for correct audio requirements (#15882) 2025-01-07 17:02:32 -06:00
Nicolas Mowen
38e098ca31 Remove extra data except from keypackets when using qsv (#15865) 2025-01-06 17:38:46 -06:00
Nicolas Mowen
e7ad38d827 Update model docs (#15779) 2025-01-02 10:04:16 -06:00
Josh Hawkins
a1ce9aacf2 Tracked object details pane bugfix (#15736)
* restore save button in tracked object details pane

* conditionally show save button
2024-12-30 08:23:25 -06:00
Nicolas Mowen
322b847356 Fix event cleanup (#15724) 2024-12-29 14:47:40 -06:00
Josh Hawkins
98338e4c7f Ensure object lifecycle ratio is re-normalized to camera aspect (#15717) 2024-12-28 13:37:39 -07:00
Josh Hawkins
171a89f37b Language consistency - use Explore instead of Search (#15709) 2024-12-27 17:38:43 -07:00
Josh Hawkins
8114b541a8 Sort camera group edit screen by ui config values (#15705) 2024-12-27 14:30:27 -06:00
Josh Hawkins
c48396c5c6 Fix crash when streams are undefined in go2rtc config password cleaning (#15695) 2024-12-27 08:36:21 -06:00
leccelecce
00371546a3 GenAI: add ability to save JPGs sent to provider (#15643)
* GenAI: add ability to save JPGs sent to provider

* Remove mention from GenAI docs

* Change config name to debug_save_thumbnails

* Change  folder structure to clips/genai-requests/{event_id}/{1.jpg}
2024-12-23 07:05:34 -07:00
Nicolas Mowen
87e7b62c85 Remove duplicated rockchip build (#15641) 2024-12-22 13:31:14 -06:00
Nicolas Mowen
15ffe5c254 Fix trt (#15640) 2024-12-22 11:56:04 -07:00
Nicolas Mowen
a767dad3a1 Simplify TensorRT image (#15638) 2024-12-22 12:13:29 -06:00
Josh Hawkins
9387246f83 Add tooltips to ptz controls (#15633) 2024-12-21 17:57:22 -06:00
Nicolas Mowen
bed20de302 Update docs deps (#15617) 2024-12-20 10:37:02 -06:00
Nicolas Mowen
70fc5393b1 Make hailo wheels support any minor version (#15616) 2024-12-20 10:36:32 -06:00
dependabot[bot]
9b80dbe014 Bump actions/setup-python from 5.1.0 to 5.3.0 (#14584)
Bumps [actions/setup-python](https://github.com/actions/setup-python) from 5.1.0 to 5.3.0.
- [Release notes](https://github.com/actions/setup-python/releases)
- [Commits](https://github.com/actions/setup-python/compare/v5.1.0...v5.3.0)

---
updated-dependencies:
- dependency-name: actions/setup-python
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-20 09:16:21 -07:00
Josh Hawkins
78a013d63a Add "frame" to shm frame names to avoid camera name issues (#15615) 2024-12-20 08:46:40 -06:00
Gabriel de Biasi
ddfe8f3921 Fix #7944: Adds tls_insecure to the onvif configuration (#15603)
* Adds tls_insecure to the onvif configuration

* reformat using ruff
2024-12-19 12:54:33 -07:00
Nicolas Mowen
4af752028f Bug Fixes (#15598)
* Catch onvif command error

* fix review item pre and post capture

* Include severity in query
2024-12-19 09:46:14 -06:00
Nicolas Mowen
b149828c9f Catch OS error (#15590) 2024-12-18 17:45:08 -06:00
Josh Hawkins
3dc26e78ef Genai descriptions are not generated until tracked objects end (#15561) 2024-12-17 17:33:04 -06:00
Giorgio Ughini
d9ef8fa206 Fix always the same image is sent to GenAI (#15550)
* Fix always the same image is sent to GenAI

* Fix typo for bug where identical images are sent to GenAI

* Correct formatting
2024-12-17 07:44:00 -06:00
Josh Hawkins
292499aebc Improve review message again (#15538) 2024-12-16 09:18:34 -07:00
Josh Hawkins
717493e668 Improve handling of error conditions with ollama and snapshot regeneration (#15527) 2024-12-15 20:51:23 -06:00
Josh Hawkins
d49f958d4d Don't crop by region for genai snapshot for manual events (#15525) 2024-12-15 17:03:19 -06:00
Nicolas Mowen
33ee32865f Ensure that go2rtc streams are cleaned (#15524)
* Ensure that go2rtc streams are cleaned

* Formatting

* Handle go2rtc config correctly

* Set type
2024-12-15 16:56:24 -06:00
Josh Hawkins
17f8939f97 Add FAQ to explain why streams might work in VLC but not in Frigate (#15513)
* Add faq to explain why streams might work in VLC but not in Frigate

* fix go2rtc version number

* wording

* mention udp input args and preset
2024-12-14 13:58:39 -06:00
FL42
1b7fe9523d fix: use requests.Session() for DeepStack API (#15505) 2024-12-14 07:54:13 -07:00
Josh Hawkins
0763f56047 Update iframe interval recommendation (#15501)
* Update iframe interval recommendation

* clarify

* tweaks

* wording
2024-12-13 12:52:56 -07:00
Josh Hawkins
1ea282fba8 Improve the message for missing objects in review items (#15500) 2024-12-13 12:02:41 -07:00
Blake Blackshear
869fa2631e apply zizmor recommendations (#15490) 2024-12-13 07:34:09 -06:00
Nicolas Mowen
f336a91fee Cleanup handling of first object message (#15480) 2024-12-12 21:22:47 -06:00
Nicolas Mowen
d302b6e198 Cap storage bandwidth (#15473) 2024-12-12 14:46:00 -06:00
Nicolas Mowen
ed2e1f3f72 Remove debug cleanup change (#15468) 2024-12-12 07:46:06 -07:00
Nicolas Mowen
b4d82084a9 Fixes (#15465)
* Fix single event return

* Allow customizing if search is preserved for overlay state

* Remove timeout

* Cleanup

* Cleanup naming
2024-12-12 08:22:30 -06:00
Josh Hawkins
53b96dfb89 Improve semantic search docs (#15453) 2024-12-11 20:19:08 -06:00
Nicolas Mowen
0e3fb6cbdd Standardize handling of config files (#15451)
* Standardize handling of config files

* Formatting

* Remove unused
2024-12-11 18:46:42 -06:00
Blake Blackshear
6b12a45a95 return 401 for login failures (#15432)
* return 401 for login failures

* only setup the rate limiter when configured
2024-12-10 06:42:55 -07:00
Nicolas Mowen
0b9c4c18dd Refactor event cleanup to consider review severity (#15415)
* Keep track of objects max review severity

* Refactor cleanup to split snapshots and clips

* Cleanup events based on review severity

* Cleanup review imports

* Don't catch detections
2024-12-09 08:25:45 -07:00
Nicolas Mowen
d0cc8cb64b API response cleanup (#15389)
* API response cleanup

* Remove extra field definition
2024-12-06 20:07:43 -06:00
Nicolas Mowen
bb86e71e65 fix auth remote addr access (#15378) 2024-12-06 10:25:43 -06:00
Josh Hawkins
8aa6297308 Ensure label does not overlap with box or go out of frame (#15376) 2024-12-06 08:32:16 -07:00
Nicolas Mowen
d3b631a952 Api improvements (#15327)
* Organize api files

* Add more API definitions for events

* Add export select by ID

* Typing fixes

* Update openapi spec

* Change type

* Fix test

* Fix message

* Fix tests
2024-12-06 08:04:02 -06:00
Nicolas Mowen
47d495fc01 Make note of go2rtc encoded URLs (#15348)
* Make note of go2rtc encoded URLs

* clarify
2024-12-04 16:54:57 -06:00
Nicolas Mowen
32322b23b2 Update nvidia docs to reflect preset (#15347) 2024-12-04 15:43:10 -07:00
Josh Hawkins
c0ba98e26f Explore sorting (#15342)
* backend

* add type and params

* radio group in ui

* ensure search_type is cleared on reset
2024-12-04 08:54:10 -07:00
Rui Alves
a5a7cd3107 Added more unit tests for the review controller (#15162) 2024-12-04 06:52:08 -06:00
Josh Hawkins
a729408599 preserve search query in overlay state hook (#15334) 2024-12-04 06:14:53 -06:00
Josh Hawkins
4dddc53735 move label placement when overlapping small boxes (#15310) 2024-12-02 13:07:12 -06:00
Josh Hawkins
5f42caad03 Explore bulk actions (#15307)
* use id instead of index for object details and scrolling

* long press package and hook

* fix long press in review

* search action group

* multi select in explore

* add bulk deletion to backend api

* clean up

* mimic behavior of review

* don't open dialog on left click when mutli selecting

* context menu on container ref

* revert long press code

* clean up
2024-12-02 11:12:55 -07:00
Jan Čermák
5475672a9d Fix extraction of Hailo userspace libs (#15187)
The archive already has everything contained in a rootfs folder, extract
it as-is to the root folder. This also reverts changes from
33957e5360 which addressed the same issue
in a less optimal way.
2024-12-02 08:35:51 -06:00
James Livulpi
833cdcb6d2 fix audio event create (#15299) 2024-12-01 20:07:44 -06:00
Nicolas Mowen
c95bc9fe44 Handle case where camera name ends in number (#15296) 2024-12-01 12:33:10 -07:00
Josh Hawkins
a1fa9decad Fix event cleanup debug logging crash (#15293) 2024-12-01 12:37:45 -06:00
Josh Hawkins
4a5fe4138e Explore audio event tweaks (#15291) 2024-12-01 12:08:03 -06:00
Nicolas Mowen
002fdeae67 SHM tweaks (#15274)
* Use env var to control max number of frames

* Handle type

* Fix frame_name not being sent

* Formatting
2024-12-01 10:39:35 -06:00
tpjanssen
5802a66469 Fix audio events in explore section (#15286)
* Fix audio events in explore section

Make sure that audio events are listed in the explore section

* Update audio.py

* Hide other submit options

Only allow submits for objects only
2024-12-01 07:47:37 -07:00
Alessandro Genova
71e8f75a01 Let the docker container spend more time to clean up and shut down (docs) (#15275) 2024-11-30 18:27:21 -06:00
Nicolas Mowen
ee816b2251 Fix camera access and improve typing (#15272)
* Fix camera access and improve typing:

* Formatting
2024-11-30 18:22:36 -06:00
Nicolas Mowen
f094c59cd0 Fix formatting (#15271) 2024-11-30 18:21:50 -06:00
Josh Hawkins
d25ffdb292 Fix crash when consecutive underscores are used in camera name (#15257) 2024-11-29 19:44:42 -07:00
Nicolas Mowen
2207a91f7b Fix ruff (#15223) 2024-11-27 12:57:58 -07:00
Nicolas Mowen
33957e5360 Set hailo build library path (#15167) 2024-11-24 19:07:41 -07:00
Nicolas Mowen
ff92b13f35 Fix sending events (#15100) 2024-11-20 09:37:33 -07:00
116 changed files with 8143 additions and 3371 deletions

View File

@@ -7,7 +7,7 @@ on:
- dev
- master
paths-ignore:
- 'docs/**'
- "docs/**"
# only run the latest commit to avoid cache overwrites
concurrency:
@@ -24,6 +24,8 @@ jobs:
steps:
- name: Check out code
uses: actions/checkout@v4
with:
persist-credentials: false
- name: Set up QEMU and Buildx
id: setup
uses: ./.github/actions/setup
@@ -45,6 +47,8 @@ jobs:
steps:
- name: Check out code
uses: actions/checkout@v4
with:
persist-credentials: false
- name: Set up QEMU and Buildx
id: setup
uses: ./.github/actions/setup
@@ -71,21 +75,14 @@ jobs:
rpi.tags=${{ steps.setup.outputs.image-name }}-rpi
*.cache-from=type=registry,ref=${{ steps.setup.outputs.cache-name }}-arm64
*.cache-to=type=registry,ref=${{ steps.setup.outputs.cache-name }}-arm64,mode=max
- name: Build and push Rockchip build
uses: docker/bake-action@v3
with:
push: true
targets: rk
files: docker/rockchip/rk.hcl
set: |
rk.tags=${{ steps.setup.outputs.image-name }}-rk
*.cache-from=type=gha
jetson_jp4_build:
runs-on: ubuntu-latest
name: Jetson Jetpack 4
steps:
- name: Check out code
uses: actions/checkout@v4
with:
persist-credentials: false
- name: Set up QEMU and Buildx
id: setup
uses: ./.github/actions/setup
@@ -112,6 +109,8 @@ jobs:
steps:
- name: Check out code
uses: actions/checkout@v4
with:
persist-credentials: false
- name: Set up QEMU and Buildx
id: setup
uses: ./.github/actions/setup
@@ -140,6 +139,8 @@ jobs:
steps:
- name: Check out code
uses: actions/checkout@v4
with:
persist-credentials: false
- name: Set up QEMU and Buildx
id: setup
uses: ./.github/actions/setup
@@ -165,6 +166,8 @@ jobs:
steps:
- name: Check out code
uses: actions/checkout@v4
with:
persist-credentials: false
- name: Set up QEMU and Buildx
id: setup
uses: ./.github/actions/setup
@@ -188,6 +191,8 @@ jobs:
steps:
- name: Check out code
uses: actions/checkout@v4
with:
persist-credentials: false
- name: Set up QEMU and Buildx
id: setup
uses: ./.github/actions/setup

View File

@@ -1,24 +0,0 @@
name: dependabot-auto-merge
on: pull_request
permissions:
contents: write
jobs:
dependabot-auto-merge:
runs-on: ubuntu-latest
if: github.actor == 'dependabot[bot]'
steps:
- name: Get Dependabot metadata
id: metadata
uses: dependabot/fetch-metadata@v2
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
- name: Enable auto-merge for Dependabot PRs
if: steps.metadata.outputs.dependency-type == 'direct:development' && (steps.metadata.outputs.update-type == 'version-update:semver-minor' || steps.metadata.outputs.update-type == 'version-update:semver-patch')
run: |
gh pr review --approve "$PR_URL"
gh pr merge --auto --squash "$PR_URL"
env:
PR_URL: ${{ github.event.pull_request.html_url }}
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

View File

@@ -3,7 +3,7 @@ name: On pull request
on:
pull_request:
paths-ignore:
- 'docs/**'
- "docs/**"
env:
DEFAULT_PYTHON: 3.9
@@ -19,6 +19,8 @@ jobs:
DOCKER_BUILDKIT: "1"
steps:
- uses: actions/checkout@v4
with:
persist-credentials: false
- uses: actions/setup-node@master
with:
node-version: 16.x
@@ -38,6 +40,8 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
with:
persist-credentials: false
- uses: actions/setup-node@master
with:
node-version: 16.x
@@ -52,6 +56,8 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
with:
persist-credentials: false
- uses: actions/setup-node@master
with:
node-version: 20.x
@@ -67,8 +73,10 @@ jobs:
steps:
- name: Check out the repository
uses: actions/checkout@v4
with:
persist-credentials: false
- name: Set up Python ${{ env.DEFAULT_PYTHON }}
uses: actions/setup-python@v5.1.0
uses: actions/setup-python@v5.3.0
with:
python-version: ${{ env.DEFAULT_PYTHON }}
- name: Install requirements
@@ -88,6 +96,8 @@ jobs:
steps:
- name: Check out code
uses: actions/checkout@v4
with:
persist-credentials: false
- uses: actions/setup-node@master
with:
node-version: 16.x

View File

@@ -11,6 +11,8 @@ jobs:
steps:
- uses: actions/checkout@v4
with:
persist-credentials: false
- id: lowercaseRepo
uses: ASzc/change-string-case-action@v6
with:
@@ -22,10 +24,13 @@ jobs:
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Create tag variables
env:
TAG: ${{ github.ref_name }}
LOWERCASE_REPO: ${{ steps.lowercaseRepo.outputs.lowercase }}
run: |
BUILD_TYPE=$([[ "${{ github.ref_name }}" =~ ^v[0-9]+\.[0-9]+\.[0-9]+$ ]] && echo "stable" || echo "beta")
BUILD_TYPE=$([[ "${TAG}" =~ ^v[0-9]+\.[0-9]+\.[0-9]+$ ]] && echo "stable" || echo "beta")
echo "BUILD_TYPE=${BUILD_TYPE}" >> $GITHUB_ENV
echo "BASE=ghcr.io/${{ steps.lowercaseRepo.outputs.lowercase }}" >> $GITHUB_ENV
echo "BASE=ghcr.io/${LOWERCASE_REPO}" >> $GITHUB_ENV
echo "BUILD_TAG=${GITHUB_SHA::7}" >> $GITHUB_ENV
echo "CLEAN_VERSION=$(echo ${GITHUB_REF##*/} | tr '[:upper:]' '[:lower:]' | sed 's/^[v]//')" >> $GITHUB_ENV
- name: Tag and push the main image

View File

@@ -23,7 +23,9 @@ jobs:
exempt-pr-labels: "pinned,security,dependencies"
operations-per-run: 120
- name: Print outputs
run: echo ${{ join(steps.stale.outputs.*, ',') }}
env:
STALE_OUTPUT: ${{ join(steps.stale.outputs.*, ',') }}
run: echo "$STALE_OUTPUT"
# clean_ghcr:
# name: Delete outdated dev container images
@@ -38,4 +40,3 @@ jobs:
# account-type: personal
# token: ${{ secrets.GITHUB_TOKEN }}
# token-type: github-token

View File

@@ -61,7 +61,7 @@ def start(id, num_detections, detection_queue, event):
object_detector.cleanup()
print(f"{id} - Processed for {duration:.2f} seconds.")
print(f"{id} - FPS: {object_detector.fps.eps():.2f}")
print(f"{id} - Average frame processing time: {mean(frame_times)*1000:.2f}ms")
print(f"{id} - Average frame processing time: {mean(frame_times) * 1000:.2f}ms")
######

View File

@@ -10,10 +10,8 @@ elif [[ "${TARGETARCH}" == "arm64" ]]; then
arch="aarch64"
fi
mkdir -p /rootfs
wget -qO- "https://github.com/frigate-nvr/hailort/releases/download/v${hailo_version}/hailort-${TARGETARCH}.tar.gz" |
tar -C /rootfs/ -xzf -
tar -C / -xzf -
mkdir -p /hailo-wheels

View File

@@ -1,12 +1,12 @@
appdirs==1.4.4
argcomplete==2.0.0
contextlib2==0.6.0.post1
distlib==0.3.6
filelock==3.8.0
future==0.18.2
importlib-metadata==5.1.0
importlib-resources==5.1.2
netaddr==0.8.0
netifaces==0.10.9
verboselogs==1.7
virtualenv==20.17.0
appdirs==1.4.*
argcomplete==2.0.*
contextlib2==0.6.*
distlib==0.3.*
filelock==3.8.*
future==0.18.*
importlib-metadata==5.1.*
importlib-resources==5.1.*
netaddr==0.8.*
netifaces==0.10.*
verboselogs==1.7.*
virtualenv==20.17.*

View File

@@ -13,7 +13,7 @@ markupsafe == 2.1.*
mypy == 1.6.1
numpy == 1.26.*
onvif_zeep == 0.2.12
opencv-python-headless == 4.9.0.*
opencv-python-headless == 4.11.0.*
paho-mqtt == 2.1.*
pandas == 2.2.*
peewee == 3.17.*

View File

@@ -12,26 +12,11 @@ ARG TARGETARCH
COPY docker/tensorrt/requirements-amd64.txt /requirements-tensorrt.txt
RUN mkdir -p /trt-wheels && pip3 wheel --wheel-dir=/trt-wheels -r /requirements-tensorrt.txt
# Build CuDNN
FROM wget AS cudnn-deps
ARG COMPUTE_LEVEL
RUN apt-get update \
&& apt-get install -y git build-essential
RUN wget https://developer.download.nvidia.com/compute/cuda/repos/debian11/x86_64/cuda-keyring_1.1-1_all.deb \
&& dpkg -i cuda-keyring_1.1-1_all.deb \
&& apt-get update \
&& apt-get -y install cuda-toolkit \
&& rm -rf /var/lib/apt/lists/*
FROM tensorrt-base AS frigate-tensorrt
ENV TRT_VER=8.5.3
RUN --mount=type=bind,from=trt-wheels,source=/trt-wheels,target=/deps/trt-wheels \
pip3 install -U /deps/trt-wheels/*.whl && \
ldconfig
COPY --from=cudnn-deps /usr/local/cuda-12.6 /usr/local/cuda
ENV LD_LIBRARY_PATH=/usr/local/lib/python3.9/dist-packages/tensorrt:/usr/local/cuda/lib64:/usr/local/lib/python3.9/dist-packages/nvidia/cufft/lib
WORKDIR /opt/frigate/
@@ -42,7 +27,7 @@ FROM devcontainer AS devcontainer-trt
COPY --from=trt-deps /usr/local/lib/libyolo_layer.so /usr/local/lib/libyolo_layer.so
COPY --from=trt-deps /usr/local/src/tensorrt_demos /usr/local/src/tensorrt_demos
COPY --from=cudnn-deps /usr/local/cuda-12.6 /usr/local/cuda
COPY --from=trt-deps /usr/local/cuda-12.1 /usr/local/cuda
COPY docker/tensorrt/detector/rootfs/ /
COPY --from=trt-deps /usr/local/lib/libyolo_layer.so /usr/local/lib/libyolo_layer.so
RUN --mount=type=bind,from=trt-wheels,source=/trt-wheels,target=/deps/trt-wheels \

View File

@@ -24,6 +24,7 @@ ENV S6_CMD_WAIT_FOR_SERVICES_MAXTIME=0
COPY --from=trt-deps /usr/local/lib/libyolo_layer.so /usr/local/lib/libyolo_layer.so
COPY --from=trt-deps /usr/local/src/tensorrt_demos /usr/local/src/tensorrt_demos
COPY --from=trt-deps /usr/local/cuda-12.* /usr/local/cuda
COPY docker/tensorrt/detector/rootfs/ /
ENV YOLO_MODELS=""

View File

@@ -174,7 +174,7 @@ NOTE: The folder that is set for the config needs to be the folder that contains
### Custom go2rtc version
Frigate currently includes go2rtc v1.9.4, there may be certain cases where you want to run a different version of go2rtc.
Frigate currently includes go2rtc v1.9.2, there may be certain cases where you want to run a different version of go2rtc.
To do this:

View File

@@ -41,6 +41,7 @@ cameras:
...
onvif:
# Required: host of the camera being connected to.
# NOTE: HTTP is assumed by default; HTTPS is supported if you specify the scheme, ex: "https://0.0.0.0".
host: 0.0.0.0
# Optional: ONVIF port for device (default: shown below).
port: 8000
@@ -49,6 +50,8 @@ cameras:
user: admin
# Optional: password for login.
password: admin
# Optional: Skip TLS verification from the ONVIF server (default: shown below)
tls_insecure: False
# Optional: PTZ camera object autotracking. Keeps a moving object in
# the center of the frame by automatically moving the PTZ camera.
autotracking:

View File

@@ -156,7 +156,9 @@ cameras:
#### Reolink Doorbell
The reolink doorbell supports 2-way audio via go2rtc and other applications. It is important that the http-flv stream is still used for stability; a secondary rtsp stream can be added that will be used for the two way audio only.
The reolink doorbell supports two way audio via go2rtc and other applications. It is important that the http-flv stream is still used for stability; a secondary rtsp stream can be added that will be used for the two way audio only.
Ensure HTTP is enabled in the camera's advanced network settings. To use two way talk with Frigate, see the [Live view documentation](/configuration/live#two-way-talk).
```yaml
go2rtc:

View File

@@ -5,6 +5,8 @@ title: Generative AI
Generative AI can be used to automatically generate descriptive text based on the thumbnails of your tracked objects. This helps with [Semantic Search](/configuration/semantic_search) in Frigate to provide more context about your tracked objects. Descriptions are accessed via the _Explore_ view in the Frigate UI by clicking on a tracked object's thumbnail.
Requests for a description are sent off automatically to your AI provider at the end of the tracked object's lifecycle. Descriptions can also be regenerated manually via the Frigate UI.
:::info
Semantic Search must be enabled to use Generative AI.

View File

@@ -231,28 +231,11 @@ docker run -d \
### Setup Decoder
The decoder you need to pass in the `hwaccel_args` will depend on the input video.
A list of supported codecs is shown below (you can use `ffmpeg -decoders | grep cuvid` in the container to get the ones your card supports):
```
V..... h263_cuvid Nvidia CUVID H263 decoder (codec h263)
V..... h264_cuvid Nvidia CUVID H264 decoder (codec h264)
V..... hevc_cuvid Nvidia CUVID HEVC decoder (codec hevc)
V..... mjpeg_cuvid Nvidia CUVID MJPEG decoder (codec mjpeg)
V..... mpeg1_cuvid Nvidia CUVID MPEG1VIDEO decoder (codec mpeg1video)
V..... mpeg2_cuvid Nvidia CUVID MPEG2VIDEO decoder (codec mpeg2video)
V..... mpeg4_cuvid Nvidia CUVID MPEG4 decoder (codec mpeg4)
V..... vc1_cuvid Nvidia CUVID VC1 decoder (codec vc1)
V..... vp8_cuvid Nvidia CUVID VP8 decoder (codec vp8)
V..... vp9_cuvid Nvidia CUVID VP9 decoder (codec vp9)
```
For example, for H264 video, you'll select `preset-nvidia-h264`.
Using `preset-nvidia`, ffmpeg will automatically select the necessary profile for the incoming video, and will log an error if the profile is not supported by your GPU.
```yaml
ffmpeg:
hwaccel_args: preset-nvidia-h264
hwaccel_args: preset-nvidia
```
If everything is working correctly, you should see a significant improvement in performance.

View File

@@ -203,14 +203,13 @@ detectors:
ov:
type: openvino
device: AUTO
model:
path: /openvino-model/ssdlite_mobilenet_v2.xml
model:
width: 300
height: 300
input_tensor: nhwc
input_pixel_format: bgr
path: /openvino-model/ssdlite_mobilenet_v2.xml
labelmap_path: /openvino-model/coco_91cl_bkgr.txt
record:

View File

@@ -23,13 +23,13 @@ If you are using go2rtc, you should adjust the following settings in your camera
- Video codec: **H.264** - provides the most compatible video codec with all Live view technologies and browsers. Avoid any kind of "smart codec" or "+" codec like _H.264+_ or _H.265+_, as these non-standard codecs remove keyframes (see below).
- Audio codec: **AAC** - provides the most compatible audio codec with all Live view technologies and browsers that support audio.
- I-frame interval (sometimes called the keyframe interval, the interframe space, or the GOP length): match your camera's frame rate, or choose "1x" (for interframe space on Reolink cameras). For example, if your stream outputs 20fps, your i-frame interval should be 20 (or 1x on Reolink). Values higher than the frame rate will cause the stream to take longer to begin playback. See [this page](https://gardinal.net/understanding-the-keyframe-interval/) for more on keyframes.
- I-frame interval (sometimes called the keyframe interval, the interframe space, or the GOP length): match your camera's frame rate, or choose "1x" (for interframe space on Reolink cameras). For example, if your stream outputs 20fps, your i-frame interval should be 20 (or 1x on Reolink). Values higher than the frame rate will cause the stream to take longer to begin playback. See [this page](https://gardinal.net/understanding-the-keyframe-interval/) for more on keyframes. For many users this may not be an issue, but it should be noted that a 1x i-frame interval will cause more storage utilization if you are using the stream for the `record` role as well.
The default video and audio codec on your camera may not always be compatible with your browser, which is why setting them to H.264 and AAC is recommended. See the [go2rtc docs](https://github.com/AlexxIT/go2rtc?tab=readme-ov-file#codecs-madness) for codec support information.
### Audio Support
MSE requires AAC audio; WebRTC requires PCMU/PCMA or opus audio. If you want to support both MSE and WebRTC then your restream config needs to make sure both are enabled.
MSE requires PCMA/PCMU or AAC audio; WebRTC requires PCMA/PCMU or opus audio. If you want to support both MSE and WebRTC then your restream config needs to make sure both are enabled.
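For example, a minimal sketch that satisfies both: the camera supplies AAC (for MSE, per the recommended camera settings above) and go2rtc's ffmpeg source adds an Opus track (for WebRTC). The stream name and URL are hypothetical:

```yaml
go2rtc:
  streams:
    my_camera: # hypothetical stream name
      - rtsp://192.168.1.100:554/stream # hypothetical URL; camera set to AAC audio
      - "ffmpeg:my_camera#audio=opus" # add an Opus track alongside the camera's AAC track
```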
```yaml
go2rtc:
@@ -138,3 +138,13 @@ services:
:::
See [go2rtc WebRTC docs](https://github.com/AlexxIT/go2rtc/tree/v1.8.3#module-webrtc) for more information about this.
### Two way talk
For devices that support two way talk, Frigate can be configured to use the feature from the camera's Live view in the Web UI. You should:
- Set up go2rtc with [WebRTC](#webrtc-extra-configuration).
- Ensure you access Frigate via https (may require [opening port 8971](/frigate/installation/#ports)).
- For the Home Assistant Frigate card, [follow the docs](https://github.com/dermotduffy/frigate-hass-card?tab=readme-ov-file#using-2-way-audio) for the correct source.
To use the Reolink Doorbell with two way talk, you should use the [recommended Reolink configuration](/configuration/camera_specific#reolink-doorbell)
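As a reference for the first bullet, a minimal sketch of go2rtc's WebRTC listener configuration (the LAN IP is hypothetical; see the linked WebRTC section for full details):

```yaml
go2rtc:
  webrtc:
    candidates:
      - 192.168.1.10:8555 # hypothetical LAN IP of the Frigate host
      - stun:8555
```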

View File

@@ -144,7 +144,9 @@ detectors:
#### SSDLite MobileNet v2
An OpenVINO model is provided in the container at `/openvino-model/ssdlite_mobilenet_v2.xml` and is used by this detector type by default. The model comes from Intel's Open Model Zoo [SSDLite MobileNet V2](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/ssdlite_mobilenet_v2) and is converted to an FP16 precision IR model. Use the model configuration shown below when using the OpenVINO detector with the default model.
An OpenVINO model is provided in the container at `/openvino-model/ssdlite_mobilenet_v2.xml` and is used by this detector type by default. The model comes from Intel's Open Model Zoo [SSDLite MobileNet V2](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/ssdlite_mobilenet_v2) and is converted to an FP16 precision IR model.
Use the model configuration shown below when using the OpenVINO detector with the default OpenVINO model:
```yaml
detectors:
@@ -254,6 +256,7 @@ yolov4x-mish-640
yolov7-tiny-288
yolov7-tiny-416
yolov7-640
yolov7-416
yolov7-320
yolov7x-640
yolov7x-320
@@ -282,6 +285,8 @@ The TensorRT detector can be selected by specifying `tensorrt` as the model type
The TensorRT detector uses `.trt` model files that are located in `/config/model_cache/tensorrt` by default. The model path and dimensions will depend on which model you have generated.
Use the config below to work with generated TRT models:
```yaml
detectors:
tensorrt:
@@ -501,11 +506,12 @@ detectors:
cpu1:
type: cpu
num_threads: 3
model:
path: "/custom_model.tflite"
cpu2:
type: cpu
num_threads: 3
model:
path: "/custom_model.tflite"
```
When using CPU detectors, you can add one CPU detector per camera. Adding more detectors than the number of cameras should not improve performance.
@@ -632,8 +638,6 @@ detectors:
hailo8l:
type: hailo8l
device: PCIe
model:
path: /config/model_cache/h8l_cache/ssd_mobilenet_v1.hef
model:
width: 300
@@ -641,4 +645,5 @@ model:
input_tensor: nhwc
input_pixel_format: bgr
model_type: ssd
path: /config/model_cache/h8l_cache/ssd_mobilenet_v1.hef
```

View File

@@ -52,7 +52,7 @@ detectors:
# Required: name of the detector
detector_name:
# Required: type of the detector
# Frigate provided types include 'cpu', 'edgetpu', 'openvino' and 'tensorrt' (default: shown below)
# Frigate provides many types, see https://docs.frigate.video/configuration/object_detectors for more details (default: shown below)
# Additional detector types can also be plugged in.
# Detectors may require additional configuration.
# Refer to the Detectors configuration page for more information.
@@ -117,25 +117,27 @@ auth:
hash_iterations: 600000
# Optional: model modifications
# NOTE: The default values are for the EdgeTPU detector.
# Other detectors will require the model config to be set.
model:
# Optional: path to the model (default: automatic based on detector)
# Required: path to the model (default: automatic based on detector)
path: /edgetpu_model.tflite
# Optional: path to the labelmap (default: shown below)
# Required: path to the labelmap (default: shown below)
labelmap_path: /labelmap.txt
# Required: Object detection model input width (default: shown below)
width: 320
# Required: Object detection model input height (default: shown below)
height: 320
# Optional: Object detection model input colorspace
# Required: Object detection model input colorspace
# Valid values are rgb, bgr, or yuv. (default: shown below)
input_pixel_format: rgb
# Optional: Object detection model input tensor format
# Required: Object detection model input tensor format
# Valid values are nhwc or nchw (default: shown below)
input_tensor: nhwc
# Optional: Object detection model type, currently only used with the OpenVINO detector
# Required: Object detection model type, currently only used with the OpenVINO detector
# Valid values are ssd, yolox, yolonas (default: shown below)
model_type: ssd
# Optional: Label name modifications. These are merged into the standard labelmap.
# Required: Label name modifications. These are merged into the standard labelmap.
labelmap:
2: vehicle
# Optional: Map of object labels to their attribute labels (default: depends on model)
@@ -546,6 +548,8 @@ genai:
# Optional: Restream configuration
# Uses https://github.com/AlexxIT/go2rtc (v1.9.2)
# NOTE: The default go2rtc API port (1984) must be used,
# changing this port for the integrated go2rtc instance is not supported.
go2rtc:
# Optional: Live stream configuration for WebUI.
@@ -686,6 +690,7 @@ cameras:
# to enable PTZ controls.
onvif:
# Required: host of the camera being connected to.
# NOTE: HTTP is assumed by default; HTTPS is supported if you specify the scheme, ex: "https://0.0.0.0".
host: 0.0.0.0
# Optional: ONVIF port for device (default: shown below).
port: 8000
@@ -694,6 +699,8 @@ cameras:
user: admin
# Optional: password for login.
password: admin
# Optional: Skip TLS verification from the ONVIF server (default: shown below)
tls_insecure: False
# Optional: Ignores time synchronization mismatches between the camera and the server during authentication.
# Using NTP on both ends is recommended and this should only be set to True in a "safe" environment due to the security risk it represents.
ignore_time_mismatch: False
@@ -757,6 +764,8 @@ cameras:
- cat
# Optional: Restrict generation to objects that entered any of the listed zones (default: none, all zones qualify)
required_zones: []
# Optional: Save thumbnails sent to generative AI for review/debugging purposes (default: shown below)
debug_save_thumbnails: False
# Optional
ui:

View File

@@ -132,6 +132,28 @@ cameras:
- detect
```
## Handling Complex Passwords
go2rtc expects URL-encoded passwords in the config; [urlencoder.org](https://urlencoder.org) can be used for this purpose.
For example:
```yaml
go2rtc:
streams:
my_camera: rtsp://username:$@foo%@192.168.1.100
```
becomes
```yaml
go2rtc:
streams:
my_camera: rtsp://username:$%40foo%25@192.168.1.100
```
See [this comment](https://github.com/AlexxIT/go2rtc/issues/1217#issuecomment-2242296489) for more information.
## Advanced Restream Configurations
The [exec](https://github.com/AlexxIT/go2rtc/tree/v1.9.2#source-exec) source in go2rtc can be used for custom ffmpeg commands. An example is below:

View File

@@ -5,7 +5,7 @@ title: Using Semantic Search
Semantic Search in Frigate allows you to find tracked objects within your review items using either the image itself, a user-defined text description, or an automatically generated one. This feature works by creating _embeddings_ — numerical vector representations — for both the images and text descriptions of your tracked objects. By comparing these embeddings, Frigate assesses their similarities to deliver relevant search results.
Frigate has support for [Jina AI's CLIP model](https://huggingface.co/jinaai/jina-clip-v1) to create embeddings, which runs locally. Embeddings are then saved to Frigate's database.
Frigate uses [Jina AI's CLIP model](https://huggingface.co/jinaai/jina-clip-v1) to create and save embeddings to Frigate's database. All of this runs locally.
Semantic Search is accessed via the _Explore_ view in the Frigate UI.
@@ -19,7 +19,7 @@ For best performance, 16GB or more of RAM and a dedicated GPU are recommended.
## Configuration
Semantic Search is disabled by default, and must be enabled in your config file before it can be used. Semantic Search is a global configuration setting.
Semantic Search is disabled by default, and must be enabled in your config file or in the UI's Settings page before it can be used. Semantic Search is a global configuration setting.
```yaml
semantic_search:
@@ -29,9 +29,9 @@ semantic_search:
:::tip
The embeddings database can be re-indexed from the existing tracked objects in your database by adding `reindex: True` to your `semantic_search` configuration. Depending on the number of tracked objects you have, it can take a long while to complete and may max out your CPU while indexing. Make sure to set the config back to `False` before restarting Frigate again.
The embeddings database can be re-indexed from the existing tracked objects in your database by adding `reindex: True` to your `semantic_search` configuration or by toggling the switch on the Search Settings page in the UI and restarting Frigate. Depending on the number of tracked objects you have, it can take a long while to complete and may max out your CPU while indexing. Make sure to turn the UI's switch off or set the config back to `False` before restarting Frigate again (see the sketch after this tip).
If you are enabling the Search feature for the first time, be advised that Frigate does not automatically index older tracked objects. You will need to enable the `reindex` feature in order to do that.
If you are enabling Semantic Search for the first time, be advised that Frigate does not automatically index older tracked objects. You will need to enable the `reindex` feature in order to do that.
:::
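A minimal sketch of the one-off re-index configuration described in the tip above:

```yaml
semantic_search:
  enabled: True
  reindex: True # set back to False (or toggle it off in the UI) before the next restart
```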
@@ -39,9 +39,9 @@ If you are enabling the Search feature for the first time, be advised that Friga
The vision model is able to embed both images and text into the same vector space, which allows `image -> image` and `text -> image` similarity searches. Frigate uses this model on tracked objects to encode the thumbnail image and store it in the database. When searching for tracked objects via text in the search box, Frigate will perform a `text -> image` similarity search against this embedding. When clicking "Find Similar" in the tracked object detail pane, Frigate will perform an `image -> image` similarity search to retrieve the closest matching thumbnails.
The text model is used to embed tracked object descriptions and perform searches against them. Descriptions can be created, viewed, and modified on the Search page when clicking on the gray tracked object chip at the top left of each review item. See [the Generative AI docs](/configuration/genai.md) for more information on how to automatically generate tracked object descriptions.
The text model is used to embed tracked object descriptions and perform searches against them. Descriptions can be created, viewed, and modified on the Explore page when clicking on the thumbnail of a tracked object. See [the Generative AI docs](/configuration/genai.md) for more information on how to automatically generate tracked object descriptions.
Differently weighted CLIP models are available and can be selected by setting the `model_size` config option as `small` or `large`:
Differently weighted versions of the Jina model are available and can be selected by setting the `model_size` config option as `small` or `large`:
```yaml
semantic_search:
@@ -50,7 +50,7 @@ semantic_search:
```
- Configuring the `large` model employs the full Jina model and will automatically run on the GPU if applicable.
- Configuring the `small` model employs a quantized version of the model that uses less RAM and runs on CPU with a very negligible difference in embedding quality.
- Configuring the `small` model employs a quantized version of the Jina model that uses less RAM and runs on CPU with a very negligible difference in embedding quality.
### GPU Acceleration
@@ -84,7 +84,7 @@ If the correct build is used for your GPU and the `large` model is configured, t
## Usage and Best Practices
1. Semantic Search is used in conjunction with the other filters available on the Search page. Use a combination of traditional filtering and Semantic Search for the best results.
1. Semantic Search is used in conjunction with the other filters available on the Explore page. Use a combination of traditional filtering and Semantic Search for the best results.
2. Use the thumbnail search type when searching for particular objects in the scene. Use the description search type when attempting to discern the intent of your object.
3. Because of how the AI models Frigate uses have been trained, the comparison between text and image embedding distances generally means that with multi-modal (`thumbnail` and `description`) searches, results matching `description` will appear first, even if a `thumbnail` embedding may be a better match. Play with the "Search Type" setting to help find what you are looking for. Note that if you are generating descriptions for specific objects or zones only, this may cause search results to prioritize the objects with descriptions even if the ones without them are more relevant.
4. Make your search language and tone closely match what you're looking for. If you are using thumbnail search, **phrase your query as an image caption**. Searching for "red car" may not work as well as "red sedan driving down a residential street on a sunny day".

View File

@@ -28,7 +28,7 @@ For the Dahua/Loryta 5442 camera, I use the following settings:
- Encode Mode: H.264
- Resolution: 2688\*1520
- Frame Rate(FPS): 15
- I Frame Interval: 30
- I Frame Interval: 30 (15 can also be used to prioritize streaming performance - see the [camera settings recommendations](../configuration/live) for more info)
**Sub Stream (Detection)**

View File

@@ -193,6 +193,7 @@ services:
container_name: frigate
privileged: true # this may not be necessary for all setups
restart: unless-stopped
stop_grace_period: 30s # allow enough time to shut down the various services
image: ghcr.io/blakeblackshear/frigate:stable
shm_size: "512mb" # update for your cameras based on calculation above
devices:
@@ -224,6 +225,7 @@ If you can't use docker compose, you can run the container with something simila
docker run -d \
--name frigate \
--restart=unless-stopped \
--stop-timeout 30 \
--mount type=tmpfs,target=/tmp/cache,tmpfs-size=1000000000 \
--device /dev/bus/usb:/dev/bus/usb \
--device /dev/dri/renderD128 \
@@ -303,8 +305,15 @@ To install make sure you have the [community app plugin here](https://forums.unr
## Proxmox
It is recommended to run Frigate in LXC, rather than in a VM, for maximum performance. The setup can be complex so be prepared to read the Proxmox and LXC documentation. Suggestions include:
[According to Proxmox documentation](https://pve.proxmox.com/pve-docs/pve-admin-guide.html#chapter_pct) it is recommended that you run application containers like Frigate inside a Proxmox QEMU VM. This will give you all the advantages of application containerization, while also providing the benefits that VMs offer, such as strong isolation from the host and the ability to live-migrate, which otherwise isn't possible with containers.
:::warning
If you choose to run Frigate via LXC in Proxmox, the setup can be complex, so be prepared to read the Proxmox and LXC documentation. Frigate does not officially support running inside of an LXC.
:::
Suggestions include:
- For Intel-based hardware acceleration, to allow access to the `/dev/dri/renderD128` device with major number 226 and minor number 128, add the following lines to the `/etc/pve/lxc/<id>.conf` LXC configuration:
- `lxc.cgroup2.devices.allow: c 226:128 rwm`
- `lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file`

View File

@@ -115,6 +115,7 @@ services:
frigate:
container_name: frigate
restart: unless-stopped
stop_grace_period: 30s
image: ghcr.io/blakeblackshear/frigate:stable
volumes:
- ./config:/config

View File

@@ -47,7 +47,7 @@ that card.
## Configuration
When configuring the integration, you will be asked for the `URL` of your Frigate instance which needs to be pointed at the internal unauthenticated port (`5000`) for your instance. This may look like `http://<host>:5000/`.
When configuring the integration, you will be asked for the `URL` of your Frigate instance which can be pointed at the internal unauthenticated port (`5000`) or the authenticated port (`8971`) for your instance. This may look like `http://<host>:5000/`.
### Docker Compose Examples
@@ -55,7 +55,7 @@ If you are running Home Assistant Core and Frigate with Docker Compose on the sa
#### Home Assistant running with host networking
It is not recommended to run Frigate in host networking mode. In this example, you would use `http://172.17.0.1:5000` when configuring the integration.
It is not recommended to run Frigate in host networking mode. In this example, you would use `http://172.17.0.1:5000` or `http://172.17.0.1:8971` when configuring the integration.
```yaml
services:
@@ -75,7 +75,7 @@ services:
#### Home Assistant _not_ running with host networking or in a separate compose file
In this example, you would use `http://frigate:5000` when configuring the integration. There is no need to map the port for the Frigate container.
In this example, it is recommended to connect to the authenticated port, for example, `http://frigate:8971` when configuring the integration. There is no need to map the port for the Frigate container.
```yaml
services:
@@ -103,14 +103,15 @@ If you are using HassOS with the addon, the URL should be one of the following d
| Frigate NVR (Full Access) | `http://ccab4aaf-frigate-fa:5000` |
| Frigate NVR Beta | `http://ccab4aaf-frigate-beta:5000` |
| Frigate NVR Beta (Full Access) | `http://ccab4aaf-frigate-fa-beta:5000` |
| Frigate NVR HailoRT Beta | `http://ccab4aaf-frigate-hailo-beta:5000` |
### Frigate running on a separate machine
If you run Frigate on a separate device within your local network, Home Assistant will need access to port 5000.
If you run Frigate on a separate device within your local network, Home Assistant will need access to port 8971.
#### Local network
Use `http://<frigate_device_ip>:5000` as the URL for the integration. If you want to protect access to port 5000, you can use firewall rules to limit access to the device running Home Assistant.
Use `http://<frigate_device_ip>:8971` as the URL for the integration so that authentication is required.
```yaml
services:
@@ -118,7 +119,7 @@ services:
image: ghcr.io/blakeblackshear/frigate:stable
...
ports:
- "5000:5000"
- "8971:8971"
...
```
@@ -195,12 +196,30 @@ To load a snapshot for a tracked object:
https://HA_URL/api/frigate/notifications/<event-id>/snapshot.jpg
```
To load a video clip of a tracked object:
To load a video clip of a tracked object using an Android device:
```
https://HA_URL/api/frigate/notifications/<event-id>/clip.mp4
```
To load a video clip of a tracked object using an iOS device:
```
https://HA_URL/api/frigate/notifications/<event-id>/master.m3u8
```
To load a preview gif of a tracked object:
```
https://HA_URL/api/frigate/notifications/<event-id>/event_preview.gif
```
To load a preview gif of a review item:
```
https://HA_URL/api/frigate/notifications/<review-id>/review_preview.gif
```
<a name="streams"></a>
## RTSP stream

View File

@@ -98,3 +98,11 @@ docker run -d \
-p 8555:8555/udp \
ghcr.io/blakeblackshear/frigate:stable
```
### My RTSP stream works fine in VLC, but it does not work when I put the same URL in my Frigate config. Is this a bug?
No. Frigate uses the TCP protocol to connect to your camera's RTSP URL. VLC automatically switches between UDP and TCP depending on network conditions and stream availability. So a stream that works in VLC but not in Frigate is likely due to VLC selecting UDP as the transport protocol.
TCP ensures that all data packets arrive in the correct order. This is crucial for video recording, decoding, and stream processing, which is why Frigate enforces a TCP connection. UDP is faster but less reliable, as it does not guarantee packet delivery or order, and VLC does not have the same requirements as Frigate.
You can still configure Frigate to use UDP by using ffmpeg input args or the preset `preset-rtsp-udp`. See the [ffmpeg presets](/configuration/ffmpeg_presets) documentation.
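For example, a minimal sketch of switching a single camera to UDP via the preset (camera name and URL are hypothetical):

```yaml
cameras:
  my_camera: # hypothetical camera name
    ffmpeg:
      input_args: preset-rtsp-udp # replaces the default TCP input args
      inputs:
        - path: rtsp://192.168.1.100:554/stream # hypothetical URL
          roles:
            - detect
```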

View File

@@ -3,7 +3,15 @@ id: recordings
title: Troubleshooting Recordings
---
### WARNING : Unable to keep up with recording segments in cache for camera. Keeping the 5 most recent segments out of 6 and discarding the rest...
## I have Frigate configured for motion recording only, but it still seems to be recording even with no motion. Why?
You'll want to:
- Make sure your camera's timestamp is masked out with a motion mask (see the sketch after this list). Even if there is no motion occurring in your scene, your motion settings may be sensitive enough to count your timestamp as motion.
- If you have audio detection enabled, keep in mind that audio that is heard above `min_volume` is considered motion.
- [Tune your motion detection settings](/configuration/motion_detection) either by editing your config file or by using the UI's Motion Tuner.
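A minimal sketch of the motion mask mentioned in the first item (camera name and coordinates are hypothetical; depending on your Frigate version, coordinates are pixel or normalized values, and the UI's mask editor can generate them for you):

```yaml
cameras:
  my_camera: # hypothetical camera name
    motion:
      mask:
        # x,y pairs tracing the timestamp overlay region (hypothetical values)
        - 0,0,400,0,400,50,0,50
```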
## I see the message: WARNING : Unable to keep up with recording segments in cache for camera. Keeping the 5 most recent segments out of 6 and discarding the rest...
This error can be caused by a number of different issues. The first step in troubleshooting is to enable debug logging for recording. This will enable logging showing how long it takes for recordings to be moved from RAM cache to the disk.
@@ -40,6 +48,7 @@ On linux, some helpful tools/commands in diagnosing would be:
On modern linux kernels, the system will utilize some swap if enabled. Setting vm.swappiness=1 no longer means that the kernel will only swap in order to avoid OOM. To prevent any swapping inside a container, set the memory and memory+swap allocations to be the same and disable swapping by setting the following docker/podman run parameters:
**Compose example**
```yaml
version: "3.9"
services:
@@ -54,6 +63,7 @@ services:
```
**Run command example**
```
--memory=<MAXRAM> --memory-swap=<MAXSWAP> --memory-swappiness=0
```

docs/package-lock.json (generated): 7069 changed lines

File diff suppressed because it is too large

View File

@@ -17,15 +17,15 @@
"write-heading-ids": "docusaurus write-heading-ids"
},
"dependencies": {
"@docusaurus/core": "^3.5.2",
"@docusaurus/preset-classic": "^3.5.2",
"@docusaurus/theme-mermaid": "^3.5.2",
"@docusaurus/plugin-content-docs": "^3.5.2",
"@mdx-js/react": "^3.0.1",
"@docusaurus/core": "^3.6.3",
"@docusaurus/preset-classic": "^3.6.3",
"@docusaurus/theme-mermaid": "^3.6.3",
"@docusaurus/plugin-content-docs": "^3.6.3",
"@mdx-js/react": "^3.1.0",
"clsx": "^2.1.1",
"docusaurus-plugin-openapi-docs": "^4.1.0",
"docusaurus-theme-openapi-docs": "^4.1.0",
"prism-react-renderer": "^2.4.0",
"docusaurus-plugin-openapi-docs": "^4.3.1",
"docusaurus-theme-openapi-docs": "^4.3.1",
"prism-react-renderer": "^2.4.1",
"raw-loader": "^4.0.2",
"react": "^18.3.1",
"react-dom": "^18.3.1"

File diff suppressed because it is too large

View File

@@ -17,17 +17,17 @@ from fastapi.responses import JSONResponse, PlainTextResponse
from markupsafe import escape
from peewee import operator
from frigate.api.defs.app_body import AppConfigSetBody
from frigate.api.defs.app_query_parameters import AppTimelineHourlyQueryParameters
from frigate.api.defs.query.app_query_parameters import AppTimelineHourlyQueryParameters
from frigate.api.defs.request.app_body import AppConfigSetBody
from frigate.api.defs.tags import Tags
from frigate.config import FrigateConfig
from frigate.const import CONFIG_DIR
from frigate.models import Event, Timeline
from frigate.util.builtin import (
clean_camera_user_pass,
get_tz_modifiers,
update_yaml_from_url,
)
from frigate.util.config import find_config_file
from frigate.util.services import (
ffprobe_stream,
get_nvidia_driver_info,
@@ -134,9 +134,27 @@ def config(request: Request):
for zone_name, zone in config_obj.cameras[camera_name].zones.items():
camera_dict["zones"][zone_name]["color"] = zone.color
# remove go2rtc stream passwords
go2rtc: dict[str, any] = config_obj.go2rtc.model_dump(
mode="json", warnings="none", exclude_none=True
)
for stream_name, stream in go2rtc.get("streams", {}).items():
if stream is None:
continue
if isinstance(stream, str):
cleaned = clean_camera_user_pass(stream)
else:
cleaned = []
for item in stream:
cleaned.append(clean_camera_user_pass(item))
config["go2rtc"]["streams"][stream_name] = cleaned
config["plus"] = {"enabled": request.app.frigate_config.plus_api.is_active()}
config["model"]["colormap"] = config_obj.model.colormap
# use merged labelmap
for detector_config in config["detectors"].values():
detector_config["model"]["labelmap"] = (
request.app.frigate_config.model.merged_labelmap
@@ -147,13 +165,7 @@ def config(request: Request):
@router.get("/config/raw")
def config_raw():
config_file = os.environ.get("CONFIG_FILE", "/config/config.yml")
# Check if we can use .yaml instead of .yml
config_file_yaml = config_file.replace(".yml", ".yaml")
if os.path.isfile(config_file_yaml):
config_file = config_file_yaml
config_file = find_config_file()
if not os.path.isfile(config_file):
return JSONResponse(
@@ -198,13 +210,7 @@ def config_save(save_option: str, body: Any = Body(media_type="text/plain")):
# Save the config to file
try:
config_file = os.environ.get("CONFIG_FILE", "/config/config.yml")
# Check if we can use .yaml instead of .yml
config_file_yaml = config_file.replace(".yml", ".yaml")
if os.path.isfile(config_file_yaml):
config_file = config_file_yaml
config_file = find_config_file()
with open(config_file, "w") as f:
f.write(new_config)
@@ -253,13 +259,7 @@ def config_save(save_option: str, body: Any = Body(media_type="text/plain")):
@router.put("/config/set")
def config_set(request: Request, body: AppConfigSetBody):
config_file = os.environ.get("CONFIG_FILE", f"{CONFIG_DIR}/config.yml")
# Check if we can use .yaml instead of .yml
config_file_yaml = config_file.replace(".yml", ".yaml")
if os.path.isfile(config_file_yaml):
config_file = config_file_yaml
config_file = find_config_file()
with open(config_file, "r") as f:
old_raw_config = f.read()

View File

@@ -18,7 +18,7 @@ from joserfc import jwt
from peewee import DoesNotExist
from slowapi import Limiter
from frigate.api.defs.app_body import (
from frigate.api.defs.request.app_body import (
AppPostLoginBody,
AppPostUsersBody,
AppPutPasswordBody,
@@ -85,7 +85,12 @@ def get_remote_addr(request: Request):
return str(ip)
# if there wasn't anything in the route, just return the default
return request.remote_addr or "127.0.0.1"
remote_addr = None
if hasattr(request, "remote_addr"):
remote_addr = request.remote_addr
return remote_addr or "127.0.0.1"
def get_jwt_secret() -> str:
@@ -324,7 +329,7 @@ def login(request: Request, body: AppPostLoginBody):
try:
db_user: User = User.get_by_id(user)
except DoesNotExist:
return JSONResponse(content={"message": "Login failed"}, status_code=400)
return JSONResponse(content={"message": "Login failed"}, status_code=401)
password_hash = db_user.password_hash
if verify_password(password, password_hash):
@@ -335,7 +340,7 @@ def login(request: Request, body: AppPostLoginBody):
response, JWT_COOKIE_NAME, encoded_jwt, expiration, JWT_COOKIE_SECURE
)
return response
return JSONResponse(content={"message": "Login failed"}, status_code=400)
return JSONResponse(content={"message": "Login failed"}, status_code=401)
@router.get("/users")

View File

@@ -3,7 +3,7 @@ from typing import Union
from pydantic import BaseModel
from pydantic.json_schema import SkipJsonSchema
from frigate.review.maintainer import SeverityEnum
from frigate.review.types import SeverityEnum
class ReviewQueryParams(BaseModel):

View File

@@ -1,4 +1,4 @@
from typing import Optional, Union
from typing import List, Optional, Union
from pydantic import BaseModel, Field
@@ -17,14 +17,18 @@ class EventsDescriptionBody(BaseModel):
class EventsCreateBody(BaseModel):
source_type: Optional[str] = "api"
sub_label: Optional[str] = None
score: Optional[int] = 0
score: Optional[float] = 0
duration: Optional[int] = 30
include_recording: Optional[bool] = True
draw: Optional[dict] = {}
class EventsEndBody(BaseModel):
end_time: Optional[int] = None
end_time: Optional[float] = None
class EventsDeleteBody(BaseModel):
event_ids: List[str] = Field(title="The event IDs to delete")
class SubmitPlusBody(BaseModel):

View File

@@ -0,0 +1,42 @@
from typing import Any, Optional
from pydantic import BaseModel, ConfigDict
class EventResponse(BaseModel):
id: str
label: str
sub_label: Optional[str]
camera: str
start_time: float
end_time: Optional[float]
false_positive: Optional[bool]
zones: list[str]
thumbnail: str
has_clip: bool
has_snapshot: bool
retain_indefinitely: bool
plus_id: Optional[str]
model_hash: Optional[str]
detector_type: Optional[str]
model_type: Optional[str]
data: dict[str, Any]
model_config = ConfigDict(protected_namespaces=())
class EventCreateResponse(BaseModel):
success: bool
message: str
event_id: str
class EventMultiDeleteResponse(BaseModel):
success: bool
deleted_events: list[str]
not_found_events: list[str]
class EventUploadPlusResponse(BaseModel):
success: bool
plus_id: str

View File

@@ -3,7 +3,7 @@ from typing import Dict
from pydantic import BaseModel, Json
from frigate.review.maintainer import SeverityEnum
from frigate.review.types import SeverityEnum
class ReviewSegmentResponse(BaseModel):

View File

@@ -14,29 +14,36 @@ from fastapi.responses import JSONResponse
from peewee import JOIN, DoesNotExist, fn, operator
from playhouse.shortcuts import model_to_dict
from frigate.api.defs.events_body import (
EventsCreateBody,
EventsDescriptionBody,
EventsEndBody,
EventsSubLabelBody,
SubmitPlusBody,
)
from frigate.api.defs.events_query_parameters import (
from frigate.api.defs.query.events_query_parameters import (
DEFAULT_TIME_RANGE,
EventsQueryParams,
EventsSearchQueryParams,
EventsSummaryQueryParams,
)
from frigate.api.defs.regenerate_query_parameters import (
from frigate.api.defs.query.regenerate_query_parameters import (
RegenerateQueryParameters,
)
from frigate.api.defs.tags import Tags
from frigate.const import (
CLIPS_DIR,
from frigate.api.defs.request.events_body import (
EventsCreateBody,
EventsDeleteBody,
EventsDescriptionBody,
EventsEndBody,
EventsSubLabelBody,
SubmitPlusBody,
)
from frigate.api.defs.response.event_response import (
EventCreateResponse,
EventMultiDeleteResponse,
EventResponse,
EventUploadPlusResponse,
)
from frigate.api.defs.response.generic_response import GenericResponse
from frigate.api.defs.tags import Tags
from frigate.const import CLIPS_DIR
from frigate.embeddings import EmbeddingsContext
from frigate.events.external import ExternalEventProcessor
from frigate.models import Event, ReviewSegment, Timeline
from frigate.object_processing import TrackedObject
from frigate.object_processing import TrackedObject, TrackedObjectProcessor
from frigate.util.builtin import get_tz_modifiers
logger = logging.getLogger(__name__)
@@ -44,7 +51,7 @@ logger = logging.getLogger(__name__)
router = APIRouter(tags=[Tags.events])
@router.get("/events")
@router.get("/events", response_model=list[EventResponse])
def events(params: EventsQueryParams = Depends()):
camera = params.camera
cameras = params.cameras
@@ -246,6 +253,8 @@ def events(params: EventsQueryParams = Depends()):
order_by = Event.start_time.asc()
elif sort == "date_desc":
order_by = Event.start_time.desc()
else:
order_by = Event.start_time.desc()
else:
order_by = Event.start_time.desc()
@@ -261,7 +270,7 @@ def events(params: EventsQueryParams = Depends()):
return JSONResponse(content=list(events))
@router.get("/events/explore")
@router.get("/events/explore", response_model=list[EventResponse])
def events_explore(limit: int = 10):
# get distinct labels for all events
distinct_labels = Event.select(Event.label).distinct().order_by(Event.label)
@@ -306,7 +315,8 @@ def events_explore(limit: int = 10):
"data": {
k: v
for k, v in event.data.items()
if k in ["type", "score", "top_score", "description"]
if k
in ["type", "score", "top_score", "description", "sub_label_score"]
},
"event_count": label_counts[event.label],
}
@@ -322,7 +332,7 @@ def events_explore(limit: int = 10):
return JSONResponse(content=processed_events)
@router.get("/event_ids")
@router.get("/event_ids", response_model=list[EventResponse])
def event_ids(ids: str):
ids = ids.split(",")
@@ -580,19 +590,17 @@ def events_search(request: Request, params: EventsSearchQueryParams = Depends())
processed_events.append(processed_event)
# Sort by search distance if search_results are available, otherwise by start_time as default
if search_results:
if (sort is None or sort == "relevance") and search_results:
processed_events.sort(key=lambda x: x.get("search_distance", float("inf")))
elif min_score is not None and max_score is not None and sort == "score_asc":
processed_events.sort(key=lambda x: x["score"])
elif min_score is not None and max_score is not None and sort == "score_desc":
processed_events.sort(key=lambda x: x["score"], reverse=True)
elif sort == "date_asc":
processed_events.sort(key=lambda x: x["start_time"])
else:
if sort == "score_asc":
processed_events.sort(key=lambda x: x["score"])
elif sort == "score_desc":
processed_events.sort(key=lambda x: x["score"], reverse=True)
elif sort == "date_asc":
processed_events.sort(key=lambda x: x["start_time"])
else:
# "date_desc" default
processed_events.sort(key=lambda x: x["start_time"], reverse=True)
# "date_desc" default
processed_events.sort(key=lambda x: x["start_time"], reverse=True)
# Limit the number of events returned
processed_events = processed_events[:limit]
@@ -645,7 +653,7 @@ def events_summary(params: EventsSummaryQueryParams = Depends()):
return JSONResponse(content=[e for e in groups.dicts()])
@router.get("/events/{event_id}")
@router.get("/events/{event_id}", response_model=EventResponse)
def event(event_id: str):
try:
return model_to_dict(Event.get(Event.id == event_id))
@@ -653,7 +661,7 @@ def event(event_id: str):
return JSONResponse(content="Event not found", status_code=404)
@router.post("/events/{event_id}/retain")
@router.post("/events/{event_id}/retain", response_model=GenericResponse)
def set_retain(event_id: str):
try:
event = Event.get(Event.id == event_id)
@@ -672,7 +680,7 @@ def set_retain(event_id: str):
)
@router.post("/events/{event_id}/plus")
@router.post("/events/{event_id}/plus", response_model=EventUploadPlusResponse)
def send_to_plus(request: Request, event_id: str, body: SubmitPlusBody = None):
if not request.app.frigate_config.plus_api.is_active():
message = "PLUS_API_KEY environment variable is not set"
@@ -784,7 +792,7 @@ def send_to_plus(request: Request, event_id: str, body: SubmitPlusBody = None):
)
@router.put("/events/{event_id}/false_positive")
@router.put("/events/{event_id}/false_positive", response_model=EventUploadPlusResponse)
def false_positive(request: Request, event_id: str):
if not request.app.frigate_config.plus_api.is_active():
message = "PLUS_API_KEY environment variable is not set"
@@ -873,7 +881,7 @@ def false_positive(request: Request, event_id: str):
)
@router.delete("/events/{event_id}/retain")
@router.delete("/events/{event_id}/retain", response_model=GenericResponse)
def delete_retain(event_id: str):
try:
event = Event.get(Event.id == event_id)
@@ -892,7 +900,7 @@ def delete_retain(event_id: str):
)
@router.post("/events/{event_id}/sub_label")
@router.post("/events/{event_id}/sub_label", response_model=GenericResponse)
def set_sub_label(
request: Request,
event_id: str,
@@ -944,7 +952,7 @@ def set_sub_label(
)
@router.post("/events/{event_id}/description")
@router.post("/events/{event_id}/description", response_model=GenericResponse)
def set_description(
request: Request,
event_id: str,
@@ -991,7 +999,7 @@ def set_description(
)
@router.put("/events/{event_id}/description/regenerate")
@router.put("/events/{event_id}/description/regenerate", response_model=GenericResponse)
def regenerate_description(
request: Request, event_id: str, params: RegenerateQueryParameters = Depends()
):
@@ -1035,37 +1043,67 @@ def regenerate_description(
)
@router.delete("/events/{event_id}")
def delete_event(request: Request, event_id: str):
def delete_single_event(event_id: str, request: Request) -> dict:
try:
event = Event.get(Event.id == event_id)
except DoesNotExist:
return JSONResponse(
content=({"success": False, "message": "Event " + event_id + " not found"}),
status_code=404,
)
return {"success": False, "message": f"Event {event_id} not found"}
media_name = f"{event.camera}-{event.id}"
if event.has_snapshot:
media = Path(f"{os.path.join(CLIPS_DIR, media_name)}.jpg")
media.unlink(missing_ok=True)
media = Path(f"{os.path.join(CLIPS_DIR, media_name)}-clean.png")
media.unlink(missing_ok=True)
snapshot_paths = [
Path(f"{os.path.join(CLIPS_DIR, media_name)}.jpg"),
Path(f"{os.path.join(CLIPS_DIR, media_name)}-clean.png"),
]
for media in snapshot_paths:
media.unlink(missing_ok=True)
event.delete_instance()
Timeline.delete().where(Timeline.source_id == event_id).execute()
# If semantic search is enabled, update the index
if request.app.frigate_config.semantic_search.enabled:
context: EmbeddingsContext = request.app.embeddings
context.db.delete_embeddings_thumbnail(event_ids=[event_id])
context.db.delete_embeddings_description(event_ids=[event_id])
return JSONResponse(
content=({"success": True, "message": "Event " + event_id + " deleted"}),
status_code=200,
)
return {"success": True, "message": f"Event {event_id} deleted"}
@router.post("/events/{camera_name}/{label}/create")
@router.delete("/events/{event_id}", response_model=GenericResponse)
def delete_event(request: Request, event_id: str):
result = delete_single_event(event_id, request)
status_code = 200 if result["success"] else 404
return JSONResponse(content=result, status_code=status_code)
@router.delete("/events/", response_model=EventMultiDeleteResponse)
def delete_events(request: Request, body: EventsDeleteBody):
if not body.event_ids:
return JSONResponse(
content=({"success": False, "message": "No event IDs provided."}),
status_code=404,
)
deleted_events = []
not_found_events = []
for event_id in body.event_ids:
result = delete_single_event(event_id, request)
if result["success"]:
deleted_events.append(event_id)
else:
not_found_events.append(event_id)
response = {
"success": True,
"deleted_events": deleted_events,
"not_found_events": not_found_events,
}
return JSONResponse(content=response, status_code=200)
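A usage sketch for the new bulk endpoint, with hypothetical event IDs; existing IDs are reported under deleted_events and unknown ones under not_found_events, matching EventMultiDeleteResponse above:

    import requests

    resp = requests.delete(
        "http://127.0.0.1:5000/api/events/",
        json={"event_ids": ["1736300000.123456-abc123", "1736300001.654321-def456"]},
    )
    print(resp.json())
    # {"success": true, "deleted_events": [...], "not_found_events": [...]}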
@router.post("/events/{camera_name}/{label}/create", response_model=EventCreateResponse)
def create_event(
request: Request,
camera_name: str,
@@ -1087,9 +1125,11 @@ def create_event(
)
try:
frame = request.app.detected_frames_processor.get_current_frame(camera_name)
frame_processor: TrackedObjectProcessor = request.app.detected_frames_processor
external_processor: ExternalEventProcessor = request.app.external_processor
event_id = request.app.external_processor.create_manual_event(
frame = frame_processor.get_current_frame(camera_name)
event_id = external_processor.create_manual_event(
camera_name,
label,
body.source_type,
@@ -1119,7 +1159,7 @@ def create_event(
)
@router.put("/events/{event_id}/end")
@router.put("/events/{event_id}/end", response_model=GenericResponse)
def end_event(request: Request, event_id: str, body: EventsEndBody):
try:
end_time = body.end_time or datetime.datetime.now().timestamp()
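Because EventsEndBody.end_time is now a float, a manual event can be closed at a sub-second timestamp. A minimal sketch with a hypothetical event ID:

    import time

    import requests

    requests.put(
        "http://127.0.0.1:5000/api/events/1736300000.123456-abc123/end",
        json={"end_time": time.time()},  # float timestamps are preserved as-is
    )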

View File

@@ -9,6 +9,7 @@ import psutil
from fastapi import APIRouter, Request
from fastapi.responses import JSONResponse
from peewee import DoesNotExist
from playhouse.shortcuts import model_to_dict
from frigate.api.defs.request.export_recordings_body import ExportRecordingsBody
from frigate.api.defs.tags import Tags
@@ -207,3 +208,14 @@ def export_delete(event_id: str):
),
status_code=200,
)
@router.get("/exports/{export_id}")
def get_export(export_id: str):
try:
return JSONResponse(content=model_to_dict(Export.get(Export.id == export_id)))
except DoesNotExist:
return JSONResponse(
content={"success": False, "message": "Export not found"},
status_code=404,
)

View File

@@ -87,7 +87,11 @@ def create_fastapi_app(
logger.info("FastAPI started")
# Rate limiter (used for login endpoint)
auth.rateLimiter.set_limit(frigate_config.auth.failed_login_rate_limit or "")
if frigate_config.auth.failed_login_rate_limit is None:
limiter.enabled = False
else:
auth.rateLimiter.set_limit(frigate_config.auth.failed_login_rate_limit)
app.state.limiter = limiter
app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler)
app.add_middleware(SlowAPIMiddleware)
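The limiter is now switched off entirely when auth.failed_login_rate_limit is unset, rather than being handed an empty limit string. When it is set, the value follows the limit-string syntax of the limits package that SlowAPI builds on; a sketch of plausible values:

    # hypothetical settings; multiple windows can be combined with semicolons
    failed_login_rate_limit = "1/second"
    failed_login_rate_limit = "1/second;5/minute;20/hour"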

View File

@@ -20,7 +20,7 @@ from pathvalidate import sanitize_filename
from peewee import DoesNotExist, fn
from tzlocal import get_localzone_name
from frigate.api.defs.media_query_parameters import (
from frigate.api.defs.query.media_query_parameters import (
Extension,
MediaEventsSnapshotQueryParams,
MediaLatestFrameQueryParams,
@@ -36,6 +36,7 @@ from frigate.const import (
RECORD_DIR,
)
from frigate.models import Event, Previews, Recordings, Regions, ReviewSegment
from frigate.object_processing import TrackedObjectProcessor
from frigate.util.builtin import get_tz_modifiers
from frigate.util.image import get_image_from_recording
@@ -79,7 +80,11 @@ def mjpeg_feed(
def imagestream(
detected_frames_processor, camera_name: str, fps: int, height: int, draw_options
detected_frames_processor: TrackedObjectProcessor,
camera_name: str,
fps: int,
height: int,
draw_options: dict[str, any],
):
while True:
# max out at specified FPS
@@ -118,6 +123,7 @@ def latest_frame(
extension: Extension,
params: MediaLatestFrameQueryParams = Depends(),
):
frame_processor: TrackedObjectProcessor = request.app.detected_frames_processor
draw_options = {
"bounding_boxes": params.bbox,
"timestamp": params.timestamp,
@@ -127,19 +133,25 @@ def latest_frame(
"regions": params.regions,
}
quality = params.quality
mime_type = extension
if extension == "png":
quality_params = None
elif extension == "webp":
quality_params = [int(cv2.IMWRITE_WEBP_QUALITY), quality]
else:
quality_params = [int(cv2.IMWRITE_JPEG_QUALITY), quality]
mime_type = "jpeg"
if camera_name in request.app.frigate_config.cameras:
frame = request.app.detected_frames_processor.get_current_frame(
camera_name, draw_options
)
frame = frame_processor.get_current_frame(camera_name, draw_options)
retry_interval = float(
request.app.frigate_config.cameras.get(camera_name).ffmpeg.retry_interval
or 10
)
if frame is None or datetime.now().timestamp() > (
request.app.detected_frames_processor.get_current_frame_time(camera_name)
+ retry_interval
frame_processor.get_current_frame_time(camera_name) + retry_interval
):
if request.app.camera_error_image is None:
error_image = glob.glob("/opt/frigate/frigate/images/camera-error.jpg")
@@ -170,17 +182,15 @@ def latest_frame(
frame = cv2.resize(frame, dsize=(width, height), interpolation=cv2.INTER_AREA)
ret, img = cv2.imencode(
f".{extension}", frame, [int(cv2.IMWRITE_WEBP_QUALITY), quality]
)
ret, img = cv2.imencode(f".{extension}", frame, quality_params)
return Response(
content=img.tobytes(),
media_type=f"image/{extension}",
headers={"Content-Type": f"image/{extension}", "Cache-Control": "no-store"},
media_type=f"image/{mime_type}",
headers={"Content-Type": f"image/{mime_type}", "Cache-Control": "no-store"},
)
elif camera_name == "birdseye" and request.app.frigate_config.birdseye.restream:
frame = cv2.cvtColor(
request.app.detected_frames_processor.get_current_frame(camera_name),
frame_processor.get_current_frame(camera_name),
cv2.COLOR_YUV2BGR_I420,
)
@@ -189,13 +199,11 @@ def latest_frame(
frame = cv2.resize(frame, dsize=(width, height), interpolation=cv2.INTER_AREA)
ret, img = cv2.imencode(
f".{extension}", frame, [int(cv2.IMWRITE_WEBP_QUALITY), quality]
)
ret, img = cv2.imencode(f".{extension}", frame, quality_params)
return Response(
content=img.tobytes(),
media_type=f"image/{extension}",
headers={"Content-Type": f"image/{extension}", "Cache-Control": "no-store"},
media_type=f"image/{mime_type}",
headers={"Content-Type": f"image/{mime_type}", "Cache-Control": "no-store"},
)
else:
return JSONResponse(
@@ -238,6 +246,7 @@ def get_snapshot_from_recording(
recording: Recordings = recording_query.get()
time_in_segment = frame_time - recording.start_time
codec = "png" if format == "png" else "mjpeg"
mime_type = "png" if format == "png" else "jpeg"
config: FrigateConfig = request.app.frigate_config
image_data = get_image_from_recording(
@@ -254,7 +263,7 @@ def get_snapshot_from_recording(
),
status_code=404,
)
return Response(image_data, headers={"Content-Type": f"image/{format}"})
return Response(image_data, headers={"Content-Type": f"image/{mime_type}"})
except DoesNotExist:
return JSONResponse(
content={
@@ -813,15 +822,15 @@ def grid_snapshot(
):
if camera_name in request.app.frigate_config.cameras:
detect = request.app.frigate_config.cameras[camera_name].detect
frame = request.app.detected_frames_processor.get_current_frame(camera_name, {})
frame_processor: TrackedObjectProcessor = request.app.detected_frames_processor
frame = frame_processor.get_current_frame(camera_name, {})
retry_interval = float(
request.app.frigate_config.cameras.get(camera_name).ffmpeg.retry_interval
or 10
)
if frame is None or datetime.now().timestamp() > (
request.app.detected_frames_processor.get_current_frame_time(camera_name)
+ retry_interval
frame_processor.get_current_frame_time(camera_name) + retry_interval
):
return JSONResponse(
content={"success": False, "message": "Unable to get valid frame"},

View File

@@ -12,20 +12,21 @@ from fastapi.responses import JSONResponse
from peewee import Case, DoesNotExist, fn, operator
from playhouse.shortcuts import model_to_dict
from frigate.api.defs.generic_response import GenericResponse
from frigate.api.defs.review_body import ReviewModifyMultipleBody
from frigate.api.defs.review_query_parameters import (
from frigate.api.defs.query.review_query_parameters import (
ReviewActivityMotionQueryParams,
ReviewQueryParams,
ReviewSummaryQueryParams,
)
from frigate.api.defs.review_responses import (
from frigate.api.defs.request.review_body import ReviewModifyMultipleBody
from frigate.api.defs.response.generic_response import GenericResponse
from frigate.api.defs.response.review_response import (
ReviewActivityMotionResponse,
ReviewSegmentResponse,
ReviewSummaryResponse,
)
from frigate.api.defs.tags import Tags
from frigate.models import Recordings, ReviewSegment
from frigate.review.types import SeverityEnum
from frigate.util.builtin import get_tz_modifiers
logger = logging.getLogger(__name__)
@@ -161,7 +162,7 @@ def review_summary(params: ReviewSummaryQueryParams = Depends()):
None,
[
(
(ReviewSegment.severity == "alert"),
(ReviewSegment.severity == SeverityEnum.alert),
ReviewSegment.has_been_reviewed,
)
],
@@ -173,7 +174,7 @@ def review_summary(params: ReviewSummaryQueryParams = Depends()):
None,
[
(
(ReviewSegment.severity == "detection"),
(ReviewSegment.severity == SeverityEnum.detection),
ReviewSegment.has_been_reviewed,
)
],
@@ -185,7 +186,7 @@ def review_summary(params: ReviewSummaryQueryParams = Depends()):
None,
[
(
(ReviewSegment.severity == "alert"),
(ReviewSegment.severity == SeverityEnum.alert),
1,
)
],
@@ -197,7 +198,7 @@ def review_summary(params: ReviewSummaryQueryParams = Depends()):
None,
[
(
(ReviewSegment.severity == "detection"),
(ReviewSegment.severity == SeverityEnum.detection),
1,
)
],
@@ -230,6 +231,7 @@ def review_summary(params: ReviewSummaryQueryParams = Depends()):
label_clause = reduce(operator.or_, label_clauses)
clauses.append((label_clause))
day_in_seconds = 60 * 60 * 24
last_month = (
ReviewSegment.select(
fn.strftime(
@@ -246,7 +248,7 @@ def review_summary(params: ReviewSummaryQueryParams = Depends()):
None,
[
(
(ReviewSegment.severity == "alert"),
(ReviewSegment.severity == SeverityEnum.alert),
ReviewSegment.has_been_reviewed,
)
],
@@ -258,7 +260,7 @@ def review_summary(params: ReviewSummaryQueryParams = Depends()):
None,
[
(
(ReviewSegment.severity == "detection"),
(ReviewSegment.severity == SeverityEnum.detection),
ReviewSegment.has_been_reviewed,
)
],
@@ -270,7 +272,7 @@ def review_summary(params: ReviewSummaryQueryParams = Depends()):
None,
[
(
(ReviewSegment.severity == "alert"),
(ReviewSegment.severity == SeverityEnum.alert),
1,
)
],
@@ -282,7 +284,7 @@ def review_summary(params: ReviewSummaryQueryParams = Depends()):
None,
[
(
(ReviewSegment.severity == "detection"),
(ReviewSegment.severity == SeverityEnum.detection),
1,
)
],
@@ -292,7 +294,7 @@ def review_summary(params: ReviewSummaryQueryParams = Depends()):
)
.where(reduce(operator.and_, clauses))
.group_by(
(ReviewSegment.start_time + seconds_offset).cast("int") / (3600 * 24),
(ReviewSegment.start_time + seconds_offset).cast("int") / day_in_seconds,
)
.order_by(ReviewSegment.start_time.desc())
)
@@ -362,7 +364,7 @@ def delete_reviews(body: ReviewModifyMultipleBody):
ReviewSegment.delete().where(ReviewSegment.id << list_of_ids).execute()
return JSONResponse(
content=({"success": True, "message": "Delete reviews"}), status_code=200
content=({"success": True, "message": "Deleted review items."}), status_code=200
)

View File

@@ -36,6 +36,7 @@ from frigate.const import (
EXPORT_DIR,
MODEL_CACHE_DIR,
RECORD_DIR,
SHM_FRAMES_VAR,
)
from frigate.db.sqlitevecq import SqliteVecQueueDatabase
from frigate.embeddings import EmbeddingsContext, manage_embeddings
@@ -436,7 +437,7 @@ class FrigateApp:
# pre-create shms
for i in range(shm_frame_count):
frame_size = config.frame_shape_yuv[0] * config.frame_shape_yuv[1]
self.frame_manager.create(f"{config.name}{i}", frame_size)
self.frame_manager.create(f"{config.name}_frame{i}", frame_size)
capture_process = util.Process(
target=capture_camera,
@@ -523,7 +524,10 @@ class FrigateApp:
if cam_total_frame_size == 0.0:
return 0
shm_frame_count = min(200, int(available_shm / (cam_total_frame_size)))
shm_frame_count = min(
int(os.environ.get(SHM_FRAMES_VAR, "50")),
int(available_shm / (cam_total_frame_size)),
)
logger.debug(
f"Calculated total camera size {available_shm} / {cam_total_frame_size} :: {shm_frame_count} frames for each camera in SHM"

View File

@@ -151,7 +151,7 @@ class WebPushClient(Communicator): # type: ignore[misc]
camera: str = payload["after"]["camera"]
title = f"{', '.join(sorted_objects).replace('_', ' ').title()}{' was' if state == 'end' else ''} detected in {', '.join(payload['after']['data']['zones']).replace('_', ' ').title()}"
message = f"Detected on {camera.replace('_', ' ').title()}"
image = f'{payload["after"]["thumb_path"].replace("/media/frigate", "")}'
image = f"{payload['after']['thumb_path'].replace('/media/frigate', '')}"
# if event is ongoing open to live view otherwise open to recordings view
direct_url = f"/review?id={reviewId}" if state == "end" else f"/#{camera}"

View File

@@ -38,6 +38,10 @@ class GenAICameraConfig(BaseModel):
default_factory=list,
title="List of required zones to be entered in order to run generative AI.",
)
debug_save_thumbnails: bool = Field(
default=False,
title="Save thumbnails sent to generative AI for debugging purposes.",
)
@field_validator("required_zones", mode="before")
@classmethod

View File

@@ -74,6 +74,7 @@ class OnvifConfig(FrigateBaseModel):
port: int = Field(default=8000, title="Onvif Port")
user: Optional[EnvString] = Field(default=None, title="Onvif Username")
password: Optional[EnvString] = Field(default=None, title="Onvif Password")
tls_insecure: bool = Field(default=False, title="Onvif Disable TLS verification")
autotracking: PtzAutotrackConfig = Field(
default_factory=PtzAutotrackConfig,
title="PTZ auto tracking config.",

View File

@@ -4,6 +4,7 @@ from typing import Optional
from pydantic import Field
from frigate.const import MAX_PRE_CAPTURE
from frigate.review.types import SeverityEnum
from ..base import FrigateBaseModel
@@ -101,3 +102,15 @@ class RecordConfig(FrigateBaseModel):
self.alerts.pre_capture,
self.detections.pre_capture,
)
def get_review_pre_capture(self, severity: SeverityEnum) -> int:
if severity == SeverityEnum.alert:
return self.alerts.pre_capture
else:
return self.detections.pre_capture
def get_review_post_capture(self, severity: SeverityEnum) -> int:
if severity == SeverityEnum.alert:
return self.alerts.post_capture
else:
return self.detections.post_capture
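These helpers let callers (see the recording maintainer diff further down) pick the pre/post-capture window that matches a review item's severity. A usage sketch, assuming record is a populated RecordConfig:

    from frigate.review.types import SeverityEnum

    pre = record.get_review_pre_capture(SeverityEnum.alert)        # -> alerts.pre_capture
    post = record.get_review_post_capture(SeverityEnum.detection)  # -> detections.post_capture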

View File

@@ -85,7 +85,7 @@ class ZoneConfig(BaseModel):
if explicit:
self.coordinates = ",".join(
[
f'{round(int(p.split(",")[0]) / frame_shape[1], 3)},{round(int(p.split(",")[1]) / frame_shape[0], 3)}'
f"{round(int(p.split(',')[0]) / frame_shape[1], 3)},{round(int(p.split(',')[1]) / frame_shape[0], 3)}"
for p in coordinates
]
)

View File

@@ -29,6 +29,7 @@ from frigate.util.builtin import (
)
from frigate.util.config import (
StreamInfoRetriever,
find_config_file,
get_relative_coordinates,
migrate_frigate_config,
)
@@ -67,7 +68,6 @@ logger = logging.getLogger(__name__)
yaml = YAML()
DEFAULT_CONFIG_FILE = "/config/config.yml"
DEFAULT_CONFIG = """
mqtt:
enabled: False
@@ -230,12 +230,16 @@ def verify_recording_segments_setup_with_reasonable_time(
try:
seg_arg_index = record_args.index("-segment_time")
except ValueError:
raise ValueError(f"Camera {camera_config.name} has no segment_time in \
recording output args, segment args are required for record.")
raise ValueError(
f"Camera {camera_config.name} has no segment_time in \
recording output args, segment args are required for record."
)
if int(record_args[seg_arg_index + 1]) > 60:
raise ValueError(f"Camera {camera_config.name} has invalid segment_time output arg, \
segment_time must be 60 or less.")
raise ValueError(
f"Camera {camera_config.name} has invalid segment_time output arg, \
segment_time must be 60 or less."
)
def verify_zone_objects_are_tracked(camera_config: CameraConfig) -> None:
@@ -590,35 +594,27 @@ class FrigateConfig(FrigateBaseModel):
if isinstance(detector, dict)
else detector.model_dump(warnings="none")
)
detector_config: DetectorConfig = adapter.validate_python(model_dict)
if detector_config.model is None:
detector_config.model = self.model.model_copy()
else:
path = detector_config.model.path
detector_config.model = self.model.model_copy()
detector_config.model.path = path
detector_config: BaseDetectorConfig = adapter.validate_python(model_dict)
if "path" not in model_dict or len(model_dict.keys()) > 1:
logger.warning(
"Customizing more than a detector model path is unsupported."
)
# users should not set model themselves
if detector_config.model:
detector_config.model = None
merged_model = deep_merge(
detector_config.model.model_dump(exclude_unset=True, warnings="none"),
self.model.model_dump(exclude_unset=True, warnings="none"),
)
model_config = self.model.model_dump(exclude_unset=True, warnings="none")
if "path" not in merged_model:
if detector_config.model_path:
model_config["path"] = detector_config.model_path
if "path" not in model_config:
if detector_config.type == "cpu":
merged_model["path"] = "/cpu_model.tflite"
model_config["path"] = "/cpu_model.tflite"
elif detector_config.type == "edgetpu":
merged_model["path"] = "/edgetpu_model.tflite"
model_config["path"] = "/edgetpu_model.tflite"
detector_config.model = ModelConfig.model_validate(merged_model)
detector_config.model.check_and_load_plus_model(
self.plus_api, detector_config.type
)
detector_config.model.compute_model_hash()
model = ModelConfig.model_validate(model_config)
model.check_and_load_plus_model(self.plus_api, detector_config.type)
model.compute_model_hash()
detector_config.model = model
self.detectors[key] = detector_config
return self
@@ -634,16 +630,13 @@ class FrigateConfig(FrigateBaseModel):
@classmethod
def load(cls, **kwargs):
config_path = os.environ.get("CONFIG_FILE", DEFAULT_CONFIG_FILE)
if not os.path.isfile(config_path):
config_path = config_path.replace("yml", "yaml")
config_path = find_config_file()
# No configuration file found, create one.
new_config = False
if not os.path.isfile(config_path):
logger.info("No config file found, saving default config")
config_path = DEFAULT_CONFIG_FILE
config_path = config_path
new_config = True
else:
# Check if the config file needs to be migrated.

View File

@@ -13,6 +13,8 @@ FRIGATE_LOCALHOST = "http://127.0.0.1:5000"
PLUS_ENV_VAR = "PLUS_API_KEY"
PLUS_API_HOST = "https://api.frigate.video"
SHM_FRAMES_VAR = "SHM_MAX_FRAMES"
# Attribute & Object constants
DEFAULT_ATTRIBUTE_LABEL_MAP = {

View File

@@ -194,6 +194,9 @@ class BaseDetectorConfig(BaseModel):
model: Optional[ModelConfig] = Field(
default=None, title="Detector specific model configuration."
)
model_path: Optional[str] = Field(
default=None, title="Detector specific model path."
)
model_config = ConfigDict(
extra="allow", arbitrary_types_allowed=True, protected_namespaces=()
)
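Together with the FrigateConfig change above, per-detector customization is now reduced to this single model_path override; anything beyond it triggers the new "unsupported" warning. A sketch of the intended shape in parsed-dict form, with hypothetical values:

    detectors = {
        "coral": {
            "type": "edgetpu",
            "model_path": "/custom_model.tflite",  # the only supported per-detector override
        }
    }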

View File

@@ -32,6 +32,7 @@ class DeepStack(DetectionApi):
self.api_timeout = detector_config.api_timeout
self.api_key = detector_config.api_key
self.labels = detector_config.model.merged_labelmap
self.session = requests.Session()
def get_label_index(self, label_value):
if label_value.lower() == "truck":
@@ -51,7 +52,7 @@ class DeepStack(DetectionApi):
data = {"api_key": self.api_key}
try:
response = requests.post(
response = self.session.post(
self.api_url,
data=data,
files={"image": image_bytes},

View File

@@ -136,17 +136,17 @@ class Rknn(DetectionApi):
def check_config(self, config):
if (config.model.width != 320) or (config.model.height != 320):
raise Exception(
"Make sure to set the model width and height to 320 in your config.yml."
"Make sure to set the model width and height to 320 in your config."
)
if config.model.input_pixel_format != "bgr":
raise Exception(
'Make sure to set the model input_pixel_format to "bgr" in your config.yml.'
'Make sure to set the model input_pixel_format to "bgr" in your config.'
)
if config.model.input_tensor != "nhwc":
raise Exception(
'Make sure to set the model input_tensor to "nhwc" in your config.yml.'
'Make sure to set the model input_tensor to "nhwc" in your config.'
)
def detect_raw(self, tensor_input):

View File

@@ -219,19 +219,19 @@ class TensorRtDetector(DetectionApi):
]
def __init__(self, detector_config: TensorRTDetectorConfig):
assert (
TRT_SUPPORT
), f"TensorRT libraries not found, {DETECTOR_KEY} detector not present"
assert TRT_SUPPORT, (
f"TensorRT libraries not found, {DETECTOR_KEY} detector not present"
)
(cuda_err,) = cuda.cuInit(0)
assert (
cuda_err == cuda.CUresult.CUDA_SUCCESS
), f"Failed to initialize cuda {cuda_err}"
assert cuda_err == cuda.CUresult.CUDA_SUCCESS, (
f"Failed to initialize cuda {cuda_err}"
)
err, dev_count = cuda.cuDeviceGetCount()
logger.debug(f"Num Available Devices: {dev_count}")
assert (
detector_config.device < dev_count
), f"Invalid TensorRT Device Config. Device {detector_config.device} Invalid."
assert detector_config.device < dev_count, (
f"Invalid TensorRT Device Config. Device {detector_config.device} Invalid."
)
err, self.cu_ctx = cuda.cuCtxCreate(
cuda.CUctx_flags.CU_CTX_MAP_HOST, detector_config.device
)

View File

@@ -5,6 +5,7 @@ import logging
import os
import threading
from multiprocessing.synchronize import Event as MpEvent
from pathlib import Path
from typing import Optional
import cv2
@@ -217,16 +218,47 @@ class EmbeddingMaintainer(threading.Thread):
_, buffer = cv2.imencode(".jpg", cropped_image)
snapshot_image = buffer.tobytes()
num_thumbnails = len(self.tracked_events.get(event_id, []))
embed_image = (
[snapshot_image]
if event.has_snapshot and camera_config.genai.use_snapshot
else (
[thumbnail for data in self.tracked_events[event_id]]
if len(self.tracked_events.get(event_id, [])) > 0
[
data["thumbnail"]
for data in self.tracked_events[event_id]
]
if num_thumbnails > 0
else [thumbnail]
)
)
if camera_config.genai.debug_save_thumbnails and num_thumbnails > 0:
logger.debug(
f"Saving {num_thumbnails} thumbnails for event {event.id}"
)
Path(
os.path.join(CLIPS_DIR, f"genai-requests/{event.id}")
).mkdir(parents=True, exist_ok=True)
for idx, data in enumerate(self.tracked_events[event_id], 1):
jpg_bytes: bytes = data["thumbnail"]
if jpg_bytes is None:
logger.warning(
f"Unable to save thumbnail {idx} for {event.id}."
)
else:
with open(
os.path.join(
CLIPS_DIR,
f"genai-requests/{event.id}/{idx}.jpg",
),
"wb",
) as j:
j.write(jpg_bytes)
# Generate the description. Call happens in a thread since it is network bound.
threading.Thread(
target=self._embed_description,
@@ -325,18 +357,25 @@ class EmbeddingMaintainer(threading.Thread):
)
if event.has_snapshot and source == "snapshot":
with open(
os.path.join(CLIPS_DIR, f"{event.camera}-{event.id}.jpg"),
"rb",
) as image_file:
snapshot_file = os.path.join(CLIPS_DIR, f"{event.camera}-{event.id}.jpg")
if not os.path.isfile(snapshot_file):
logger.error(
f"Cannot regenerate description for {event.id}, snapshot file not found: {snapshot_file}"
)
return
with open(snapshot_file, "rb") as image_file:
snapshot_image = image_file.read()
img = cv2.imdecode(
np.frombuffer(snapshot_image, dtype=np.int8), cv2.IMREAD_COLOR
)
# crop snapshot based on region before sending off to genai
# provide full image if region doesn't exist (manual events)
region = event.data.get("region", [0, 0, 1, 1])
height, width = img.shape[:2]
x1_rel, y1_rel, width_rel, height_rel = event.data["region"]
x1_rel, y1_rel, width_rel, height_rel = region
x1, y1 = int(x1_rel * width), int(y1_rel * height)
cropped_image = img[
@@ -350,7 +389,7 @@ class EmbeddingMaintainer(threading.Thread):
[snapshot_image]
if event.has_snapshot and source == "snapshot"
else (
[thumbnail for data in self.tracked_events[event_id]]
[data["thumbnail"] for data in self.tracked_events[event_id]]
if len(self.tracked_events.get(event_id, [])) > 0
else [thumbnail]
)
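When genai.debug_save_thumbnails is enabled (see the GenAICameraConfig diff above), the exact JPEGs sent to the provider are written under clips/genai-requests/<event_id>/, numbered from 1. A sketch for inspecting them, assuming the default media path and a hypothetical event ID:

    from pathlib import Path

    event_dir = Path("/media/frigate/clips/genai-requests/1736300000.123456-abc123")
    for jpg in sorted(event_dir.glob("*.jpg")):
        print(jpg.name)  # 1.jpg, 2.jpg, ...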

View File

@@ -216,6 +216,10 @@ class AudioEventMaintainer(threading.Thread):
"label": label,
"last_detection": datetime.datetime.now().timestamp(),
}
else:
self.logger.warning(
f"Failed to create audio event with status code {resp.status_code}"
)
def expire_detections(self) -> None:
now = datetime.datetime.now().timestamp()

View File

@@ -4,7 +4,6 @@ import datetime
import logging
import os
import threading
from enum import Enum
from multiprocessing.synchronize import Event as MpEvent
from pathlib import Path
@@ -16,11 +15,6 @@ from frigate.models import Event, Timeline
logger = logging.getLogger(__name__)
class EventCleanupType(str, Enum):
clips = "clips"
snapshots = "snapshots"
CHUNK_SIZE = 50
@@ -67,19 +61,11 @@ class EventCleanup(threading.Thread):
return self.camera_labels[camera]["labels"]
def expire(self, media_type: EventCleanupType) -> list[str]:
def expire_snapshots(self) -> list[str]:
## Expire events from unlisted cameras based on the global config
if media_type == EventCleanupType.clips:
expire_days = max(
self.config.record.alerts.retain.days,
self.config.record.detections.retain.days,
)
file_extension = None # mp4 clips are no longer stored in /clips
update_params = {"has_clip": False}
else:
retain_config = self.config.snapshots.retain
file_extension = "jpg"
update_params = {"has_snapshot": False}
retain_config = self.config.snapshots.retain
file_extension = "jpg"
update_params = {"has_snapshot": False}
distinct_labels = self.get_removed_camera_labels()
@@ -87,10 +73,7 @@ class EventCleanup(threading.Thread):
# loop over object types in db
for event in distinct_labels:
# get expiration time for this label
if media_type == EventCleanupType.snapshots:
expire_days = retain_config.objects.get(
event.label, retain_config.default
)
expire_days = retain_config.objects.get(event.label, retain_config.default)
expire_after = (
datetime.datetime.now() - datetime.timedelta(days=expire_days)
@@ -110,7 +93,7 @@ class EventCleanup(threading.Thread):
.namedtuples()
.iterator()
)
logger.debug(f"{len(expired_events)} events can be expired")
logger.debug(f"{len(list(expired_events))} events can be expired")
# delete the media from disk
for expired in expired_events:
media_name = f"{expired.camera}-{expired.id}"
@@ -138,8 +121,8 @@ class EventCleanup(threading.Thread):
events_to_update = []
for batch in query.iterator():
events_to_update.extend([event.id for event in batch])
for event in query.iterator():
events_to_update.append(event.id)
if len(events_to_update) >= CHUNK_SIZE:
logger.debug(
f"Updating {update_params} for {len(events_to_update)} events"
@@ -162,13 +145,7 @@ class EventCleanup(threading.Thread):
## Expire events from cameras based on the camera config
for name, camera in self.config.cameras.items():
if media_type == EventCleanupType.clips:
expire_days = max(
camera.record.alerts.retain.days,
camera.record.detections.retain.days,
)
else:
retain_config = camera.snapshots.retain
retain_config = camera.snapshots.retain
# get distinct objects in database for this camera
distinct_labels = self.get_camera_labels(name)
@@ -176,10 +153,9 @@ class EventCleanup(threading.Thread):
# loop over object types in db
for event in distinct_labels:
# get expiration time for this label
if media_type == EventCleanupType.snapshots:
expire_days = retain_config.objects.get(
event.label, retain_config.default
)
expire_days = retain_config.objects.get(
event.label, retain_config.default
)
expire_after = (
datetime.datetime.now() - datetime.timedelta(days=expire_days)
@@ -206,19 +182,144 @@ class EventCleanup(threading.Thread):
for event in expired_events:
events_to_update.append(event.id)
if media_type == EventCleanupType.snapshots:
try:
media_name = f"{event.camera}-{event.id}"
media_path = Path(
f"{os.path.join(CLIPS_DIR, media_name)}.{file_extension}"
)
media_path.unlink(missing_ok=True)
media_path = Path(
f"{os.path.join(CLIPS_DIR, media_name)}-clean.png"
)
media_path.unlink(missing_ok=True)
except OSError as e:
logger.warning(f"Unable to delete event images: {e}")
try:
media_name = f"{event.camera}-{event.id}"
media_path = Path(
f"{os.path.join(CLIPS_DIR, media_name)}.{file_extension}"
)
media_path.unlink(missing_ok=True)
media_path = Path(
f"{os.path.join(CLIPS_DIR, media_name)}-clean.png"
)
media_path.unlink(missing_ok=True)
except OSError as e:
logger.warning(f"Unable to delete event images: {e}")
# update the clips attribute for the db entry
for i in range(0, len(events_to_update), CHUNK_SIZE):
batch = events_to_update[i : i + CHUNK_SIZE]
logger.debug(f"Updating {update_params} for {len(batch)} events")
Event.update(update_params).where(Event.id << batch).execute()
return events_to_update
def expire_clips(self) -> list[str]:
## Expire events from unlisted cameras based on the global config
expire_days = max(
self.config.record.alerts.retain.days,
self.config.record.detections.retain.days,
)
file_extension = None # mp4 clips are no longer stored in /clips
update_params = {"has_clip": False}
# get expiration time for this label
expire_after = (
datetime.datetime.now() - datetime.timedelta(days=expire_days)
).timestamp()
# grab all events after specific time
expired_events: list[Event] = (
Event.select(
Event.id,
Event.camera,
)
.where(
Event.camera.not_in(self.camera_keys),
Event.start_time < expire_after,
Event.retain_indefinitely == False,
)
.namedtuples()
.iterator()
)
logger.debug(f"{len(list(expired_events))} events can be expired")
# delete the media from disk
for expired in expired_events:
media_name = f"{expired.camera}-{expired.id}"
media_path = Path(f"{os.path.join(CLIPS_DIR, media_name)}.{file_extension}")
try:
media_path.unlink(missing_ok=True)
if file_extension == "jpg":
media_path = Path(
f"{os.path.join(CLIPS_DIR, media_name)}-clean.png"
)
media_path.unlink(missing_ok=True)
except OSError as e:
logger.warning(f"Unable to delete event images: {e}")
# update the clips attribute for the db entry
query = Event.select(Event.id).where(
Event.camera.not_in(self.camera_keys),
Event.start_time < expire_after,
Event.retain_indefinitely == False,
)
events_to_update = []
for event in query.iterator():
events_to_update.append(event.id)
if len(events_to_update) >= CHUNK_SIZE:
logger.debug(
f"Updating {update_params} for {len(events_to_update)} events"
)
Event.update(update_params).where(
Event.id << events_to_update
).execute()
events_to_update = []
# Update any remaining events
if events_to_update:
logger.debug(
f"Updating clips/snapshots attribute for {len(events_to_update)} events"
)
Event.update(update_params).where(Event.id << events_to_update).execute()
events_to_update = []
now = datetime.datetime.now()
## Expire events from cameras based on the camera config
for name, camera in self.config.cameras.items():
expire_days = max(
camera.record.alerts.retain.days,
camera.record.detections.retain.days,
)
alert_expire_date = (
now - datetime.timedelta(days=camera.record.alerts.retain.days)
).timestamp()
detection_expire_date = (
now - datetime.timedelta(days=camera.record.detections.retain.days)
).timestamp()
# grab all events after specific time
expired_events = (
Event.select(
Event.id,
Event.camera,
)
.where(
Event.camera == name,
Event.retain_indefinitely == False,
(
(
(Event.data["max_severity"] != "detection")
| (Event.data["max_severity"].is_null())
)
& (Event.end_time < alert_expire_date)
)
| (
(Event.data["max_severity"] == "detection")
& (Event.end_time < detection_expire_date)
),
)
.namedtuples()
.iterator()
)
# delete the grabbed clips from disk
# only snapshots are stored in /clips
# so no need to delete mp4 files
for event in expired_events:
events_to_update.append(event.id)
# update the clips attribute for the db entry
for i in range(0, len(events_to_update), CHUNK_SIZE):
@@ -231,7 +332,7 @@ class EventCleanup(threading.Thread):
def run(self) -> None:
# only expire events every 5 minutes
while not self.stop_event.wait(300):
events_with_expired_clips = self.expire(EventCleanupType.clips)
events_with_expired_clips = self.expire_clips()
# delete timeline entries for events that have expired recordings
# delete up to 100,000 at a time
@@ -242,7 +343,7 @@ class EventCleanup(threading.Thread):
Timeline.source_id << deleted_events_list[i : i + max_deletes]
).execute()
self.expire(EventCleanupType.snapshots)
self.expire_snapshots()
# drop events from db where has_clip and has_snapshot are false
events = (
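The split into expire_snapshots/expire_clips also makes the clip retention rule explicit: objects whose max_severity is "detection" expire on the detections schedule, while alerts (and legacy events without max_severity) use the alerts schedule. A condensed sketch of the cutoff logic, with assumed retain days:

    import datetime

    alert_days, detection_days = 14, 7  # assumed per-camera retention settings
    now = datetime.datetime.now()
    alert_expire = (now - datetime.timedelta(days=alert_days)).timestamp()
    detection_expire = (now - datetime.timedelta(days=detection_days)).timestamp()

    def clip_expired(end_time: float, max_severity) -> bool:
        # detection-only objects use the shorter window; everything else the longer one
        if max_severity == "detection":
            return end_time < detection_expire
        return end_time < alert_expire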

View File

@@ -10,6 +10,7 @@ from enum import Enum
from typing import Optional
import cv2
from numpy import ndarray
from frigate.comms.detections_updater import DetectionPublisher, DetectionTypeEnum
from frigate.comms.events_updater import EventUpdatePublisher
@@ -45,7 +46,7 @@ class ExternalEventProcessor:
duration: Optional[int],
include_recording: bool,
draw: dict[str, any],
snapshot_frame: any,
snapshot_frame: Optional[ndarray],
) -> str:
now = datetime.datetime.now().timestamp()
camera_config = self.config.cameras.get(camera)
@@ -64,6 +65,7 @@ class ExternalEventProcessor:
EventTypeEnum.api,
EventStateEnum.start,
camera,
"",
{
"id": event_id,
"label": label,
@@ -106,6 +108,7 @@ class ExternalEventProcessor:
EventTypeEnum.api,
EventStateEnum.end,
None,
"",
{"id": event_id, "end_time": end_time},
)
)
@@ -130,8 +133,11 @@ class ExternalEventProcessor:
label: str,
event_id: str,
draw: dict[str, any],
img_frame: any,
) -> str:
img_frame: Optional[ndarray],
) -> Optional[str]:
if img_frame is None:
return None
# write clean snapshot if enabled
if camera_config.snapshots.clean_copy:
ret, png = cv2.imencode(".png", img_frame)

View File

@@ -82,18 +82,23 @@ class EventProcessor(threading.Thread):
)
if source_type == EventTypeEnum.tracked_object:
id = event_data["id"]
self.timeline_queue.put(
(
camera,
source_type,
event_type,
self.events_in_process.get(event_data["id"]),
self.events_in_process.get(id),
event_data,
)
)
if event_type == EventStateEnum.start:
self.events_in_process[event_data["id"]] = event_data
# if this is the first message, just store it and continue; it's not time to insert it in the db
if (
event_type == EventStateEnum.start
or id not in self.events_in_process
):
self.events_in_process[id] = event_data
continue
self.handle_object_detection(event_type, camera, event_data)
@@ -123,10 +128,6 @@ class EventProcessor(threading.Thread):
"""handle tracked object event updates."""
updated_db = False
# if this is the first message, just store it and continue; it's not time to insert it in the db
if event_type == EventStateEnum.start:
self.events_in_process[event_data["id"]] = event_data
if should_update_db(self.events_in_process[event_data["id"]], event_data):
updated_db = True
camera_config = self.config.cameras[camera]
@@ -210,6 +211,7 @@ class EventProcessor(threading.Thread):
"top_score": event_data["top_score"],
"attributes": attributes,
"type": "object",
"max_severity": event_data.get("max_severity"),
},
}

View File

@@ -50,16 +50,9 @@ class LibvaGpuSelector:
return ""
FPS_VFR_PARAM = (
"-fps_mode vfr"
if int(os.getenv("LIBAVFORMAT_VERSION_MAJOR", "59") or "59") >= 59
else "-vsync 2"
)
TIMEOUT_PARAM = (
"-timeout"
if int(os.getenv("LIBAVFORMAT_VERSION_MAJOR", "59") or "59") >= 59
else "-stimeout"
)
LIBAV_VERSION = int(os.getenv("LIBAVFORMAT_VERSION_MAJOR", "59") or "59")
FPS_VFR_PARAM = "-fps_mode vfr" if LIBAV_VERSION >= 59 else "-vsync 2"
TIMEOUT_PARAM = "-timeout" if LIBAV_VERSION >= 59 else "-stimeout"
_gpu_selector = LibvaGpuSelector()
_user_agent_args = [
@@ -71,8 +64,8 @@ PRESETS_HW_ACCEL_DECODE = {
"preset-rpi-64-h264": "-c:v:1 h264_v4l2m2m",
"preset-rpi-64-h265": "-c:v:1 hevc_v4l2m2m",
FFMPEG_HWACCEL_VAAPI: f"-hwaccel_flags allow_profile_mismatch -hwaccel vaapi -hwaccel_device {_gpu_selector.get_selected_gpu()} -hwaccel_output_format vaapi",
"preset-intel-qsv-h264": f"-hwaccel qsv -qsv_device {_gpu_selector.get_selected_gpu()} -hwaccel_output_format qsv -c:v h264_qsv",
"preset-intel-qsv-h265": f"-load_plugin hevc_hw -hwaccel qsv -qsv_device {_gpu_selector.get_selected_gpu()} -hwaccel_output_format qsv -c:v hevc_qsv",
"preset-intel-qsv-h264": f"-hwaccel qsv -qsv_device {_gpu_selector.get_selected_gpu()} -hwaccel_output_format qsv -c:v h264_qsv{' -bsf:v dump_extra' if LIBAV_VERSION >= 61 else ''}", # https://trac.ffmpeg.org/ticket/9766#comment:17
"preset-intel-qsv-h265": f"-load_plugin hevc_hw -hwaccel qsv -qsv_device {_gpu_selector.get_selected_gpu()} -hwaccel_output_format qsv{' -bsf:v dump_extra' if LIBAV_VERSION >= 61 else ''}", # https://trac.ffmpeg.org/ticket/9766#comment:17
FFMPEG_HWACCEL_NVIDIA: "-hwaccel cuda -hwaccel_output_format cuda",
"preset-jetson-h264": "-c:v h264_nvmpi -resize {1}x{2}",
"preset-jetson-h265": "-c:v hevc_nvmpi -resize {1}x{2}",

View File

@@ -38,6 +38,11 @@ class OllamaClient(GenAIClient):
def _send(self, prompt: str, images: list[bytes]) -> Optional[str]:
"""Submit a request to Ollama"""
if self.provider is None:
logger.warning(
"Ollama provider has not been initialized, a description will not be generated. Check your Ollama configuration."
)
return None
try:
result = self.provider.generate(
self.genai_config.model,

View File

@@ -6,7 +6,7 @@ import queue
import threading
from collections import Counter, defaultdict
from multiprocessing.synchronize import Event as MpEvent
from typing import Callable
from typing import Callable, Optional
import cv2
import numpy as np
@@ -702,30 +702,7 @@ class TrackedObjectProcessor(threading.Thread):
return False
# If the object is not considered an alert or detection
review_config = self.config.cameras[camera].review
if not (
(
obj.obj_data["label"] in review_config.alerts.labels
and (
not review_config.alerts.required_zones
or set(obj.entered_zones) & set(review_config.alerts.required_zones)
)
)
or (
(
not review_config.detections.labels
or obj.obj_data["label"] in review_config.detections.labels
)
and (
not review_config.detections.required_zones
or set(obj.entered_zones)
& set(review_config.detections.required_zones)
)
)
):
logger.debug(
f"Not creating clip for {obj.obj_data['id']} because it did not qualify as an alert or detection"
)
if obj.max_severity is None:
return False
return True
@@ -784,13 +761,18 @@ class TrackedObjectProcessor(threading.Thread):
else:
return {}
def get_current_frame(self, camera, draw_options={}):
def get_current_frame(
self, camera: str, draw_options: dict[str, any] = {}
) -> Optional[np.ndarray]:
if camera == "birdseye":
return self.frame_manager.get(
"birdseye",
(self.config.birdseye.height * 3 // 2, self.config.birdseye.width),
)
if camera not in self.camera_states:
return None
return self.camera_states[camera].get_current_frame(draw_options)
def get_current_frame_time(self, camera) -> int:
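get_current_frame now returns None for a camera it does not track instead of raising KeyError, so callers are expected to guard. A minimal caller-side sketch, assuming processor is a TrackedObjectProcessor and the camera name is hypothetical:

    frame = processor.get_current_frame("driveway")
    if frame is None:
        pass  # unknown camera or no frame yet: fall back to an error image
    else:
        height, width = frame.shape[:2]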

View File

@@ -68,11 +68,13 @@ class PlusApi:
or self._token_data["expires"] - datetime.datetime.now().timestamp() < 60
):
if self.key is None:
raise Exception("Plus API not activated")
raise Exception(
"Plus API key not set. See https://docs.frigate.video/integrations/plus#set-your-api-key"
)
parts = self.key.split(":")
r = requests.get(f"{self.host}/v1/auth/token", auth=(parts[0], parts[1]))
if not r.ok:
raise Exception("Unable to refresh API token")
raise Exception(f"Unable to refresh API token: {r.text}")
self._token_data = r.json()
def _get_authorization_header(self) -> dict:
@@ -116,15 +118,6 @@ class PlusApi:
logger.error(f"Failed to upload original: {r.status_code} {r.text}")
raise Exception(r.text)
# resize and submit annotate
files = {"file": get_jpg_bytes(image, 640, 70)}
data = presigned_urls["annotate"]["fields"]
data["content-type"] = "image/jpeg"
r = requests.post(presigned_urls["annotate"]["url"], files=files, data=data)
if not r.ok:
logger.error(f"Failed to upload annotate: {r.status_code} {r.text}")
raise Exception(r.text)
# resize and submit thumbnail
files = {"file": get_jpg_bytes(image, 200, 70)}
data = presigned_urls["thumbnail"]["fields"]

View File

@@ -2,7 +2,6 @@
import copy
import logging
import os
import queue
import threading
import time
@@ -29,11 +28,11 @@ from frigate.const import (
AUTOTRACKING_ZOOM_EDGE_THRESHOLD,
AUTOTRACKING_ZOOM_IN_HYSTERESIS,
AUTOTRACKING_ZOOM_OUT_HYSTERESIS,
CONFIG_DIR,
)
from frigate.ptz.onvif import OnvifController
from frigate.track.tracked_object import TrackedObject
from frigate.util.builtin import update_yaml_file
from frigate.util.config import find_config_file
from frigate.util.image import SharedMemoryFrameManager, intersection_over_union
logger = logging.getLogger(__name__)
@@ -136,7 +135,7 @@ class PtzMotionEstimator:
try:
logger.debug(
f"{camera}: Motion estimator transformation: {self.coord_transformations.rel_to_abs([[0,0]])}"
f"{camera}: Motion estimator transformation: {self.coord_transformations.rel_to_abs([[0, 0]])}"
)
except Exception:
pass
@@ -328,13 +327,7 @@ class PtzAutoTracker:
self.autotracker_init[camera] = True
def _write_config(self, camera):
config_file = os.environ.get("CONFIG_FILE", f"{CONFIG_DIR}/config.yml")
# Check if we can use .yaml instead of .yml
config_file_yaml = config_file.replace(".yml", ".yaml")
if os.path.isfile(config_file_yaml):
config_file = config_file_yaml
config_file = find_config_file()
logger.debug(
f"{camera}: Writing new config with autotracker motion coefficients: {self.config.cameras[camera].onvif.autotracking.movement_weights}"
@@ -478,7 +471,7 @@ class PtzAutoTracker:
self.onvif.get_camera_status(camera)
logger.info(
f"Calibration for {camera} in progress: {round((step/num_steps)*100)}% complete"
f"Calibration for {camera} in progress: {round((step / num_steps) * 100)}% complete"
)
self.calibrating[camera] = False
@@ -697,7 +690,7 @@ class PtzAutoTracker:
f"{camera}: Predicted movement time: {self._predict_movement_time(camera, pan, tilt)}"
)
logger.debug(
f"{camera}: Actual movement time: {self.ptz_metrics[camera].stop_time.value-self.ptz_metrics[camera].start_time.value}"
f"{camera}: Actual movement time: {self.ptz_metrics[camera].stop_time.value - self.ptz_metrics[camera].start_time.value}"
)
# save metrics for better estimate calculations
@@ -990,10 +983,10 @@ class PtzAutoTracker:
logger.debug(f"{camera}: Zoom test: at max zoom: {at_max_zoom}")
logger.debug(f"{camera}: Zoom test: at min zoom: {at_min_zoom}")
logger.debug(
f'{camera}: Zoom test: zoom in hysteresis limit: {zoom_in_hysteresis} value: {AUTOTRACKING_ZOOM_IN_HYSTERESIS} original: {self.tracked_object_metrics[camera]["original_target_box"]} max: {self.tracked_object_metrics[camera]["max_target_box"]} target: {calculated_target_box if calculated_target_box else self.tracked_object_metrics[camera]["target_box"]}'
f"{camera}: Zoom test: zoom in hysteresis limit: {zoom_in_hysteresis} value: {AUTOTRACKING_ZOOM_IN_HYSTERESIS} original: {self.tracked_object_metrics[camera]['original_target_box']} max: {self.tracked_object_metrics[camera]['max_target_box']} target: {calculated_target_box if calculated_target_box else self.tracked_object_metrics[camera]['target_box']}"
)
logger.debug(
f'{camera}: Zoom test: zoom out hysteresis limit: {zoom_out_hysteresis} value: {AUTOTRACKING_ZOOM_OUT_HYSTERESIS} original: {self.tracked_object_metrics[camera]["original_target_box"]} max: {self.tracked_object_metrics[camera]["max_target_box"]} target: {calculated_target_box if calculated_target_box else self.tracked_object_metrics[camera]["target_box"]}'
f"{camera}: Zoom test: zoom out hysteresis limit: {zoom_out_hysteresis} value: {AUTOTRACKING_ZOOM_OUT_HYSTERESIS} original: {self.tracked_object_metrics[camera]['original_target_box']} max: {self.tracked_object_metrics[camera]['max_target_box']} target: {calculated_target_box if calculated_target_box else self.tracked_object_metrics[camera]['target_box']}"
)
# Zoom in conditions (and)
@@ -1076,7 +1069,7 @@ class PtzAutoTracker:
pan = ((centroid_x / camera_width) - 0.5) * 2
tilt = (0.5 - (centroid_y / camera_height)) * 2
logger.debug(f'{camera}: Original box: {obj.obj_data["box"]}')
logger.debug(f"{camera}: Original box: {obj.obj_data['box']}")
logger.debug(f"{camera}: Predicted box: {tuple(predicted_box)}")
logger.debug(
f"{camera}: Velocity: {tuple(np.round(average_velocity).flatten().astype(int))}"
@@ -1186,7 +1179,7 @@ class PtzAutoTracker:
)
zoom = (ratio - 1) / (ratio + 1)
logger.debug(
f'{camera}: limit: {self.tracked_object_metrics[camera]["max_target_box"]}, ratio: {ratio} zoom calculation: {zoom}'
f"{camera}: limit: {self.tracked_object_metrics[camera]['max_target_box']}, ratio: {ratio} zoom calculation: {zoom}"
)
if not result:
# zoom out with special condition if zooming out because of velocity, edges, etc.

View File

@@ -6,6 +6,7 @@ from importlib.util import find_spec
from pathlib import Path
import numpy
import requests
from onvif import ONVIFCamera, ONVIFError
from zeep.exceptions import Fault, TransportError
from zeep.transports import Transport
@@ -48,7 +49,11 @@ class OnvifController:
if cam.onvif.host:
try:
transport = Transport(timeout=10, operation_timeout=10)
session = requests.Session()
session.verify = not cam.onvif.tls_insecure
transport = Transport(
timeout=10, operation_timeout=10, session=session
)
self.cams[cam_name] = {
"onvif": ONVIFCamera(
cam.onvif.host,
@@ -558,22 +563,26 @@ class OnvifController:
if not self._init_onvif(camera_name):
return
if command == OnvifCommandEnum.init:
# already init
return
elif command == OnvifCommandEnum.stop:
self._stop(camera_name)
elif command == OnvifCommandEnum.preset:
self._move_to_preset(camera_name, param)
elif command == OnvifCommandEnum.move_relative:
_, pan, tilt = param.split("_")
self._move_relative(camera_name, float(pan), float(tilt), 0, 1)
elif (
command == OnvifCommandEnum.zoom_in or command == OnvifCommandEnum.zoom_out
):
self._zoom(camera_name, command)
else:
self._move(camera_name, command)
try:
if command == OnvifCommandEnum.init:
# already init
return
elif command == OnvifCommandEnum.stop:
self._stop(camera_name)
elif command == OnvifCommandEnum.preset:
self._move_to_preset(camera_name, param)
elif command == OnvifCommandEnum.move_relative:
_, pan, tilt = param.split("_")
self._move_relative(camera_name, float(pan), float(tilt), 0, 1)
elif (
command == OnvifCommandEnum.zoom_in
or command == OnvifCommandEnum.zoom_out
):
self._zoom(camera_name, command)
else:
self._move(camera_name, command)
except ONVIFError as e:
logger.error(f"Unable to handle onvif command: {e}")
def get_camera_info(self, camera_name: str) -> dict[str, any]:
if camera_name not in self.cams.keys():
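The new tls_insecure option (added in OnvifConfig above) simply disables certificate verification on the HTTP session handed to zeep, which is what cameras with self-signed certificates need. A standalone sketch of the wiring:

    import requests
    from zeep.transports import Transport

    session = requests.Session()
    session.verify = False  # only when onvif.tls_insecure is true
    transport = Transport(timeout=10, operation_timeout=10, session=session)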

View File

@@ -29,6 +29,7 @@ from frigate.const import (
RECORD_DIR,
)
from frigate.models import Recordings, ReviewSegment
from frigate.review.types import SeverityEnum
from frigate.util.services import get_video_properties
logger = logging.getLogger(__name__)
@@ -194,6 +195,7 @@ class RecordingMaintainer(threading.Thread):
ReviewSegment.select(
ReviewSegment.start_time,
ReviewSegment.end_time,
ReviewSegment.severity,
ReviewSegment.data,
)
.where(
@@ -219,11 +221,15 @@ class RecordingMaintainer(threading.Thread):
[r for r in recordings_to_insert if r is not None],
)
def drop_segment(self, cache_path: str) -> None:
Path(cache_path).unlink(missing_ok=True)
self.end_time_cache.pop(cache_path, None)
async def validate_and_move_segment(
self, camera: str, reviews: list[ReviewSegment], recording: dict[str, any]
) -> None:
cache_path = recording["cache_path"]
start_time = recording["start_time"]
cache_path: str = recording["cache_path"]
start_time: datetime.datetime = recording["start_time"]
record_config = self.config.cameras[camera].record
# Just delete files if recordings are turned off
@@ -231,8 +237,7 @@ class RecordingMaintainer(threading.Thread):
camera not in self.config.cameras
or not self.config.cameras[camera].record.enabled
):
Path(cache_path).unlink(missing_ok=True)
self.end_time_cache.pop(cache_path, None)
self.drop_segment(cache_path)
return
if cache_path in self.end_time_cache:
@@ -260,24 +265,34 @@ class RecordingMaintainer(threading.Thread):
return
# if cached file's start_time is earlier than the retain days for the camera
# meaning continuous recording is not enabled
if start_time <= (
datetime.datetime.now().astimezone(datetime.timezone.utc)
- datetime.timedelta(days=self.config.cameras[camera].record.retain.days)
):
# if the cached segment overlaps with the events:
# if the cached segment overlaps with the review items:
overlaps = False
for review in reviews:
# if the event starts in the future, stop checking events
severity = SeverityEnum[review.severity]
# if the review item starts in the future, stop checking review items
# and remove this segment
if review.start_time > end_time.timestamp():
if (
review.start_time - record_config.get_review_pre_capture(severity)
) > end_time.timestamp():
overlaps = False
Path(cache_path).unlink(missing_ok=True)
self.end_time_cache.pop(cache_path, None)
break
# if the event is in progress or ends after the recording starts, keep it
# and stop looking at events
if review.end_time is None or review.end_time >= start_time.timestamp():
# if the review item is in progress or ends after the recording starts, keep it
# and stop looking at review items
if (
review.end_time is None
or (
review.end_time
+ record_config.get_review_post_capture(severity)
)
>= start_time.timestamp()
):
overlaps = True
break
@@ -296,7 +311,7 @@ class RecordingMaintainer(threading.Thread):
cache_path,
record_mode,
)
# if it doesn't overlap with an event, go ahead and drop the segment
# if it doesn't overlap with a review item, go ahead and drop the segment
# if it ends more than the configured pre_capture for the camera
else:
camera_info = self.object_recordings_info[camera]
@@ -307,9 +322,9 @@ class RecordingMaintainer(threading.Thread):
most_recently_processed_frame_time - record_config.event_pre_capture
).astimezone(datetime.timezone.utc)
if end_time < retain_cutoff:
Path(cache_path).unlink(missing_ok=True)
self.end_time_cache.pop(cache_path, None)
self.drop_segment(cache_path)
# else retain days includes this segment
# meaning continuous recording is enabled
else:
# assume that empty means the relevant recording info has not been received yet
camera_info = self.object_recordings_info[camera]
@@ -390,8 +405,7 @@ class RecordingMaintainer(threading.Thread):
# check if the segment shouldn't be stored
if segment_info.should_discard_segment(store_mode):
Path(cache_path).unlink(missing_ok=True)
self.end_time_cache.pop(cache_path, None)
self.drop_segment(cache_path)
return
# directory will be in utc due to start_time being in utc
@@ -435,7 +449,7 @@ class RecordingMaintainer(threading.Thread):
return None
else:
logger.debug(
f"Copied {file_path} in {datetime.datetime.now().timestamp()-start_frame} seconds."
f"Copied {file_path} in {datetime.datetime.now().timestamp() - start_frame} seconds."
)
try:

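The reworked overlap check above widens each review item by its severity-specific pre- and post-capture before comparing against the cached segment's window. A minimal standalone sketch of that decision, using hypothetical padding values in place of record_config.get_review_pre_capture / get_review_post_capture:

from enum import Enum
from typing import Optional

class SeverityEnum(str, Enum):
    alert = "alert"
    detection = "detection"

# hypothetical per-severity padding in seconds
PRE_CAPTURE = {SeverityEnum.alert: 10, SeverityEnum.detection: 5}
POST_CAPTURE = {SeverityEnum.alert: 10, SeverityEnum.detection: 5}

def segment_overlaps(
    seg_start: float,
    seg_end: float,
    reviews: list[tuple[float, Optional[float], SeverityEnum]],
) -> bool:
    """Check a cached segment against review items sorted by start_time."""
    for start, end, severity in reviews:
        if (start - PRE_CAPTURE[severity]) > seg_end:
            # this and all later review items start after the segment ends
            return False
        if end is None or (end + POST_CAPTURE[severity]) >= seg_start:
            # in progress, or ends after the segment starts: keep the segment
            return True
    return False

assert segment_overlaps(100.0, 110.0, [(105.0, None, SeverityEnum.alert)])
assert not segment_overlaps(100.0, 110.0, [(125.0, 130.0, SeverityEnum.detection)])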

@@ -7,7 +7,6 @@ import random
import string
import sys
import threading
from enum import Enum
from multiprocessing.synchronize import Event as MpEvent
from pathlib import Path
from typing import Optional
@@ -27,6 +26,7 @@ from frigate.const import (
from frigate.events.external import ManualEventState
from frigate.models import ReviewSegment
from frigate.object_processing import TrackedObject
from frigate.review.types import SeverityEnum
from frigate.util.image import SharedMemoryFrameManager, calculate_16_9_crop
logger = logging.getLogger(__name__)
@@ -39,11 +39,6 @@ THRESHOLD_ALERT_ACTIVITY = 120
THRESHOLD_DETECTION_ACTIVITY = 30
class SeverityEnum(str, Enum):
alert = "alert"
detection = "detection"
class PendingReviewSegment:
def __init__(
self,
@@ -261,7 +256,7 @@ class ReviewSegmentMaintainer(threading.Thread):
elif object["sub_label"][0] in self.config.model.all_attributes:
segment.detections[object["id"]] = object["sub_label"][0]
else:
segment.detections[object["id"]] = f'{object["label"]}-verified'
segment.detections[object["id"]] = f"{object['label']}-verified"
segment.sub_labels[object["id"]] = object["sub_label"][0]
# if object is alert label
@@ -357,7 +352,7 @@ class ReviewSegmentMaintainer(threading.Thread):
elif object["sub_label"][0] in self.config.model.all_attributes:
detections[object["id"]] = object["sub_label"][0]
else:
detections[object["id"]] = f'{object["label"]}-verified'
detections[object["id"]] = f"{object['label']}-verified"
sub_labels[object["id"]] = object["sub_label"][0]
# if object is alert label
@@ -480,7 +475,9 @@ class ReviewSegmentMaintainer(threading.Thread):
if not self.config.cameras[camera].record.enabled:
if current_segment:
self.update_existing_segment(current_segment, frame_time, [])
self.update_existing_segment(
current_segment, frame_name, frame_time, []
)
continue
@@ -530,7 +527,9 @@ class ReviewSegmentMaintainer(threading.Thread):
if event_id in self.indefinite_events[camera]:
self.indefinite_events[camera].pop(event_id)
current_segment.last_update = manual_info["end_time"]
if len(self.indefinite_events[camera]) == 0:
current_segment.last_update = manual_info["end_time"]
else:
logger.error(
f"Event with ID {event_id} has a set duration and can not be ended manually."

frigate/review/types.py Normal file

@@ -0,0 +1,6 @@
from enum import Enum
class SeverityEnum(str, Enum):
alert = "alert"
detection = "detection"
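Both maintainers now import the enum from this shared module. Since it subclasses str, members compare equal to their raw values, which is what lets the recording maintainer index it with a database string; a quick illustration:

from frigate.review.types import SeverityEnum

severity = SeverityEnum["alert"]  # lookup by member name, as the recording maintainer does
assert severity == "alert"  # str subclass: members compare equal to plain strings
assert SeverityEnum("detection") is SeverityEnum.detection  # lookup by value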


@@ -72,8 +72,7 @@ class BaseServiceProcess(Service, ABC):
running = False
except TimeoutError:
self.manager.logger.warning(
f"{self.name} is still running after "
f"{timeout} seconds. Killing."
f"{self.name} is still running after {timeout} seconds. Killing."
)
if running:


@@ -293,7 +293,7 @@ def stats_snapshot(
for path in [RECORD_DIR, CLIPS_DIR, CACHE_DIR, "/dev/shm"]:
try:
storage_stats = shutil.disk_usage(path)
except FileNotFoundError:
except (FileNotFoundError, OSError):
stats["service"]["storage"][path] = {}
continue


@@ -17,6 +17,8 @@ bandwidth_equation = Recordings.segment_size / (
Recordings.end_time - Recordings.start_time
)
MAX_CALCULATED_BANDWIDTH = 10000 # 10Gb/hr
class StorageMaintainer(threading.Thread):
"""Maintain frigates recording storage."""
@@ -52,6 +54,12 @@ class StorageMaintainer(threading.Thread):
* 3600,
2,
)
if bandwidth > MAX_CALCULATED_BANDWIDTH:
logger.warning(
f"{camera} has a bandwidth of {bandwidth} MB/hr which exceeds the expected maximum. This typically indicates an issue with the cameras recordings."
)
bandwidth = MAX_CALCULATED_BANDWIDTH
except TypeError:
bandwidth = 0
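The new guard clamps the per-camera estimate, which is segment size over duration scaled to MB/hr. The same arithmetic in isolation (function name hypothetical):

MAX_CALCULATED_BANDWIDTH = 10000  # 10Gb/hr

def camera_bandwidth_mb_per_hr(segment_size_mb, start_time, end_time) -> float:
    try:
        bandwidth = round(segment_size_mb / (end_time - start_time) * 3600, 2)
    except TypeError:
        return 0
    if bandwidth > MAX_CALCULATED_BANDWIDTH:
        # implausibly high average, usually corrupt or overlapping recordings
        return MAX_CALCULATED_BANDWIDTH
    return bandwidth

print(camera_bandwidth_mb_per_hr(5.0, 0.0, 10.0))    # 1800.0
print(camera_bandwidth_mb_per_hr(500.0, 0.0, 10.0))  # clamped to 10000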


@@ -9,8 +9,8 @@ from playhouse.sqliteq import SqliteQueueDatabase
from frigate.api.fastapi_app import create_fastapi_app
from frigate.config import FrigateConfig
from frigate.models import Event, ReviewSegment
from frigate.review.maintainer import SeverityEnum
from frigate.models import Event, Recordings, ReviewSegment
from frigate.review.types import SeverityEnum
from frigate.test.const import TEST_DB, TEST_DB_CLEANUPS
@@ -146,17 +146,35 @@ class BaseTestHttp(unittest.TestCase):
def insert_mock_review_segment(
self,
id: str,
start_time: datetime.datetime = datetime.datetime.now().timestamp(),
end_time: datetime.datetime = datetime.datetime.now().timestamp() + 20,
start_time: float = datetime.datetime.now().timestamp(),
end_time: float = datetime.datetime.now().timestamp() + 20,
severity: SeverityEnum = SeverityEnum.alert,
has_been_reviewed: bool = False,
) -> Event:
"""Inserts a basic event model with a given id."""
"""Inserts a review segment model with a given id."""
return ReviewSegment.insert(
id=id,
camera="front_door",
start_time=start_time,
end_time=end_time,
has_been_reviewed=False,
severity=SeverityEnum.alert,
has_been_reviewed=has_been_reviewed,
severity=severity,
thumb_path=False,
data={},
).execute()
def insert_mock_recording(
self,
id: str,
start_time: float = datetime.datetime.now().timestamp(),
end_time: float = datetime.datetime.now().timestamp() + 20,
) -> Event:
"""Inserts a recording model with a given id."""
return Recordings.insert(
id=id,
path=id,
camera="front_door",
start_time=start_time,
end_time=end_time,
duration=end_time - start_time,
).execute()


@@ -1,76 +1,89 @@
import datetime
from datetime import datetime, timedelta
from fastapi.testclient import TestClient
from frigate.models import Event, ReviewSegment
from frigate.models import Event, Recordings, ReviewSegment
from frigate.review.types import SeverityEnum
from frigate.test.http_api.base_http_test import BaseTestHttp
class TestHttpReview(BaseTestHttp):
def setUp(self):
super().setUp([Event, ReviewSegment])
super().setUp([Event, Recordings, ReviewSegment])
self.app = super().create_app()
def _get_reviews(self, ids: list[str]):
return list(
ReviewSegment.select(ReviewSegment.id)
.where(ReviewSegment.id.in_(ids))
.execute()
)
def _get_recordings(self, ids: list[str]):
return list(
Recordings.select(Recordings.id).where(Recordings.id.in_(ids)).execute()
)
####################################################################################################################
################################### GET /review Endpoint ########################################################
####################################################################################################################
# Does not return any data point since the end time (before parameter) is not passed and the review segment end_time is 2 seconds from now
def test_get_review_no_filters_no_matches(self):
app = super().create_app()
now = datetime.datetime.now().timestamp()
now = datetime.now().timestamp()
with TestClient(app) as client:
with TestClient(self.app) as client:
super().insert_mock_review_segment("123456.random", now, now + 2)
reviews_response = client.get("/review")
assert reviews_response.status_code == 200
reviews_in_response = reviews_response.json()
assert len(reviews_in_response) == 0
response = client.get("/review")
assert response.status_code == 200
response_json = response.json()
assert len(response_json) == 0
def test_get_review_no_filters(self):
app = super().create_app()
now = datetime.datetime.now().timestamp()
now = datetime.now().timestamp()
with TestClient(app) as client:
with TestClient(self.app) as client:
super().insert_mock_review_segment("123456.random", now - 2, now - 1)
reviews_response = client.get("/review")
assert reviews_response.status_code == 200
reviews_in_response = reviews_response.json()
assert len(reviews_in_response) == 1
response = client.get("/review")
assert response.status_code == 200
response_json = response.json()
assert len(response_json) == 1
def test_get_review_with_time_filter_no_matches(self):
app = super().create_app()
now = datetime.datetime.now().timestamp()
now = datetime.now().timestamp()
with TestClient(app) as client:
with TestClient(self.app) as client:
id = "123456.random"
super().insert_mock_review_segment(id, now, now + 2)
params = {
"after": now,
"before": now + 3,
}
reviews_response = client.get("/review", params=params)
assert reviews_response.status_code == 200
reviews_in_response = reviews_response.json()
assert len(reviews_in_response) == 0
response = client.get("/review", params=params)
assert response.status_code == 200
response_json = response.json()
assert len(response_json) == 0
def test_get_review_with_time_filter(self):
app = super().create_app()
now = datetime.datetime.now().timestamp()
now = datetime.now().timestamp()
with TestClient(app) as client:
with TestClient(self.app) as client:
id = "123456.random"
super().insert_mock_review_segment(id, now, now + 2)
params = {
"after": now - 1,
"before": now + 3,
}
reviews_response = client.get("/review", params=params)
assert reviews_response.status_code == 200
reviews_in_response = reviews_response.json()
assert len(reviews_in_response) == 1
assert reviews_in_response[0]["id"] == id
response = client.get("/review", params=params)
assert response.status_code == 200
response_json = response.json()
assert len(response_json) == 1
assert response_json[0]["id"] == id
def test_get_review_with_limit_filter(self):
app = super().create_app()
now = datetime.datetime.now().timestamp()
now = datetime.now().timestamp()
with TestClient(app) as client:
with TestClient(self.app) as client:
id = "123456.random"
id2 = "654321.random"
super().insert_mock_review_segment(id, now, now + 2)
@@ -80,17 +93,49 @@ class TestHttpReview(BaseTestHttp):
"after": now,
"before": now + 3,
}
reviews_response = client.get("/review", params=params)
assert reviews_response.status_code == 200
reviews_in_response = reviews_response.json()
assert len(reviews_in_response) == 1
assert reviews_in_response[0]["id"] == id2
response = client.get("/review", params=params)
assert response.status_code == 200
response_json = response.json()
assert len(response_json) == 1
assert response_json[0]["id"] == id2
def test_get_review_with_severity_filters(self):
now = datetime.now().timestamp()
with TestClient(self.app) as client:
id = "123456.random"
super().insert_mock_review_segment(id, now, now + 2, SeverityEnum.detection)
params = {
"severity": "detection",
"after": now - 1,
"before": now + 3,
}
response = client.get("/review", params=params)
assert response.status_code == 200
response_json = response.json()
assert len(response_json) == 1
assert response_json[0]["id"] == id
def test_get_review_with_severity_filters_no_matches(self):
now = datetime.now().timestamp()
with TestClient(self.app) as client:
id = "123456.random"
super().insert_mock_review_segment(id, now, now + 2, SeverityEnum.detection)
params = {
"severity": "alert",
"after": now - 1,
"before": now + 3,
}
response = client.get("/review", params=params)
assert response.status_code == 200
response_json = response.json()
assert len(response_json) == 0
def test_get_review_with_all_filters(self):
app = super().create_app()
now = datetime.datetime.now().timestamp()
now = datetime.now().timestamp()
with TestClient(app) as client:
with TestClient(self.app) as client:
id = "123456.random"
super().insert_mock_review_segment(id, now, now + 2)
params = {
@@ -103,8 +148,424 @@ class TestHttpReview(BaseTestHttp):
"after": now - 1,
"before": now + 3,
}
reviews_response = client.get("/review", params=params)
assert reviews_response.status_code == 200
reviews_in_response = reviews_response.json()
assert len(reviews_in_response) == 1
assert reviews_in_response[0]["id"] == id
response = client.get("/review", params=params)
assert response.status_code == 200
response_json = response.json()
assert len(response_json) == 1
assert response_json[0]["id"] == id
####################################################################################################################
################################### GET /review/summary Endpoint #################################################
####################################################################################################################
def test_get_review_summary_all_filters(self):
with TestClient(self.app) as client:
super().insert_mock_review_segment("123456.random")
params = {
"cameras": "front_door",
"labels": "all",
"zones": "all",
"timezone": "utc",
}
response = client.get("/review/summary", params=params)
assert response.status_code == 200
response_json = response.json()
# e.g. '2024-11-24'
today_formatted = datetime.today().strftime("%Y-%m-%d")
expected_response = {
"last24Hours": {
"reviewed_alert": 0,
"reviewed_detection": 0,
"total_alert": 1,
"total_detection": 0,
},
today_formatted: {
"day": today_formatted,
"reviewed_alert": 0,
"reviewed_detection": 0,
"total_alert": 1,
"total_detection": 0,
},
}
self.assertEqual(response_json, expected_response)
def test_get_review_summary_no_filters(self):
with TestClient(self.app) as client:
super().insert_mock_review_segment("123456.random")
response = client.get("/review/summary")
assert response.status_code == 200
response_json = response.json()
# e.g. '2024-11-24'
today_formatted = datetime.today().strftime("%Y-%m-%d")
expected_response = {
"last24Hours": {
"reviewed_alert": 0,
"reviewed_detection": 0,
"total_alert": 1,
"total_detection": 0,
},
today_formatted: {
"day": today_formatted,
"reviewed_alert": 0,
"reviewed_detection": 0,
"total_alert": 1,
"total_detection": 0,
},
}
self.assertEqual(response_json, expected_response)
def test_get_review_summary_multiple_days(self):
now = datetime.now()
five_days_ago = datetime.today() - timedelta(days=5)
with TestClient(self.app) as client:
super().insert_mock_review_segment(
"123456.random", now.timestamp() - 2, now.timestamp() - 1
)
super().insert_mock_review_segment(
"654321.random",
five_days_ago.timestamp(),
five_days_ago.timestamp() + 1,
)
response = client.get("/review/summary")
assert response.status_code == 200
response_json = response.json()
# e.g. '2024-11-24'
today_formatted = now.strftime("%Y-%m-%d")
# e.g. '2024-11-19'
five_days_ago_formatted = five_days_ago.strftime("%Y-%m-%d")
expected_response = {
"last24Hours": {
"reviewed_alert": 0,
"reviewed_detection": 0,
"total_alert": 1,
"total_detection": 0,
},
today_formatted: {
"day": today_formatted,
"reviewed_alert": 0,
"reviewed_detection": 0,
"total_alert": 1,
"total_detection": 0,
},
five_days_ago_formatted: {
"day": five_days_ago_formatted,
"reviewed_alert": 0,
"reviewed_detection": 0,
"total_alert": 1,
"total_detection": 0,
},
}
self.assertEqual(response_json, expected_response)
def test_get_review_summary_multiple_days_edge_cases(self):
now = datetime.now()
five_days_ago = datetime.today() - timedelta(days=5)
twenty_days_ago = datetime.today() - timedelta(days=20)
one_month_ago = datetime.today() - timedelta(days=30)
one_month_ago_ts = one_month_ago.timestamp()
with TestClient(self.app) as client:
super().insert_mock_review_segment("123456.random", now.timestamp())
super().insert_mock_review_segment(
"123457.random", five_days_ago.timestamp()
)
super().insert_mock_review_segment(
"123458.random",
twenty_days_ago.timestamp(),
None,
SeverityEnum.detection,
)
# One month ago plus 5 seconds fits within the condition (review.start_time > month_ago). Assuming that the endpoint does not take more than 5 seconds to be invoked
super().insert_mock_review_segment(
"123459.random",
one_month_ago_ts + 5,
None,
SeverityEnum.detection,
)
# This won't appear in the output since it's not within last month start_time clause (review.start_time > month_ago)
super().insert_mock_review_segment("123450.random", one_month_ago_ts)
response = client.get("/review/summary")
assert response.status_code == 200
response_json = response.json()
# e.g. '2024-11-24'
today_formatted = now.strftime("%Y-%m-%d")
# e.g. '2024-11-19'
five_days_ago_formatted = five_days_ago.strftime("%Y-%m-%d")
# e.g. '2024-11-04'
twenty_days_ago_formatted = twenty_days_ago.strftime("%Y-%m-%d")
# e.g. '2024-10-24'
one_month_ago_formatted = one_month_ago.strftime("%Y-%m-%d")
expected_response = {
"last24Hours": {
"reviewed_alert": 0,
"reviewed_detection": 0,
"total_alert": 1,
"total_detection": 0,
},
today_formatted: {
"day": today_formatted,
"reviewed_alert": 0,
"reviewed_detection": 0,
"total_alert": 1,
"total_detection": 0,
},
five_days_ago_formatted: {
"day": five_days_ago_formatted,
"reviewed_alert": 0,
"reviewed_detection": 0,
"total_alert": 1,
"total_detection": 0,
},
twenty_days_ago_formatted: {
"day": twenty_days_ago_formatted,
"reviewed_alert": 0,
"reviewed_detection": 0,
"total_alert": 0,
"total_detection": 1,
},
one_month_ago_formatted: {
"day": one_month_ago_formatted,
"reviewed_alert": 0,
"reviewed_detection": 0,
"total_alert": 0,
"total_detection": 1,
},
}
self.assertEqual(response_json, expected_response)
def test_get_review_summary_multiple_in_same_day(self):
now = datetime.now()
five_days_ago = datetime.today() - timedelta(days=5)
with TestClient(self.app) as client:
super().insert_mock_review_segment("123456.random", now.timestamp())
five_days_ago_ts = five_days_ago.timestamp()
for i in range(20):
super().insert_mock_review_segment(
f"123456_{i}.random_alert",
five_days_ago_ts,
five_days_ago_ts,
SeverityEnum.alert,
)
for i in range(15):
super().insert_mock_review_segment(
f"123456_{i}.random_detection",
five_days_ago_ts,
five_days_ago_ts,
SeverityEnum.detection,
)
response = client.get("/review/summary")
assert response.status_code == 200
response_json = response.json()
# e.g. '2024-11-24'
today_formatted = now.strftime("%Y-%m-%d")
# e.g. '2024-11-19'
five_days_ago_formatted = five_days_ago.strftime("%Y-%m-%d")
expected_response = {
"last24Hours": {
"reviewed_alert": 0,
"reviewed_detection": 0,
"total_alert": 1,
"total_detection": 0,
},
today_formatted: {
"day": today_formatted,
"reviewed_alert": 0,
"reviewed_detection": 0,
"total_alert": 1,
"total_detection": 0,
},
five_days_ago_formatted: {
"day": five_days_ago_formatted,
"reviewed_alert": 0,
"reviewed_detection": 0,
"total_alert": 20,
"total_detection": 15,
},
}
self.assertEqual(response_json, expected_response)
def test_get_review_summary_multiple_in_same_day_with_reviewed(self):
five_days_ago = datetime.today() - timedelta(days=5)
with TestClient(self.app) as client:
five_days_ago_ts = five_days_ago.timestamp()
for i in range(10):
super().insert_mock_review_segment(
f"123456_{i}.random_alert_not_reviewed",
five_days_ago_ts,
five_days_ago_ts,
SeverityEnum.alert,
False,
)
for i in range(10):
super().insert_mock_review_segment(
f"123456_{i}.random_alert_reviewed",
five_days_ago_ts,
five_days_ago_ts,
SeverityEnum.alert,
True,
)
for i in range(10):
super().insert_mock_review_segment(
f"123456_{i}.random_detection_not_reviewed",
five_days_ago_ts,
five_days_ago_ts,
SeverityEnum.detection,
False,
)
for i in range(5):
super().insert_mock_review_segment(
f"123456_{i}.random_detection_reviewed",
five_days_ago_ts,
five_days_ago_ts,
SeverityEnum.detection,
True,
)
response = client.get("/review/summary")
assert response.status_code == 200
response_json = response.json()
# e.g. '2024-11-19'
five_days_ago_formatted = five_days_ago.strftime("%Y-%m-%d")
expected_response = {
"last24Hours": {
"reviewed_alert": None,
"reviewed_detection": None,
"total_alert": None,
"total_detection": None,
},
five_days_ago_formatted: {
"day": five_days_ago_formatted,
"reviewed_alert": 10,
"reviewed_detection": 5,
"total_alert": 20,
"total_detection": 15,
},
}
self.assertEqual(response_json, expected_response)
####################################################################################################################
################################### POST reviews/viewed Endpoint ################################################
####################################################################################################################
def test_post_reviews_viewed_no_body(self):
with TestClient(self.app) as client:
super().insert_mock_review_segment("123456.random")
response = client.post("/reviews/viewed")
# Missing ids
assert response.status_code == 422
def test_post_reviews_viewed_no_body_ids(self):
with TestClient(self.app) as client:
super().insert_mock_review_segment("123456.random")
body = {"ids": [""]}
response = client.post("/reviews/viewed", json=body)
# Missing ids
assert response.status_code == 422
def test_post_reviews_viewed_non_existent_id(self):
with TestClient(self.app) as client:
id = "123456.random"
super().insert_mock_review_segment(id)
body = {"ids": ["1"]}
response = client.post("/reviews/viewed", json=body)
assert response.status_code == 200
response = response.json()
assert response["success"] == True
assert response["message"] == "Reviewed multiple items"
# Verify that in DB the review segment was not changed
review_segment_in_db = (
ReviewSegment.select(ReviewSegment.has_been_reviewed)
.where(ReviewSegment.id == id)
.get()
)
assert review_segment_in_db.has_been_reviewed == False
def test_post_reviews_viewed(self):
with TestClient(self.app) as client:
id = "123456.random"
super().insert_mock_review_segment(id)
body = {"ids": [id]}
response = client.post("/reviews/viewed", json=body)
assert response.status_code == 200
response = response.json()
assert response["success"] == True
assert response["message"] == "Reviewed multiple items"
# Verify that in DB the review segment was changed
review_segment_in_db = (
ReviewSegment.select(ReviewSegment.has_been_reviewed)
.where(ReviewSegment.id == id)
.get()
)
assert review_segment_in_db.has_been_reviewed == True
####################################################################################################################
################################### POST reviews/delete Endpoint ################################################
####################################################################################################################
def test_post_reviews_delete_no_body(self):
with TestClient(self.app) as client:
super().insert_mock_review_segment("123456.random")
response = client.post("/reviews/delete")
# Missing ids
assert response.status_code == 422
def test_post_reviews_delete_no_body_ids(self):
with TestClient(self.app) as client:
super().insert_mock_review_segment("123456.random")
body = {"ids": [""]}
response = client.post("/reviews/delete", json=body)
# Missing ids
assert response.status_code == 422
def test_post_reviews_delete_non_existent_id(self):
with TestClient(self.app) as client:
id = "123456.random"
super().insert_mock_review_segment(id)
body = {"ids": ["1"]}
response = client.post("/reviews/delete", json=body)
assert response.status_code == 200
response_json = response.json()
assert response_json["success"] == True
assert response_json["message"] == "Deleted review items."
# Verify that in DB the review segment was not deleted
review_ids_in_db_after = self._get_reviews([id])
assert len(review_ids_in_db_after) == 1
assert review_ids_in_db_after[0].id == id
def test_post_reviews_delete(self):
with TestClient(self.app) as client:
id = "123456.random"
super().insert_mock_review_segment(id)
body = {"ids": [id]}
response = client.post("/reviews/delete", json=body)
assert response.status_code == 200
response_json = response.json()
assert response_json["success"] == True
assert response_json["message"] == "Deleted review items."
# Verify that in DB the review segment was deleted
review_ids_in_db_after = self._get_reviews([id])
assert len(review_ids_in_db_after) == 0
def test_post_reviews_delete_many(self):
with TestClient(self.app) as client:
ids = ["123456.random", "654321.random"]
for id in ids:
super().insert_mock_review_segment(id)
super().insert_mock_recording(id)
review_ids_in_db_before = self._get_reviews(ids)
recordings_ids_in_db_before = self._get_recordings(ids)
assert len(review_ids_in_db_before) == 2
assert len(recordings_ids_in_db_before) == 2
body = {"ids": ids}
response = client.post("/reviews/delete", json=body)
assert response.status_code == 200
response_json = response.json()
assert response_json["success"] == True
assert response_json["message"] == "Deleted review items."
# Verify that in DB all review segments and recordings that were passed were deleted
review_ids_in_db_after = self._get_reviews(ids)
recording_ids_in_db_after = self._get_recordings(ids)
assert len(review_ids_in_db_after) == 0
assert len(recording_ids_in_db_after) == 0


@@ -75,11 +75,11 @@ class TestConfig(unittest.TestCase):
"detectors": {
"cpu": {
"type": "cpu",
"model": {"path": "/cpu_model.tflite"},
"model_path": "/cpu_model.tflite",
},
"edgetpu": {
"type": "edgetpu",
"model": {"path": "/edgetpu_model.tflite"},
"model_path": "/edgetpu_model.tflite",
},
"openvino": {
"type": "openvino",


@@ -168,7 +168,7 @@ class TestHttp(unittest.TestCase):
assert event
assert event["id"] == id
assert event == model_to_dict(Event.get(Event.id == id))
assert event["id"] == model_to_dict(Event.get(Event.id == id))["id"]
def test_get_bad_event(self):
app = create_fastapi_app(


@@ -13,6 +13,7 @@ from frigate.config import (
CameraConfig,
ModelConfig,
)
from frigate.review.types import SeverityEnum
from frigate.util.image import (
area,
calculate_region,
@@ -59,6 +60,27 @@ class TrackedObject:
self.pending_loitering = False
self.previous = self.to_dict()
@property
def max_severity(self) -> Optional[str]:
review_config = self.camera_config.review
if self.obj_data["label"] in review_config.alerts.labels and (
not review_config.alerts.required_zones
or set(self.entered_zones) & set(review_config.alerts.required_zones)
):
return SeverityEnum.alert
if (
not review_config.detections.labels
or self.obj_data["label"] in review_config.detections.labels
) and (
not review_config.detections.required_zones
or set(self.entered_zones) & set(review_config.detections.required_zones)
):
return SeverityEnum.detection
return None
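The property ranks an alert ahead of a detection and honors required_zones for both. A condensed sketch of the precedence rule with a hypothetical review config:

from typing import Optional

# hypothetical review config: a person anywhere is an alert; any label in the
# "porch" zone is a detection
ALERTS = {"labels": {"person"}, "required_zones": set()}
DETECTIONS = {"labels": set(), "required_zones": {"porch"}}

def max_severity(label: str, entered_zones: list[str]) -> Optional[str]:
    if label in ALERTS["labels"] and (
        not ALERTS["required_zones"]
        or set(entered_zones) & ALERTS["required_zones"]
    ):
        return "alert"
    if (not DETECTIONS["labels"] or label in DETECTIONS["labels"]) and (
        not DETECTIONS["required_zones"]
        or set(entered_zones) & DETECTIONS["required_zones"]
    ):
        return "detection"
    return None

assert max_severity("person", []) == "alert"
assert max_severity("dog", ["porch"]) == "detection"
assert max_severity("dog", ["yard"]) is None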
def _is_false_positive(self):
# once a true positive, always a true positive
if not self.false_positive:
@@ -232,6 +254,7 @@ class TrackedObject:
"attributes": self.attributes,
"current_attributes": self.obj_data["attributes"],
"pending_loitering": self.pending_loitering,
"max_severity": self.max_severity,
}
if include_thumbnail:
@@ -316,7 +339,7 @@ class TrackedObject:
box[2],
box[3],
self.obj_data["label"],
f"{int(self.thumbnail_data['score']*100)}% {int(self.thumbnail_data['area'])}",
f"{int(self.thumbnail_data['score'] * 100)}% {int(self.thumbnail_data['area'])}",
thickness=thickness,
color=color,
)


@@ -13,12 +13,12 @@ import urllib.parse
from collections.abc import Mapping
from pathlib import Path
from typing import Any, Optional, Tuple, Union
from zoneinfo import ZoneInfoNotFoundError
import numpy as np
import pytz
from ruamel.yaml import YAML
from tzlocal import get_localzone
from zoneinfo import ZoneInfoNotFoundError
from frigate.const import REGEX_HTTP_CAMERA_USER_PASS, REGEX_RTSP_CAMERA_USER_PASS


@@ -13,7 +13,17 @@ from frigate.util.services import get_video_properties
logger = logging.getLogger(__name__)
CURRENT_CONFIG_VERSION = "0.15-0"
CURRENT_CONFIG_VERSION = "0.15-1"
DEFAULT_CONFIG_FILE = "/config/config.yml"
def find_config_file() -> str:
config_path = os.environ.get("CONFIG_FILE", DEFAULT_CONFIG_FILE)
if not os.path.isfile(config_path):
config_path = config_path.replace("yml", "yaml")
return config_path
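The new helper checks the CONFIG_FILE override first and falls back from the .yml spelling to .yaml when the file is missing. A usage sketch (path hypothetical):

import os

os.environ["CONFIG_FILE"] = "/tmp/frigate.yml"
# if /tmp/frigate.yml does not exist, the "yml" suffix is swapped for "yaml"
print(find_config_file())  # "/tmp/frigate.yml" or "/tmp/frigate.yaml"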
def migrate_frigate_config(config_file: str):
@@ -67,6 +77,13 @@ def migrate_frigate_config(config_file: str):
yaml.dump(new_config, f)
previous_version = "0.15-0"
if previous_version < "0.15-1":
logger.info(f"Migrating frigate config from {previous_version} to 0.15-1...")
new_config = migrate_015_1(config)
with open(config_file, "w") as f:
yaml.dump(new_config, f)
previous_version = "0.15-1"
logger.info("Finished frigate config migration...")
@@ -257,6 +274,21 @@ def migrate_015_0(config: dict[str, dict[str, any]]) -> dict[str, dict[str, any]
return new_config
def migrate_015_1(config: dict[str, dict[str, any]]) -> dict[str, dict[str, any]]:
"""Handle migrating frigate config to 0.15-1"""
new_config = config.copy()
for detector, detector_config in config.get("detectors", {}).items():
path = detector_config.get("model", {}).get("path")
if path:
new_config["detectors"][detector]["model_path"] = path
del new_config["detectors"][detector]["model"]
new_config["version"] = "0.15-1"
return new_config
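The effect of migrate_015_1 on a detector entry, before and after:

config = {
    "version": "0.15-0",
    "detectors": {
        "cpu": {"type": "cpu", "model": {"path": "/cpu_model.tflite"}},
    },
}

migrated = migrate_015_1(config)
assert migrated["detectors"]["cpu"]["model_path"] == "/cpu_model.tflite"
assert "model" not in migrated["detectors"]["cpu"]
assert migrated["version"] == "0.15-1"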
def get_relative_coordinates(
mask: Optional[Union[str, list]], frame_shape: tuple[int, int]
) -> Union[str, list]:
@@ -282,7 +314,7 @@ def get_relative_coordinates(
continue
rel_points.append(
f"{round(x / frame_shape[1], 3)},{round(y / frame_shape[0], 3)}"
f"{round(x / frame_shape[1], 3)},{round(y / frame_shape[0], 3)}"
)
relative_masks.append(",".join(rel_points))
@@ -305,7 +337,7 @@ def get_relative_coordinates(
return []
rel_points.append(
f"{round(x / frame_shape[1], 3)},{round(y / frame_shape[0], 3)}"
f"{round(x / frame_shape[1], 3)},{round(y / frame_shape[0], 3)}"
)
mask = ",".join(rel_points)


@@ -219,19 +219,35 @@ def draw_box_with_label(
text_width = size[0][0]
text_height = size[0][1]
line_height = text_height + size[1]
# get frame height
frame_height = frame.shape[0]
# set the text start position
if position == "ul":
text_offset_x = x_min
text_offset_y = 0 if y_min < line_height else y_min - (line_height + 8)
text_offset_y = max(0, y_min - (line_height + 8))
elif position == "ur":
text_offset_x = x_max - (text_width + 8)
text_offset_y = 0 if y_min < line_height else y_min - (line_height + 8)
text_offset_x = max(0, x_max - (text_width + 8))
text_offset_y = max(0, y_min - (line_height + 8))
elif position == "bl":
text_offset_x = x_min
text_offset_y = y_max
text_offset_y = min(frame_height - line_height, y_max)
elif position == "br":
text_offset_x = x_max - (text_width + 8)
text_offset_y = y_max
text_offset_x = max(0, x_max - (text_width + 8))
text_offset_y = min(frame_height - line_height, y_max)
# Adjust position if it overlaps with the box or goes out of frame
if position in {"ul", "ur"}:
if text_offset_y < y_min + thickness: # Label overlaps with the box
if y_min - (line_height + 8) < 0 and y_max + line_height <= frame_height:
# Not enough space above, and there is space below
text_offset_y = y_max
elif y_min - (line_height + 8) >= 0:
# Enough space above, keep the label at the top
text_offset_y = max(0, y_min - (line_height + 8))
elif position in {"bl", "br"}:
if text_offset_y + line_height > frame_height:
# If there's not enough space below, try above the box
text_offset_y = max(0, y_min - (line_height + 8))
# make the coords of the box with a small padding of two pixels
textbox_coords = (
(text_offset_x, text_offset_y),

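The clamps keep labels inside the frame: top positions fall back toward row 0, bottom positions stop one line height above the last row. The boundary arithmetic with sample values:

line_height, frame_height = 20, 480

# "ul"/"ur": a box touching the top edge has no headroom, so clamp to row 0
y_min = 4
assert max(0, y_min - (line_height + 8)) == 0

# "bl"/"br": a box touching the bottom edge keeps the label inside the frame
y_max = 478
assert min(frame_height - line_height, y_max) == 460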

@@ -390,12 +390,22 @@ def try_get_info(f, h, default="N/A"):
def get_nvidia_gpu_stats() -> dict[int, dict]:
names: dict[str, int] = {}
results = {}
try:
nvml.nvmlInit()
deviceCount = nvml.nvmlDeviceGetCount()
for i in range(deviceCount):
handle = nvml.nvmlDeviceGetHandleByIndex(i)
gpu_name = nvml.nvmlDeviceGetName(handle)
# handle case where user has multiple of same GPU
if gpu_name in names:
names[gpu_name] += 1
gpu_name += f" ({names.get(gpu_name)})"
else:
names[gpu_name] = 1
meminfo = try_get_info(nvml.nvmlDeviceGetMemoryInfo, handle)
util = try_get_info(nvml.nvmlDeviceGetUtilizationRates, handle)
enc = try_get_info(nvml.nvmlDeviceGetEncoderUtilization, handle)
@@ -423,7 +433,7 @@ def get_nvidia_gpu_stats() -> dict[int, dict]:
dec_util = -1
results[i] = {
"name": nvml.nvmlDeviceGetName(handle),
"name": gpu_name,
"gpu": gpu_util,
"mem": gpu_mem_util,
"enc": enc_util,

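The duplicate-name handling suffixes repeated device names with a running index so the stats keys stay distinct. The counting logic in isolation:

def dedupe_gpu_names(raw_names: list[str]) -> list[str]:
    names: dict[str, int] = {}
    result = []
    for gpu_name in raw_names:
        if gpu_name in names:
            names[gpu_name] += 1
            gpu_name += f" ({names[gpu_name]})"
        else:
            names[gpu_name] = 1
        result.append(gpu_name)
    return result

print(dedupe_gpu_names(["RTX 3060", "RTX 3060", "RTX 3060"]))
# ['RTX 3060', 'RTX 3060 (2)', 'RTX 3060 (3)']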

@@ -113,7 +113,7 @@ def capture_frames(
fps.value = frame_rate.eps()
skipped_fps.value = skipped_eps.eps()
current_frame.value = datetime.datetime.now().timestamp()
frame_name = f"{config.name}{frame_index}"
frame_name = f"{config.name}_frame{frame_index}"
frame_buffer = frame_manager.write(frame_name)
try:
frame_buffer[:] = ffmpeg_process.stdout.read(frame_size)

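Adding a literal separator presumably avoids collisions in the shared-memory frame names, since the old pattern could produce the same string for different camera/index pairs:

config_name, frame_index = "cam1", 0
old = f"{config_name}{frame_index}"        # "cam10", same as camera "cam" at frame 10
new = f"{config_name}_frame{frame_index}"  # "cam1_frame0", unambiguous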

@@ -208,7 +208,7 @@ class ProcessClip:
box[2],
box[3],
obj["id"],
f"{int(obj['score']*100)}% {int(obj['area'])}",
f"{int(obj['score'] * 100)}% {int(obj['area'])}",
thickness=thickness,
color=color,
)
@@ -227,7 +227,7 @@ class ProcessClip:
)
cv2.imwrite(
f"{os.path.join(debug_path, os.path.basename(self.clip_path))}.{int(frame_time*1000000)}.jpg",
f"{os.path.join(debug_path, os.path.basename(self.clip_path))}.{int(frame_time * 1000000)}.jpg",
current_frame,
)
@@ -290,7 +290,7 @@ def process(path, label, output, debug_path):
1 for result in results if result[1]["true_positive_objects"] > 0
)
print(
f"Objects were detected in {positive_count}/{len(results)}({positive_count/len(results)*100:.2f}%) clip(s)."
f"Objects were detected in {positive_count}/{len(results)}({positive_count / len(results) * 100:.2f}%) clip(s)."
)
if output:

web/package-lock.json generated

@@ -72,6 +72,7 @@
"tailwind-merge": "^2.4.0",
"tailwind-scrollbar": "^3.1.0",
"tailwindcss-animate": "^1.0.7",
"use-long-press": "^3.2.0",
"vaul": "^0.9.1",
"vite-plugin-monaco-editor": "^1.1.0",
"zod": "^3.23.8"
@@ -8709,6 +8710,15 @@
"scheduler": ">=0.19.0"
}
},
"node_modules/use-long-press": {
"version": "3.2.0",
"resolved": "https://registry.npmjs.org/use-long-press/-/use-long-press-3.2.0.tgz",
"integrity": "sha512-uq5o2qFR1VRjHn8Of7Fl344/AGvgk7C5Mcb4aSb1ZRVp6PkgdXJJLdRrlSTJQVkkQcDuqFbFc3mDX4COg7mRTA==",
"license": "MIT",
"peerDependencies": {
"react": ">=16.8.0"
}
},
"node_modules/use-sidecar": {
"version": "1.1.2",
"resolved": "https://registry.npmjs.org/use-sidecar/-/use-sidecar-1.1.2.tgz",


@@ -78,6 +78,7 @@
"tailwind-merge": "^2.4.0",
"tailwind-scrollbar": "^3.1.0",
"tailwindcss-animate": "^1.0.7",
"use-long-press": "^3.2.0",
"vaul": "^0.9.1",
"vite-plugin-monaco-editor": "^1.1.0",
"zod": "^3.23.8"


@@ -29,8 +29,11 @@ export function ApiProvider({ children, options }: ApiProviderType) {
error.response &&
[401, 302, 307].includes(error.response.status)
) {
window.location.href =
error.response.headers.get("location") ?? "login";
// redirect to the login page if not already there
const loginPage = error.response.headers.get("location") ?? "login";
if (window.location.href !== loginPage) {
window.location.href = loginPage;
}
}
},
...options,


@@ -63,7 +63,7 @@ export function UserAuthForm({ className, ...props }: UserAuthFormProps) {
toast.error("Exceeded rate limit. Try again later.", {
position: "top-center",
});
} else if (err.response?.status === 400) {
} else if (err.response?.status === 401) {
toast.error("Login failed", {
position: "top-center",
});


@@ -1,4 +1,4 @@
import { useCallback, useMemo } from "react";
import { useMemo } from "react";
import { useApiHost } from "@/api";
import { getIconForLabel } from "@/utils/iconUtil";
import useSWR from "swr";
@@ -12,10 +12,11 @@ import { capitalizeFirstLetter } from "@/utils/stringUtil";
import { SearchResult } from "@/types/search";
import { cn } from "@/lib/utils";
import { TooltipPortal } from "@radix-ui/react-tooltip";
import useContextMenu from "@/hooks/use-contextmenu";
type SearchThumbnailProps = {
searchResult: SearchResult;
onClick: (searchResult: SearchResult) => void;
onClick: (searchResult: SearchResult, ctrl: boolean, detail: boolean) => void;
};
export default function SearchThumbnail({
@@ -28,9 +29,9 @@ export default function SearchThumbnail({
// interactions
const handleOnClick = useCallback(() => {
onClick(searchResult);
}, [searchResult, onClick]);
useContextMenu(imgRef, () => {
onClick(searchResult, true, false);
});
const objectLabel = useMemo(() => {
if (
@@ -45,7 +46,10 @@ export default function SearchThumbnail({
}, [config, searchResult]);
return (
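FileNotFoundError already subclasses OSError, so the widened tuple is equivalent to catching OSError alone; the practical gain is covering failures like PermissionError on an unreadable mount. For instance:

import shutil

try:
    shutil.disk_usage("/does/not/exist")
except (FileNotFoundError, OSError):
    # OSError alone would suffice; it also covers e.g. PermissionError
    pass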
<div className="relative size-full cursor-pointer" onClick={handleOnClick}>
<div
className="relative size-full cursor-pointer"
onClick={() => onClick(searchResult, false, true)}
>
<ImageLoadingIndicator
className="absolute inset-0"
imgLoaded={imgLoaded}
@@ -79,7 +83,7 @@ export default function SearchThumbnail({
<div className="mx-3 pb-1 text-sm text-white">
<Chip
className={`z-0 flex items-center justify-between gap-1 space-x-1 bg-gray-500 bg-gradient-to-br from-gray-400 to-gray-500 text-xs`}
onClick={() => onClick(searchResult)}
onClick={() => onClick(searchResult, false, true)}
>
{getIconForLabel(objectLabel, "size-3 text-white")}
{Math.round(


@@ -755,7 +755,11 @@ export function CameraGroupEdit({
<FormMessage />
{[
...(birdseyeConfig?.enabled ? ["birdseye"] : []),
...Object.keys(config?.cameras ?? {}),
...Object.keys(config?.cameras ?? {}).sort(
(a, b) =>
(config?.cameras[a]?.ui?.order ?? 0) -
(config?.cameras[b]?.ui?.order ?? 0),
),
].map((camera) => (
<FormControl key={camera}>
<FilterSwitch


@@ -0,0 +1,132 @@
import { useCallback, useState } from "react";
import axios from "axios";
import { Button, buttonVariants } from "../ui/button";
import { isDesktop } from "react-device-detect";
import { HiTrash } from "react-icons/hi";
import {
AlertDialog,
AlertDialogAction,
AlertDialogCancel,
AlertDialogContent,
AlertDialogDescription,
AlertDialogFooter,
AlertDialogHeader,
AlertDialogTitle,
} from "../ui/alert-dialog";
import useKeyboardListener from "@/hooks/use-keyboard-listener";
import { toast } from "sonner";
type SearchActionGroupProps = {
selectedObjects: string[];
setSelectedObjects: (ids: string[]) => void;
pullLatestData: () => void;
};
export default function SearchActionGroup({
selectedObjects,
setSelectedObjects,
pullLatestData,
}: SearchActionGroupProps) {
const onClearSelected = useCallback(() => {
setSelectedObjects([]);
}, [setSelectedObjects]);
const onDelete = useCallback(async () => {
await axios
.delete(`events/`, {
data: { event_ids: selectedObjects },
})
.then((resp) => {
if (resp.status == 200) {
toast.success("Tracked objects deleted successfully.", {
position: "top-center",
});
setSelectedObjects([]);
pullLatestData();
}
})
.catch(() => {
toast.error("Failed to delete tracked objects.", {
position: "top-center",
});
});
}, [selectedObjects, setSelectedObjects, pullLatestData]);
const [deleteDialogOpen, setDeleteDialogOpen] = useState(false);
const [bypassDialog, setBypassDialog] = useState(false);
useKeyboardListener(["Shift"], (_, modifiers) => {
setBypassDialog(modifiers.shift);
});
const handleDelete = useCallback(() => {
if (bypassDialog) {
onDelete();
} else {
setDeleteDialogOpen(true);
}
}, [bypassDialog, onDelete]);
return (
<>
<AlertDialog
open={deleteDialogOpen}
onOpenChange={() => setDeleteDialogOpen(!deleteDialogOpen)}
>
<AlertDialogContent>
<AlertDialogHeader>
<AlertDialogTitle>Confirm Delete</AlertDialogTitle>
</AlertDialogHeader>
<AlertDialogDescription>
Deleting these {selectedObjects.length} tracked objects removes the
snapshot, any saved embeddings, and any associated object lifecycle
entries. Recorded footage of these tracked objects in History view
will <em>NOT</em> be deleted.
<br />
<br />
Are you sure you want to proceed?
<br />
<br />
Hold the <em>Shift</em> key to bypass this dialog in the future.
</AlertDialogDescription>
<AlertDialogFooter>
<AlertDialogCancel>Cancel</AlertDialogCancel>
<AlertDialogAction
className={buttonVariants({ variant: "destructive" })}
onClick={onDelete}
>
Delete
</AlertDialogAction>
</AlertDialogFooter>
</AlertDialogContent>
</AlertDialog>
<div className="absolute inset-x-2 inset-y-0 flex items-center justify-between gap-2 bg-background py-2 md:left-auto">
<div className="mx-1 flex items-center justify-center text-sm text-muted-foreground">
<div className="p-1">{`${selectedObjects.length} selected`}</div>
<div className="p-1">{"|"}</div>
<div
className="cursor-pointer p-2 text-primary hover:rounded-lg hover:bg-secondary"
onClick={onClearSelected}
>
Unselect
</div>
</div>
<div className="flex items-center gap-1 md:gap-2">
<Button
className="flex items-center gap-2 p-2"
aria-label="Delete"
size="sm"
onClick={handleDelete}
>
<HiTrash className="text-secondary-foreground" />
{isDesktop && (
<div className="text-primary">
{bypassDialog ? "Delete Now" : "Delete"}
</div>
)}
</Button>
</div>
</div>
</>
);
}


@@ -15,13 +15,15 @@ import {
SearchFilter,
SearchFilters,
SearchSource,
SearchSortType,
} from "@/types/search";
import { DateRange } from "react-day-picker";
import { cn } from "@/lib/utils";
import { MdLabel } from "react-icons/md";
import { MdLabel, MdSort } from "react-icons/md";
import PlatformAwareDialog from "../overlay/dialog/PlatformAwareDialog";
import SearchFilterDialog from "../overlay/dialog/SearchFilterDialog";
import { CalendarRangeFilterButton } from "./CalendarFilterButton";
import { RadioGroup, RadioGroupItem } from "@/components/ui/radio-group";
type SearchFilterGroupProps = {
className: string;
@@ -107,6 +109,25 @@ export default function SearchFilterGroup({
[config, allLabels, allZones],
);
const availableSortTypes = useMemo(() => {
const sortTypes = ["date_asc", "date_desc"];
if (filter?.min_score || filter?.max_score) {
sortTypes.push("score_desc", "score_asc");
}
if (filter?.event_id || filter?.query) {
sortTypes.push("relevance");
}
return sortTypes as SearchSortType[];
}, [filter]);
const defaultSortType = useMemo<SearchSortType>(() => {
if (filter?.query || filter?.event_id) {
return "relevance";
} else {
return "date_desc";
}
}, [filter]);
const groups = useMemo(() => {
if (!config) {
return [];
@@ -179,6 +200,16 @@ export default function SearchFilterGroup({
filterValues={filterValues}
onUpdateFilter={onUpdateFilter}
/>
{filters.includes("sort") && Object.keys(filter ?? {}).length > 0 && (
<SortTypeButton
availableSortTypes={availableSortTypes ?? []}
defaultSortType={defaultSortType}
selectedSortType={filter?.sort}
updateSortType={(newSort) => {
onUpdateFilter({ ...filter, sort: newSort });
}}
/>
)}
</div>
);
}
@@ -362,3 +393,176 @@ export function GeneralFilterContent({
</>
);
}
type SortTypeButtonProps = {
availableSortTypes: SearchSortType[];
defaultSortType: SearchSortType;
selectedSortType: SearchSortType | undefined;
updateSortType: (sortType: SearchSortType | undefined) => void;
};
function SortTypeButton({
availableSortTypes,
defaultSortType,
selectedSortType,
updateSortType,
}: SortTypeButtonProps) {
const [open, setOpen] = useState(false);
const [currentSortType, setCurrentSortType] = useState<
SearchSortType | undefined
>(selectedSortType as SearchSortType);
// ui
useEffect(() => {
setCurrentSortType(selectedSortType);
// only refresh when state changes
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [selectedSortType]);
const trigger = (
<Button
size="sm"
variant={
selectedSortType != defaultSortType && selectedSortType != undefined
? "select"
: "default"
}
className="flex items-center gap-2 capitalize"
aria-label="Labels"
>
<MdSort
className={`${selectedSortType != defaultSortType && selectedSortType != undefined ? "text-selected-foreground" : "text-secondary-foreground"}`}
/>
<div
className={`${selectedSortType != defaultSortType && selectedSortType != undefined ? "text-selected-foreground" : "text-primary"}`}
>
Sort
</div>
</Button>
);
const content = (
<SortTypeContent
availableSortTypes={availableSortTypes ?? []}
defaultSortType={defaultSortType}
selectedSortType={selectedSortType}
currentSortType={currentSortType}
setCurrentSortType={setCurrentSortType}
updateSortType={updateSortType}
onClose={() => setOpen(false)}
/>
);
return (
<PlatformAwareDialog
trigger={trigger}
content={content}
contentClassName={
isDesktop
? "scrollbar-container h-auto max-h-[80dvh] overflow-y-auto"
: "max-h-[75dvh] overflow-hidden p-4"
}
open={open}
onOpenChange={(open) => {
if (!open) {
setCurrentSortType(selectedSortType);
}
setOpen(open);
}}
/>
);
}
type SortTypeContentProps = {
availableSortTypes: SearchSortType[];
defaultSortType: SearchSortType;
selectedSortType: SearchSortType | undefined;
currentSortType: SearchSortType | undefined;
updateSortType: (sort_type: SearchSortType | undefined) => void;
setCurrentSortType: (sort_type: SearchSortType | undefined) => void;
onClose: () => void;
};
export function SortTypeContent({
availableSortTypes,
defaultSortType,
selectedSortType,
currentSortType,
updateSortType,
setCurrentSortType,
onClose,
}: SortTypeContentProps) {
const sortLabels = {
date_asc: "Date (Ascending)",
date_desc: "Date (Descending)",
score_asc: "Object Score (Ascending)",
score_desc: "Object Score (Descending)",
relevance: "Relevance",
};
return (
<>
<div className="overflow-x-hidden">
<div className="my-2.5 flex flex-col gap-2.5">
<RadioGroup
value={
Array.isArray(currentSortType)
? currentSortType?.[0]
: (currentSortType ?? defaultSortType)
}
defaultValue={defaultSortType}
onValueChange={(value) =>
setCurrentSortType(value as SearchSortType)
}
className="w-full space-y-1"
>
{availableSortTypes.map((value) => (
<div className="flex flex-row gap-2">
<RadioGroupItem
key={value}
value={value}
id={`sort-${value}`}
className={
value == (currentSortType ?? defaultSortType)
? "bg-selected from-selected/50 to-selected/90 text-selected"
: "bg-secondary from-secondary/50 to-secondary/90 text-secondary"
}
/>
<Label
htmlFor={`sort-${value}`}
className="flex cursor-pointer items-center space-x-2"
>
<span>{sortLabels[value]}</span>
</Label>
</div>
))}
</RadioGroup>
</div>
</div>
<DropdownMenuSeparator />
<div className="flex items-center justify-evenly p-2">
<Button
aria-label="Apply"
variant="select"
onClick={() => {
if (selectedSortType != currentSortType) {
updateSortType(currentSortType);
}
onClose();
}}
>
Apply
</Button>
<Button
aria-label="Reset"
onClick={() => {
setCurrentSortType(undefined);
updateSortType(undefined);
}}
>
Reset
</Button>
</div>
</>
);
}


@@ -18,6 +18,7 @@ import {
FilterType,
SavedSearchQuery,
SearchFilter,
SearchSortType,
SearchSource,
} from "@/types/search";
import useSuggestions from "@/hooks/use-suggestions";
@@ -323,6 +324,9 @@ export default function InputWithTags({
case "event_id":
newFilters.event_id = value;
break;
case "sort":
newFilters.sort = value as SearchSortType;
break;
default:
// Handle array types (cameras, labels, subLabels, zones)
if (!newFilters[type]) newFilters[type] = [];


@@ -108,13 +108,15 @@ export default function SearchResultActions({
</a>
</MenuItem>
)}
<MenuItem
aria-label="Show the object lifecycle"
onClick={showObjectLifecycle}
>
<FaArrowsRotate className="mr-2 size-4" />
<span>View object lifecycle</span>
</MenuItem>
{searchResult.data.type == "object" && (
<MenuItem
aria-label="Show the object lifecycle"
onClick={showObjectLifecycle}
>
<FaArrowsRotate className="mr-2 size-4" />
<span>View object lifecycle</span>
</MenuItem>
)}
{config?.semantic_search?.enabled && isContextMenu && (
<MenuItem
aria-label="Find similar tracked objects"
@@ -128,6 +130,7 @@ export default function SearchResultActions({
config?.plus?.enabled &&
searchResult.has_snapshot &&
searchResult.end_time &&
searchResult.data.type == "object" &&
!searchResult.plus_id && (
<MenuItem aria-label="Submit to Frigate Plus" onClick={showSnapshot}>
<FrigatePlusIcon className="mr-2 size-4 cursor-pointer text-primary" />
@@ -181,22 +184,24 @@ export default function SearchResultActions({
</ContextMenu>
) : (
<>
{config?.semantic_search?.enabled && (
<Tooltip>
<TooltipTrigger>
<MdImageSearch
className="size-5 cursor-pointer text-primary-variant hover:text-primary"
onClick={findSimilar}
/>
</TooltipTrigger>
<TooltipContent>Find similar</TooltipContent>
</Tooltip>
)}
{config?.semantic_search?.enabled &&
searchResult.data.type == "object" && (
<Tooltip>
<TooltipTrigger>
<MdImageSearch
className="size-5 cursor-pointer text-primary-variant hover:text-primary"
onClick={findSimilar}
/>
</TooltipTrigger>
<TooltipContent>Find similar</TooltipContent>
</Tooltip>
)}
{!isMobileOnly &&
config?.plus?.enabled &&
searchResult.has_snapshot &&
searchResult.end_time &&
searchResult.data.type == "object" &&
!searchResult.plus_id && (
<Tooltip>
<TooltipTrigger>


@@ -477,7 +477,10 @@ export default function ObjectLifecycle({
</p>
{Array.isArray(item.data.box) &&
item.data.box.length >= 4
? (item.data.box[2] / item.data.box[3]).toFixed(2)
? (
aspectRatio *
(item.data.box[2] / item.data.box[3])
).toFixed(2)
: "N/A"}
</div>
</div>

Some files were not shown because too many files have changed in this diff.