Compare commits

..

79 Commits

Author SHA1 Message Date
dependabot[bot]
6a77055930 Bump future from 0.18.2 to 0.18.3 in /docker/hailo8l
Bumps [future](https://github.com/PythonCharmers/python-future) from 0.18.2 to 0.18.3.
- [Release notes](https://github.com/PythonCharmers/python-future/releases)
- [Changelog](https://github.com/PythonCharmers/python-future/blob/master/docs/changelog.rst)
- [Commits](https://github.com/PythonCharmers/python-future/compare/v0.18.2...v0.18.3)

---
updated-dependencies:
- dependency-name: future
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2024-08-30 02:20:19 +00:00
Nicolas Mowen
cf7718132a Update python deps (#13413) 2024-08-29 20:19:50 -06:00
Nicolas Mowen
939a055d46 Fix mobile scroll behavior (#13201) 2024-08-29 20:19:50 -06:00
Nicolas Mowen
01fa1777ac Fix ZMQ race condition with events (#13198) 2024-08-29 20:19:50 -06:00
Nicolas Mowen
a77436eec3 Add button for downloading full set of logs (#13188) 2024-08-29 20:19:50 -06:00
Nicolas Mowen
c268a126dc Make review detail scrollable on mobile and ensure F+ is enabled (#13119) 2024-08-29 20:19:50 -06:00
Nicolas Mowen
29e86d4eeb Add ability to upload to Frigate+ from review side panel (#13071)
* Add ability to submit to frigate+ from review panel

* Add separator

* Use consistent ID
2024-08-29 20:19:50 -06:00
Nicolas Mowen
9d18061d0f Move plus dialog to separate component 2024-08-29 20:19:50 -06:00
Nicolas Mowen
943114c052 Add support for review information side panel (#13063) 2024-08-29 20:19:50 -06:00
Nicolas Mowen
2cb81ef116 Use review item thumbnail for export (#12998)
* Use review item thumbnail for export

* Formatting
2024-08-29 20:19:50 -06:00
Nicolas Mowen
c16450adc8 Hailo amd64 support (#12820)
* Support building docker image for amd64 and arm64

* Support installations on amd64 and arm64

* Run build in new job

* Update docs
2024-08-29 20:19:50 -06:00
Nicolas Mowen
347d54f388 Chunk timeline deletes (#12900) 2024-08-29 20:19:50 -06:00
Nicolas Mowen
3428baa3fa Fix embeddings failing to start 2024-08-29 20:19:50 -06:00
Nicolas Mowen
4f8066a35a Update version 2024-08-29 20:19:50 -06:00
Nicolas Mowen
04fd05bc7d Notification action (#12742) 2024-08-29 20:19:50 -06:00
Nicolas Mowen
3abf89596a Disable semantic search by default (#12568)
* Disable semantic search by default and don't start processes unless enabled

* Conditionally create embeddings

* Fix typing
2024-08-29 20:19:50 -06:00
Nicolas Mowen
690ee3dc15 Implement support for notifications (#12523)
* Setup basic notification page

* Add basic notification implementation

* Register for push notifications

* Implement dispatching

* Add fields

* Handle image and link

* Add notification config

* Add field for users notification tokens

* Implement saving of notification tokens

* Implement VAPID key generation

* Implement public key encoding

* Implement webpush from server

* Implement push notification handling

* Make notifications config only

* Add maskable icon

* Use zod form to control notification settings in the UI

* Use js

* Always open notification

* Support multiple endpoints

* Handle cleaning up expired notification registrations

* Correctly unsubscribe notifications

* Change ttl dynamically

* Add note about notification latency and features

* Cleanup docs

* Fix firefox pushes

* Add links to docs and improve formatting

* Improve wording

* Fix docstring

Co-authored-by: Blake Blackshear <blake@frigate.video>

* Handle case where native auth is not enabled

* Show errors in UI

---------

Co-authored-by: Blake Blackshear <blake@frigate.video>
2024-08-29 20:19:50 -06:00
Nicolas Mowen
331c882af2 Catch hailo initialization error (#12558) 2024-08-29 20:19:50 -06:00
Nicolas Mowen
b4eb83d892 Fix calendar 2024-08-29 20:19:50 -06:00
spanner3003
4a35573210 Initial support for Hailo-8L (#12431)
* Initial support for Hailo-8L

Added files for the Hailo-8L detector, including the dockerfile, h8l.mk, h8l.hcl, hailo8l.py, ci.yml and ssd_mobilenet_v1.hef as the inference network.

Added files to help with the installation of Hailo-8L dependencies like generate_wheel_conf.py and requirements-wheels-h8l.txt, and modified setup.py to try and work with any hardware.

Updated docs to reflect initial Hailo-8L support, including object_detectors.md, hardware.md and installation.md.

* Update .github/workflows/ci.yml

typo h8l not arm64

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>

* Update docs/docs/configuration/object_detectors.md

Clarity for the end user and correct uses of words

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>

* Update docs/docs/frigate/installation.md

typo

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>

* update Installation.md to clarify Hailo-8L installation process.

* Update docs/docs/frigate/hardware.md

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>

* Update hardware.md to add inference time.

* Oops no new line at the end of the file.

* Update docs/docs/frigate/hardware.md typo

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>

* Update dockerfile to download the ssd_mobilenet_v1 model instead of having it in the repo.

* Updated dockerfile so it does not download the model file.

add function to download it at runtime.

update model path.

* fix formatting according to ruff and removed unnecessary functions.

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
2024-08-29 20:19:50 -06:00
Nicolas Mowen
e7fabce4e0 Use grid for searches (#12386) 2024-08-29 20:19:50 -06:00
Nicolas Mowen
feb2c9fc62 Use thumbnails instead of review images for search (#12381) 2024-08-29 20:19:50 -06:00
Jason Hunter
dd7fd16b69 Chroma logs in Frontend (#12131)
* Chroma logs in frontend

* fix lint
2024-08-29 20:19:50 -06:00
Daniel
d93d6262ce Use 127.0.0.1 for chroma (#12135) 2024-08-29 20:19:50 -06:00
Nicolas Mowen
9d7e499adb Semantic Search Frontend (#12112)
* Add basic search page

* Abstract filters to separate components

* Make searching functional

* Add loading and no results indicators

* Implement searching

* Combine account and settings menus on mobile

* Support using thumbnail for in progress detections

* Fetch previews

* Move recordings view and open recordings when search is selected

* Implement detail pane

* Implement saving of description

* Implement similarity search

* Fix clicking

* Add date range picker

* Fix

* Fix iOS zoom bug

* Mobile fixes

* Use text area

* Fix spacing for drawer

* Fix fetching previews incorrectly
2024-08-29 20:19:50 -06:00
Jason Hunter
0d7a148897 reindex events in batches to reduce memory and cpu load (#12124) 2024-08-29 20:19:50 -06:00
Jason Hunter
9e825811f2 Semantic Search API (#12105)
* initial event search api implementation

* fix lint

* fix tests

* move chromadb imports and pysqlite hotswap to fix tests

* remove unused import

* switch default limit to 50

* fix events accidentally pulling inside chroma results loop
2024-08-29 20:19:50 -06:00
Jason Hunter
36cbffcc5e Semantic Search for Detections (#11899)
* Initial re-implementation of semantic search

* put docker-compose back and make reindex match docs

* remove debug code and fix import

* fix docs

* manually build pysqlite3 as binaries are only available for x86-64

* update comment in build_pysqlite3.sh

* only embed objects

* better error handling when genai fails

* ask ollama to pull requested model at startup

* update ollama docs

* address some PR review comments

* fix lint

* use IPC to write description, update docs for reindex

* remove gemini-pro-vision from docs as it will be unavailable soon

* fix OpenAI doc available models

* fix api error in gemini and metadata for embeddings
2024-08-29 20:19:50 -06:00
Josh Hawkins
f4f3cfa911 Don't allow periods in zone or camera group names (#13400) 2024-08-28 06:26:50 -06:00
Josh Hawkins
ca0f6e4c0a Add portal to the live player tooltip (#13389) 2024-08-27 19:14:22 -06:00
Marc Altmann
a7ccabd8f1 update go2rtc version in reference config (#13367) 2024-08-26 15:17:24 -06:00
Nicolas Mowen
453a8d794e Add tooltip for icons in review event list (#13334) 2024-08-25 07:57:10 -05:00
Blake Blackshear
ce79898cae fix default build (#13321) 2024-08-24 07:44:15 -05:00
Blake Blackshear
bf90daae2b update actions for release (#13318) 2024-08-24 07:25:24 -05:00
Josh Hawkins
fdb5d53960 Update discussion templates (#13303)
* Update discussion templates

* camera support go2rtc
2024-08-23 18:05:14 -05:00
Nicolas Mowen
2dc5a7f767 Fix delayed preview not showing (#13295) 2024-08-23 09:51:59 -05:00
Josh Hawkins
65ca3c8fa3 Fix discussion templates (#13292)
* Fix yaml spacing for discussion templates

* Remove browser question from detectors
2024-08-23 07:58:39 -05:00
Josh Hawkins
ff34af2c1f Update discussion templates (#13291)
* Revamp support discussion templates

* move text to description

* remove duplicate logs box

* ffprobe on camera support

* longer description on config support
2024-08-23 06:44:31 -06:00
Nicolas Mowen
e01b6ee76b Fix case where user's cgroup says it has 0 cpu cores (#13271) 2024-08-22 08:06:26 -05:00
Nicolas Mowen
1c7ee5f4e4 UI fixes (#13246)
* Fix bad data in stats

* Add support for changes dialog when leaving without saving config editor

* Fix scrolling into view
2024-08-21 08:19:07 -06:00
Nicolas Mowen
d96f76c27f Ensure only enabled birdseye cameras are considered active (#13194)
* Ensure only enabled birdseye cameras are considered active

* Cleanup
2024-08-19 16:01:48 -05:00
Nicolas Mowen
1da934e63c Dynamically detect if full screen is supported (#13197) 2024-08-19 16:01:21 -05:00
Nicolas Mowen
38a8d34ba5 Preview fixes (#13193)
* Handle case where preview was saved late

* fix timing
2024-08-19 10:45:55 -06:00
Josh Hawkins
8e31244fb3 Adjust MSE player playback rate logic (#13164)
* Fix MSE playback rate logic

* don't adjust playback rate if we just started streaming

* memoize onprogress
2024-08-18 12:13:21 -06:00
Nicolas Mowen
3a124dbb84 Fix plus view resetting (#13160) 2024-08-18 07:41:10 -06:00
Josh Hawkins
8c23ede683 Live player fixes (#13143)
* Jump to live when exceeding buffer time threshold in MSE player

* clean up

* Try adjusting playback rate instead of jumping to live

* clean up

* fallback to webrtc if enabled before jsmpeg

* baseline

* clean up

* remove comments

* adaptive playback rate and intelligent switching improvements

* increase logging and reset live mode after camera is no longer active on dashboard only

* jump to live on safari/iOS

* clean up

* clean up

* refactor camera live mode hook

* remove key listener

* resolve conflicts
2024-08-17 12:16:48 -06:00
Josh Hawkins
4133e454c4 Remove dashboard keyboard listener (#13102) 2024-08-15 16:13:11 -05:00
Josh Hawkins
4dce8ff60a Add shortcut key "r" to mark selected items as reviewed (#13087)
* Add shortcut key "r" to mark selected items as reviewed

* unselect after keypress
2024-08-15 09:51:44 -05:00
Nicolas Mowen
2e724291db Catch case where github sends bad json data (#13077) 2024-08-14 20:41:41 -05:00
Nicolas Mowen
f6b61c26ae Rename bug report (#13039) 2024-08-13 14:26:01 -05:00
Nicolas Mowen
1b876bf8d3 UI fixes (#13030)
* Fix difficulty overwriting export name

* Fix NaN for score selector
2024-08-13 10:12:06 -05:00
Nicolas Mowen
b0d42ea116 Fix last hour preview (#13027) 2024-08-13 08:23:46 -06:00
Nicolas Mowen
05bc3839cc Reset recordings when changing the date (#13009) 2024-08-12 15:12:49 -06:00
Nicolas Mowen
281482927a Recordings Fixes (#13005)
* If recordings don't exist mark as no recordings

* Fix reloading recordings failing

* Fix mark items not clearing selected

* Cleanup

* Default to last full hour when error occurs

* Remove check

* Cleanup

* Handle empty recordings list case

* Ensure that the start time is within the time range

* Catch other reset cases
2024-08-12 14:30:16 -06:00
Nicolas Mowen
132a712341 Hide record switch when disabled (#12997) 2024-08-12 08:21:21 -05:00
Nicolas Mowen
13d121f443 Catch case where recording starts right at end of request (#12956) 2024-08-11 08:32:17 -05:00
Josh Hawkins
67ba3dbd8b Add pan/pinch/zoom capability on plus snapshots (#12953) 2024-08-11 07:15:04 -06:00
Nicolas Mowen
4afa7bf4e1 Catch case where user tries to end definite manual event (#12951)
* Catch case where user tries to end definite manual event

* Formatting
2024-08-11 07:32:39 -05:00
Josh Hawkins
77bf710299 Add confirmation dialog before deleting review items (#12950) 2024-08-11 06:25:09 -06:00
Stavros Kois
9b96211faf add shortcut and query for fullscreen in live view (#12924)
* add shortcut and query for live view

* Update web/src/views/live/LiveDashboardView.tsx

* Update web/src/views/live/LiveDashboardView.tsx

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>

* Apply suggestions from code review

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>

* Update LiveDashboardView.tsx

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
2024-08-10 10:25:13 -06:00
Nicolas Mowen
99e03576bf Remove user args from http jpeg (#12909) 2024-08-09 16:22:24 -06:00
Nicolas Mowen
78d67484e1 Web deps (#12908)
* Update web component deps

* Update other web deps
2024-08-09 16:12:07 -06:00
Nicolas Mowen
e9e86cc5af Fix use experimental migrator (#12906) 2024-08-09 16:59:55 -05:00
Nicolas Mowen
70618e93b7 Add button to mark review item as reviewed in filmstrip (#12878)
* Add button to mark review item as reviewed in filmstrip

* Add tooltip
2024-08-09 08:29:35 -05:00
Soren L. Hansen
c84511de16 Fix auth when serving Frigate at a subpath (#12815)
Ensure axios.defaults.baseURL is set when accessing login form.

Drop `/api` prefix in login form's `axios.post` call, since `/api` is
part of the baseURL.

Redirect to subpath on successful authentication.

Prepend subpath to default logout url.

Fixes #12814
2024-08-09 07:26:26 -06:00
Josh Hawkins
6d9590b4ec Persist live view muted/unmuted for session only (#12727)
* Persist live view muted/unmuted for session only

* consistent naming
2024-08-09 06:46:39 -06:00
Josh Hawkins
33e04fe61f Add right click to delete points in desktop mask/zone editor (#12744) 2024-08-09 06:46:18 -06:00
Josh Hawkins
9f43d10ba7 Ensure review card icon color for event view is visible in light mode (#12812) 2024-08-08 07:54:13 -06:00
Marc Altmann
57503cc318 fix default model for rknn detector (#12807) 2024-08-08 07:54:13 -06:00
Nicolas Mowen
e563692fa2 Add camera name to audio debug line (#12799)
* Add camera name to audio debug line

* Formatting
2024-08-08 07:54:13 -06:00
Nicolas Mowen
9c2974438d Handle case where user stops scrubbing but remains hovering (#12794)
* Handle case where user stops scrubbing but remains hovering

* Add type
2024-08-08 07:54:13 -06:00
Josh Hawkins
54e1bd9eeb Ensure review cameras are sorted by config ui order if specified (#12789) 2024-08-08 07:54:13 -06:00
Nicolas Mowen
8212b66ee0 Use camera status to get state of camera config (#12787)
* Use camera status to get state of camera config

* Fix spelling
2024-08-08 07:54:13 -06:00
Nicolas Mowen
43d2986208 Handle case where sub label was null (#12785) 2024-08-08 07:54:13 -06:00
Nicolas Mowen
f8f7b74792 Update version 2024-08-08 07:54:13 -06:00
Nicolas Mowen
5069072a84 Fix iOS export buttons (#12755)
* Fix iOS export buttons

* Use layering instead of z index
2024-08-08 07:54:13 -06:00
Josh Hawkins
93b81756c6 Only use dense property on phones for motion review timeline (#12768) 2024-08-08 07:54:13 -06:00
Josh Hawkins
4a867ddd56 Use radix css var to limit desktop menu height (#12743) 2024-08-08 07:54:13 -06:00
Josh Hawkins
a347cb5a42 Fix large tablet recording view layout (#12753) 2024-08-08 07:54:13 -06:00
132 changed files with 6539 additions and 1466 deletions


@@ -155,6 +155,30 @@ jobs:
tensorrt.tags=${{ steps.setup.outputs.image-name }}-tensorrt
*.cache-from=type=registry,ref=${{ steps.setup.outputs.cache-name }}-amd64
*.cache-to=type=registry,ref=${{ steps.setup.outputs.cache-name }}-amd64,mode=max
combined_extra_builds:
runs-on: ubuntu-latest
name: Combined Extra Builds
needs:
- amd64_build
- arm64_build
steps:
- name: Check out code
uses: actions/checkout@v4
- name: Set up QEMU and Buildx
id: setup
uses: ./.github/actions/setup
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push Hailo-8l build
uses: docker/bake-action@v4
with:
push: true
targets: h8l
files: docker/hailo8l/h8l.hcl
set: |
h8l.tags=${{ steps.setup.outputs.image-name }}-h8l
*.cache-from=type=registry,ref=${{ steps.setup.outputs.cache-name }}-h8l
*.cache-to=type=registry,ref=${{ steps.setup.outputs.cache-name }}-h8l,mode=max
#- name: AMD/ROCm general build
# env:
# AMDGPU: gfx


@@ -4,3 +4,4 @@
/docker/tensorrt/*jetson* @madsciencetist
/docker/rockchip/ @MarcA711
/docker/rocm/ @harakas
/docker/hailo8l/ @spanner3003


@@ -1,7 +1,7 @@
default_target: local
COMMIT_HASH := $(shell git log -1 --pretty=format:"%h"|tail -1)
VERSION = 0.14.1
VERSION = 0.15.0
IMAGE_REPO ?= ghcr.io/blakeblackshear/frigate
GITHUB_REF_NAME ?= $(shell git rev-parse --abbrev-ref HEAD)
CURRENT_UID := $(shell id -u)

docker/hailo8l/Dockerfile Normal file

@@ -0,0 +1,104 @@
# syntax=docker/dockerfile:1.6
ARG DEBIAN_FRONTEND=noninteractive
# Build Python wheels
FROM wheels AS h8l-wheels
COPY docker/main/requirements-wheels.txt /requirements-wheels.txt
COPY docker/hailo8l/requirements-wheels-h8l.txt /requirements-wheels-h8l.txt
RUN sed -i "/https:\/\//d" /requirements-wheels.txt
# Create a directory to store the built wheels
RUN mkdir /h8l-wheels
# Build the wheels
RUN pip3 wheel --wheel-dir=/h8l-wheels -c /requirements-wheels.txt -r /requirements-wheels-h8l.txt
# Build HailoRT and create wheel
FROM wheels AS build-hailort
ARG TARGETARCH
SHELL ["/bin/bash", "-c"]
# Install necessary APT packages
RUN apt-get -qq update \
&& apt-get -qq install -y \
apt-transport-https \
gnupg \
wget \
# the key fingerprint can be obtained from https://ftp-master.debian.org/keys.html
&& wget -qO- "https://keyserver.ubuntu.com/pks/lookup?op=get&search=0xA4285295FC7B1A81600062A9605C66F00D6C9793" | \
gpg --dearmor > /usr/share/keyrings/debian-archive-bullseye-stable.gpg \
&& echo "deb [signed-by=/usr/share/keyrings/debian-archive-bullseye-stable.gpg] http://deb.debian.org/debian bullseye main contrib non-free" | \
tee /etc/apt/sources.list.d/debian-bullseye-nonfree.list \
&& apt-get -qq update \
&& apt-get -qq install -y \
python3.9 \
python3.9-dev \
build-essential cmake git \
&& rm -rf /var/lib/apt/lists/*
# Extract Python version and set environment variables
RUN PYTHON_VERSION=$(python3 --version 2>&1 | awk '{print $2}' | cut -d. -f1,2) && \
PYTHON_VERSION_NO_DOT=$(echo $PYTHON_VERSION | sed 's/\.//') && \
echo "PYTHON_VERSION=$PYTHON_VERSION" > /etc/environment && \
echo "PYTHON_VERSION_NO_DOT=$PYTHON_VERSION_NO_DOT" >> /etc/environment
# Clone and build HailoRT
RUN . /etc/environment && \
git clone https://github.com/hailo-ai/hailort.git /opt/hailort && \
cd /opt/hailort && \
git checkout v4.17.0 && \
cmake -H. -Bbuild -DCMAKE_BUILD_TYPE=Release -DHAILO_BUILD_PYBIND=1 -DPYBIND11_PYTHON_VERSION=${PYTHON_VERSION} && \
cmake --build build --config release --target libhailort && \
cmake --build build --config release --target _pyhailort && \
cp build/hailort/libhailort/bindings/python/src/_pyhailort.cpython-${PYTHON_VERSION_NO_DOT}-$(if [ $TARGETARCH == "amd64" ]; then echo 'x86_64'; else echo 'aarch64'; fi )-linux-gnu.so hailort/libhailort/bindings/python/platform/hailo_platform/pyhailort/ && \
cp build/hailort/libhailort/src/libhailort.so hailort/libhailort/bindings/python/platform/hailo_platform/pyhailort/
RUN ls -ahl /opt/hailort/build/hailort/libhailort/src/
RUN ls -ahl /opt/hailort/hailort/libhailort/bindings/python/platform/hailo_platform/pyhailort/
# Remove the existing setup.py if it exists in the target directory
RUN rm -f /opt/hailort/hailort/libhailort/bindings/python/platform/setup.py
# Copy generate_wheel_conf.py and setup.py
COPY docker/hailo8l/pyhailort_build_scripts/generate_wheel_conf.py /opt/hailort/hailort/libhailort/bindings/python/platform/generate_wheel_conf.py
COPY docker/hailo8l/pyhailort_build_scripts/setup.py /opt/hailort/hailort/libhailort/bindings/python/platform/setup.py
# Run the generate_wheel_conf.py script
RUN python3 /opt/hailort/hailort/libhailort/bindings/python/platform/generate_wheel_conf.py
# Create a wheel file using pip3 wheel
RUN cd /opt/hailort/hailort/libhailort/bindings/python/platform && \
python3 setup.py bdist_wheel --dist-dir /hailo-wheels
# Use deps as the base image
FROM deps AS h8l-frigate
# Copy the wheels from the wheels stage
COPY --from=h8l-wheels /h8l-wheels /deps/h8l-wheels
COPY --from=build-hailort /hailo-wheels /deps/hailo-wheels
COPY --from=build-hailort /etc/environment /etc/environment
RUN CC=$(python3 -c "import sysconfig; import shlex; cc = sysconfig.get_config_var('CC'); cc_cmd = shlex.split(cc)[0]; print(cc_cmd[:-4] if cc_cmd.endswith('-gcc') else cc_cmd)") && \
echo "CC=$CC" >> /etc/environment
# Install the wheels
RUN pip3 install -U /deps/h8l-wheels/*.whl
RUN pip3 install -U /deps/hailo-wheels/*.whl
RUN . /etc/environment && \
mv /usr/local/lib/python${PYTHON_VERSION}/dist-packages/hailo_platform/pyhailort/libhailort.so /usr/lib/${CC} && \
cd /usr/lib/${CC}/ && \
ln -s libhailort.so libhailort.so.4.17.0
# Copy base files from the rootfs stage
COPY --from=rootfs / /
# Set environment variables for Hailo SDK
ENV PATH="/opt/hailort/bin:${PATH}"
ENV LD_LIBRARY_PATH="/usr/lib/$(if [ $TARGETARCH == "amd64" ]; then echo 'x86_64'; else echo 'aarch64'; fi )-linux-gnu:${LD_LIBRARY_PATH}"
# Set workdir
WORKDIR /opt/frigate/

docker/hailo8l/h8l.hcl Normal file

@@ -0,0 +1,27 @@
target wheels {
dockerfile = "docker/main/Dockerfile"
platforms = ["linux/arm64","linux/amd64"]
target = "wheels"
}
target deps {
dockerfile = "docker/main/Dockerfile"
platforms = ["linux/arm64","linux/amd64"]
target = "deps"
}
target rootfs {
dockerfile = "docker/main/Dockerfile"
platforms = ["linux/arm64","linux/amd64"]
target = "rootfs"
}
target h8l {
dockerfile = "docker/hailo8l/Dockerfile"
contexts = {
wheels = "target:wheels"
deps = "target:deps"
rootfs = "target:rootfs"
}
platforms = ["linux/arm64","linux/amd64"]
}

docker/hailo8l/h8l.mk Normal file

@@ -0,0 +1,10 @@
BOARDS += h8l
local-h8l: version
docker buildx bake --load --file=docker/hailo8l/h8l.hcl --set h8l.tags=frigate:latest-h8l h8l
build-h8l: version
docker buildx bake --file=docker/hailo8l/h8l.hcl --set h8l.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-h8l h8l
push-h8l: build-h8l
docker buildx bake --push --file=docker/hailo8l/h8l.hcl --set h8l.tags=$(IMAGE_REPO):${GITHUB_REF_NAME}-$(COMMIT_HASH)-h8l h8l


@@ -0,0 +1,67 @@
import json
import os
import platform
import sys
import sysconfig
def extract_toolchain_info(compiler):
# Remove the "-gcc" or "-g++" suffix if present
if compiler.endswith("-gcc") or compiler.endswith("-g++"):
compiler = compiler.rsplit("-", 1)[0]
# Extract the toolchain and ABI part (e.g., "gnu")
toolchain_parts = compiler.split("-")
abi_conventions = next(
(part for part in toolchain_parts if part in ["gnu", "musl", "eabi", "uclibc"]),
"",
)
return abi_conventions
def generate_wheel_conf():
conf_file_path = os.path.join(
os.path.abspath(os.path.dirname(__file__)), "wheel_conf.json"
)
# Extract current system and Python version information
py_version = f"cp{sys.version_info.major}{sys.version_info.minor}"
arch = platform.machine()
system = platform.system().lower()
libc_version = platform.libc_ver()[1]
# Get the compiler information
compiler = sysconfig.get_config_var("CC")
abi_conventions = extract_toolchain_info(compiler)
# Create the new configuration data
new_conf_data = {
"py_version": py_version,
"arch": arch,
"system": system,
"libc_version": libc_version,
"abi": abi_conventions,
"extension": {
"posix": "so",
"nt": "pyd", # Windows
}[os.name],
}
# If the file exists, load the existing data
if os.path.isfile(conf_file_path):
with open(conf_file_path, "r") as conf_file:
conf_data = json.load(conf_file)
# Update the existing data with the new data
conf_data.update(new_conf_data)
else:
# If the file does not exist, use the new data
conf_data = new_conf_data
# Write the updated data to the file
with open(conf_file_path, "w") as conf_file:
json.dump(conf_data, conf_file, indent=4)
if __name__ == "__main__":
generate_wheel_conf()


@@ -0,0 +1,111 @@
import json
import os
from setuptools import find_packages, setup
from wheel.bdist_wheel import bdist_wheel as orig_bdist_wheel
class NonPurePythonBDistWheel(orig_bdist_wheel):
"""Makes the wheel platform-dependent so it can be based on the _pyhailort architecture"""
def finalize_options(self):
orig_bdist_wheel.finalize_options(self)
self.root_is_pure = False
def _get_hailort_lib_path():
lib_filename = "libhailort.so"
lib_path = os.path.join(
os.path.abspath(os.path.dirname(__file__)),
f"hailo_platform/pyhailort/{lib_filename}",
)
if os.path.exists(lib_path):
print(f"Found libhailort shared library at: {lib_path}")
else:
print(f"Error: libhailort shared library not found at: {lib_path}")
raise FileNotFoundError(f"libhailort shared library not found at: {lib_path}")
return lib_path
def _get_pyhailort_lib_path():
conf_file_path = os.path.join(
os.path.abspath(os.path.dirname(__file__)), "wheel_conf.json"
)
if not os.path.isfile(conf_file_path):
raise FileNotFoundError(f"Configuration file not found: {conf_file_path}")
with open(conf_file_path, "r") as conf_file:
content = json.load(conf_file)
py_version = content["py_version"]
arch = content["arch"]
system = content["system"]
extension = content["extension"]
abi = content["abi"]
# Construct the filename directly
lib_filename = f"_pyhailort.cpython-{py_version.split('cp')[1]}-{arch}-{system}-{abi}.{extension}"
lib_path = os.path.join(
os.path.abspath(os.path.dirname(__file__)),
f"hailo_platform/pyhailort/{lib_filename}",
)
if os.path.exists(lib_path):
print(f"Found _pyhailort shared library at: {lib_path}")
else:
print(f"Error: _pyhailort shared library not found at: {lib_path}")
raise FileNotFoundError(
f"_pyhailort shared library not found at: {lib_path}"
)
return lib_path
def _get_package_paths():
packages = []
pyhailort_lib = _get_pyhailort_lib_path()
hailort_lib = _get_hailort_lib_path()
if pyhailort_lib:
packages.append(pyhailort_lib)
if hailort_lib:
packages.append(hailort_lib)
packages.append(os.path.abspath("hailo_tutorials/notebooks/*"))
packages.append(os.path.abspath("hailo_tutorials/hefs/*"))
return packages
if __name__ == "__main__":
setup(
author="Hailo team",
author_email="contact@hailo.ai",
cmdclass={
"bdist_wheel": NonPurePythonBDistWheel,
},
description="HailoRT",
entry_points={
"console_scripts": [
"hailo=hailo_platform.tools.hailocli.main:main",
]
},
install_requires=[
"argcomplete",
"contextlib2",
"future",
"netaddr",
"netifaces",
"verboselogs",
"numpy==1.23.3",
],
name="hailort",
package_data={
"hailo_platform": _get_package_paths(),
},
packages=find_packages(),
platforms=[
"linux_x86_64",
"linux_aarch64",
"win_amd64",
],
url="https://hailo.ai/",
version="4.17.0",
zip_safe=False,
)


@@ -0,0 +1,12 @@
appdirs==1.4.4
argcomplete==2.0.0
contextlib2==0.6.0.post1
distlib==0.3.6
filelock==3.8.0
future==0.18.3
importlib-metadata==5.1.0
importlib-resources==5.1.2
netaddr==0.8.0
netifaces==0.10.9
verboselogs==1.7
virtualenv==20.17.0


@@ -0,0 +1,35 @@
#!/bin/bash
# Update package list and install dependencies
sudo apt-get update
sudo apt-get install -y build-essential cmake git wget linux-modules-extra-$(uname -r)
arch=$(uname -m)
if [[ $arch == "x86_64" ]]; then
sudo apt install -y linux-headers-$(uname -r);
else
sudo apt install -y linux-modules-extra-$(uname -r);
fi
# Clone the HailoRT driver repository
git clone --depth 1 --branch v4.17.0 https://github.com/hailo-ai/hailort-drivers.git
# Build and install the HailoRT driver
cd hailort-drivers/linux/pcie
sudo make all
sudo make install
# Load the Hailo PCI driver
sudo modprobe hailo_pci
# Download and install the firmware
cd ../../
./download_firmware.sh
sudo mv hailo8_fw.4.17.0.bin /lib/firmware/hailo/hailo8_fw.bin
# Install udev rules
sudo cp ./linux/pcie/51-hailo-udev.rules /etc/udev/rules.d/
sudo udevadm control --reload-rules && sudo udevadm trigger
echo "HailoRT driver installation complete."


@@ -148,6 +148,8 @@ RUN apt-get -qq update \
gfortran openexr libatlas-base-dev libssl-dev\
libtbb2 libtbb-dev libdc1394-22-dev libopenexr-dev \
libgstreamer-plugins-base1.0-dev libgstreamer1.0-dev \
# sqlite3 dependencies
tclsh \
# scipy dependencies
gcc gfortran libopenblas-dev liblapack-dev && \
rm -rf /var/lib/apt/lists/*
@@ -161,6 +163,10 @@ RUN wget -q https://bootstrap.pypa.io/get-pip.py -O get-pip.py \
COPY docker/main/requirements.txt /requirements.txt
RUN pip3 install -r /requirements.txt
# Build pysqlite3 from source to support ChromaDB
COPY docker/main/build_pysqlite3.sh /build_pysqlite3.sh
RUN /build_pysqlite3.sh
COPY docker/main/requirements-wheels.txt /requirements-wheels.txt
RUN pip3 wheel --wheel-dir=/wheels -r /requirements-wheels.txt
@@ -188,6 +194,13 @@ ARG APT_KEY_DONT_WARN_ON_DANGEROUS_USAGE=DontWarn
ENV NVIDIA_VISIBLE_DEVICES=all
ENV NVIDIA_DRIVER_CAPABILITIES="compute,video,utility"
# Turn off Chroma Telemetry: https://docs.trychroma.com/telemetry#opting-out
ENV ANONYMIZED_TELEMETRY=False
# Allow resetting the chroma database
ENV ALLOW_RESET=True
# Disable tokenizer parallelism warning
ENV TOKENIZERS_PARALLELISM=true
ENV PATH="/usr/lib/btbn-ffmpeg/bin:/usr/local/go2rtc/bin:/usr/local/tempio/bin:/usr/local/nginx/sbin:${PATH}"
# Install dependencies

docker/main/build_pysqlite3.sh Executable file

@@ -0,0 +1,35 @@
#!/bin/bash
set -euxo pipefail
SQLITE3_VERSION="96c92aba00c8375bc32fafcdf12429c58bd8aabfcadab6683e35bbb9cdebf19e" # 3.46.0
PYSQLITE3_VERSION="0.5.3"
# Fetch the source code for the latest release of Sqlite.
if [[ ! -d "sqlite" ]]; then
wget https://www.sqlite.org/src/tarball/sqlite.tar.gz?r=${SQLITE3_VERSION} -O sqlite.tar.gz
tar xzf sqlite.tar.gz
cd sqlite/
LIBS="-lm" ./configure --disable-tcl --enable-tempstore=always
make sqlite3.c
cd ../
rm sqlite.tar.gz
fi
# Grab the pysqlite3 source code.
if [[ ! -d "./pysqlite3" ]]; then
git clone https://github.com/coleifer/pysqlite3.git
fi
cd pysqlite3/
git checkout ${PYSQLITE3_VERSION}
# Copy the sqlite3 source amalgamation into the pysqlite3 directory so we can
# create a self-contained extension module.
cp "../sqlite/sqlite3.c" ./
cp "../sqlite/sqlite3.h" ./
# Create the wheel and put it in the /wheels dir.
sed -i "s|name='pysqlite3-binary'|name=PACKAGE_NAME|g" setup.py
python3 setup.py build_static
pip3 wheel . -w /wheels


@@ -1,8 +1,8 @@
click == 8.1.*
Flask == 3.0.*
Flask_Limiter == 3.7.*
Flask_Limiter == 3.8.*
imutils == 0.5.*
joserfc == 0.11.*
joserfc == 1.0.*
markupsafe == 2.1.*
mypy == 1.6.1
numpy == 1.26.*
@@ -11,13 +11,13 @@ opencv-python-headless == 4.9.0.*
paho-mqtt == 2.1.*
pandas == 2.2.*
peewee == 3.17.*
peewee_migrate == 1.12.*
peewee_migrate == 1.13.*
psutil == 5.9.*
pydantic == 2.7.*
pydantic == 2.8.*
git+https://github.com/fbcotter/py3nvml#egg=py3nvml
PyYAML == 6.0.*
pytz == 2024.1
pyzmq == 26.0.*
pyzmq == 26.2.*
ruamel.yaml == 0.18.*
tzlocal == 5.2
types-PyYAML == 6.0.*
@@ -30,3 +30,13 @@ ws4py == 0.5.*
unidecode == 1.3.*
onnxruntime == 1.18.*
openvino == 2024.1.*
# Embeddings
onnx_clip == 4.0.*
chromadb == 0.5.0
# Generative AI
google-generativeai == 0.6.*
ollama == 0.2.*
openai == 1.30.*
# push notifications
py-vapid == 1.9.*
pywebpush == 2.0.*


@@ -0,0 +1 @@
chroma


@@ -0,0 +1 @@
chroma-pipeline


@@ -0,0 +1,4 @@
#!/command/with-contenv bash
# shellcheck shell=bash
exec logutil-service /dev/shm/logs/chroma


@@ -0,0 +1 @@
longrun


@@ -0,0 +1,28 @@
#!/command/with-contenv bash
# shellcheck shell=bash
# Take down the S6 supervision tree when the service exits
set -o errexit -o nounset -o pipefail
# Logs should be sent to stdout so that s6 can collect them
declare exit_code_container
exit_code_container=$(cat /run/s6-linux-init-container-results/exitcode)
readonly exit_code_container
readonly exit_code_service="${1}"
readonly exit_code_signal="${2}"
readonly service="ChromaDB"
echo "[INFO] Service ${service} exited with code ${exit_code_service} (by signal ${exit_code_signal})"
if [[ "${exit_code_service}" -eq 256 ]]; then
if [[ "${exit_code_container}" -eq 0 ]]; then
echo $((128 + exit_code_signal)) >/run/s6-linux-init-container-results/exitcode
fi
elif [[ "${exit_code_service}" -ne 0 ]]; then
if [[ "${exit_code_container}" -eq 0 ]]; then
echo "${exit_code_service}" >/run/s6-linux-init-container-results/exitcode
fi
fi
exec /run/s6/basedir/bin/halt


@@ -0,0 +1 @@
chroma-log


@@ -0,0 +1,27 @@
#!/command/with-contenv bash
# shellcheck shell=bash
# Start the Frigate service
set -o errexit -o nounset -o pipefail
# Logs should be sent to stdout so that s6 can collect them
# Tell S6-Overlay not to restart this service
s6-svc -O .
search_enabled=`python3 /usr/local/semantic_search/get_search_settings.py | jq -r .enabled`
# Replace the bash process with the Frigate process, redirecting stderr to stdout
exec 2>&1
if [[ "$search_enabled" == 'true' ]]; then
echo "[INFO] Starting ChromaDB..."
exec /usr/local/chroma run --path /config/chroma --host 127.0.0.1
else
while true
do
sleep 9999
continue
done
exit 0
fi


@@ -0,0 +1 @@
120000


@@ -0,0 +1 @@
longrun


@@ -4,7 +4,7 @@
set -o errexit -o nounset -o pipefail
dirs=(/dev/shm/logs/frigate /dev/shm/logs/go2rtc /dev/shm/logs/nginx /dev/shm/logs/certsync)
dirs=(/dev/shm/logs/frigate /dev/shm/logs/go2rtc /dev/shm/logs/nginx /dev/shm/logs/certsync /dev/shm/logs/chroma)
mkdir -p "${dirs[@]}"
chown nobody:nogroup "${dirs[@]}"


@@ -0,0 +1,14 @@
#!/usr/bin/python3
# -*- coding: utf-8 -*-s
__import__("pysqlite3")
import re
import sys
sys.modules["sqlite3"] = sys.modules.pop("pysqlite3")
from chromadb.cli.cli import app
if __name__ == "__main__":
sys.argv[0] = re.sub(r"(-script\.pyw|\.exe)?$", "", sys.argv[0])
sys.exit(app())


@@ -0,0 +1,28 @@
"""Prints the semantic_search config as json to stdout."""
import json
import os
import yaml
config_file = os.environ.get("CONFIG_FILE", "/config/config.yml")
# Check if we can use .yaml instead of .yml
config_file_yaml = config_file.replace(".yml", ".yaml")
if os.path.isfile(config_file_yaml):
config_file = config_file_yaml
try:
with open(config_file) as f:
raw_config = f.read()
if config_file.endswith((".yaml", ".yml")):
config: dict[str, any] = yaml.safe_load(raw_config)
elif config_file.endswith(".json"):
config: dict[str, any] = json.loads(raw_config)
except FileNotFoundError:
config: dict[str, any] = {}
search_config: dict[str, any] = config.get("semantic_search", {"enabled": False})
print(json.dumps(search_config))


@@ -4,9 +4,7 @@ title: Advanced Options
sidebar_label: Advanced Options
---
### Logging
#### Frigate `logger`
### `logger`
Change the default log level for troubleshooting purposes.
@@ -30,18 +28,6 @@ Examples of available modules are:
- `watchdog.<camera_name>`
- `ffmpeg.<camera_name>.<sorted_roles>` NOTE: All FFmpeg logs are sent as `error` level.
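
For reference, a minimal `logger` config sketch based on the options described above (the module names and levels here are only illustrative examples):

```yaml
logger:
  # Default log level applied to all modules
  default: info
  # Per-module overrides for targeted troubleshooting
  logs:
    frigate.record: debug
    watchdog.front_door: debug
```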
#### Go2RTC Logging
See the go2rtc docs for logging configuration:
```yaml
go2rtc:
streams:
...
log:
exec: trace
```
### `environment_vars`
This section can be used to set environment variables for those unable to modify the environment of the container (i.e., within HassOS).
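
A minimal sketch (the variable name and value are placeholders):

```yaml
environment_vars:
  # Any name/value pairs here are exported into the container environment
  EXAMPLE_VAR: example_value
```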
@@ -197,7 +183,7 @@ To do this:
3. Give `go2rtc` execute permission.
4. Restart Frigate and the custom version will be used, you can verify by checking go2rtc logs.
## Validating your config.yml file updates
## Validating your config.yaml file updates
When Frigate starts up, it checks whether your config file is valid, and if it is not, the process exits. To minimize interruptions when updating your config, you have three options: you can edit the config via the WebUI, which has built-in validation; use the config API; or validate on the command line using the Frigate docker container.


@@ -24,11 +24,6 @@ On startup, an admin user and password are generated and printed in the logs. It
In the event that you are locked out of your instance, you can tell Frigate to reset the admin password and print it in the logs on next startup using the `reset_admin_password` setting in your config file.
```yaml
auth:
reset_admin_password: true
```
## Login failure rate limiting
In order to limit the risk of brute force attacks, rate limiting is available for login failures. This is implemented with Flask-Limiter, and the string notation for valid values is available in [the documentation](https://flask-limiter.readthedocs.io/en/stable/configuration.html#rate-limit-string-notation).
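
A hedged example, assuming the `failed_login_rate_limit` option under `auth` and using illustrative Flask-Limiter values:

```yaml
auth:
  # Limit failed logins to 1/second, 5/minute and 20/hour (example values)
  failed_login_rate_limit: "1/second;5/minute;20/hour"
```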


@@ -9,12 +9,6 @@ This page makes use of presets of FFmpeg args. For more information on presets,
:::
:::note
Many cameras support encoding options which greatly affect the live view experience, see the [Live view](/configuration/live) page for more info.
:::
## MJPEG Cameras
Note that mjpeg cameras require encoding the video into h264 for recording, and restream roles. This will use significantly more CPU than if the cameras supported h264 feeds directly. It is recommended to use the restream role to create an h264 restream and then use that as the source for ffmpeg.
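
A sketch of that approach, assuming go2rtc transcodes the MJPEG feed to h264 and Frigate consumes the restream (the camera address and stream name are placeholders):

```yaml
go2rtc:
  streams:
    mjpeg_cam:
      # Transcode the MJPEG feed to h264 so the record/restream roles can use it
      - "ffmpeg:http://192.168.1.10/mjpeg#video=h264"
cameras:
  mjpeg_cam:
    ffmpeg:
      inputs:
        # Consume the go2rtc restream instead of the raw MJPEG feed
        - path: rtsp://127.0.0.1:8554/mjpeg_cam
          roles:
            - record
            - detect
```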


@@ -46,14 +46,6 @@ cameras:
side: ...
```
:::note
If you only define one stream in your `inputs` and do not assign a `detect` role to it, Frigate will automatically assign it the `detect` role. Frigate will always decode a stream to support motion detection, Birdseye, the API image endpoints, and other features, even if you have disabled object detection with `enabled: False` in your config's `detect` section.
If you plan to use Frigate for recording only, it is still recommended to define a `detect` role for a low resolution stream to minimize resource usage from the required stream decoding.
:::
For camera model specific settings check the [camera specific](camera_specific.md) infos.
## Setting up camera PTZ controls
@@ -79,41 +71,29 @@ cameras:
If the ONVIF connection is successful, PTZ controls will be available in the camera's WebUI.
:::tip
If your ONVIF camera does not require authentication credentials, you may still need to specify an empty string for `user` and `password`, eg: `user: ""` and `password: ""`.
:::
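
For reference, a minimal ONVIF config sketch (the camera name, host, and port are placeholders; empty credentials are shown per the tip above):

```yaml
cameras:
  front_ptz:
    onvif:
      host: 192.168.1.20
      port: 8000
      # Some cameras accept empty credentials, but the keys may still need to be present
      user: ""
      password: ""
```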
An ONVIF-capable camera that supports relative movement within the field of view (FOV) can also be configured to automatically track moving objects and keep them in the center of the frame. For autotracking setup, see the [autotracking](autotracking.md) docs.
## ONVIF PTZ camera recommendations
This list of working and non-working PTZ cameras is based on user feedback.
| Brand or specific camera | PTZ Controls | Autotracking | Notes |
| ---------------------------- | :----------: | :----------: | ----------------------------------------------------------------------------------------------------------------------------------------------- |
| Amcrest | ✅ | ✅ | ⛔️ Generally, Amcrest should work, but some older models (like the common IP2M-841) don't support autotracking |
| Amcrest ASH21 | | ❌ | ONVIF service port: 80 |
| Amcrest IP4M-S2112EW-AI | ✅ | ❌ | FOV relative movement not supported. |
| Amcrest IP5M-1190EW | ✅ | | ONVIF Port: 80. FOV relative movement not supported. |
| Ctronics PTZ | ✅ | ❌ | |
| Dahua | ✅ | | |
| Dahua DH-SD2A500HB | ✅ | | |
| Foscam R5 | ✅ | ❌ | |
| Hanwha XNP-6550RH | ✅ | ❌ | |
| Hikvision | ✅ | ❌ | Incomplete ONVIF support (MoveStatus won't update even on latest firmware) - reported with HWP-N4215IH-DE and DS-2DE3304W-DE, but likely others |
| Hikvision DS-2DE3A404IWG-E/W | ✅ | | |
| Reolink 511WA | ✅ | ❌ | Zoom only |
| Reolink E1 Pro | ✅ | ❌ | |
| Reolink E1 Zoom | ✅ | ❌ | |
| Reolink RLC-823A 16x | ✅ | ❌ | |
| Speco O8P32X | ✅ | ❌ | |
| Sunba 405-D20X | ✅ | ❌ | |
| Tapo | ✅ | ❌ | Many models supported, ONVIF Service Port: 2020 |
| Uniview IPC672LR-AX4DUPK | ✅ | ❌ | Firmware says FOV relative movement is supported, but camera doesn't actually move when sending ONVIF commands |
| Uniview IPC6612SR-X33-VG | ✅ | ✅ | Leave `calibrate_on_startup` as `False`. A user has reported that zooming with `absolute` is working. |
| Vikylin PTZ-2804X-I2 | ❌ | ❌ | Incomplete ONVIF support |
| Brand or specific camera | PTZ Controls | Autotracking | Notes |
| ------------------------ | :----------: | :----------: | ----------------------------------------------------------------------------------------------------------------------------------------------- |
| Amcrest | ✅ | ✅ | ⛔️ Generally, Amcrest should work, but some older models (like the common IP2M-841) don't support autotracking |
| Amcrest ASH21 | | ❌ | No ONVIF support |
| Ctronics PTZ | ✅ | ❌ | |
| Dahua | ✅ | | |
| Foscam R5 | ✅ | ❌ | |
| Hanwha XNP-6550RH | ✅ | | |
| Hikvision | | ❌ | Incomplete ONVIF support (MoveStatus won't update even on latest firmware) - reported with HWP-N4215IH-DE and DS-2DE3304W-DE, but likely others |
| Reolink 511WA | ✅ | ❌ | Zoom only |
| Reolink E1 Pro | ✅ | ❌ | |
| Reolink E1 Zoom | ✅ | ❌ | |
| Reolink RLC-823A 16x | ✅ | | |
| Sunba 405-D20X | ✅ | ❌ | |
| Tapo | ✅ | ❌ | Many models supported, ONVIF Service Port: 2020 |
| Uniview IPC672LR-AX4DUPK | ✅ | ❌ | Firmware says FOV relative movement is supported, but camera doesn't actually move when sending ONVIF commands |
| Vikylin PTZ-2804X-I2 | ❌ | ❌ | Incomplete ONVIF support |
## Setting up camera groups


@@ -0,0 +1,135 @@
---
id: genai
title: Generative AI
---
Generative AI can be used to automatically generate descriptions based on the thumbnails of your events. This helps with [semantic search](/configuration/semantic_search) in Frigate by providing detailed text descriptions as a basis of the search query.
## Configuration
Generative AI can be enabled for all cameras or only for specific cameras. There are currently 3 providers available to integrate with Frigate.
If the provider you choose requires an API key, you may either directly paste it in your configuration, or store it in an environment variable prefixed with `FRIGATE_`.
```yaml
genai:
enabled: True
provider: gemini
api_key: "{FRIGATE_GEMINI_API_KEY}"
model: gemini-1.5-flash
cameras:
front_camera: ...
indoor_camera:
genai: # <- disable GenAI for your indoor camera
enabled: False
```
## Ollama
[Ollama](https://ollama.com/) allows you to self-host large language models and keep everything running locally. It provides a nice API over [llama.cpp](https://github.com/ggerganov/llama.cpp). It is highly recommended to host this server on a machine with an Nvidia graphics card, or on an Apple silicon Mac for best performance. Most of the 7b parameter 4-bit vision models will fit inside 8GB of VRAM. There is also a [docker container](https://hub.docker.com/r/ollama/ollama) available.
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their model library](https://ollama.com/library). At the time of writing, this includes `llava`, `llava-llama3`, `llava-phi3`, and `moondream`.
:::note
You should have at least 8 GB of RAM available (or VRAM if running on GPU) to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.
:::
### Configuration
```yaml
genai:
enabled: True
provider: ollama
base_url: http://localhost:11434
model: llava
```
## Google Gemini
Google Gemini has a free tier allowing [15 queries per minute](https://ai.google.dev/pricing) to the API, which is more than sufficient for standard Frigate usage.
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://ai.google.dev/gemini-api/docs/models/gemini). At the time of writing, this includes `gemini-1.5-pro` and `gemini-1.5-flash`.
### Get API Key
To start using Gemini, you must first get an API key from [Google AI Studio](https://aistudio.google.com).
1. Accept the Terms of Service
2. Click "Get API Key" from the right hand navigation
3. Click "Create API key in new project"
4. Copy the API key for use in your config
### Configuration
```yaml
genai:
enabled: True
provider: gemini
api_key: "{FRIGATE_GEMINI_API_KEY}"
model: gemini-1.5-flash
```
## OpenAI
OpenAI does not have a free tier for their API. With the release of gpt-4o, pricing has been reduced and each generation should cost fractions of a cent if you choose to go this route.
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://platform.openai.com/docs/models). At the time of writing, this includes `gpt-4o` and `gpt-4-turbo`.
### Get API Key
To start using OpenAI, you must first [create an API key](https://platform.openai.com/api-keys) and [configure billing](https://platform.openai.com/settings/organization/billing/overview).
### Configuration
```yaml
genai:
enabled: True
provider: openai
api_key: "{FRIGATE_OPENAI_API_KEY}"
model: gpt-4o
```
## Custom Prompts
Frigate sends multiple frames from the detection along with a prompt to your Generative AI provider asking it to generate a description. The default prompt is as follows:
```
Describe the {label} in the sequence of images with as much detail as possible. Do not describe the background.
```
:::tip
Prompts can use variable replacements like `{label}`, `{sub_label}`, and `{camera}` to substitute information from the detection as part of the prompt.
:::
You are also able to define custom prompts in your configuration.
```yaml
genai:
enabled: True
provider: ollama
base_url: http://localhost:11434
model: llava
prompt: "Describe the {label} in these images from the {camera} security camera."
object_prompts:
person: "Describe the main person in these images (gender, age, clothing, activity, etc). Do not include where the activity is occurring (sidewalk, concrete, driveway, etc). If delivering a package, include the company the package is from."
car: "Label the primary vehicle in these images with just the name of the company if it is a delivery vehicle, or the color make and model."
```
### Experiment with prompts
Each provider also has a public-facing chat interface for its models. Download a couple of different thumbnails or snapshots from Frigate and experiment in the playground to get descriptions to your liking before updating the prompt in Frigate.
- OpenAI - [ChatGPT](https://chatgpt.com)
- Gemini - [Google AI Studio](https://aistudio.google.com)
- Ollama - [Open WebUI](https://docs.openwebui.com/)


@@ -370,7 +370,7 @@ Make sure to follow the [Rockchip specific installation instructions](/frigate/i
### Configuration
Add one of the following FFmpeg presets to your `config.yml` to enable hardware video processing:
Add one of the following FFmpeg presets to your `config.yaml` to enable hardware video processing:
```yaml
# if you try to decode a h264 encoded stream


@@ -56,6 +56,11 @@ go2rtc:
password: "{FRIGATE_GO2RTC_RTSP_PASSWORD}"
```
```yaml
genai:
api_key: "{FRIGATE_GENAI_API_KEY}"
```
## Common configuration examples
Here are some common starter configuration examples. Refer to the [reference config](./reference.md) for detailed information about all the config values.


@@ -11,21 +11,11 @@ Frigate intelligently uses three different streaming technologies to display you
The jsmpeg live view will use more browser and client GPU resources. Using go2rtc is highly recommended and will provide a superior experience.
| Source | Frame Rate | Resolution | Audio | Requires go2rtc | Notes |
| ------ | ------------------------------------- | ---------- | ---------------------------- | --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| jsmpeg | same as `detect -> fps`, capped at 10 | 720p | no | no | Resolution is configurable, but go2rtc is recommended if you want higher resolutions and better frame rates. jsmpeg is Frigate's default without go2rtc configured. |
| mse | native | native | yes (depends on audio codec) | yes | iPhone requires iOS 17.1+, Firefox is h.264 only. This is Frigate's default when go2rtc is configured. |
| webrtc | native | native | yes (depends on audio codec) | yes | Requires extra configuration, doesn't support h.265. Frigate attempts to use WebRTC when MSE fails or when using a camera's two-way talk feature. |
### Camera Settings Recommendations
If you are using go2rtc, you should adjust the following settings in your camera's firmware for the best experience with Live view:
- Video codec: **H.264** - provides the most compatible video codec with all Live view technologies and browsers. Avoid any kind of "smart codec" or "+" codec like _H.264+_ or _H.265+_, as these non-standard codecs remove keyframes (see below).
- Audio codec: **AAC** - provides the most compatible audio codec with all Live view technologies and browsers that support audio.
- I-frame interval (sometimes called the keyframe interval, the interframe space, or the GOP length): match your camera's frame rate, or choose "1x" (for interframe space on Reolink cameras). For example, if your stream outputs 20fps, your i-frame interval should be 20 (or 1x on Reolink). Values higher than the frame rate will cause the stream to take longer to begin playback. See [this page](https://gardinal.net/understanding-the-keyframe-interval/) for more on keyframes.
The default video and audio codec on your camera may not always be compatible with your browser, which is why setting them to H.264 and AAC is recommended. See the [go2rtc docs](https://github.com/AlexxIT/go2rtc?tab=readme-ov-file#codecs-madness) for codec support information.
| Source | Latency | Frame Rate | Resolution | Audio | Requires go2rtc | Other Limitations |
| ------ | ------- | ------------------------------------- | ---------- | ---------------------------- | --------------- | ------------------------------------------------------------------------------------ |
| jsmpeg | low | same as `detect -> fps`, capped at 10 | 720p | no | no | resolution is configurable, but go2rtc is recommended if you want higher resolutions |
| mse | low | native | native | yes (depends on audio codec) | yes | iPhone requires iOS 17.1+, Firefox is h.264 only |
| webrtc | lowest | native | native | yes (depends on audio codec) | yes | requires extra config, doesn't support h.265 |
### Audio Support
@@ -42,15 +32,6 @@ go2rtc:
- "ffmpeg:http_cam#audio=opus" # <- copy of the stream which transcodes audio to the missing codec (usually will be opus)
```
If your camera does not have audio and you are having problems with Live view, you should have go2rtc send video only:
```yaml
go2rtc:
streams:
no_audio_camera:
- ffmpeg:rtsp://192.168.1.5:554/live0#video=copy
```
### Setting Stream For Live UI
There may be some cameras that you would prefer to use the sub stream for live view, but the main stream for recording. This can be done via `live -> stream_name`.
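
A sketch of that setup, assuming a go2rtc sub stream named `front_sub` (the stream names and camera URL are placeholders):

```yaml
go2rtc:
  streams:
    front_sub:
      - rtsp://192.168.1.30:554/sub
cameras:
  front:
    live:
      # Show the lower-resolution sub stream in the Live UI
      stream_name: front_sub
```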


@@ -0,0 +1,42 @@
---
id: notifications
title: Notifications
---
# Notifications
Frigate offers native notifications using the [WebPush Protocol](https://web.dev/articles/push-notifications-web-push-protocol) which uses the [VAPID spec](https://tools.ietf.org/html/draft-thomson-webpush-vapid) to deliver notifications to web apps using encryption.
## Setting up Notifications
In order to use notifications the following requirements must be met:
- Frigate must be accessed via a secure https connection
- A supported browser must be used. Currently Chrome, Firefox, and Safari are known to be supported.
- In order for notifications to be usable externally, Frigate must be accessible externally
### Configuration
To configure notifications, go to the Frigate WebUI -> Settings -> Notifications, enable notifications, fill out the fields, and save.
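
Equivalently, a minimal config sketch using the keys shown in the reference config later in this diff (the email address is a placeholder):

```yaml
notifications:
  enabled: True
  # Email for the push service to reach out to
  email: "admin@example.com"
```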
### Registration
Once notifications are enabled, press the `Register for Notifications` button on all devices that you would like to receive notifications on. This will register the background worker. After this, Frigate must be restarted, and notifications will then begin to be sent.
## Supported Notifications
Currently notifications are only supported for review alerts. More notifications will be supported in the future.
:::note
Currently, only Chrome supports images in notifications. Safari and Firefox will only show a title and message in the notification.
:::
## Reduce Notification Latency
Different platforms handle notifications differently, so some settings changes may be required to get optimal notification delivery.
### Android
Most Android phones have battery optimization settings. To get reliable notification delivery, the browser (Chrome, Firefox) should have battery optimizations disabled. If Frigate is running as a PWA, the Frigate app should have battery optimizations disabled as well.


@@ -5,7 +5,7 @@ title: Object Detectors
# Officially Supported Detectors
Frigate provides the following builtin detector types: `cpu`, `edgetpu`, `openvino`, `tensorrt`, and `rknn`. By default, Frigate will use a single CPU detector. Other detectors may require additional configuration as described below. When using multiple detectors they will run in dedicated processes, but pull from a common queue of detection requests from across all cameras.
Frigate provides the following builtin detector types: `cpu`, `edgetpu`, `openvino`, `tensorrt`, `rknn`, and `hailo8l`. By default, Frigate will use a single CPU detector. Other detectors may require additional configuration as described below. When using multiple detectors they will run in dedicated processes, but pull from a common queue of detection requests from across all cameras.
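
For context, a minimal detector config sketch showing the default CPU detector (the detector name is arbitrary; `num_threads` is an optional tuning key):

```yaml
detectors:
  cpu1:
    type: cpu
    num_threads: 3
```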
## CPU Detector (not recommended)
@@ -149,7 +149,7 @@ This detector also supports YOLOX. Frigate does not come with any YOLOX models p
#### YOLO-NAS
[YOLO-NAS](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md) models are supported, but not included by default. You can build and download a compatible model with pre-trained weights using [this notebook](https://github.com/blakeblackshear/frigate/blob/dev/notebooks/YOLO_NAS_Pretrained_Export.ipynb) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/blakeblackshear/frigate/blob/dev/notebooks/YOLO_NAS_Pretrained_Export.ipynb).
[YOLO-NAS](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md) models are supported, but not included by default. You can build and download a compatible model with pre-trained weights using [this notebook](https://github.com/frigate/blob/dev/notebooks/YOLO_NAS_Pretrained_Export.ipynb) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/blakeblackshear/frigate/blob/dev/notebooks/YOLO_NAS_Pretrained_Export.ipynb).
:::warning
@@ -386,3 +386,25 @@ $ cat /sys/kernel/debug/rknpu/load
- All models are automatically downloaded and stored in the folder `config/model_cache/rknn_cache`. After upgrading Frigate, you should remove older models to free up space.
- You can also provide your own `.rknn` model. You should not save your own models in the `rknn_cache` folder, store them directly in the `model_cache` folder or another subfolder. To convert a model to `.rknn` format see the `rknn-toolkit2` (requires a x86 machine). Note, that there is only post-processing for the supported models.
## Hailo-8l
This detector is available if you are using the Raspberry Pi 5 with Hailo-8L AI Kit. This has not been tested using the Hailo-8L with other hardware.
### Configuration
```yaml
detectors:
hailo8l:
type: hailo8l
device: PCIe
model:
path: /config/model_cache/h8l_cache/ssd_mobilenet_v1.hef
model:
width: 300
height: 300
input_tensor: nhwc
input_pixel_format: bgr
model_type: ssd
```


@@ -5,7 +5,7 @@ title: Available Objects
import labels from "../../../labelmap.txt";
Frigate includes the object labels listed below from the Google Coral test data.
Frigate includes the object models listed below from the Google Coral test data.
Please note:


@@ -1,24 +0,0 @@
---
id: pwa
title: Installing Frigate App
---
Frigate supports being installed as a [Progressive Web App](https://web.dev/explore/progressive-web-apps) on Desktop, Android, and iOS.
This adds features including the ability to deep link directly into the app.
## Requirements
In order to install Frigate as a PWA, the following requirements must be met:
- Frigate must be accessed via a secure context (localhost, secure https, etc.)
- On Android, Firefox, Chrome, Edge, Opera, and Samsung Internet Browser all support installing PWAs.
- On iOS 16.4 and later, PWAs can be installed from the Share menu in Safari, Chrome, Edge, Firefox, and Orion.
## Installation
Installation varies slightly based on the device that is being used:
- Desktop: Use the install button typically found at the right edge of the address bar
- Android: Use the `Install as App` button in the more options menu
- iOS: Use the `Add to Homescreen` button in the share menu

View File

@@ -320,9 +320,6 @@ review:
- car
- person
# Optional: required zones for an object to be marked as an alert (default: none)
# NOTE: when setting required zones globally, this zone must exist on all cameras
# or the config will be considered invalid. In that case the required_zones
# should be configured at the camera level.
required_zones:
- driveway
# Optional: detections configuration
@@ -332,20 +329,12 @@ review:
- car
- person
# Optional: required zones for an object to be marked as a detection (default: none)
# NOTE: when setting required zones globally, this zone must exist on all cameras
# or the config will be considered invalid. In that case the required_zones
# should be configured at the camera level.
required_zones:
- driveway
# Optional: Motion configuration
# NOTE: Can be overridden at the camera level
motion:
# Optional: enables detection for the camera (default: True)
# NOTE: Motion detection is required for object detection,
# setting this to False and leaving detect enabled
# will result in an error on startup.
enabled: False
# Optional: The threshold passed to cv2.threshold to determine if a pixel is different enough to be counted as motion. (default: shown below)
# Increasing this value will make motion detection less sensitive and decreasing it will make motion detection more sensitive.
# The value should be between 1 and 255.
@@ -383,6 +372,14 @@ motion:
# Optional: Delay when updating camera motion through MQTT from ON -> OFF (default: shown below).
mqtt_off_delay: 30
# Optional: Notification Configuration
notifications:
# Optional: Enable notification service (default: shown below)
enabled: False
# Optional: Email for push service to reach out to
# NOTE: This is required to use notifications
email: "admin@example.com"
# Optional: Record configuration
# NOTE: Can be overridden at the camera level
record:
@@ -476,16 +473,43 @@ snapshots:
# Optional: quality of the encoded jpeg, 0-100 (default: shown below)
quality: 70
# Optional: Configuration for semantic search capability
semantic_search:
# Optional: Enable semantic search (default: shown below)
enabled: False
# Optional: Re-index embeddings database from historical events (default: shown below)
reindex: False
# Optional: Configuration for AI generated event descriptions
# NOTE: Semantic Search must be enabled for this to do anything.
# WARNING: Depending on the provider, this will send thumbnails over the internet
# to Google or OpenAI's LLMs to generate descriptions. It can be overridden at
# the camera level (enabled: False) to enhance privacy for indoor cameras.
genai:
# Optional: Enable Google Gemini description generation (default: shown below)
enabled: False
# Required if enabled: Provider must be one of ollama, gemini, or openai
provider: ollama
# Required if provider is ollama. May also be used for an OpenAI API compatible backend with the openai provider.
base_url: http://localhost:11434
# Required if gemini or openai
api_key: "{FRIGATE_GENAI_API_KEY}"
# Optional: The default prompt for generating descriptions. Can use replacement
# variables like "label", "sub_label", "camera" to make more dynamic. (default: shown below)
prompt: "Describe the {label} in the sequence of images with as much detail as possible. Do not describe the background."
# Optional: Object specific prompts to customize description results
# Format: {label}: {prompt}
object_prompts:
person: "My special person prompt."
# Optional: Restream configuration
# Uses https://github.com/AlexxIT/go2rtc (v1.9.2)
go2rtc:
# Optional: Live stream configuration for WebUI.
# NOTE: Can be overridden at the camera level
# Optional: jsmpeg stream configuration for WebUI
live:
# Optional: Set the name of the stream configured in go2rtc
# that should be used for live view in frigate WebUI. (default: name of camera)
# NOTE: In most cases this should be set at the camera level only.
# Optional: Set the name of the stream that should be used for live view
# in frigate WebUI. (default: name of camera)
stream_name: camera_name
# Optional: Set the height of the jsmpeg stream. (default: 720)
# This must be less than or equal to the height of the detect stream. Lower resolutions
@@ -732,7 +756,7 @@ camera_groups:
- side_cam
- front_doorbell_cam
# Required: icon used for group
icon: LuCar
icon: car
# Required: index of this group
order: 0
```

View File

@@ -41,6 +41,8 @@ review:
By default all detections that do not qualify as an alert qualify as a detection. However, detections can further be filtered to only include certain labels or certain zones.
By default a review item will only be marked as an alert if a person or car is detected. This can be configured to include any object or audio label using the following config:
```yaml
# can be overridden at the camera level
review:

View File

@@ -0,0 +1,38 @@
---
id: semantic_search
title: Using Semantic Search
---
Semantic search works by embedding images and/or text into a vector representation identified by numbers. Frigate has support for two such models which both run locally: [OpenAI CLIP](https://openai.com/research/clip) and [all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). Embeddings are then saved to a local instance of [ChromaDB](https://trychroma.com).
## Configuration
Semantic Search is a global configuration setting.
```yaml
semantic_search:
enabled: True
reindex: False
```
:::tip
The embeddings database can be re-indexed from the existing detections in your database by adding `reindex: True` to your `semantic_search` configuration. Depending on the number of detections you have, it can take up to 30 minutes to complete and may max out your CPU while indexing. Make sure to set the config back to `False` before restarting Frigate again.
:::
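For example, a temporary one-time re-index could look like this before being set back to `False`:

```yaml
semantic_search:
  enabled: True
  # triggers a full re-index of existing detections on the next start
  reindex: True
```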
### OpenAI CLIP
This model is able to embed both images and text into the same vector space, which allows `image -> image` and `text -> image` similarity searches. Frigate uses this model on detections to encode the thumbnail image and store it in Chroma. When searching detections via text in the search box, Frigate will perform a `text -> image` similarity search against this embedding. When clicking "FIND SIMILAR" next to a detection, Frigate will perform an `image -> image` similarity search to retrieve the closest matching thumbnails.
### all-MiniLM-L6-v2
This is a sentence embedding model that has been fine-tuned on over 1 billion sentence pairs. This model is used to embed detection descriptions and perform searches against them. Descriptions can be created and/or modified on the search page by clicking on the info icon next to a detection. See [the Generative AI docs](/configuration/genai.md) for more information on how to automatically generate event descriptions.
## Usage Tips
1. Semantic search is used in conjunction with the other filters available on the search page. Use a combination of traditional filtering and semantic search for the best results.
2. The comparison between text and image embedding distances generally means that results matching `description` will appear first, even if a `thumbnail` embedding may be a better match. Play with the "Search Type" filter to help find what you are looking for.
3. Make your search language and tone closely match your descriptions. If you are using thumbnail search, phrase your query as an image caption.
4. Semantic search on thumbnails tends to return better results when matching large subjects that take up most of the frame. Small things like "cat" tend to not work well.
5. Experiment! Find a detection you want to test and start typing keywords to see what works for you.

View File

@@ -13,19 +13,20 @@ Many users have reported various issues with Reolink cameras, so I do not recomm
Here are some of the cameras I recommend:
- <a href="https://amzn.to/4fwoNWA" target="_blank" rel="nofollow noopener sponsored">Loryta(Dahua) IPC-T549M-ALED-S3</a> (affiliate link)
- <a href="https://amzn.to/3YXpcMw" target="_blank" rel="nofollow noopener sponsored">Loryta(Dahua) IPC-T54IR-AS</a> (affiliate link)
- <a href="https://amzn.to/3AvBHoY" target="_blank" rel="nofollow noopener sponsored">Amcrest IP5M-T1179EW-AI-V3</a> (affiliate link)
- <a href="https://amzn.to/3uFLtxB" target="_blank" rel="nofollow noopener sponsored">Loryta(Dahua) T5442TM-AS-LED</a> (affiliate link)
- <a href="https://amzn.to/3isJ3gU" target="_blank" rel="nofollow noopener sponsored">Loryta(Dahua) IPC-T5442TM-AS</a> (affiliate link)
- <a href="https://amzn.to/2ZWNWIA" target="_blank" rel="nofollow noopener sponsored">Amcrest IP5M-T1179EW-28MM</a> (affiliate link)
I may earn a small commission for my endorsement, recommendation, testimonial, or link to any products or services from this website.
## Server
My current favorite is the Beelink EQ13 because of the efficient N100 CPU and dual NICs that allow you to set up a dedicated private network for your cameras where they can be blocked from accessing the internet. There are many used workstation options on eBay that work very well. Anything with an Intel CPU and capable of running Debian should work fine. As a bonus, you may want to look for devices with an M.2 or PCIe slot that is compatible with the Google Coral. I may earn a small commission for my endorsement, recommendation, testimonial, or link to any products or services from this website.
My current favorite is the Beelink EQ12 because of the efficient N100 CPU and dual NICs that allow you to set up a dedicated private network for your cameras where they can be blocked from accessing the internet. There are many used workstation options on eBay that work very well. Anything with an Intel CPU and capable of running Debian should work fine. As a bonus, you may want to look for devices with an M.2 or PCIe slot that is compatible with the Google Coral. I may earn a small commission for my endorsement, recommendation, testimonial, or link to any products or services from this website.
| Name | Coral Inference Speed | Coral Compatibility | Notes |
| ------------------------------------------------------------------------------------------------------------- | --------------------- | ------------------- | ----------------------------------------------------------------------------------------- |
| Beelink EQ13 (<a href="https://amzn.to/4ejU0ew" target="_blank" rel="nofollow noopener sponsored">Amazon</a>) | 5-10ms | USB | Dual gigabit NICs for easy isolated camera network. Easily handles several 1080p cameras. |
| Name | Coral Inference Speed | Coral Compatibility | Notes |
| ------------------------------------------------------------------------------------------------------------- | --------------------- | ------------------- | --------------------------------------------------------------------------------------------------------------------------------------- |
| Beelink EQ12 (<a href="https://amzn.to/3OlTMJY" target="_blank" rel="nofollow noopener sponsored">Amazon</a>) | 5-10ms | USB | Dual gigabit NICs for easy isolated camera network. Easily handles several 1080p cameras. |
| Intel NUC (<a href="https://amzn.to/3psFlHi" target="_blank" rel="nofollow noopener sponsored">Amazon</a>) | 5-10ms | USB | Overkill for most, but great performance. Can handle many cameras at 5fps depending on typical amounts of motion. Requires extra parts. |
## Detectors
@@ -68,7 +69,6 @@ Inference speeds vary greatly depending on the CPU, GPU, or VPU used, some known
| Intel i5 7500 | ~ 15 ms | Inference speeds on CPU were ~ 260 ms |
| Intel i5 1135G7 | 10 - 15 ms | |
| Intel i5 12600K | ~ 15 ms | Inference speeds on CPU were ~ 35 ms |
| Intel Arc A750 | ~ 4 ms | |
### TensorRT - Nvidia GPU
@@ -107,6 +107,12 @@ Frigate supports hardware video processing on all Rockchip boards. However, hard
The inference time of a rk3588 with all 3 cores enabled is typically 25-30 ms for yolo-nas s.
#### Hailo-8l PCIe
Frigate supports the Hailo-8L M.2 card on any hardware, but currently it has only been tested with the Raspberry Pi 5 PCIe hat from the AI Kit.
The inference time for the Hailo-8L chip at time of writing is around 17-21 ms for the SSD MobileNet Version 1 model.
## What does Frigate use the CPU for and what does it use a detector for? (ELI5 Version)
This is taken from a [user question on reddit](https://www.reddit.com/r/homeassistant/comments/q8mgau/comment/hgqbxh5/?utm_source=share&utm_medium=web2x&context=3). Modified slightly for clarity.

View File

@@ -100,6 +100,38 @@ By default, the Raspberry Pi limits the amount of memory available to the GPU. I
Additionally, the USB Coral draws a considerable amount of power. If using any other USB devices such as an SSD, you will experience instability due to the Pi not providing enough power to USB devices. You will need to purchase an external USB hub with its own power supply. Some have reported success with <a href="https://amzn.to/3a2mH0P" target="_blank" rel="nofollow noopener sponsored">this</a> (affiliate link).
### Hailo-8L
The Hailo-8L is an M.2 card typically connected to a carrier board for PCIe, which then connects to the Raspberry Pi 5 as part of the AI Kit. However, it can also be used on other boards equipped with an M.2 M key edge connector.
#### Installation
For Raspberry Pi 5 users with the AI Kit, installation is straightforward. Simply follow this [guide](https://www.raspberrypi.com/documentation/accessories/ai-kit.html#ai-kit-installation) to install the driver and software.
For other installations, follow these steps:
1. Install the driver from the [Hailo GitHub repository](https://github.com/hailo-ai/hailort-drivers). A convenient script for Linux is available to clone the repository, build the driver, and install it.
2. Copy or download [this script](https://github.com/blakeblackshear/frigate/blob/41c9b13d2fffce508b32dfc971fa529b49295fbd/docker/hailo8l/user_installation.sh).
3. Ensure it has execution permissions with `sudo chmod +x user_installation.sh`
4. Run the script with `./user_installation.sh`
#### Setup
To set up Frigate, follow the default installation instructions, but use a Docker image with the `-h8l` suffix, for example: `ghcr.io/blakeblackshear/frigate:stable-h8l`
Next, grant Docker permissions to access your hardware by adding the following lines to your `docker-compose.yml` file:
```yaml
devices:
- /dev/hailo0
```
If you are using `docker run`, add this option to your command `--device /dev/hailo0`
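Combining these pieces, a hedged sketch of a minimal compose service (service name and layout are illustrative) might look like:

```yaml
services:
  frigate:
    container_name: frigate
    # image tag with the -h8l suffix for Hailo-8L support
    image: ghcr.io/blakeblackshear/frigate:stable-h8l
    devices:
      # expose the Hailo device to the container
      - /dev/hailo0
```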
#### Configuration
Finally, configure [hardware object detection](/configuration/object_detectors#hailo-8l) to complete the setup.
### Rockchip platform
Make sure that you use a Linux distribution that comes with the Rockchip BSP kernel 5.10 or 6.1 and the necessary drivers (especially rkvdec2 and rknpu). To check, enter the following commands:
@@ -222,6 +254,7 @@ The community supported docker image tags for the current stable version are:
- `stable-rocm-gfx900` - AMD gfx900 driver only
- `stable-rocm-gfx1030` - AMD gfx1030 driver only
- `stable-rocm-gfx1100` - AMD gfx1100 driver only
- `stable-h8l` - Frigate build for the Hailo-8L M.2 PCIe Raspberry Pi 5 hat
## Home Assistant Addon

View File

@@ -13,15 +13,7 @@ Use of the bundled go2rtc is optional. You can still configure FFmpeg to connect
# Setup a go2rtc stream
First, you will want to configure go2rtc to connect to your camera stream by adding the stream you want to use for live view in your Frigate config file. Avoid changing any other parts of your config at this step. Note that go2rtc supports [many different stream types](https://github.com/AlexxIT/go2rtc/tree/v1.9.4#module-streams), not just rtsp.
:::tip
For the best experience, you should set the stream name under `go2rtc` to match the name of your camera so that Frigate will automatically map it and be able to use better live view options for the camera.
See [the live view docs](../configuration/live.md#setting-stream-for-live-ui) for more information.
:::
First, you will want to configure go2rtc to connect to your camera stream by adding the stream you want to use for live view in your Frigate config file. If you set the stream name under go2rtc to match the name of your camera, it will automatically be mapped and you will get additional live view options for the camera. Avoid changing any other parts of your config at this step. Note that go2rtc supports [many different stream types](https://github.com/AlexxIT/go2rtc/tree/v1.9.4#module-streams), not just rtsp.
```yaml
go2rtc:
@@ -30,7 +22,7 @@ go2rtc:
- rtsp://user:password@10.0.10.10:554/cam/realmonitor?channel=1&subtype=2
```
After adding this to the config, restart Frigate and try to watch the live stream for a single camera by clicking on it from the dashboard. It should look much clearer and more fluent than the original jsmpeg stream.
The easiest live view to get working is MSE. After adding this to the config, restart Frigate and try to watch the live stream by selecting MSE in the dropdown after clicking on the camera.
### What if my video doesn't play?
@@ -54,7 +46,7 @@ After adding this to the config, restart Frigate and try to watch the live strea
streams:
back:
- rtsp://user:password@10.0.10.10:554/cam/realmonitor?channel=1&subtype=2
- "ffmpeg:back#video=h264#hardware"
- "ffmpeg:back#video=h264"
```
- Switch to FFmpeg if needed:
@@ -66,8 +58,9 @@ After adding this to the config, restart Frigate and try to watch the live strea
- ffmpeg:rtsp://user:password@10.0.10.10:554/cam/realmonitor?channel=1&subtype=2
```
- If you can see the video but do not have audio, this is most likely because your camera's audio stream codec is not AAC.
- If possible, update your camera's audio settings to AAC in your camera's firmware.
- If you can see the video but do not have audio, this is most likely because your
camera's audio stream is not AAC.
- If possible, update your camera's audio settings to AAC.
- If your cameras do not support AAC audio, you will need to tell go2rtc to re-encode the audio to AAC on demand if you want audio. This will use additional CPU and add some latency. To add AAC audio on demand, you can update your go2rtc config as follows:
```yaml
go2rtc:
@@ -84,7 +77,7 @@ After adding this to the config, restart Frigate and try to watch the live strea
streams:
back:
- rtsp://user:password@10.0.10.10:554/cam/realmonitor?channel=1&subtype=2
- "ffmpeg:back#video=h264#audio=aac#hardware"
- "ffmpeg:back#video=h264#audio=aac"
```
When using the ffmpeg module, you would add AAC audio like this:
@@ -93,7 +86,7 @@ After adding this to the config, restart Frigate and try to watch the live strea
go2rtc:
streams:
back:
- "ffmpeg:rtsp://user:password@10.0.10.10:554/cam/realmonitor?channel=1&subtype=2#video=copy#audio=copy#audio=aac#hardware"
- "ffmpeg:rtsp://user:password@10.0.10.10:554/cam/realmonitor?channel=1&subtype=2#video=copy#audio=copy#audio=aac"
```
:::warning
@@ -109,4 +102,4 @@ section.
## Next steps
1. If the stream you added to go2rtc is also used by Frigate for the `record` or `detect` role, you can migrate your config to pull from the RTSP restream to reduce the number of connections to your camera as shown [here](/configuration/restream#reduce-connections-to-camera).
2. You may also prefer to [setup WebRTC](/configuration/live#webrtc-extra-configuration) for slightly lower latency than MSE. Note that WebRTC only supports h264 and specific audio formats and may require opening ports on your router.
1. You may also prefer to [setup WebRTC](/configuration/live#webrtc-extra-configuration) for slightly lower latency than MSE. Note that WebRTC only supports h264 and specific audio formats.

View File

@@ -294,21 +294,11 @@ cameras:
If you don't have separate streams for detect and record, you would just add the record role to the list on the first input.
:::note
If you only define one stream in your `inputs` and do not assign a `detect` role to it, Frigate will automatically assign it the `detect` role. Frigate will always decode a stream to support motion detection, Birdseye, the API image endpoints, and other features, even if you have disabled object detection with `enabled: False` in your config's `detect` section.
If you plan to use Frigate for recording only, it is still recommended to define a `detect` role for a low resolution stream to minimize resource usage from the required stream decoding.
:::
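As a hedged sketch of the single-stream case mentioned above (camera name and credentials are placeholders), both roles go on the one input:

```yaml
cameras:
  back:
    ffmpeg:
      inputs:
        # one stream carries both roles when no separate substream exists
        - path: rtsp://user:password@10.0.10.10:554/cam/realmonitor?channel=1&subtype=2
          roles:
            - detect
            - record
```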
By default, Frigate will retain video of all events for 10 days. The full set of options for recording can be found [here](../configuration/reference.md).
### Step 7: Complete config
At this point you have a complete config with basic functionality.
- View [common configuration examples](../configuration/index.md#common-configuration-examples) for a list of common configuration examples.
- View [full config reference](../configuration/reference.md) for a complete list of configuration options.
At this point you have a complete config with basic functionality. You can see the [full config reference](../configuration/reference.md) for a complete list of configuration options.
### Follow up
@@ -319,3 +309,4 @@ Now that you have a working install, you can use the following documentation for
3. [Review](../configuration/review.md)
4. [Masks](../configuration/masks.md)
5. [Home Assistant Integration](../integrations/home-assistant.md) - Integrate with Home Assistant

View File

@@ -3,38 +3,25 @@ id: reverse_proxy
title: Setting up a reverse proxy
---
This guide outlines the basic configuration steps needed to set up a reverse proxy in front of your Frigate instance.
This guide outlines the basic configuration steps needed to expose your Frigate UI to the internet.
A common way of accomplishing this is to use a reverse proxy webserver between your router and your Frigate instance.
A reverse proxy accepts HTTP requests from the public internet and redirects them transparently to internal webserver(s) on your network.
A reverse proxy is typically needed if you want to set up Frigate on a custom URL, on a subdomain, or on a host serving multiple sites. It could also be used to set up your own authentication provider or for more advanced HTTP routing.
The suggested steps are:
Before setting up a reverse proxy, check if any of the built-in functionality in Frigate suits your needs:
|Topic|Docs|
|-|-|
|TLS|Please see the `tls` [configuration option](../configuration/tls.md)|
|Authentication|Please see the [authentication](../configuration/authentication.md) documentation|
|IPv6|[Enabling IPv6](../configuration/advanced.md#enabling-ipv6)
**Note about TLS**
When using a reverse proxy, the TLS session is usually terminated at the proxy, sending the internal request over plain HTTP. If this is the desired behavior, TLS must first be disabled in Frigate, or you will encounter an HTTP 400 error: "The plain HTTP request was sent to HTTPS port."
To disable TLS, set the following in your Frigate configuration:
```yml
tls:
enabled: false
```
- **Configure** a 'proxy' HTTP webserver (such as [Apache2](https://httpd.apache.org/docs/current/) or [NPM](https://github.com/NginxProxyManager/nginx-proxy-manager)) and only expose ports 80/443 from this webserver to the internet
- **Encrypt** content from the proxy webserver by installing SSL (such as with [Let's Encrypt](https://letsencrypt.org/)). Note that SSL is then not required on your Frigate webserver as the proxy encrypts all requests for you
- **Restrict** access to your Frigate instance at the proxy using, for example, password authentication
:::warning
A reverse proxy can be used to secure access to an internal web server, but the user will be entirely reliant on the steps they have taken. You must ensure you are following security best practices.
This page does not attempt to outline the specific steps needed to secure your internal website.
A reverse proxy can be used to secure access to an internal webserver but the user will be entirely reliant
on the steps they have taken. You must ensure you are following security best practices.
This page does not attempt to outline the specific steps needed to secure your internal website.
Please use your own knowledge to assess and vet the reverse proxy software before you install anything on your system.
:::
## Proxies
There are many solutions available to implement reverse proxies and the community is invited to help document others through a contribution to this page.
* [Apache2](#apache2-reverse-proxy)
* [Nginx](#nginx-reverse-proxy)
* [Traefik](#traefik-reverse-proxy)
There are several technologies available to implement reverse proxies. This document currently suggests one, using Apache2,
and the community is invited to document others through a contribution to this page.
## Apache2 Reverse Proxy
@@ -154,26 +141,3 @@ The settings below enabled connection upgrade, sets up logging (optional) and pr
}
```
## Traefik Reverse Proxy
This example shows how to add a `label` to the Frigate Docker compose file, enabling Traefik to automatically discover your Frigate instance.
Before using the example below, you must first set up Traefik with the [Docker provider](https://doc.traefik.io/traefik/providers/docker/)
```yml
services:
frigate:
container_name: frigate
image: ghcr.io/blakeblackshear/frigate:stable
...
...
labels:
- "traefik.enable=true"
- "traefik.http.services.frigate.loadbalancer.server.port=8971"
- "traefik.http.routers.frigate.rule=Host(`traefik.example.com`)"
```
The above configuration will create a "service" in Traefik, automatically adding your container's IP on port 8971 as a backend.
It will also add a router, routing requests to "traefik.example.com" to your local container.
Note that with this approach, you don't need to expose any ports for the Frigate instance since all traffic will be routed over the internal Docker network.

View File

@@ -373,7 +373,7 @@ Metadata about previews for this time range.
Metadata about previews for this hour
### `GET /api/preview/<camera>/start/<start-timestamp>/end/<end-timestamp>/frames`
### `GET /api/preview/<camera>/start/<start-timestamp>/end/<end-timestamp>`
List of frames in the preview cache for the time range. Previews are only kept in the cache until they are combined into an mp4 at the end of the hour.
@@ -381,14 +381,6 @@ List of frames in the preview cache for the time range. Previews are only kept i
Specific preview frame from preview cache.
### `GET /review/<review_id>/preview`
Looping image made from preview video / frames during this review item.
| param | Type | Description |
| --------- | ---- | -------------------------------- |
| `format` | str | Format of preview [`gif`, `mp4`] |
### `GET /<camera>/start/<start-timestamp>/end/<end-timestamp>/preview`
Looping image made from preview video / frames during this time range.
@@ -411,37 +403,17 @@ HTTP Live Streaming Video on Demand URL for the specified event. Can be viewed i
HTTP Live Streaming Video on Demand URL for the camera with the specified time range. Can be viewed in an application like VLC.
### `GET /api/exports`
Fetch a list of all export recordings
Sample response:
```json
[
{
"camera": "doorbell",
"date": 12800057,
"id": "doorbell_pjis54",
"in_progress": false,
"name": "2024-10-04 fox visit",
"thumb_path": "/media/frigate/clips/export/doorbell_pjis54.webp",
"video_path": "/media/frigate/exports/doorbell_pjis54.mp4"
}
]
```
### `POST /api/export/<camera>/start/<start-timestamp>/end/<end-timestamp>`
Export recordings from `start-timestamp` to `end-timestamp` for `camera` as a single mp4 file. These recordings will be exported to the `/media/frigate/exports` folder.
It is also possible to export this recording as a time-lapse using the "playback" key in the json body, or specify a custom export filename, using the "name" key.
It is also possible to export this recording as a time-lapse.
**Optional Body:**
```json
{
"playback": "realtime", // playback factor: realtime or timelapse_25x
"name": "custom export name" // override the default export filename with a custom name
"playback": "realtime" // playback factor: realtime or timelapse_25x
}
```

View File

@@ -25,7 +25,7 @@ Available via HACS as a default repository. To install:
- Use [HACS](https://hacs.xyz/) to install the integration:
```
Home Assistant > HACS > Click in the Search bar and type "Frigate" > Frigate
Home Assistant > HACS > Integrations > "Explore & Add Integrations" > Frigate
```
- Restart Home Assistant.

View File

@@ -11,7 +11,7 @@ These are the MQTT messages generated by Frigate. The default topic_prefix is `f
Designed to be used as an availability topic with Home Assistant. Possible messages are:
"online": published when Frigate is running (on startup)
"offline": published after Frigate has stopped
"offline": published right before Frigate stops
### `frigate/restart`

View File

@@ -19,17 +19,17 @@ Once logged in, you can generate an API key for Frigate in Settings.
### Set your API key
In Frigate, you can use an environment variable or a docker secret named `PLUS_API_KEY` to enable the Frigate+ page. Home Assistant Addon users can set it under Settings > Addons > Frigate NVR > Configuration > Options (be sure to toggle the "Show unused optional configuration options" switch).
In Frigate, you can use an environment variable or a docker secret named `PLUS_API_KEY` to enable the `SEND TO FRIGATE+` buttons on the events page. Home Assistant Addon users can set it under Settings > Addons > Frigate NVR > Configuration > Options (be sure to toggle the "Show unused optional configuration options" switch).
:::warning
You cannot use the `environment_vars` section of your Frigate configuration file to set this environment variable. It must be defined as an environment variable in the docker config or HA addon config.
You cannot use the `environment_vars` section of your configuration file to set this environment variable.
:::
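As a hedged sketch (the key value is a placeholder), setting it through Docker Compose could look like:

```yaml
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable
    environment:
      # must be set at the container level, not via environment_vars in the config file
      - PLUS_API_KEY=your_api_key_here
```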
## Submit examples
Once your API key is configured, you can submit examples directly from the Frigate+ page.
Once your API key is configured, you can submit examples directly from the events page in Frigate using the `SEND TO FRIGATE+` button.
:::note

View File

@@ -18,7 +18,3 @@ Please use your own knowledge to assess and vet them before you install anything
[Double Take](https://github.com/skrashevich/double-take) provides a unified UI and API for processing and training images for facial recognition.
It supports automatically setting the sub labels in Frigate for person objects that are detected and recognized.
This is a fork (with fixed errors and new features) of the [original Double Take](https://github.com/jakowenko/double-take) project which, unfortunately, is no longer maintained by its author.
## [Frigate telegram](https://github.com/OldTyT/frigate-telegram)
[Frigate telegram](https://github.com/OldTyT/frigate-telegram) makes it possible to send events from Frigate to Telegram. Events are sent as a message with a text description, video, and thumbnail.

View File

@@ -5,7 +5,7 @@ title: Requesting your first model
## Step 1: Upload and annotate your images
Before requesting your first model, you will need to upload and verify at least 10 images to Frigate+. The more images you upload, annotate, and verify, the better your results will be. Most users start to see very good results once they have at least 100 verified images per camera. Keep in mind that varying conditions should be included. You will want images from cloudy days, sunny days, dawn, dusk, and night. Refer to the [integration docs](../integrations/plus.md#generate-an-api-key) for instructions on how to easily submit images to Frigate+ directly from Frigate.
Before requesting your first model, you will need to upload at least 10 images to Frigate+. But for the best results, you should provide at least 100 verified images per camera. Keep in mind that varying conditions should be included. You will want images from cloudy days, sunny days, dawn, dusk, and night. Refer to the [integration docs](../integrations/plus.md#generate-an-api-key) for instructions on how to easily submit images to Frigate+ directly from Frigate.
It is recommended to submit **both** true positives and false positives. This will help the model differentiate between what is and isn't correct. You should aim for a target of 80% true positive submissions and 20% false positives across all of your images. If you are experiencing false positives in a specific area, submitting true positives for any object type near that area in similar lighting conditions will help teach the model what that area looks like when no objects are present.
@@ -13,7 +13,7 @@ For more detailed recommendations, you can refer to the docs on [improving your
## Step 2: Submit a model request
Once you have an initial set of verified images, you can request a model on the Models page. For guidance on choosing a model type, refer to [this part of the documentation](./index.md#available-model-types). Each model request requires 1 of the 12 trainings that you receive with your annual subscription. This model will support all [label types available](./index.md#available-label-types) even if you do not submit any examples for those labels. Model creation can take up to 36 hours.
Once you have an initial set of verified images, you can request a model on the Models page. Each model request requires 1 of the 12 trainings that you receive with your annual subscription. This model will support all [label types available](./index.md#available-label-types) even if you do not submit any examples for those labels. Model creation can take up to 36 hours.
![Plus Models Page](/img/plus/plus-models.jpg)
## Step 3: Set your model id in the config
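A hedged sketch, assuming the `plus://` model path convention with a placeholder model id:

```yaml
model:
  # hypothetical id; replace with the model id shown on the Frigate+ Models page
  path: plus://<your_model_id>
```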

View File

@@ -3,7 +3,7 @@ id: improving_model
title: Improving your model
---
You may find that Frigate+ models result in more false positives initially, but by submitting true and false positives, the model will improve. With all the new images now being submitted by subscribers, future base models will improve as more and more examples are incorporated. Note that only images with at least one verified label will be used when training your model. Submitting an image from Frigate as a true or false positive will not verify the image. You still must verify the image in Frigate+ in order for it to be used in training.
You may find that Frigate+ models result in more false positives initially, but by submitting true and false positives, the model will improve. Because a limited number of users submitted images to Frigate+ prior to this launch, you may need to submit several hundred images per camera to see good results. With all the new images now being submitted, future base models will improve as more and more users (including you) submit examples to Frigate+. Note that only verified images will be used when training your model. Submitting an image from Frigate as a true or false positive will not verify the image. You still must verify the image in Frigate+ in order for it to be used in training.
- **Submit both true positives and false positives**. This will help the model differentiate between what is and isn't correct. You should aim for a target of 80% true positive submissions and 20% false positives across all of your images. If you are experiencing false positives in a specific area, submitting true positives for any object type near that area in similar lighting conditions will help teach the model what that area looks like when no objects are present.
- **Lower your thresholds a little in order to generate more false/true positives near the threshold value**. For example, if you have some false positives that are scoring at 68% and some true positives scoring at 72%, you can try lowering your threshold to 65% and submitting both true and false positives within that range. This will help the model learn and widen the gap between true and false positive scores.
@@ -13,7 +13,7 @@ You may find that Frigate+ models result in more false positives initially, but
For the best results, follow the following guidelines.
**Label every object in the image**: It is important that you label all objects in each image before verifying. If you don't label a car for example, the model will be taught that part of the image is _not_ a car and it will start to get confused. You can exclude labels that you don't want detected on any of your cameras.
**Label every object in the image**: It is important that you label all objects in each image before verifying. If you don't label a car for example, the model will be taught that part of the image is _not_ a car and it will start to get confused.
**Make tight bounding boxes**: Tighter bounding boxes improve the recognition and ensure that accurate bounding boxes are predicted at runtime.
@@ -21,7 +21,7 @@ For the best results, follow the following guidelines.
**Label objects hard to identify as difficult**: When objects are truly difficult to make out, such as a car barely visible through a bush, or a dog that is hard to distinguish from the background at night, flag it as 'difficult'. This is not used in the model training as of now, but will in the future.
**Delivery logos such as `amazon`, `ups`, and `fedex` should label the logo**: For a Fedex truck, label the truck as a `car` and make a different bounding box just for the Fedex logo. If there are multiple logos, label each of them.
**`amazon`, `ups`, and `fedex` should label the logo**: For a Fedex truck, label the truck as a `car` and make a different bounding box just for the Fedex logo. If there are multiple logos, label each of them.
![Fedex Logo](/img/plus/fedex-logo.jpg)
@@ -36,17 +36,18 @@ Misidentified objects should have a correct label added. For example, if a perso
## Shortcuts for a faster workflow
| Shortcut Key | Description |
| ----------------- | ----------------------------- |
| `?` | Show all keyboard shortcuts |
| `w` | Add box |
| `d` | Toggle difficult |
| `s` | Switch to the next label |
| `tab` | Select next largest box |
| `del` | Delete current box |
| `esc` | Deselect/Cancel |
| `← ↑ → ↓` | Move box |
| `Shift + ← ↑ → ↓` | Resize box |
| `scrollwheel` | Zoom in/out |
| `f` | Hide/show all but current box |
| `spacebar` | Verify and save |
|Shortcut Key|Description|
|-----|--------|
|`?`|Show all keyboard shortcuts|
|`w`|Add box|
|`d`|Toggle difficult|
|`s`|Switch to the next label|
|`tab`|Select next largest box|
|`del`|Delete current box|
|`esc`|Deselect/Cancel|
|`← ↑ → ↓`|Move box|
|`Shift + ← ↑ → ↓`|Resize box|
|`-`|Zoom out|
|`=`|Zoom in|
|`f`|Hide/show all but current box|
|`spacebar`|Verify and save|

View File

@@ -15,52 +15,25 @@ With a subscription, 12 model trainings per year are included. If you cancel you
Information on how to integrate Frigate+ with Frigate can be found in the [integration docs](../integrations/plus.md).
## Available model types
There are two model types offered in Frigate+, `mobiledet` and `yolonas`. Both of these models are object detection models and are trained to detect the same set of labels [listed below](#available-label-types).
Not all model types are supported by all detectors, so it's important to choose a model type to match your detector as shown in the table under [supported detector types](#supported-detector-types).
| Model Type | Description |
| ----------- | -------------------------------------------------------------------------------------------------------------------------------------------- |
| `mobiledet` | Based on the same architecture as the default model included with Frigate. Runs on Google Coral devices and CPUs. |
| `yolonas` | A newer architecture that offers slightly higher accuracy and improved detection of small objects. Runs on Intel, NVidia GPUs, and AMD GPUs. |
## Supported detector types
Currently, Frigate+ models support CPU (`cpu`), Google Coral (`edgetpu`), OpenVino (`openvino`), ONNX (`onnx`), and ROCm (`rocm`) detectors.
:::warning
Using Frigate+ models with `onnx` and `rocm` is only available with Frigate 0.15, which is still under development.
Frigate+ models are not supported for TensorRT or OpenVino yet.
:::
| Hardware | Recommended Detector Type | Recommended Model Type |
| ---------------------------------------------------------------------------------------------------------------------------- | ------------------------- | ---------------------- |
| [CPU](/configuration/object_detectors.md#cpu-detector-not-recommended) | `cpu` | `mobiledet` |
| [Coral (all form factors)](/configuration/object_detectors.md#edge-tpu-detector) | `edgetpu` | `mobiledet` |
| [Intel](/configuration/object_detectors.md#openvino-detector) | `openvino` | `yolonas` |
| [NVidia GPU](https://deploy-preview-13787--frigate-docs.netlify.app/configuration/object_detectors#onnx)\* | `onnx` | `yolonas` |
| [AMD ROCm GPU](https://deploy-preview-13787--frigate-docs.netlify.app/configuration/object_detectors#amdrocm-gpu-detector)\* | `rocm` | `yolonas` |
Currently, Frigate+ models only support CPU (`cpu`) and Coral (`edgetpu`) models. OpenVino is next in line to gain support.
_\* Requires Frigate 0.15_
The models are created using the same MobileDet architecture as the default model. Additional architectures will be added in future releases as needed.
## Available label types
Frigate+ models support a more relevant set of objects for security cameras. Currently, the following objects are supported:
- **People**: `person`, `face`
- **Vehicles**: `car`, `motorcycle`, `bicycle`, `boat`, `license_plate`
- **Delivery Logos**: `amazon`, `usps`, `ups`, `fedex`, `dhl`, `an_post`, `purolator`, `postnl`, `nzpost`, `postnord`, `gls`, `dpd`
- **Animals**: `dog`, `cat`, `deer`, `horse`, `bird`, `raccoon`, `fox`, `bear`, `cow`, `squirrel`, `goat`, `rabbit`
- **Other**: `package`, `waste_bin`, `bbq_grill`, `robot_lawnmower`, `umbrella`
Other object types available in the default Frigate model are not available. Additional object types will be added in future releases.
Frigate+ models support a more relevant set of objects for security cameras. Currently, only the following objects are supported: `person`, `face`, `car`, `license_plate`, `amazon`, `ups`, `fedex`, `package`, `dog`, `cat`, `deer`. Other object types available in the default Frigate model are not available. Additional object types will be added in future releases.
### Label attributes
Frigate has special handling for some labels when using Frigate+ models. `face`, `license_plate`, and delivery logos such as `amazon`, `ups`, and `fedex` are considered attribute labels which are not tracked like regular objects and do not generate events. In addition, the `threshold` filter will have no effect on these labels. You should adjust the `min_score` and other filter values as needed.
Frigate has special handling for some labels when using Frigate+ models. `face`, `license_plate`, `amazon`, `ups`, and `fedex` are considered attribute labels which are not tracked like regular objects and do not generate events. In addition, the `threshold` filter will have no effect on these labels. You should adjust the `min_score` and other filter values as needed.
In order to have Frigate start using these attribute labels, you will need to add them to the list of objects to track:
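A hedged sketch of the tracked objects list with attribute labels added (the label selection here is illustrative):

```yaml
objects:
  track:
    - person
    - car
    # attribute labels used for sub labels and snapshot selection
    - face
    - license_plate
    - amazon
    - ups
    - fedex
```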
@@ -83,6 +56,6 @@ When using Frigate+ models, Frigate will choose the snapshot of a person object
![Face Attribute](/img/plus/attribute-example-face.jpg)
Delivery logos such as `amazon`, `ups`, and `fedex` labels are used to automatically assign a sub label to car objects.
`amazon`, `ups`, and `fedex` labels are used to automatically assign a sub label to car objects.
![Fedex Attribute](/img/plus/attribute-example-fedex.jpg)

View File

@@ -28,18 +28,6 @@ The USB coral has different IDs when it is uninitialized and initialized.
- When running Frigate in a VM, Proxmox lxc, etc. you must ensure both device IDs are mapped.
- When running HA OS you may need to run the Full Access version of the Frigate addon with the `Protected Mode` switch disabled so that the coral can be accessed.
### Synology 716+II running DSM 7.2.1-69057 Update 5
Some users have reported that this older device runs an older kernel causing issues with the coral not being detected. The following steps allowed it to be detected correctly:
1. Plug in the coral TPU in any of the USB ports on the NAS
2. Open the control panel - info screen. The coral TPU would be shown as a generic device.
3. Start the docker container with Coral TPU enabled in the config
4. The TPU would be detected but a few moments later it would disconnect.
5. While leaving the TPU device plugged in, restart the NAS using the reboot command in the UI. Do NOT unplug the NAS/power it off etc.
6. Open the control panel - info screen. The coral TPU will now be recognised as a USB Device - Google Inc
7. Start the frigate container. Everything should work now!
## USB Coral Detection Appears to be Stuck
The USB Coral can become stuck and need to be restarted; this can happen for a number of reasons depending on the hardware and software setup. Some common reasons are:
@@ -49,21 +37,7 @@ The USB Coral can become stuck and need to be restarted, this can happen for a n
## PCIe Coral Not Detected
The most common reason for the PCIe Coral not being detected is that the driver has not been installed. This process varies based on the OS and kernel being run.
- In most cases [the Coral docs](https://coral.ai/docs/m2/get-started/#2-install-the-pcie-driver-and-edge-tpu-runtime) show how to install the driver for the PCIe based Coral.
- For Ubuntu 22.04+ https://github.com/jnicolson/gasket-builder can be used to build and install the latest version of the driver.
### Not detected on Raspberry Pi5
A kernel update to the RPi5 means an update to config.txt is required; see [the raspberry pi forum for more info](https://forums.raspberrypi.com/viewtopic.php?t=363682&sid=cb59b026a412f0dc041595951273a9ca&start=25).
Specifically, add the following to config.txt
```
dtoverlay=pciex1-compat-pi5,no-mip
dtoverlay=pcie-32bit-dma-pi5
```
The most common reason for the PCIe coral not being detected is that the driver has not been installed. See [the coral docs](https://coral.ai/docs/m2/get-started/#2-install-the-pcie-driver-and-edge-tpu-runtime) for how to install the driver for the PCIe based coral.
## Only One PCIe Coral Is Detected With Coral Dual EdgeTPU

View File

@@ -29,6 +29,10 @@ module.exports = {
"configuration/object_detectors",
"configuration/audio_detectors",
],
"Semantic Search": [
"configuration/semantic_search",
"configuration/genai",
],
Cameras: [
"configuration/cameras",
"configuration/review",
@@ -50,9 +54,9 @@ module.exports = {
],
"Extra Configuration": [
"configuration/authentication",
"configuration/notifications",
"configuration/hardware_acceleration",
"configuration/ffmpeg_presets",
"configuration/pwa",
"configuration/tls",
"configuration/advanced",
],


View File

@@ -7,6 +7,7 @@ import os
import traceback
from datetime import datetime, timedelta
from functools import reduce
from typing import Optional
import requests
from flask import Blueprint, Flask, current_app, jsonify, make_response, request
@@ -19,10 +20,12 @@ from frigate.api.auth import AuthBp, get_jwt_secret, limiter
from frigate.api.event import EventBp
from frigate.api.export import ExportBp
from frigate.api.media import MediaBp
from frigate.api.notification import NotificationBp
from frigate.api.preview import PreviewBp
from frigate.api.review import ReviewBp
from frigate.config import FrigateConfig
from frigate.const import CONFIG_DIR
from frigate.embeddings import EmbeddingsContext
from frigate.events.external import ExternalEventProcessor
from frigate.models import Event, Timeline
from frigate.plus import PlusApi
@@ -47,11 +50,13 @@ bp.register_blueprint(MediaBp)
bp.register_blueprint(PreviewBp)
bp.register_blueprint(ReviewBp)
bp.register_blueprint(AuthBp)
bp.register_blueprint(NotificationBp)
def create_app(
frigate_config,
database: SqliteQueueDatabase,
embeddings: Optional[EmbeddingsContext],
detected_frames_processor,
storage_maintainer: StorageMaintainer,
onvif: OnvifController,
@@ -79,6 +84,7 @@ def create_app(
database.close()
app.frigate_config = frigate_config
app.embeddings = embeddings
app.detected_frames_processor = detected_frames_processor
app.storage_maintainer = storage_maintainer
app.onvif = onvif
@@ -450,10 +456,24 @@ def vainfo():
@bp.route("/logs/<service>", methods=["GET"])
def logs(service: str):
def download_logs(service_location: str):
try:
file = open(service_location, "r")
contents = file.read()
file.close()
return jsonify(contents)
except FileNotFoundError as e:
logger.error(e)
return make_response(
jsonify({"success": False, "message": "Could not find log file"}),
500,
)
log_locations = {
"frigate": "/dev/shm/logs/frigate/current",
"go2rtc": "/dev/shm/logs/go2rtc/current",
"nginx": "/dev/shm/logs/nginx/current",
"chroma": "/dev/shm/logs/chroma/current",
}
service_location = log_locations.get(service)
@@ -463,6 +483,9 @@ def logs(service: str):
404,
)
if request.args.get("download", type=bool, default=False):
return download_logs(service_location)
start = request.args.get("start", type=int, default=0)
end = request.args.get("end", type=int)

View File

@@ -1,5 +1,7 @@
"""Event apis."""
import base64
import io
import logging
import os
from datetime import datetime
@@ -8,6 +10,7 @@ from pathlib import Path
from urllib.parse import unquote
import cv2
import numpy as np
from flask import (
Blueprint,
current_app,
@@ -15,13 +18,16 @@ from flask import (
make_response,
request,
)
from peewee import DoesNotExist, fn, operator
from peewee import JOIN, DoesNotExist, fn, operator
from PIL import Image
from playhouse.shortcuts import model_to_dict
from frigate.const import (
CLIPS_DIR,
)
from frigate.models import Event, Timeline
from frigate.embeddings import EmbeddingsContext
from frigate.embeddings.embeddings import get_metadata
from frigate.models import Event, ReviewSegment, Timeline
from frigate.object_processing import TrackedObject
from frigate.util.builtin import get_tz_modifiers
@@ -245,6 +251,209 @@ def events():
return jsonify(list(events))
@EventBp.route("/event_ids")
def event_ids():
idString = request.args.get("ids")
ids = idString.split(",")
if not ids:
return make_response(
jsonify({"success": False, "message": "Valid list of ids must be sent"}),
400,
)
try:
events = Event.select().where(Event.id << ids).dicts().iterator()
return jsonify(list(events))
except Exception:
return make_response(
jsonify({"success": False, "message": "Events not found"}), 400
)
@EventBp.route("/events/search")
def events_search():
query = request.args.get("query", type=str)
search_type = request.args.get("search_type", "text", type=str)
include_thumbnails = request.args.get("include_thumbnails", default=1, type=int)
limit = request.args.get("limit", 50, type=int)
# Filters
cameras = request.args.get("cameras", "all", type=str)
labels = request.args.get("labels", "all", type=str)
zones = request.args.get("zones", "all", type=str)
after = request.args.get("after", type=float)
before = request.args.get("before", type=float)
if not query:
return make_response(
jsonify(
{
"success": False,
"message": "A search query must be supplied",
}
),
400,
)
if not current_app.frigate_config.semantic_search.enabled:
return make_response(
jsonify(
{
"success": False,
"message": "Semantic search is not enabled",
}
),
400,
)
context: EmbeddingsContext = current_app.embeddings
selected_columns = [
Event.id,
Event.camera,
Event.label,
Event.sub_label,
Event.zones,
Event.start_time,
Event.end_time,
Event.data,
ReviewSegment.thumb_path,
]
if include_thumbnails:
selected_columns.append(Event.thumbnail)
# Build the where clause for the embeddings query
embeddings_filters = []
if cameras != "all":
camera_list = cameras.split(",")
embeddings_filters.append({"camera": {"$in": camera_list}})
if labels != "all":
label_list = labels.split(",")
embeddings_filters.append({"label": {"$in": label_list}})
if zones != "all":
filtered_zones = zones.split(",")
zone_filters = [{f"zones_{zone}": {"$eq": True}} for zone in filtered_zones]
if len(zone_filters) > 1:
embeddings_filters.append({"$or": zone_filters})
else:
embeddings_filters.append(zone_filters[0])
if after:
embeddings_filters.append({"start_time": {"$gt": after}})
if before:
embeddings_filters.append({"start_time": {"$lt": before}})
where = None
if len(embeddings_filters) > 1:
where = {"$and": embeddings_filters}
elif len(embeddings_filters) == 1:
where = embeddings_filters[0]
thumb_ids = {}
desc_ids = {}
if search_type == "thumbnail":
# Grab the ids of events that match the thumbnail image embeddings
try:
search_event: Event = Event.get(Event.id == query)
except DoesNotExist:
return make_response(
jsonify(
{
"success": False,
"message": "Event not found",
}
),
404,
)
thumbnail = base64.b64decode(search_event.thumbnail)
img = np.array(Image.open(io.BytesIO(thumbnail)).convert("RGB"))
thumb_result = context.embeddings.thumbnail.query(
query_images=[img],
n_results=limit,
where=where,
)
thumb_ids = dict(zip(thumb_result["ids"][0], thumb_result["distances"][0]))
else:
thumb_result = context.embeddings.thumbnail.query(
query_texts=[query],
n_results=limit,
where=where,
)
# Do a rudimentary normalization of the difference in distances returned by CLIP and MiniLM.
thumb_ids = dict(
zip(
thumb_result["ids"][0],
context.thumb_stats.normalize(thumb_result["distances"][0]),
)
)
desc_result = context.embeddings.description.query(
query_texts=[query],
n_results=limit,
where=where,
)
desc_ids = dict(
zip(
desc_result["ids"][0],
context.desc_stats.normalize(desc_result["distances"][0]),
)
)
results = {}
for event_id in thumb_ids.keys() | desc_ids:
min_distance = min(
i
for i in (thumb_ids.get(event_id), desc_ids.get(event_id))
if i is not None
)
results[event_id] = {
"distance": min_distance,
"source": "thumbnail"
if min_distance == thumb_ids.get(event_id)
else "description",
}
if not results:
return jsonify([])
# Get the event data
events = (
Event.select(*selected_columns)
.join(
ReviewSegment,
JOIN.LEFT_OUTER,
on=(fn.json_extract(ReviewSegment.data, "$.detections").contains(Event.id)),
)
.where(Event.id << list(results.keys()))
.dicts()
.iterator()
)
events = list(events)
events = [
{k: v for k, v in event.items() if k != "data"}
| {
k: v
for k, v in event["data"].items()
if k in ["type", "score", "top_score", "description"]
}
| {
"search_distance": results[event["id"]]["distance"],
"search_source": results[event["id"]]["source"],
}
for event in events
]
events = sorted(events, key=lambda x: x["search_distance"])[:limit]
return jsonify(events)
@EventBp.route("/events/summary")
def events_summary():
tz_name = request.args.get("timezone", default="utc", type=str)
@@ -604,6 +813,52 @@ def set_sub_label(id):
)
@EventBp.route("/events/<id>/description", methods=("POST",))
def set_description(id):
try:
event: Event = Event.get(Event.id == id)
except DoesNotExist:
return make_response(
jsonify({"success": False, "message": "Event " + id + " not found"}), 404
)
json: dict[str, any] = request.get_json(silent=True) or {}
new_description = json.get("description")
if new_description is None or len(new_description) == 0:
return make_response(
jsonify(
{
"success": False,
"message": "description cannot be empty",
}
),
400,
)
event.data["description"] = new_description
event.save()
# If semantic search is enabled, update the index
if current_app.frigate_config.semantic_search.enabled:
context: EmbeddingsContext = current_app.embeddings
context.embeddings.description.upsert(
documents=[new_description],
metadatas=[get_metadata(event)],
ids=[id],
)
return make_response(
jsonify(
{
"success": True,
"message": "Event " + id + " description set to " + new_description,
}
),
200,
)
@EventBp.route("/events/<id>", methods=("DELETE",))
def delete_event(id):
try:
@@ -625,6 +880,11 @@ def delete_event(id):
event.delete_instance()
Timeline.delete().where(Timeline.source_id == id).execute()
# If semantic search is enabled, update the index
if current_app.frigate_config.semantic_search.enabled:
context: EmbeddingsContext = current_app.embeddings
context.embeddings.thumbnail.delete(ids=[id])
context.embeddings.description.delete(ids=[id])
return make_response(
jsonify({"success": True, "message": "Event " + id + " deleted"}), 200
)

View File

@@ -55,6 +55,8 @@ def export_recording(camera_name: str, start_time, end_time):
401,
)
existing_image = json.get("image_path")
recordings_count = (
Recordings.select()
.where(
@@ -78,6 +80,7 @@ def export_recording(camera_name: str, start_time, end_time):
current_app.frigate_config,
camera_name,
friendly_name,
existing_image,
int(start_time),
int(end_time),
(

View File

@@ -0,0 +1,65 @@
"""Notification apis."""
import logging
import os
from cryptography.hazmat.primitives import serialization
from flask import (
Blueprint,
current_app,
jsonify,
make_response,
request,
)
from peewee import DoesNotExist
from py_vapid import Vapid01, utils
from frigate.const import CONFIG_DIR
from frigate.models import User
logger = logging.getLogger(__name__)
NotificationBp = Blueprint("notifications", __name__)
@NotificationBp.route("/notifications/pubkey", methods=["GET"])
def get_vapid_pub_key():
if not current_app.frigate_config.notifications.enabled:
return make_response(
jsonify({"success": False, "message": "Notifications are not enabled."}),
400,
)
key = Vapid01.from_file(os.path.join(CONFIG_DIR, "notifications.pem"))
raw_pub = key.public_key.public_bytes(
serialization.Encoding.X962, serialization.PublicFormat.UncompressedPoint
)
return jsonify(utils.b64urlencode(raw_pub)), 200
@NotificationBp.route("/notifications/register", methods=["POST"])
def register_notifications():
if current_app.frigate_config.auth.enabled:
username = request.headers.get("remote-user", type=str) or "admin"
else:
username = "admin"
json: dict[str, any] = request.get_json(silent=True) or {}
sub = json.get("sub")
if not sub:
return jsonify(
{"success": False, "message": "Subscription must be provided."}
), 400
try:
User.update(notification_tokens=User.notification_tokens.append(sub)).where(
User.username == username
).execute()
return make_response(
jsonify({"success": True, "message": "Successfully saved token."}), 200
)
except DoesNotExist:
return make_response(
jsonify({"success": False, "message": "Could not find user."}), 404
)

View File

@@ -22,11 +22,12 @@ from pydantic import ValidationError
from frigate.api.app import create_app
from frigate.api.auth import hash_password
from frigate.comms.config_updater import ConfigPublisher
from frigate.comms.detections_updater import DetectionProxy
from frigate.comms.dispatcher import Communicator, Dispatcher
from frigate.comms.inter_process import InterProcessCommunicator
from frigate.comms.mqtt import MqttClient
from frigate.comms.webpush import WebPushClient
from frigate.comms.ws import WebSocketClient
from frigate.comms.zmq_proxy import ZmqProxy
from frigate.config import FrigateConfig
from frigate.const import (
CACHE_DIR,
@@ -37,6 +38,7 @@ from frigate.const import (
MODEL_CACHE_DIR,
RECORD_DIR,
)
from frigate.embeddings import EmbeddingsContext, manage_embeddings
from frigate.events.audio import listen_to_audio
from frigate.events.cleanup import EventCleanup
from frigate.events.external import ExternalEventProcessor
@@ -316,7 +318,25 @@ class FrigateApp:
self.review_segment_process = review_segment_process
review_segment_process.start()
self.processes["review_segment"] = review_segment_process.pid or 0
logger.info(f"Recording process started: {review_segment_process.pid}")
logger.info(f"Review process started: {review_segment_process.pid}")
def init_embeddings_manager(self) -> None:
if not self.config.semantic_search.enabled:
self.embeddings = None
return
# Create a client for other processes to use
self.embeddings = EmbeddingsContext()
embedding_process = mp.Process(
target=manage_embeddings,
name="embeddings_manager",
args=(self.config,),
)
embedding_process.daemon = True
self.embedding_process = embedding_process
embedding_process.start()
self.processes["embeddings"] = embedding_process.pid or 0
logger.info(f"Embedding process started: {embedding_process.pid}")
def bind_database(self) -> None:
"""Bind db to the main process."""
@@ -362,12 +382,13 @@ class FrigateApp:
def init_inter_process_communicator(self) -> None:
self.inter_process_communicator = InterProcessCommunicator()
self.inter_config_updater = ConfigPublisher()
self.inter_detection_proxy = DetectionProxy()
self.inter_zmq_proxy = ZmqProxy()
def init_web_server(self) -> None:
self.flask_app = create_app(
self.config,
self.db,
self.embeddings,
self.detected_frames_processor,
self.storage_maintainer,
self.onvif_controller,
@@ -385,6 +406,9 @@ class FrigateApp:
if self.config.mqtt.enabled:
comms.append(MqttClient(self.config))
if self.config.notifications.enabled:
comms.append(WebPushClient(self.config))
comms.append(WebSocketClient(self.config))
comms.append(self.inter_process_communicator)
@@ -678,6 +702,7 @@ class FrigateApp:
self.init_onvif()
self.init_recording_manager()
self.init_review_segment_manager()
self.init_embeddings_manager()
self.init_go2rtc()
self.bind_database()
self.check_db_data_migrations()
@@ -794,10 +819,14 @@ class FrigateApp:
self.frigate_watchdog.join()
self.db.stop()
# Save embeddings stats to disk
if self.embeddings:
self.embeddings.save_stats()
# Stop Communicators
self.inter_process_communicator.stop()
self.inter_config_updater.stop()
self.inter_detection_proxy.stop()
self.inter_zmq_proxy.stop()
while len(self.detection_shms) > 0:
shm = self.detection_shms.pop()

View File

@@ -1,14 +1,9 @@
"""Facilitates communication between processes."""
import threading
from enum import Enum
from typing import Optional
import zmq
SOCKET_CONTROL = "inproc://control.detections_updater"
SOCKET_PUB = "ipc:///tmp/cache/detect_pub"
SOCKET_SUB = "ipc:///tmp/cache/detect_sub"
from .zmq_proxy import Publisher, Subscriber
class DetectionTypeEnum(str, Enum):
@@ -18,85 +13,31 @@ class DetectionTypeEnum(str, Enum):
audio = "audio"
class DetectionProxyRunner(threading.Thread):
def __init__(self, context: zmq.Context[zmq.Socket]) -> None:
threading.Thread.__init__(self)
self.name = "detection_proxy"
self.context = context
def run(self) -> None:
"""Run the proxy."""
control = self.context.socket(zmq.REP)
control.connect(SOCKET_CONTROL)
incoming = self.context.socket(zmq.XSUB)
incoming.bind(SOCKET_PUB)
outgoing = self.context.socket(zmq.XPUB)
outgoing.bind(SOCKET_SUB)
zmq.proxy_steerable(
incoming, outgoing, None, control
) # blocking, will unblock when a terminate message is received
incoming.close()
outgoing.close()
class DetectionProxy:
"""Proxies video and audio detections."""
def __init__(self) -> None:
self.context = zmq.Context()
self.control = self.context.socket(zmq.REQ)
self.control.bind(SOCKET_CONTROL)
self.runner = DetectionProxyRunner(self.context)
self.runner.start()
def stop(self) -> None:
self.control.send("TERMINATE".encode()) # tell the proxy to stop
self.runner.join()
self.context.destroy()
class DetectionPublisher:
class DetectionPublisher(Publisher):
"""Simplifies receiving video and audio detections."""
topic_base = "detection/"
def __init__(self, topic: DetectionTypeEnum) -> None:
self.topic = topic
self.context = zmq.Context()
self.socket = self.context.socket(zmq.PUB)
self.socket.connect(SOCKET_PUB)
def send_data(self, payload: any) -> None:
"""Publish detection."""
self.socket.send_string(self.topic.value, flags=zmq.SNDMORE)
self.socket.send_json(payload)
def stop(self) -> None:
self.socket.close()
self.context.destroy()
topic = topic.value
super().__init__(topic)
class DetectionSubscriber:
class DetectionSubscriber(Subscriber):
"""Simplifies receiving video and audio detections."""
topic_base = "detection/"
def __init__(self, topic: DetectionTypeEnum) -> None:
self.context = zmq.Context()
self.socket = self.context.socket(zmq.SUB)
self.socket.setsockopt_string(zmq.SUBSCRIBE, topic.value)
self.socket.connect(SOCKET_SUB)
topic = topic.value
super().__init__(topic)
def get_data(self, timeout: float = None) -> Optional[tuple[str, any]]:
"""Returns detections or None if no update."""
try:
has_update, _, _ = zmq.select([self.socket], [], [], timeout)
def check_for_update(
self, timeout: float = None
) -> Optional[tuple[DetectionTypeEnum, any]]:
return super().check_for_update(timeout)
if has_update:
topic = DetectionTypeEnum[self.socket.recv_string(flags=zmq.NOBLOCK)]
return (topic, self.socket.recv_json())
except zmq.ZMQError:
pass
return (None, None)
def stop(self) -> None:
self.socket.close()
self.context.destroy()
def _return_object(self, topic: str, payload: any) -> any:
if payload is None:
return (None, None)
return (DetectionTypeEnum[topic[len(self.topic_base) :]], payload)
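With this change, call sites later in the diff move from send_data/get_data to publish/check_for_update. A minimal usage sketch, assuming the ZmqProxy is already running in the main process and /tmp/cache exists (as it does inside the Frigate container):

from frigate.comms.detections_updater import (
    DetectionPublisher,
    DetectionSubscriber,
    DetectionTypeEnum,
)

publisher = DetectionPublisher(DetectionTypeEnum.audio)
subscriber = DetectionSubscriber(DetectionTypeEnum.audio)

# payload shape is illustrative; it only needs to be JSON-serializable
publisher.publish(("front_door", 1724900000.0, ["speech"]))

# returns (None, None) if nothing has arrived yet (e.g. ZMQ slow-joiner on the first message)
topic, data = subscriber.check_for_update(timeout=1)
if topic is not None:
    print(topic, data)

publisher.stop()
subscriber.stop()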

View File

@@ -14,9 +14,10 @@ from frigate.const import (
INSERT_PREVIEW,
REQUEST_REGION_GRID,
UPDATE_CAMERA_ACTIVITY,
UPDATE_EVENT_DESCRIPTION,
UPSERT_REVIEW_SEGMENT,
)
from frigate.models import Previews, Recordings, ReviewSegment
from frigate.models import Event, Previews, Recordings, ReviewSegment
from frigate.ptz.onvif import OnvifCommandEnum, OnvifController
from frigate.types import PTZMetricsTypes
from frigate.util.object import get_camera_regions_grid
@@ -128,6 +129,10 @@ class Dispatcher:
).execute()
elif topic == UPDATE_CAMERA_ACTIVITY:
self.camera_activity = payload
elif topic == UPDATE_EVENT_DESCRIPTION:
event: Event = Event.get(Event.id == payload["id"])
event.data["description"] = payload["description"]
event.save()
elif topic == "onConnect":
camera_status = self.camera_activity.copy()

View File

@@ -1,100 +1,51 @@
"""Facilitates communication between processes."""
import zmq
from frigate.events.types import EventStateEnum, EventTypeEnum
SOCKET_PUSH_PULL = "ipc:///tmp/cache/events"
SOCKET_PUSH_PULL_END = "ipc:///tmp/cache/events_ended"
from .zmq_proxy import Publisher, Subscriber
class EventUpdatePublisher:
class EventUpdatePublisher(Publisher):
"""Publishes events (objects, audio, manual)."""
topic_base = "event/"
def __init__(self) -> None:
self.context = zmq.Context()
self.socket = self.context.socket(zmq.PUSH)
self.socket.connect(SOCKET_PUSH_PULL)
super().__init__("update")
def publish(
self, payload: tuple[EventTypeEnum, EventStateEnum, str, dict[str, any]]
) -> None:
"""There is no communication back to the processes."""
self.socket.send_json(payload)
def stop(self) -> None:
self.socket.close()
self.context.destroy()
super().publish(payload)
class EventUpdateSubscriber:
class EventUpdateSubscriber(Subscriber):
"""Receives event updates."""
topic_base = "event/"
def __init__(self) -> None:
self.context = zmq.Context()
self.socket = self.context.socket(zmq.PULL)
self.socket.bind(SOCKET_PUSH_PULL)
def check_for_update(
self, timeout=1
) -> tuple[EventTypeEnum, EventStateEnum, str, dict[str, any]]:
"""Returns events or None if no update."""
try:
has_update, _, _ = zmq.select([self.socket], [], [], timeout)
if has_update:
return self.socket.recv_json()
except zmq.ZMQError:
pass
return None
def stop(self) -> None:
self.socket.close()
self.context.destroy()
super().__init__("update")
class EventEndPublisher:
class EventEndPublisher(Publisher):
"""Publishes events that have ended."""
topic_base = "event/"
def __init__(self) -> None:
self.context = zmq.Context()
self.socket = self.context.socket(zmq.PUSH)
self.socket.connect(SOCKET_PUSH_PULL_END)
super().__init__("finalized")
def publish(
self, payload: tuple[EventTypeEnum, EventStateEnum, str, dict[str, any]]
) -> None:
"""There is no communication back to the processes."""
self.socket.send_json(payload)
def stop(self) -> None:
self.socket.close()
self.context.destroy()
super().publish(payload)
class EventEndSubscriber:
class EventEndSubscriber(Subscriber):
"""Receives events that have ended."""
topic_base = "event/"
def __init__(self) -> None:
self.context = zmq.Context()
self.socket = self.context.socket(zmq.PULL)
self.socket.bind(SOCKET_PUSH_PULL_END)
def check_for_update(
self, timeout=1
) -> tuple[EventTypeEnum, EventStateEnum, str, dict[str, any]]:
"""Returns events ended or None if no update."""
try:
has_update, _, _ = zmq.select([self.socket], [], [], timeout)
if has_update:
return self.socket.recv_json()
except zmq.ZMQError:
pass
return None
def stop(self) -> None:
self.socket.close()
self.context.destroy()
super().__init__("finalized")

frigate/comms/webpush.py
View File

@@ -0,0 +1,190 @@
"""Handle sending notifications for Frigate via Firebase."""
import datetime
import json
import logging
import os
from typing import Any, Callable
from py_vapid import Vapid01
from pywebpush import WebPusher
from frigate.comms.dispatcher import Communicator
from frigate.config import FrigateConfig
from frigate.const import CONFIG_DIR
from frigate.models import User
logger = logging.getLogger(__name__)
class WebPushClient(Communicator): # type: ignore[misc]
"""Frigate wrapper for webpush client."""
def __init__(self, config: FrigateConfig) -> None:
self.config = config
self.claim_headers: dict[str, dict[str, str]] = {}
self.refresh: int = 0
self.web_pushers: dict[str, list[WebPusher]] = {}
self.expired_subs: dict[str, list[str]] = {}
if not self.config.notifications.email:
logger.warning("Email must be provided for push notifications to be sent.")
# Pull keys from PEM or generate if they do not exist
self.vapid = Vapid01.from_file(os.path.join(CONFIG_DIR, "notifications.pem"))
users: list[User] = (
User.select(User.username, User.notification_tokens).dicts().iterator()
)
for user in users:
self.web_pushers[user["username"]] = []
for sub in user["notification_tokens"]:
self.web_pushers[user["username"]].append(WebPusher(sub))
def subscribe(self, receiver: Callable) -> None:
"""Wrapper for allowing dispatcher to subscribe."""
pass
def check_registrations(self) -> None:
# check for valid claim or create new one
now = datetime.datetime.now().timestamp()
if len(self.claim_headers) == 0 or self.refresh < now:
self.refresh = int(
(datetime.datetime.now() + datetime.timedelta(hours=1)).timestamp()
)
endpoints: set[str] = set()
# get a unique set of push endpoints
for pushers in self.web_pushers.values():
for push in pushers:
endpoint: str = push.subscription_info["endpoint"]
endpoints.add(endpoint[0 : endpoint.index("/", 10)])
# create new claim
for endpoint in endpoints:
claim = {
"sub": f"mailto:{self.config.notifications.email}",
"aud": endpoint,
"exp": self.refresh,
}
self.claim_headers[endpoint] = self.vapid.sign(claim)
def cleanup_registrations(self) -> None:
# delete any expired subs
if len(self.expired_subs) > 0:
for user, expired in self.expired_subs.items():
user_subs = []
# get all subscriptions, removing ones that are expired
stored_user: User = User.get_by_id(user)
for token in stored_user.notification_tokens:
if token["endpoint"] in expired:
continue
user_subs.append(token)
# overwrite the database and reset web pushers
User.update(notification_tokens=user_subs).where(
User.username == user
).execute()
self.web_pushers[user] = []
for sub in user_subs:
self.web_pushers[user].append(WebPusher(sub))
logger.info(
f"Cleaned up {len(expired)} notification subscriptions for {user}"
)
self.expired_subs = {}
def publish(self, topic: str, payload: Any, retain: bool = False) -> None:
"""Wrapper for publishing when client is in valid state."""
if topic == "reviews":
self.send_alert(json.loads(payload))
def send_alert(self, payload: dict[str, any]) -> None:
if not self.config.notifications.email:
return
self.check_registrations()
# Only notify for alerts
if payload["after"]["severity"] != "alert":
return
state = payload["type"]
# Don't notify if message is an update and important fields don't have an update
if (
state == "update"
and len(payload["before"]["data"]["objects"])
== len(payload["after"]["data"]["objects"])
and len(payload["before"]["data"]["zones"])
== len(payload["after"]["data"]["zones"])
):
return
reviewId = payload["after"]["id"]
sorted_objects: set[str] = set()
for obj in payload["after"]["data"]["objects"]:
if "-verified" not in obj:
sorted_objects.add(obj)
sorted_objects.update(payload["after"]["data"]["sub_labels"])
camera: str = payload["after"]["camera"]
title = f"{', '.join(sorted_objects).replace('_', ' ').title()}{' was' if state == 'end' else ''} detected in {', '.join(payload['after']['data']['zones']).replace('_', ' ').title()}"
message = f"Detected on {camera.replace('_', ' ').title()}"
image = f'{payload["after"]["thumb_path"].replace("/media/frigate", "")}'
# if the event has ended, link to the review item; otherwise link to the camera's live view
direct_url = f"/review?id={reviewId}" if state == "end" else f"/#{camera}"
for user, pushers in self.web_pushers.items():
for pusher in pushers:
endpoint = pusher.subscription_info["endpoint"]
# set headers for notification behavior
headers = self.claim_headers[
endpoint[0 : endpoint.index("/", 10)]
].copy()
headers["urgency"] = "high"
ttl = 3600 if state == "end" else 0
# send message
resp = pusher.send(
headers=headers,
ttl=ttl,
data=json.dumps(
{
"title": title,
"message": message,
"direct_url": direct_url,
"image": image,
"id": reviewId,
"type": "alert",
}
),
)
if resp.status_code == 201:
pass
elif resp.status_code == 404 or resp.status_code == 410:
# subscription is not found or has been unsubscribed
if not self.expired_subs.get(user):
self.expired_subs[user] = []
self.expired_subs[user].append(pusher.subscription_info["endpoint"])
# the subscription no longer exists and should be removed
else:
logger.warning(
f"Failed to send notification to {user} :: {resp.headers}"
)
self.cleanup_registrations()
def stop(self) -> None:
pass
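Both check_registrations and send_alert key the signed claim headers by the push service origin, extracted with endpoint.index("/", 10). A one-line worked example with a made-up endpoint:

endpoint = "https://updates.push.services.mozilla.com/wpush/v2/AAAA"  # illustrative
origin = endpoint[0 : endpoint.index("/", 10)]
print(origin)  # https://updates.push.services.mozilla.com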

View File

@@ -0,0 +1,99 @@
"""Facilitates communication over zmq proxy."""
import json
import threading
from typing import Optional
import zmq
SOCKET_PUB = "ipc:///tmp/cache/proxy_pub"
SOCKET_SUB = "ipc:///tmp/cache/proxy_sub"
class ZmqProxyRunner(threading.Thread):
def __init__(self, context: zmq.Context[zmq.Socket]) -> None:
threading.Thread.__init__(self)
self.name = "detection_proxy"
self.context = context
def run(self) -> None:
"""Run the proxy."""
incoming = self.context.socket(zmq.XSUB)
incoming.bind(SOCKET_PUB)
outgoing = self.context.socket(zmq.XPUB)
outgoing.bind(SOCKET_SUB)
# Blocking: This will unblock (via exception) when we destroy the context
# The incoming and outgoing sockets will be closed automatically
# when the context is destroyed as well.
try:
zmq.proxy(incoming, outgoing)
except zmq.ZMQError:
pass
class ZmqProxy:
"""Proxies video and audio detections."""
def __init__(self) -> None:
self.context = zmq.Context()
self.runner = ZmqProxyRunner(self.context)
self.runner.start()
def stop(self) -> None:
# destroying the context will tell the proxy to stop
self.context.destroy()
self.runner.join()
class Publisher:
"""Publishes messages."""
topic_base: str = ""
def __init__(self, topic: str = "") -> None:
self.topic = f"{self.topic_base}{topic}"
self.context = zmq.Context()
self.socket = self.context.socket(zmq.PUB)
self.socket.connect(SOCKET_PUB)
def publish(self, payload: any, sub_topic: str = "") -> None:
"""Publish message."""
self.socket.send_string(f"{self.topic}{sub_topic} {json.dumps(payload)}")
def stop(self) -> None:
self.socket.close()
self.context.destroy()
class Subscriber:
"""Receives messages."""
topic_base: str = ""
def __init__(self, topic: str = "") -> None:
self.topic = f"{self.topic_base}{topic}"
self.context = zmq.Context()
self.socket = self.context.socket(zmq.SUB)
self.socket.setsockopt_string(zmq.SUBSCRIBE, self.topic)
self.socket.connect(SOCKET_SUB)
def check_for_update(self, timeout: float = 1) -> Optional[tuple[str, any]]:
"""Returns message or None if no update."""
try:
has_update, _, _ = zmq.select([self.socket], [], [], timeout)
if has_update:
parts = self.socket.recv_string(flags=zmq.NOBLOCK).split(maxsplit=1)
return self._return_object(parts[0], json.loads(parts[1]))
except zmq.ZMQError:
pass
return self._return_object("", None)
def stop(self) -> None:
self.socket.close()
self.context.destroy()
def _return_object(self, topic: str, payload: any) -> any:
return payload
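The Publisher/Subscriber pair above is the generic building block the detection and event updaters now subclass. A minimal round-trip sketch, assuming the ipc path /tmp/cache exists and nothing else is publishing on this topic:

from frigate.comms.zmq_proxy import Publisher, Subscriber, ZmqProxy

proxy = ZmqProxy()             # normally started once by FrigateApp

pub = Publisher(topic="demo")  # topic_base is "" on the base classes
sub = Subscriber(topic="demo")

pub.publish({"hello": "world"})          # sent as 'demo {"hello": "world"}'
payload = sub.check_for_update(timeout=1)
print(payload)                           # {'hello': 'world'}, or None if nothing arrived yet

pub.stop()
sub.stop()
proxy.stop()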

View File

@@ -169,6 +169,11 @@ class AuthConfig(FrigateBaseModel):
hash_iterations: int = Field(default=600000, title="Password hash iterations")
class NotificationConfig(FrigateBaseModel):
enabled: bool = Field(default=False, title="Enable notifications")
email: Optional[str] = Field(default=None, title="Email required for push.")
class StatsConfig(FrigateBaseModel):
amd_gpu_stats: bool = Field(default=True, title="Enable AMD GPU stats.")
intel_gpu_stats: bool = Field(default=True, title="Enable Intel GPU stats.")
@@ -730,6 +735,38 @@ class ReviewConfig(FrigateBaseModel):
)
class SemanticSearchConfig(FrigateBaseModel):
enabled: bool = Field(default=False, title="Enable semantic search.")
reindex: Optional[bool] = Field(
default=False, title="Reindex all detections on startup."
)
class GenAIProviderEnum(str, Enum):
openai = "openai"
gemini = "gemini"
ollama = "ollama"
class GenAIConfig(FrigateBaseModel):
enabled: bool = Field(default=False, title="Enable GenAI.")
provider: GenAIProviderEnum = Field(
default=GenAIProviderEnum.openai, title="GenAI provider."
)
base_url: Optional[str] = Field(None, title="Provider base url.")
api_key: Optional[str] = Field(None, title="Provider API key.")
model: str = Field(default="gpt-4o", title="GenAI model.")
prompt: str = Field(
default="Describe the {label} in the sequence of images with as much detail as possible. Do not describe the background.",
title="Default caption prompt.",
)
object_prompts: Dict[str, str] = Field(default={}, title="Object specific prompts.")
class GenAICameraConfig(FrigateBaseModel):
enabled: bool = Field(default=False, title="Enable GenAI for camera.")
class AudioConfig(FrigateBaseModel):
enabled: bool = Field(default=False, title="Enable audio events.")
max_not_heard: int = Field(
@@ -1011,6 +1048,9 @@ class CameraConfig(FrigateBaseModel):
review: ReviewConfig = Field(
default_factory=ReviewConfig, title="Review configuration."
)
genai: GenAICameraConfig = Field(
default_factory=GenAICameraConfig, title="Generative AI configuration."
)
audio: AudioConfig = Field(
default_factory=AudioConfig, title="Audio events configuration."
)
@@ -1326,6 +1366,9 @@ class FrigateConfig(FrigateBaseModel):
default_factory=dict, title="Frigate environment variables."
)
ui: UIConfig = Field(default_factory=UIConfig, title="UI configuration.")
notifications: NotificationConfig = Field(
default_factory=NotificationConfig, title="Notification Config"
)
telemetry: TelemetryConfig = Field(
default_factory=TelemetryConfig, title="Telemetry configuration."
)
@@ -1363,6 +1406,12 @@ class FrigateConfig(FrigateBaseModel):
review: ReviewConfig = Field(
default_factory=ReviewConfig, title="Review configuration."
)
semantic_search: SemanticSearchConfig = Field(
default_factory=SemanticSearchConfig, title="Semantic search configuration."
)
genai: GenAIConfig = Field(
default_factory=GenAIConfig, title="Generative AI configuration."
)
audio: AudioConfig = Field(
default_factory=AudioConfig, title="Global Audio events configuration."
)
@@ -1397,6 +1446,10 @@ class FrigateConfig(FrigateBaseModel):
config.mqtt.user = config.mqtt.user.format(**FRIGATE_ENV_VARS)
config.mqtt.password = config.mqtt.password.format(**FRIGATE_ENV_VARS)
# GenAI substitution
if config.genai.api_key:
config.genai.api_key = config.genai.api_key.format(**FRIGATE_ENV_VARS)
# set default min_score for object attributes
for attribute in ALL_ATTRIBUTE_LABELS:
if not config.objects.filters.get(attribute):
@@ -1418,6 +1471,7 @@ class FrigateConfig(FrigateBaseModel):
"live": ...,
"objects": ...,
"review": ...,
"genai": {"enabled"},
"motion": ...,
"detect": ...,
"ffmpeg": ...,

View File

@@ -81,6 +81,7 @@ REQUEST_REGION_GRID = "request_region_grid"
UPSERT_REVIEW_SEGMENT = "upsert_review_segment"
CLEAR_ONGOING_REVIEW_SEGMENTS = "clear_ongoing_review_segments"
UPDATE_CAMERA_ACTIVITY = "update_camera_activity"
UPDATE_EVENT_DESCRIPTION = "update_event_description"
# Stats Values

View File

@@ -0,0 +1,294 @@
import logging
import os
import urllib.request
import numpy as np
try:
from hailo_platform import (
HEF,
ConfigureParams,
FormatType,
HailoRTException,
HailoStreamInterface,
InferVStreams,
InputVStreamParams,
OutputVStreamParams,
VDevice,
)
except ModuleNotFoundError:
pass
from pydantic import BaseModel, Field
from typing_extensions import Literal
from frigate.detectors.detection_api import DetectionApi
from frigate.detectors.detector_config import BaseDetectorConfig
from frigate.detectors.util import preprocess # Assuming this function is available
# Set up logging
logger = logging.getLogger(__name__)
# Define the detector key for Hailo
DETECTOR_KEY = "hailo8l"
# Configuration class for model settings
class ModelConfig(BaseModel):
path: str = Field(default=None, title="Model Path") # Path to the HEF file
# Configuration class for Hailo detector
class HailoDetectorConfig(BaseDetectorConfig):
type: Literal[DETECTOR_KEY] # Type of the detector
device: str = Field(default="PCIe", title="Device Type") # Device type (e.g., PCIe)
# Hailo detector class implementation
class HailoDetector(DetectionApi):
type_key = DETECTOR_KEY # Set the type key to the Hailo detector key
def __init__(self, detector_config: HailoDetectorConfig):
# Initialize device type and model path from the configuration
self.h8l_device_type = detector_config.device
self.h8l_model_path = detector_config.model.path
self.h8l_model_height = detector_config.model.height
self.h8l_model_width = detector_config.model.width
self.h8l_model_type = detector_config.model.model_type
self.h8l_tensor_format = detector_config.model.input_tensor
self.h8l_pixel_format = detector_config.model.input_pixel_format
self.model_url = "https://hailo-model-zoo.s3.eu-west-2.amazonaws.com/ModelZoo/Compiled/v2.11.0/hailo8l/ssd_mobilenet_v1.hef"
self.cache_dir = "/config/model_cache/h8l_cache"
self.expected_model_filename = "ssd_mobilenet_v1.hef"
output_type = "FLOAT32"
logger.info(f"Initializing Hailo device as {self.h8l_device_type}")
self.check_and_prepare_model()
try:
# Validate device type
if self.h8l_device_type not in ["PCIe", "M.2"]:
raise ValueError(f"Unsupported device type: {self.h8l_device_type}")
# Initialize the Hailo device
self.target = VDevice()
# Load the HEF (Hailo's binary format for neural networks)
self.hef = HEF(self.h8l_model_path)
# Create configuration parameters from the HEF
self.configure_params = ConfigureParams.create_from_hef(
hef=self.hef, interface=HailoStreamInterface.PCIe
)
# Configure the device with the HEF
self.network_groups = self.target.configure(self.hef, self.configure_params)
self.network_group = self.network_groups[0]
self.network_group_params = self.network_group.create_params()
# Create input and output virtual stream parameters
self.input_vstreams_params = InputVStreamParams.make(
self.network_group,
format_type=self.hef.get_input_vstream_infos()[0].format.type,
)
self.output_vstreams_params = OutputVStreamParams.make(
self.network_group, format_type=getattr(FormatType, output_type)
)
# Get input and output stream information from the HEF
self.input_vstream_info = self.hef.get_input_vstream_infos()
self.output_vstream_info = self.hef.get_output_vstream_infos()
logger.info("Hailo device initialized successfully")
logger.debug(f"[__init__] Model Path: {self.h8l_model_path}")
logger.debug(f"[__init__] Input Tensor Format: {self.h8l_tensor_format}")
logger.debug(f"[__init__] Input Pixel Format: {self.h8l_pixel_format}")
logger.debug(f"[__init__] Input VStream Info: {self.input_vstream_info[0]}")
logger.debug(
f"[__init__] Output VStream Info: {self.output_vstream_info[0]}"
)
except HailoRTException as e:
logger.error(f"HailoRTException during initialization: {e}")
raise
except Exception as e:
logger.error(f"Failed to initialize Hailo device: {e}")
raise
def check_and_prepare_model(self):
# Ensure cache directory exists
if not os.path.exists(self.cache_dir):
os.makedirs(self.cache_dir)
# Check for the expected model file
model_file_path = os.path.join(self.cache_dir, self.expected_model_filename)
if not os.path.isfile(model_file_path):
logger.info(
f"A model file was not found at {model_file_path}, Downloading one from {self.model_url}."
)
urllib.request.urlretrieve(self.model_url, model_file_path)
logger.info(f"A model file was downloaded to {model_file_path}.")
else:
logger.info(
f"A model file already exists at {model_file_path} not downloading one."
)
def detect_raw(self, tensor_input):
logger.debug("[detect_raw] Entering function")
logger.debug(
f"[detect_raw] The `tensor_input` = {tensor_input} tensor_input shape = {tensor_input.shape}"
)
if tensor_input is None:
raise ValueError(
"[detect_raw] The 'tensor_input' argument must be provided"
)
# Ensure tensor_input is a numpy array
if isinstance(tensor_input, list):
tensor_input = np.array(tensor_input)
logger.debug(
f"[detect_raw] Converted tensor_input to numpy array: shape {tensor_input.shape}"
)
# Preprocess the tensor input using Frigate's preprocess function
processed_tensor = preprocess(
tensor_input, (1, self.h8l_model_height, self.h8l_model_width, 3), np.uint8
)
logger.debug(
f"[detect_raw] Tensor data and shape after preprocessing: {processed_tensor} {processed_tensor.shape}"
)
input_data = processed_tensor
logger.debug(
f"[detect_raw] Input data for inference shape: {processed_tensor.shape}, dtype: {processed_tensor.dtype}"
)
try:
with InferVStreams(
self.network_group,
self.input_vstreams_params,
self.output_vstreams_params,
) as infer_pipeline:
input_dict = {}
if isinstance(input_data, dict):
input_dict = input_data
logger.debug("[detect_raw] it a dictionary.")
elif isinstance(input_data, (list, tuple)):
for idx, layer_info in enumerate(self.input_vstream_info):
input_dict[layer_info.name] = input_data[idx]
logger.debug("[detect_raw] converted from list/tuple.")
else:
if len(input_data.shape) == 3:
input_data = np.expand_dims(input_data, axis=0)
logger.debug("[detect_raw] converted from an array.")
input_dict[self.input_vstream_info[0].name] = input_data
logger.debug(
f"[detect_raw] Input dictionary for inference keys: {input_dict.keys()}"
)
with self.network_group.activate(self.network_group_params):
raw_output = infer_pipeline.infer(input_dict)
logger.debug(f"[detect_raw] Raw inference output: {raw_output}")
if self.output_vstream_info[0].name not in raw_output:
logger.error(
f"[detect_raw] Missing output stream {self.output_vstream_info[0].name} in inference results"
)
return np.zeros((20, 6), np.float32)
raw_output = raw_output[self.output_vstream_info[0].name][0]
logger.debug(
f"[detect_raw] Raw output for stream {self.output_vstream_info[0].name}: {raw_output}"
)
# Process the raw output
detections = self.process_detections(raw_output)
if len(detections) == 0:
logger.debug(
"[detect_raw] No detections found after processing. Setting default values."
)
return np.zeros((20, 6), np.float32)
else:
formatted_detections = detections
if (
formatted_detections.shape[1] != 6
): # Ensure the formatted detections have 6 columns
logger.error(
f"[detect_raw] Unexpected shape for formatted detections: {formatted_detections.shape}. Expected (20, 6)."
)
return np.zeros((20, 6), np.float32)
return formatted_detections
except HailoRTException as e:
logger.error(f"[detect_raw] HailoRTException during inference: {e}")
return np.zeros((20, 6), np.float32)
except Exception as e:
logger.error(f"[detect_raw] Exception during inference: {e}")
return np.zeros((20, 6), np.float32)
finally:
logger.debug("[detect_raw] Exiting function")
def process_detections(self, raw_detections, threshold=0.5):
boxes, scores, classes = [], [], []
num_detections = 0
logger.debug(f"[process_detections] Raw detections: {raw_detections}")
for i, detection_set in enumerate(raw_detections):
if not isinstance(detection_set, np.ndarray) or detection_set.size == 0:
logger.debug(
f"[process_detections] Detection set {i} is empty or not an array, skipping."
)
continue
logger.debug(
f"[process_detections] Detection set {i} shape: {detection_set.shape}"
)
for detection in detection_set:
if detection.shape[0] == 0:
logger.debug(
f"[process_detections] Detection in set {i} is empty, skipping."
)
continue
ymin, xmin, ymax, xmax = detection[:4]
score = np.clip(detection[4], 0, 1) # Use np.clip for clarity
if score < threshold:
logger.debug(
f"[process_detections] Detection in set {i} has a score {score} below threshold {threshold}. Skipping."
)
continue
logger.debug(
f"[process_detections] Adding detection with coordinates: ({xmin}, {ymin}), ({xmax}, {ymax}) and score: {score}"
)
boxes.append([ymin, xmin, ymax, xmax])
scores.append(score)
classes.append(i)
num_detections += 1
logger.debug(
f"[process_detections] Boxes: {boxes}, Scores: {scores}, Classes: {classes}, Num detections: {num_detections}"
)
if num_detections == 0:
logger.debug("[process_detections] No valid detections found.")
return np.zeros((20, 6), np.float32)
combined = np.hstack(
(
np.array(classes)[:, np.newaxis],
np.array(scores)[:, np.newaxis],
np.array(boxes),
)
)
if combined.shape[0] < 20:
padding = np.zeros(
(20 - combined.shape[0], combined.shape[1]), dtype=combined.dtype
)
combined = np.vstack((combined, padding))
logger.debug(
f"[process_detections] Combined detections (padded to 20 if necessary): {np.array_str(combined, precision=4, suppress_small=True)}"
)
return combined[:20, :6]
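detect_raw therefore always yields a (20, 6) float32 array of [class, score, ymin, xmin, ymax, xmax] rows, zero-padded when fewer than 20 detections pass the threshold. A small sketch of consuming that shape (the first row is a made-up detection):

import numpy as np

detections = np.zeros((20, 6), np.float32)          # stand-in for a detect_raw() result
detections[0] = [0, 0.87, 0.10, 0.20, 0.60, 0.70]   # class 0 at 87% confidence

for class_id, score, ymin, xmin, ymax, xmax in detections:
    if score == 0:                                  # padding rows have zero score
        continue
    print(int(class_id), round(float(score), 2), (xmin, ymin, xmax, ymax))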

View File

@@ -0,0 +1,91 @@
"""ChromaDB embeddings database."""
import json
import logging
import multiprocessing as mp
import signal
import threading
from types import FrameType
from typing import Optional
from playhouse.sqliteq import SqliteQueueDatabase
from setproctitle import setproctitle
from frigate.config import FrigateConfig
from frigate.const import CONFIG_DIR
from frigate.models import Event
from frigate.util.services import listen
from .embeddings import Embeddings
from .maintainer import EmbeddingMaintainer
from .util import ZScoreNormalization
logger = logging.getLogger(__name__)
def manage_embeddings(config: FrigateConfig) -> None:
# Only initialize embeddings if semantic search is enabled
if not config.semantic_search.enabled:
return
stop_event = mp.Event()
def receiveSignal(signalNumber: int, frame: Optional[FrameType]) -> None:
stop_event.set()
signal.signal(signal.SIGTERM, receiveSignal)
signal.signal(signal.SIGINT, receiveSignal)
threading.current_thread().name = "process:embeddings_manager"
setproctitle("frigate.embeddings_manager")
listen()
# Configure Frigate DB
db = SqliteQueueDatabase(
config.database.path,
pragmas={
"auto_vacuum": "FULL", # Does not defragment database
"cache_size": -512 * 1000, # 512MB of cache
"synchronous": "NORMAL", # Safe when using WAL https://www.sqlite.org/pragma.html#pragma_synchronous
},
timeout=max(60, 10 * len([c for c in config.cameras.values() if c.enabled])),
)
models = [Event]
db.bind(models)
embeddings = Embeddings()
# Check if we need to re-index events
if config.semantic_search.reindex:
embeddings.reindex()
maintainer = EmbeddingMaintainer(
config,
stop_event,
)
maintainer.start()
class EmbeddingsContext:
def __init__(self):
self.embeddings = Embeddings()
self.thumb_stats = ZScoreNormalization()
self.desc_stats = ZScoreNormalization()
# load stats from disk
try:
with open(f"{CONFIG_DIR}/.search_stats.json", "r") as f:
data = json.loads(f.read())
self.thumb_stats.from_dict(data["thumb_stats"])
self.desc_stats.from_dict(data["desc_stats"])
except FileNotFoundError:
pass
def save_stats(self):
"""Write the stats to disk as JSON on exit."""
contents = {
"thumb_stats": self.thumb_stats.to_dict(),
"desc_stats": self.desc_stats.to_dict(),
}
with open(f"{CONFIG_DIR}/.search_stats.json", "w") as f:
f.write(json.dumps(contents))

View File

@@ -0,0 +1,163 @@
"""ChromaDB embeddings database."""
import base64
import io
import logging
import sys
import time
import numpy as np
from PIL import Image
from playhouse.shortcuts import model_to_dict
from frigate.models import Event
# Squelch posthog logging
logging.getLogger("chromadb.telemetry.product.posthog").setLevel(logging.CRITICAL)
# Hotswap the sqlite3 module for Chroma compatibility
try:
from chromadb import Collection
from chromadb import HttpClient as ChromaClient
from chromadb.config import Settings
from .functions.clip import ClipEmbedding
from .functions.minilm_l6_v2 import MiniLMEmbedding
except RuntimeError:
__import__("pysqlite3")
sys.modules["sqlite3"] = sys.modules.pop("pysqlite3")
from chromadb import Collection
from chromadb import HttpClient as ChromaClient
from chromadb.config import Settings
from .functions.clip import ClipEmbedding
from .functions.minilm_l6_v2 import MiniLMEmbedding
logger = logging.getLogger(__name__)
def get_metadata(event: Event) -> dict:
"""Extract valid event metadata."""
event_dict = model_to_dict(event)
return (
{
k: v
for k, v in event_dict.items()
if k not in ["id", "thumbnail"]
and v is not None
and isinstance(v, (str, int, float, bool))
}
| {
k: v
for k, v in event_dict["data"].items()
if k not in ["description"]
and v is not None
and isinstance(v, (str, int, float, bool))
}
| {
# Metadata search doesn't support $contains
# and an event can have multiple zones, so
# we need to create a key for each zone
f"{k}_{x}": True
for k, v in event_dict.items()
if isinstance(v, list) and len(v) > 0
for x in v
if isinstance(x, str)
}
)
class Embeddings:
"""ChromaDB embeddings database."""
def __init__(self) -> None:
self.client: ChromaClient = ChromaClient(
host="127.0.0.1",
settings=Settings(anonymized_telemetry=False),
)
@property
def thumbnail(self) -> Collection:
return self.client.get_or_create_collection(
name="event_thumbnail", embedding_function=ClipEmbedding()
)
@property
def description(self) -> Collection:
return self.client.get_or_create_collection(
name="event_description", embedding_function=MiniLMEmbedding()
)
def reindex(self) -> None:
"""Reindex all event embeddings."""
logger.info("Indexing event embeddings...")
self.client.reset()
st = time.time()
totals = {
"thumb": 0,
"desc": 0,
}
batch_size = 100
current_page = 1
events = (
Event.select()
.where(
((Event.has_clip == True) | (Event.has_snapshot == True))
& Event.thumbnail.is_null(False)
)
.order_by(Event.start_time.desc())
.paginate(current_page, batch_size)
)
while len(events) > 0:
thumbnails = {"ids": [], "images": [], "metadatas": []}
descriptions = {"ids": [], "documents": [], "metadatas": []}
event: Event
for event in events:
metadata = get_metadata(event)
thumbnail = base64.b64decode(event.thumbnail)
img = np.array(Image.open(io.BytesIO(thumbnail)).convert("RGB"))
thumbnails["ids"].append(event.id)
thumbnails["images"].append(img)
thumbnails["metadatas"].append(metadata)
if event.data.get("description") is not None:
descriptions["ids"].append(event.id)
descriptions["documents"].append(event.data["description"])
descriptions["metadatas"].append(metadata)
if len(thumbnails["ids"]) > 0:
totals["thumb"] += len(thumbnails["ids"])
self.thumbnail.upsert(
images=thumbnails["images"],
metadatas=thumbnails["metadatas"],
ids=thumbnails["ids"],
)
if len(descriptions["ids"]) > 0:
totals["desc"] += len(descriptions["ids"])
self.description.upsert(
documents=descriptions["documents"],
metadatas=descriptions["metadatas"],
ids=descriptions["ids"],
)
current_page += 1
events = (
Event.select()
.where(
((Event.has_clip == True) | (Event.has_snapshot == True))
& Event.thumbnail.is_null(False)
)
.order_by(Event.start_time.desc())
.paginate(current_page, batch_size)
)
logger.info(
"Embedded %d thumbnails and %d descriptions in %s seconds",
totals["thumb"],
totals["desc"],
time.time() - st,
)

View File

@@ -0,0 +1,63 @@
"""CLIP Embeddings for Frigate."""
import os
from typing import Tuple, Union
import onnxruntime as ort
from chromadb import EmbeddingFunction, Embeddings
from chromadb.api.types import (
Documents,
Images,
is_document,
is_image,
)
from onnx_clip import OnnxClip
from frigate.const import MODEL_CACHE_DIR
class Clip(OnnxClip):
"""Override load models to download to cache directory."""
@staticmethod
def _load_models(
model: str,
silent: bool,
) -> Tuple[ort.InferenceSession, ort.InferenceSession]:
"""
These models are a part of the container. Treat them as such.
"""
if model == "ViT-B/32":
IMAGE_MODEL_FILE = "clip_image_model_vitb32.onnx"
TEXT_MODEL_FILE = "clip_text_model_vitb32.onnx"
elif model == "RN50":
IMAGE_MODEL_FILE = "clip_image_model_rn50.onnx"
TEXT_MODEL_FILE = "clip_text_model_rn50.onnx"
else:
raise ValueError(f"Unexpected model {model}. No `.onnx` file found.")
models = []
for model_file in [IMAGE_MODEL_FILE, TEXT_MODEL_FILE]:
path = os.path.join(MODEL_CACHE_DIR, "clip", model_file)
models.append(OnnxClip._load_model(path, silent))
return models[0], models[1]
class ClipEmbedding(EmbeddingFunction):
"""Embedding function for CLIP model used in Chroma."""
def __init__(self, model: str = "ViT-B/32"):
"""Initialize CLIP Embedding function."""
self.model = Clip(model)
def __call__(self, input: Union[Documents, Images]) -> Embeddings:
embeddings: Embeddings = []
for item in input:
if is_image(item):
result = self.model.get_image_embeddings([item])
embeddings.append(result[0, :].tolist())
elif is_document(item):
result = self.model.get_text_embeddings([item])
embeddings.append(result[0, :].tolist())
return embeddings

View File

@@ -0,0 +1,11 @@
"""Embedding function for ONNX MiniLM-L6 model used in Chroma."""
from chromadb.utils.embedding_functions import ONNXMiniLM_L6_V2
from frigate.const import MODEL_CACHE_DIR
class MiniLMEmbedding(ONNXMiniLM_L6_V2):
"""Override DOWNLOAD_PATH to download to cache directory."""
DOWNLOAD_PATH = f"{MODEL_CACHE_DIR}/all-MiniLM-L6-v2"

View File

@@ -0,0 +1,197 @@
"""Maintain embeddings in Chroma."""
import base64
import io
import logging
import threading
from multiprocessing.synchronize import Event as MpEvent
from typing import Optional
import cv2
import numpy as np
from peewee import DoesNotExist
from PIL import Image
from frigate.comms.events_updater import EventEndSubscriber, EventUpdateSubscriber
from frigate.comms.inter_process import InterProcessRequestor
from frigate.config import FrigateConfig
from frigate.const import UPDATE_EVENT_DESCRIPTION
from frigate.events.types import EventTypeEnum
from frigate.genai import get_genai_client
from frigate.models import Event
from frigate.util.image import SharedMemoryFrameManager, calculate_region
from .embeddings import Embeddings, get_metadata
logger = logging.getLogger(__name__)
class EmbeddingMaintainer(threading.Thread):
"""Handle embedding queue and post event updates."""
def __init__(
self,
config: FrigateConfig,
stop_event: MpEvent,
) -> None:
threading.Thread.__init__(self)
self.name = "embeddings_maintainer"
self.config = config
self.embeddings = Embeddings()
self.event_subscriber = EventUpdateSubscriber()
self.event_end_subscriber = EventEndSubscriber()
self.frame_manager = SharedMemoryFrameManager()
# create communication for updating event descriptions
self.requestor = InterProcessRequestor()
self.stop_event = stop_event
self.tracked_events = {}
self.genai_client = get_genai_client(config.genai)
def run(self) -> None:
"""Maintain a Chroma vector database for semantic search."""
while not self.stop_event.is_set():
self._process_updates()
self._process_finalized()
self.event_subscriber.stop()
self.event_end_subscriber.stop()
self.requestor.stop()
logger.info("Exiting embeddings maintenance...")
def _process_updates(self) -> None:
"""Process event updates"""
update = self.event_subscriber.check_for_update()
if update is None:
return
source_type, _, camera, data = update
if not camera or source_type != EventTypeEnum.tracked_object:
return
camera_config = self.config.cameras[camera]
if data["id"] not in self.tracked_events:
self.tracked_events[data["id"]] = []
# Create our own thumbnail based on the bounding box and the frame time
try:
frame_id = f"{camera}{data['frame_time']}"
yuv_frame = self.frame_manager.get(frame_id, camera_config.frame_shape_yuv)
data["thumbnail"] = self._create_thumbnail(yuv_frame, data["box"])
self.tracked_events[data["id"]].append(data)
self.frame_manager.close(frame_id)
except FileNotFoundError:
pass
def _process_finalized(self) -> None:
"""Process the end of an event."""
while True:
ended = self.event_end_subscriber.check_for_update()
if ended is None:
break
event_id, camera, updated_db = ended
camera_config = self.config.cameras[camera]
if updated_db:
try:
event: Event = Event.get(Event.id == event_id)
except DoesNotExist:
continue
# Skip the event if not an object
if event.data.get("type") != "object":
continue
# Extract valid event metadata
metadata = get_metadata(event)
thumbnail = base64.b64decode(event.thumbnail)
# Embed the thumbnail
self._embed_thumbnail(event_id, thumbnail, metadata)
if (
camera_config.genai.enabled
and self.genai_client is not None
and event.data.get("description") is None
):
# Generate the description. Call happens in a thread since it is network bound.
threading.Thread(
target=self._embed_description,
name=f"_embed_description_{event.id}",
daemon=True,
args=(
event,
[
data["thumbnail"]
for data in self.tracked_events[event_id]
]
if len(self.tracked_events.get(event_id, [])) > 0
else [thumbnail],
metadata,
),
).start()
# Delete tracked events based on the event_id
if event_id in self.tracked_events:
del self.tracked_events[event_id]
def _create_thumbnail(self, yuv_frame, box, height=500) -> Optional[bytes]:
"""Return jpg thumbnail of a region of the frame."""
frame = cv2.cvtColor(yuv_frame, cv2.COLOR_YUV2BGR_I420)
region = calculate_region(
frame.shape, box[0], box[1], box[2], box[3], height, multiplier=1.4
)
frame = frame[region[1] : region[3], region[0] : region[2]]
width = int(height * frame.shape[1] / frame.shape[0])
frame = cv2.resize(frame, dsize=(width, height), interpolation=cv2.INTER_AREA)
ret, jpg = cv2.imencode(".jpg", frame, [int(cv2.IMWRITE_JPEG_QUALITY), 100])
if ret:
return jpg.tobytes()
return None
def _embed_thumbnail(self, event_id: str, thumbnail: bytes, metadata: dict) -> None:
"""Embed the thumbnail for an event."""
# Encode the thumbnail
img = np.array(Image.open(io.BytesIO(thumbnail)).convert("RGB"))
self.embeddings.thumbnail.upsert(
images=[img],
metadatas=[metadata],
ids=[event_id],
)
def _embed_description(
self, event: Event, thumbnails: list[bytes], metadata: dict
) -> None:
"""Embed the description for an event."""
description = self.genai_client.generate_description(thumbnails, metadata)
if description is None:
logger.debug("Failed to generate description for %s", event.id)
return
# fire and forget description update
self.requestor.send_data(
UPDATE_EVENT_DESCRIPTION,
{"id": event.id, "description": description},
)
# Encode the description
self.embeddings.description.upsert(
documents=[description],
metadatas=[metadata],
ids=[event.id],
)
logger.debug(
"Generated description for %s (%d images): %s",
event.id,
len(thumbnails),
description,
)

View File

@@ -0,0 +1,47 @@
"""Z-score normalization for search distance."""
import math
class ZScoreNormalization:
"""Running Z-score normalization for search distance."""
def __init__(self):
self.n = 0
self.mean = 0
self.m2 = 0
@property
def variance(self):
return self.m2 / (self.n - 1) if self.n > 1 else 0.0
@property
def stddev(self):
return math.sqrt(self.variance)
def normalize(self, distances: list[float]):
self._update(distances)
if self.stddev == 0:
return distances
return [(x - self.mean) / self.stddev for x in distances]
def _update(self, distances: list[float]):
for x in distances:
self.n += 1
delta = x - self.mean
self.mean += delta / self.n
delta2 = x - self.mean
self.m2 += delta * delta2
def to_dict(self):
return {
"n": self.n,
"mean": self.mean,
"m2": self.m2,
}
def from_dict(self, data: dict):
self.n = data["n"]
self.mean = data["mean"]
self.m2 = data["m2"]
return self
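The normalizer keeps Welford-style running statistics, so successive searches share one mean and standard deviation. A small usage sketch with made-up distances, assuming the class is imported from frigate.embeddings.util:

from frigate.embeddings.util import ZScoreNormalization

norm = ZScoreNormalization()
print(norm.normalize([0.5, 1.0, 1.5]))  # first batch seeds the running mean/stddev
print(norm.normalize([0.75]))           # later batches reuse the accumulated statistics

state = norm.to_dict()                  # {"n": ..., "mean": ..., "m2": ...}
restored = ZScoreNormalization().from_dict(state)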

View File

@@ -223,7 +223,7 @@ class AudioEventMaintainer(threading.Thread):
audio_detections.append(label)
# send audio detection data
self.detection_publisher.send_data(
self.detection_publisher.publish(
(
self.config.name,
datetime.datetime.now().timestamp(),

View File

@@ -10,6 +10,7 @@ from pathlib import Path
from frigate.config import FrigateConfig
from frigate.const import CLIPS_DIR
from frigate.embeddings.embeddings import Embeddings
from frigate.models import Event, Timeline
logger = logging.getLogger(__name__)
@@ -30,6 +31,9 @@ class EventCleanup(threading.Thread):
self.removed_camera_labels: list[str] = None
self.camera_labels: dict[str, dict[str, any]] = {}
if self.config.semantic_search.enabled:
self.embeddings = Embeddings()
def get_removed_camera_labels(self) -> list[Event]:
"""Get a list of distinct labels for removed cameras."""
if self.removed_camera_labels is None:
@@ -190,16 +194,31 @@ class EventCleanup(threading.Thread):
events_with_expired_clips = self.expire(EventCleanupType.clips)
# delete timeline entries for events that have expired recordings
Timeline.delete().where(
Timeline.source_id << events_with_expired_clips
).execute()
# delete up to 100,000 at a time
max_deletes = 100000
deleted_events_list = list(events_with_expired_clips)
for i in range(0, len(deleted_events_list), max_deletes):
Timeline.delete().where(
Timeline.source_id << deleted_events_list[i : i + max_deletes]
).execute()
self.expire(EventCleanupType.snapshots)
# drop events from db where has_clip and has_snapshot are false
delete_query = Event.delete().where(
Event.has_clip == False, Event.has_snapshot == False
events = (
Event.select()
.where(Event.has_clip == False, Event.has_snapshot == False)
.iterator()
)
delete_query.execute()
events_to_delete = [e.id for e in events]
if len(events_to_delete) > 0:
chunk_size = 50
for i in range(0, len(events_to_delete), chunk_size):
chunk = events_to_delete[i : i + chunk_size]
Event.delete().where(Event.id << chunk).execute()
if self.config.semantic_search.enabled:
self.embeddings.thumbnail.delete(ids=chunk)
self.embeddings.description.delete(ids=chunk)
logger.info("Exiting event cleanup...")

View File

@@ -86,7 +86,7 @@ class ExternalEventProcessor:
if source_type == "api":
self.event_camera[event_id] = camera
self.detection_updater.send_data(
self.detection_updater.publish(
(
camera,
now,
@@ -115,7 +115,7 @@ class ExternalEventProcessor:
)
if event_id in self.event_camera:
self.detection_updater.send_data(
self.detection_updater.publish(
(
self.event_camera[event_id],
end_time,

View File

@@ -237,7 +237,7 @@ class EventProcessor(threading.Thread):
if event_type == EventStateEnum.end:
del self.events_in_process[event_data["id"]]
self.event_end_publisher.publish((event_data["id"], camera))
self.event_end_publisher.publish((event_data["id"], camera, updated_db))
def handle_external_detection(
self, event_type: EventStateEnum, event_data: Event

frigate/genai/__init__.py
View File

@@ -0,0 +1,63 @@
"""Generative AI module for Frigate."""
import importlib
import os
from typing import Optional
from frigate.config import GenAIConfig, GenAIProviderEnum
PROVIDERS = {}
def register_genai_provider(key: GenAIProviderEnum):
"""Register a GenAI provider."""
def decorator(cls):
PROVIDERS[key] = cls
return cls
return decorator
class GenAIClient:
"""Generative AI client for Frigate."""
def __init__(self, genai_config: GenAIConfig, timeout: int = 60) -> None:
self.genai_config: GenAIConfig = genai_config
self.timeout = timeout
self.provider = self._init_provider()
def generate_description(
self, thumbnails: list[bytes], metadata: dict[str, any]
) -> Optional[str]:
"""Generate a description for the frame."""
prompt = self.genai_config.object_prompts.get(
metadata["label"], self.genai_config.prompt
).format(**metadata)
return self._send(prompt, thumbnails)
def _init_provider(self):
"""Initialize the client."""
return None
def _send(self, prompt: str, images: list[bytes]) -> Optional[str]:
"""Submit a request to the provider."""
return None
def get_genai_client(genai_config: GenAIConfig) -> Optional[GenAIClient]:
"""Get the GenAI client."""
if genai_config.enabled:
load_providers()
provider = PROVIDERS.get(genai_config.provider)
if provider:
return provider(genai_config)
return None
def load_providers():
package_dir = os.path.dirname(__file__)
for filename in os.listdir(package_dir):
if filename.endswith(".py") and filename != "__init__.py":
module_name = f"frigate.genai.{filename[:-3]}"
importlib.import_module(module_name)
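A hedged usage sketch of the registry above, with placeholder values; it assumes the optional ollama dependency is installed and an Ollama server is reachable at the given URL, otherwise no usable client is returned:

from frigate.config import GenAIConfig
from frigate.genai import get_genai_client

config = GenAIConfig(
    enabled=True,
    provider="ollama",                  # coerced to GenAIProviderEnum.ollama
    base_url="http://localhost:11434",  # placeholder Ollama URL
    model="llava",                      # placeholder model name
)
client = get_genai_client(config)

if client is not None:
    description = client.generate_description(
        [b"<jpeg bytes>"],                        # thumbnails as raw JPEG bytes
        {"label": "person", "camera": "front"},   # metadata used to format the prompt
    )
    print(description)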

frigate/genai/gemini.py
View File

@@ -0,0 +1,49 @@
"""Gemini Provider for Frigate AI."""
from typing import Optional
import google.generativeai as genai
from google.api_core.exceptions import GoogleAPICallError
from frigate.config import GenAIProviderEnum
from frigate.genai import GenAIClient, register_genai_provider
@register_genai_provider(GenAIProviderEnum.gemini)
class GeminiClient(GenAIClient):
"""Generative AI client for Frigate using Gemini."""
provider: genai.GenerativeModel
def _init_provider(self):
"""Initialize the client."""
genai.configure(api_key=self.genai_config.api_key)
return genai.GenerativeModel(self.genai_config.model)
def _send(self, prompt: str, images: list[bytes]) -> Optional[str]:
"""Submit a request to Gemini."""
data = [
{
"mime_type": "image/jpeg",
"data": img,
}
for img in images
] + [prompt]
try:
response = self.provider.generate_content(
data,
generation_config=genai.types.GenerationConfig(
candidate_count=1,
),
request_options=genai.types.RequestOptions(
timeout=self.timeout,
),
)
except GoogleAPICallError:
return None
try:
description = response.text.strip()
except ValueError:
# No description was generated
return None
return description

frigate/genai/ollama.py
View File

@@ -0,0 +1,41 @@
"""Ollama Provider for Frigate AI."""
import logging
from typing import Optional
from httpx import TimeoutException
from ollama import Client as ApiClient
from ollama import ResponseError
from frigate.config import GenAIProviderEnum
from frigate.genai import GenAIClient, register_genai_provider
logger = logging.getLogger(__name__)
@register_genai_provider(GenAIProviderEnum.ollama)
class OllamaClient(GenAIClient):
"""Generative AI client for Frigate using Ollama."""
provider: ApiClient
def _init_provider(self):
"""Initialize the client."""
client = ApiClient(host=self.genai_config.base_url, timeout=self.timeout)
response = client.pull(self.genai_config.model)
if response["status"] != "success":
logger.error("Failed to pull %s model from Ollama", self.genai_config.model)
return None
return client
def _send(self, prompt: str, images: list[bytes]) -> Optional[str]:
"""Submit a request to Ollama"""
try:
result = self.provider.generate(
self.genai_config.model,
prompt,
images=images,
)
return result["response"].strip()
except (TimeoutException, ResponseError):
return None

frigate/genai/openai.py
View File

@@ -0,0 +1,51 @@
"""OpenAI Provider for Frigate AI."""
import base64
from typing import Optional
from httpx import TimeoutException
from openai import OpenAI
from frigate.config import GenAIProviderEnum
from frigate.genai import GenAIClient, register_genai_provider
@register_genai_provider(GenAIProviderEnum.openai)
class OpenAIClient(GenAIClient):
"""Generative AI client for Frigate using OpenAI."""
provider: OpenAI
def _init_provider(self):
"""Initialize the client."""
return OpenAI(api_key=self.genai_config.api_key)
def _send(self, prompt: str, images: list[bytes]) -> Optional[str]:
"""Submit a request to OpenAI."""
encoded_images = [base64.b64encode(image).decode("utf-8") for image in images]
try:
result = self.provider.chat.completions.create(
model=self.genai_config.model,
messages=[
{
"role": "user",
"content": [
{
"type": "image_url",
"image_url": {
"url": f"data:image/jpeg;base64,{image}",
"detail": "low",
},
}
for image in encoded_images
]
+ [prompt],
},
],
timeout=self.timeout,
)
except TimeoutException:
return None
if len(result.choices) > 0:
return result.choices[0].message.content.strip()
return None

View File

@@ -118,3 +118,4 @@ class RecordingsToDelete(Model): # type: ignore[misc]
class User(Model): # type: ignore[misc]
username = CharField(null=False, primary_key=True, max_length=30)
password_hash = CharField(null=False, max_length=120)
notification_tokens = JSONField()

View File

@@ -1187,7 +1187,7 @@ class TrackedObjectProcessor(threading.Thread):
]
# publish info on this frame
self.detection_publisher.send_data(
self.detection_publisher.publish(
(
camera,
frame_time,
@@ -1274,7 +1274,7 @@ class TrackedObjectProcessor(threading.Thread):
if not update:
break
event_id, camera = update
event_id, camera, _ = update
self.camera_states[camera].finished(event_id)
self.requestor.stop()

View File

@@ -80,7 +80,7 @@ def output_frames(
websocket_thread.start()
while not stop_event.is_set():
(topic, data) = detection_subscriber.get_data(timeout=1)
(topic, data) = detection_subscriber.check_for_update(timeout=1)
if not topic:
continue
@@ -134,7 +134,7 @@ def output_frames(
move_preview_frames("clips")
while True:
(topic, data) = detection_subscriber.get_data(timeout=0)
(topic, data) = detection_subscriber.check_for_update(timeout=0)
if not topic:
break

View File

@@ -10,6 +10,7 @@ import subprocess as sp
import threading
from enum import Enum
from pathlib import Path
from typing import Optional
from peewee import DoesNotExist
@@ -49,7 +50,8 @@ class RecordingExporter(threading.Thread):
self,
config: FrigateConfig,
camera: str,
name: str,
name: Optional[str],
image: Optional[str],
start_time: int,
end_time: int,
playback_factor: PlaybackFactorEnum,
@@ -58,6 +60,7 @@ class RecordingExporter(threading.Thread):
self.config = config
self.camera = camera
self.user_provided_name = name
+ self.user_provided_image = image
self.start_time = start_time
self.end_time = end_time
self.playback_factor = playback_factor
@@ -72,6 +75,12 @@ class RecordingExporter(threading.Thread):
def save_thumbnail(self, id: str) -> str:
thumb_path = os.path.join(CLIPS_DIR, f"export/{id}.webp")
+ if self.user_provided_image is not None and os.path.isfile(
+     self.user_provided_image
+ ):
+     shutil.copy(self.user_provided_image, thumb_path)
+     return thumb_path
if (
self.start_time
< datetime.datetime.now(datetime.timezone.utc)


@@ -470,7 +470,7 @@ class RecordingMaintainer(threading.Thread):
stale_frame_count_threshold = 10
# empty the object recordings info queue
while True:
- (topic, data) = self.detection_subscriber.get_data(
+ (topic, data) = self.detection_subscriber.check_for_update(
timeout=QUEUE_READ_TIMEOUT
)


@@ -424,7 +424,7 @@ class ReviewSegmentMaintainer(threading.Thread):
camera_name = updated_topic.rpartition("/")[-1]
self.config.cameras[camera_name].record = updated_record_config
- (topic, data) = self.detection_subscriber.get_data(timeout=1)
+ (topic, data) = self.detection_subscriber.check_for_update(timeout=1)
if not topic:
continue


@@ -120,6 +120,7 @@ class TestHttp(unittest.TestCase):
None,
None,
None,
None,
PlusApi(),
None,
)
@@ -156,6 +157,7 @@ class TestHttp(unittest.TestCase):
None,
None,
None,
None,
PlusApi(),
None,
)
@@ -177,6 +179,7 @@ class TestHttp(unittest.TestCase):
None,
None,
None,
None,
PlusApi(),
None,
)
@@ -197,6 +200,7 @@ class TestHttp(unittest.TestCase):
None,
None,
None,
None,
PlusApi(),
None,
)
@@ -219,6 +223,7 @@ class TestHttp(unittest.TestCase):
None,
None,
None,
None,
PlusApi(),
None,
)
@@ -245,6 +250,7 @@ class TestHttp(unittest.TestCase):
None,
None,
None,
None,
PlusApi(),
None,
)
@@ -283,6 +289,7 @@ class TestHttp(unittest.TestCase):
None,
None,
None,
None,
PlusApi(),
None,
)
@@ -318,6 +325,7 @@ class TestHttp(unittest.TestCase):
None,
None,
None,
None,
PlusApi(),
None,
)
@@ -343,6 +351,7 @@ class TestHttp(unittest.TestCase):
None,
None,
None,
None,
PlusApi(),
None,
)
@@ -360,6 +369,7 @@ class TestHttp(unittest.TestCase):
None,
None,
None,
None,
PlusApi(),
None,
)
@@ -381,6 +391,7 @@ class TestHttp(unittest.TestCase):
None,
None,
None,
None,
PlusApi(),
stats,
)


@@ -0,0 +1,40 @@
"""Peewee migrations
Some examples (model - class or model name)::
> Model = migrator.orm['model_name'] # Return model in current state by name
> migrator.sql(sql) # Run custom SQL
> migrator.python(func, *args, **kwargs) # Run python code
> migrator.create_model(Model) # Create a model (could be used as decorator)
> migrator.remove_model(model, cascade=True) # Remove a model
> migrator.add_fields(model, **fields) # Add fields to a model
> migrator.change_fields(model, **fields) # Change fields
> migrator.remove_fields(model, *field_names, cascade=True)
> migrator.rename_field(model, old_field_name, new_field_name)
> migrator.rename_table(model, new_table_name)
> migrator.add_index(model, *col_names, unique=False)
> migrator.drop_index(model, *col_names)
> migrator.add_not_null(model, *field_names)
> migrator.drop_not_null(model, *field_names)
> migrator.add_default(model, field_name, default)
"""
import peewee as pw
from playhouse.sqlite_ext import JSONField
from frigate.models import User
SQL = pw.SQL
def migrate(migrator, database, fake=False, **kwargs):
migrator.add_fields(
User,
notification_tokens=JSONField(default=[]),
)
def rollback(migrator, database, fake=False, **kwargs):
pass
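
The rollback here is intentionally a no-op. For completeness, a symmetric rollback would presumably drop the column again via the remove_fields helper listed in the docstring above; this is only a sketch, not part of the actual migration:

def rollback(migrator, database, fake=False, **kwargs):
    # hypothetical reverse of migrate() above; the shipped migration leaves rollback empty
    migrator.remove_fields(User, "notification_tokens", cascade=True)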


@@ -11,18 +11,6 @@
"! pip install -q super_gradients==3.7.1"
]
},
- {
- "cell_type": "code",
- "source": [
- "! sed -i 's/sghub.deci.ai/sg-hub-nv.s3.amazonaws.com/' /usr/local/lib/python3.10/dist-packages/super_gradients/training/pretrained_models.py\n",
- "! sed -i 's/sghub.deci.ai/sg-hub-nv.s3.amazonaws.com/' /usr/local/lib/python3.10/dist-packages/super_gradients/training/utils/checkpoint_utils.py"
- ],
- "metadata": {
- "id": "NiRCt917KKcL"
- },
- "execution_count": null,
- "outputs": []
- },
{
"cell_type": "code",
"execution_count": null,
@@ -84,4 +72,4 @@
},
"nbformat": 4,
"nbformat_minor": 0
}
}

package-lock.json (generated, new file)

@@ -0,0 +1,6 @@
{
"name": "frigate",
"lockfileVersion": 3,
"requires": true,
"packages": {}
}

[New binary image file added, 4.8 KiB, not shown]


@@ -0,0 +1,66 @@
// Notifications Worker

self.addEventListener("push", function (event) {
  // @ts-expect-error we know this exists
  if (event.data) {
    // @ts-expect-error we know this exists
    const data = event.data.json();

    let actions = [];

    switch (data.type ?? "unknown") {
      case "alert":
        actions = [
          {
            action: "markReviewed",
            title: "Mark as Reviewed",
          },
        ];
        break;
    }

    // @ts-expect-error we know this exists
    self.registration.showNotification(data.title, {
      body: data.message,
      icon: "/images/maskable-icon.png",
      image: data.image,
      badge: "/images/maskable-badge.png",
      tag: data.id,
      data: { id: data.id, link: data.direct_url },
      actions,
    });
  } else {
    // pass
    // This push event has no data
  }
});

self.addEventListener("notificationclick", (event) => {
  // @ts-expect-error we know this exists
  if (event.notification) {
    // @ts-expect-error we know this exists
    event.notification.close();

    switch (event.action ?? "default") {
      case "markReviewed":
        if (event.notification.data) {
          fetch("/api/reviews/viewed", {
            method: "POST",
            headers: { "Content-Type": "application/json", "X-CSRF-TOKEN": 1 },
            body: JSON.stringify({ ids: [event.notification.data.id] }),
          });
        }
        break;
      default:
        // @ts-expect-error we know this exists
        if (event.notification.data) {
          const url = event.notification.data.link;
          // eslint-disable-next-line no-undef
          if (clients.openWindow) {
            // eslint-disable-next-line no-undef
            return clients.openWindow(url);
          }
        }
    }
  }
});
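
For reference, the push handler above only reads a handful of JSON fields (type, id, title, message, image, direct_url). A sketch of the payload shape the server side would therefore need to send; the values are invented and the actual delivery code is not shown in this diff:

import json

payload = json.dumps(
    {
        "type": "alert",  # "alert" is what enables the "Mark as Reviewed" action button
        "id": "1693363200.123456-abc123",  # used as the notification tag and review id
        "title": "Person detected",
        "message": "Person detected on the driveway camera",
        "image": "/api/review/1693363200.123456-abc123/preview.webp",  # hypothetical URL
        "direct_url": "/review?id=1693363200.123456-abc123",
    }
)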


@@ -20,6 +20,12 @@
"sizes": "180x180",
"type": "image/png",
"purpose": "maskable"
},
{
"src": "/images/maskable-badge.png",
"sizes": "96x96",
"type": "image/png",
"purpose": "maskable"
}
],
"theme_color": "#ffffff",

Some files were not shown because too many files have changed in this diff.