Compare commits


68 Commits

Author SHA1 Message Date
Josh Hawkins
b0c4c77cfd update docs 2024-10-21 09:36:16 -05:00
Josh Hawkins
059475e6bb add try/except around ollama initialization 2024-10-21 09:31:34 -05:00
Josh Hawkins
8002e59031 disable mem arena in options for cpu only 2024-10-21 09:31:11 -05:00
Josh Hawkins
6c70e56059 Misc bugfixes and improvements (#14460)
* only save a fixed number of thumbnails if genai is enabled

* disable cpu_mem_arena to save on memory until it's actually needed

* fix search settings pane so it actually saves to the config
2024-10-20 14:14:51 -06:00
Josh Hawkins
b24d292ade Improve Explore SQL query memory usage (#14451)
* Remove sql window function in explore endpoint

* don't revalidate first page on every fetch
2024-10-19 22:12:54 -06:00
Nicolas Mowen
2137de37b9 Fix snapshot call (#14448) 2024-10-19 14:11:49 -05:00
Josh Hawkins
3c591ad8a9 Explore snapshot and clip filter (#14439)
* backend

* add ToggleButton component

* boolean type

* frontend

* allow setting filter in input

* better padding on dual slider

* use shadcn toggle group instead of custom component
2024-10-18 16:16:43 -05:00
Josh Hawkins
b56f4c4558 Semantic search docs update (#14438)
* Add minimum requirements to semantic search docs

* clarify
2024-10-18 08:07:29 -06:00
Josh Hawkins
5d8bcb42c6 Fix autotrack to work with new tracked object package (#14414) 2024-10-17 10:21:27 -06:00
Josh Hawkins
b299652e86 Generative AI changes (#14413)
* Update default genai prompt

* Update docs

* improve wording

* clarify wording
2024-10-17 10:15:44 -06:00
Nicolas Mowen
8ac4b001a2 Various fixes (#14410)
* Fix access

* Reorganize tracked object for imports

* Separate out rockchip build

* Formatting

* Use original ffmpeg build

* Fix build

* Update default search type value
2024-10-17 11:02:27 -05:00
Josh Hawkins
6294ce7807 Adjust Explore settings (#14409)
* Re-add search source chip without confidence percentage

* add confidence to tooltip only

* move search type to settings

* padding tweak

* docs update

* docs clarity
2024-10-17 09:21:20 -06:00
Josh Hawkins
8173cd7776 Add score filter to Explore view (#14397)
* backend score filtering and sorting

* score filter frontend

* use input for score filtering

* use correct score on search thumbnail

* add popover to explain top_score

* revert sublabel score calc

* update filters logic

* fix rounding on score

* wait until default view is loaded

* don't turn button to selected style for similarity searches

* clarify language

* fix alert dialog buttons to use correct destructive variant

* use root level top_score for very old events

* better arrangement of thumbnail footer items on smaller screens
2024-10-17 05:30:52 -06:00
Nicolas Mowen
edaccd86d6 Fix build (#14398) 2024-10-16 19:26:47 -05:00
Nicolas Mowen
5f77408956 Update logos handling (#14396)
* Add attribute for logos

* Clean up tracked object to pass model data

* Update default attributes map
2024-10-16 16:22:34 -05:00
Josh Hawkins
e836523bc3 Explore UI changes (#14393)
* Add time ago to explore summary view on desktop

* add search settings for columns and default view selection

* add descriptions

* clarify wording

* padding tweak

* padding tweaks for mobile

* fix size of activity indicator

* smaller
2024-10-16 10:54:01 -06:00
Nicolas Mowen
9f866be110 Remove line in install deps (#14389) 2024-10-16 11:40:31 -05:00
Josh Hawkins
f6879f40b0 Refactor MobilePage to work like shadcn components (#14388)
* Refactor MobilePage to work like shadcn components

* fix bug with search detail dialog not opening
2024-10-16 08:18:06 -06:00
Nicolas Mowen
06f47f262f Use config attribute map instead of hard coded (#14387) 2024-10-16 07:27:36 -06:00
Josh Hawkins
eda52a3b82 Search and search filter UI tweaks (#14381)
* fix search type switches

* select/unselect style for more filters button

* fix reset button

* fix labels scrollbar

* set min width and remove modal to allow scrolling with filters open

* hover colors

* better match of font size

* stop sheet from displaying console errors

* fix detail dialog behavior
2024-10-16 06:15:25 -06:00
Nicolas Mowen
3f1ab66899 Embeddings UI updates (#14378)
* Handle Frigate+ submitted case

* Add search settings and rename general to ui settings

* Add platform aware sheet component

* use two columns on mobile view

* Add cameras page to more filters

* clean up search settings view

* Add time range to side filter

* better match with ui settings

* fix icon size

* use two columns on mobile view

* clean up search settings view

* Add zones and saving logic

* Add all filters to side panel

* better match with ui settings

* fix icon size

* Fix mobile filter page

* Fix embeddings access

* Cleanup

* Fix scroll

* fix double scrollbars and add separators on mobile too

* two columns on mobile

* italics for emphasis

---------

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
2024-10-15 19:25:59 -05:00
Nicolas Mowen
b75efcbca2 UI tweaks (#14369)
* Adjust text size

* Make cursor consistent

* Fix lint
2024-10-15 09:37:04 -06:00
Nicolas Mowen
25043278ab Always run embedding descs one by one (#14365) 2024-10-15 07:40:45 -06:00
Josh Hawkins
644069fb23 Explore layout changes (#14348)
* Reset selected index on new searches

* Remove right click for similarity search

* Fix sub label icon

* add card footer

* Add Frigate+ dialog

* Move buttons and menu to thumbnail footer

* Add similarity search

* Show object score

* Implement download buttons

* remove confidence score

* conditionally show submenu items

* Implement delete

* fix icon color

* Add object lifecycle button

* fix score

* delete confirmation

* small tweaks

* consistent icons

---------

Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
2024-10-15 07:24:47 -06:00
Nicolas Mowen
0eccb6a610 Db fixes (#14364)
* Handle case where embeddings overflow token limit

* Set notification tokens

* Fix sort
2024-10-15 07:17:54 -06:00
Josh Hawkins
0abd514064 Use direct download link instead of blob method (#14347) 2024-10-14 17:53:25 -06:00
Nicolas Mowen
3879fde06d Don't allow unlimited unprocessed segments to stay in cache (#14341)
* Don't allow unlimited unprocessed frames to stay in cache

* Formatting
2024-10-14 16:11:43 -06:00
Nicolas Mowen
887433fc6a Streaming download (#14346)
* Send downloaded mp4 as a streaming response instead of a file

* Add download button to UI

* Formatting

* Fix CSS and text

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>

* download video button component

* use download button component in review detail dialog

* better filename

---------

Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
2024-10-14 15:23:02 -06:00
Josh Hawkins
dd7a07bd0d Add ability to rename camera groups (#14339)
* Add ability to rename camera groups

* clean up

* ampersand consistency
2024-10-14 10:27:50 -05:00
Josh Hawkins
0ee32cf110 Fix yaml bug and ensure embeddings progress doesn't show until all models are loaded (#14338) 2024-10-14 08:23:08 -06:00
Josh Hawkins
72aa68cedc Fix genai labels (#14330)
* Publish model state and embeddings reindex in dispatcher onConnect

* remove unneeded from explore

* add embeddings reindex progress to statusbar

* don't allow right click or show similar button if semantic search is disabled

* fix status bar

* Convert peewee model to dict before formatting for genai description

* add embeddings reindex progress to statusbar

* fix status bar

* Convert peewee model to dict before formatting for genai description
2024-10-14 06:23:10 -06:00
Nicolas Mowen
9adffa1ef5 Detection adjustments (#14329) 2024-10-13 21:34:51 -05:00
Josh Hawkins
4ca267ea17 Search UI tweaks and bugfixes (#14328)
* Publish model state and embeddings reindex in dispatcher onConnect

* remove unneeded from explore

* add embeddings reindex progress to statusbar

* don't allow right click or show similar button if semantic search is disabled

* fix status bar
2024-10-13 19:36:49 -06:00
Josh Hawkins
833768172d UI tweaks (#14326)
* small tweaks for frigate+ submission and debug object list

* exclude attributes from labels colormap
2024-10-13 15:48:54 -06:00
Josh Hawkins
1ec459ea3a Batch embeddings fixes (#14325)
* fixes

* more readable loops

* more robust key check and warning message

* ensure we get reindex progress on mount

* use correct var for length
2024-10-13 15:25:13 -06:00
Josh Hawkins
66d0ad5803 See a preview when using the timeline to export footage (#14321)
* custom hook and generic video player component

* add export preview dialog

* export preview dialog when using timeline export

* refactor search detail dialog to use new generic video player component

* clean up
2024-10-13 12:46:40 -05:00
Josh Hawkins
92ac025e43 Don't show submit to frigate plus card if plus is disabled (#14319) 2024-10-13 11:34:39 -06:00
Nicolas Mowen
e8b2fde753 Support batch embeddings when reindexing (#14320)
* Refactor onnx embeddings to handle multiple inputs by default

* Process items in batches when reindexing
2024-10-13 12:33:27 -05:00
Josh Hawkins
0fc7999780 Improve reindex completion flag (#14308) 2024-10-12 14:44:01 -05:00
Nicolas Mowen
3a403392e7 Fixes for model downloading (#14305)
* Use different requestor for downloaders

* Handle case where lock is left over from failed partial download

* close requestor

* Formatting
2024-10-12 13:36:10 -05:00
Josh Hawkins
acccc6fd93 Only revalidate if event update is valid (#14302) 2024-10-12 08:32:11 -06:00
Nicolas Mowen
40bb4765d4 Add support for more icons (#14299) 2024-10-12 08:37:22 -05:00
Josh Hawkins
48c60621b6 Fix substitution on genai prompts (#14298) 2024-10-12 06:19:24 -06:00
Josh Hawkins
1e1610671e Add info icons for popovers in debug view (#14296) 2024-10-12 06:12:02 -06:00
Josh Hawkins
de86c37687 Prevent single letter words from matching filter suggestions (#14297) 2024-10-12 06:11:22 -06:00
Nicolas Mowen
6e332bbdf8 Remove device config and use model size to configure device used (#14290)
* Remove device config and use model size to configure device used

* Don't show Frigate+ submission when in progress

* Add docs link for bounding box colors
2024-10-11 17:08:14 -05:00
Josh Hawkins
8a8a0c7dec Embeddings normalization fixes (#14284)
* Use cosine distance metric for vec tables

* Only apply normalization to multi modal searches

* Catch possible edge case in stddev calc

* Use sigmoid function for normalization for multi modal searches only

* Ensure we get model state on initial page load

* Only save stats for multi modal searches and only use cosine similarity for image -> image search
2024-10-11 13:11:11 -05:00
Nicolas Mowen
d4b9b5a7dd Reduce onnx memory usage (#14285) 2024-10-11 13:03:47 -05:00
Nicolas Mowen
6df541e1fd Openvino models (#14283)
* Enable model conversion cache for openvino

* Use openvino directly for onnx embeddings if available

* Don't fail if zmq is busy
2024-10-11 10:47:23 -06:00
Josh Hawkins
748087483c Use number keys on keyboard to move ptz camera to presets (#14278)
* Use number keys on keyboard to move ptz camera to presets

* clean up
2024-10-11 07:05:28 -06:00
Josh Hawkins
ae91fa6a39 Add time remaining to embedding reindex pane (#14279)
* Add function to convert seconds to human readable duration

* Add estimated time remaining to reindexing pane
2024-10-11 07:04:25 -06:00
Josh Hawkins
2897afce41 Reset saved search stats on reindex (#14280) 2024-10-11 06:59:29 -06:00
Josh Hawkins
ee8091ba91 Correctly handle camera command in dispatcher (#14273) 2024-10-10 18:48:56 -06:00
Josh Hawkins
30b5faebae chunk is already a list (#14272) 2024-10-10 17:53:11 -06:00
Josh Hawkins
8d753f821d Allow empty description for tracked objects (#14271)
* Allow tracked object description to be saved as an empty string

* ensure event_ids is passed as list
2024-10-10 18:12:05 -05:00
Josh Hawkins
54eb03d2a1 Add config option to select fp16 or quantized jina vision model (#14270)
* Add config option to select fp16 or quantized jina vision model

* requires_fp16 for text and large models only

* fix model type check

* fix cpu

* pass model size
2024-10-10 16:46:21 -06:00
Nicolas Mowen
dd6276e706 Embeddings fixes (#14269)
* Add debugging logs for more info

* Improve timeout handling

* Fix event cleanup

* Handle zmq error and empty data

* Don't run download

* Remove unneeded embeddings creations

* Update timeouts

* Init models immediately

* Fix order of init

* Cleanup
2024-10-10 16:37:43 -05:00
Josh Hawkins
f67ec241d4 Add embeddings reindex progress to the UI (#14268)
* refactor dispatcher

* add reindex to dictionary

* add circular progress bar component

* Add progress to UI when embeddings are reindexing

* readd comments to dispatcher for clarity

* Only report progress every 10 events so we don't spam the logs and websocket

* clean up
2024-10-10 13:28:43 -06:00
Nicolas Mowen
8ade85edec Restructure embeddings (#14266)
* Restructure embeddings

* Use ZMQ to proxy embeddings requests

* Handle serialization

* Formatting

* Remove unused
2024-10-10 09:42:24 -06:00
Nicolas Mowen
a2ca18a714 Bug fixes (#14263)
* Simplify loitering logic

* Fix divide by zero

* Add device config for semantic search

* Add docs
2024-10-10 07:09:12 -06:00
Josh Hawkins
6a83ff2511 Fix config editor error pane (#14264) 2024-10-10 07:09:03 -06:00
Nicolas Mowen
bc3a06178b Embedding gpu (#14253) 2024-10-09 19:46:31 -06:00
Josh Hawkins
9fda259c0c Ensure genai prompt is properly formatted (#14256) 2024-10-09 19:19:40 -06:00
Josh Hawkins
d4925622f9 Use JinaAI models for embeddings (#14252)
* add generic onnx model class and use jina ai clip models for all embeddings

* fix merge conflict

* add generic onnx model class and use jina ai clip models for all embeddings

* fix merge conflict

* preferred providers

* fix paths

* disable download progress bar

* remove logging of path

* drop and recreate tables on reindex

* use cache paths

* fix model name

* use trust remote code per transformers docs

* ensure tokenizer and feature extractor are correctly loaded

* revert

* manually download and cache feature extractor config

* remove unneeded

* remove old clip and minilm code

* docs update
2024-10-09 15:31:54 -06:00
Nicolas Mowen
dbeaf43b8f Fix detector config help template (#14249)
* Fix detector config

* Fix general support
2024-10-09 16:04:31 -05:00
Nicolas Mowen
a2f42d51fd Fix install docs (#14226) 2024-10-08 15:48:54 -05:00
Nicolas Mowen
0b71cfaf06 Handle loitering objects (#14221) 2024-10-08 09:41:54 -05:00
Josh Hawkins
d558ac83b6 Search fixes (#14217)
* Ensure semantic search is enabled before checking model download state

* Only clear similarity search when removing similarity pill
2024-10-08 07:01:31 -06:00
96 changed files with 4775 additions and 2585 deletions

View File

@@ -212,6 +212,7 @@ rcond
RDONLY
rebranded
referer
reindex
Reolink
restream
restreamed

View File

@@ -74,19 +74,6 @@ body:
- CPU (no coral)
validations:
required: true
- type: dropdown
id: object-detector
attributes:
label: Object Detector
options:
- Coral
- OpenVino
- TensorRT
- RKNN
- Other
- CPU (no coral)
validations:
required: true
- type: textarea
id: screenshots
attributes:

View File

@@ -102,19 +102,6 @@ body:
- CPU (no coral)
validations:
required: true
- type: dropdown
id: object-detector
attributes:
label: Object Detector
options:
- Coral
- OpenVino
- TensorRT
- RKNN
- Other
- CPU (no coral)
validations:
required: true
- type: dropdown
id: network
attributes:

View File

@@ -155,6 +155,28 @@ jobs:
tensorrt.tags=${{ steps.setup.outputs.image-name }}-tensorrt
*.cache-from=type=registry,ref=${{ steps.setup.outputs.cache-name }}-amd64
*.cache-to=type=registry,ref=${{ steps.setup.outputs.cache-name }}-amd64,mode=max
arm64_extra_builds:
runs-on: ubuntu-latest
name: ARM Extra Build
needs:
- arm64_build
steps:
- name: Check out code
uses: actions/checkout@v4
- name: Set up QEMU and Buildx
id: setup
uses: ./.github/actions/setup
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push Rockchip build
uses: docker/bake-action@v3
with:
push: true
targets: rk
files: docker/rockchip/rk.hcl
set: |
rk.tags=${{ steps.setup.outputs.image-name }}-rk
*.cache-from=type=gha
combined_extra_builds:
runs-on: ubuntu-latest
name: Combined Extra Builds

View File

@@ -180,9 +180,6 @@ RUN /build_pysqlite3.sh
COPY docker/main/requirements-wheels.txt /requirements-wheels.txt
RUN pip3 wheel --wheel-dir=/wheels -r /requirements-wheels.txt
COPY docker/main/requirements-wheels-post.txt /requirements-wheels-post.txt
RUN pip3 wheel --no-deps --wheel-dir=/wheels-post -r /requirements-wheels-post.txt
# Collect deps in a single layer
FROM scratch AS deps-rootfs
@@ -225,14 +222,6 @@ RUN --mount=type=bind,from=wheels,source=/wheels,target=/deps/wheels \
python3 -m pip install --upgrade pip && \
pip3 install -U /deps/wheels/*.whl
# We have to uninstall this dependency specifically
# as it will break onnxruntime-openvino
RUN pip3 uninstall -y onnxruntime
RUN --mount=type=bind,from=wheels,source=/wheels-post,target=/deps/wheels \
python3 -m pip install --upgrade pip && \
pip3 install -U /deps/wheels/*.whl
COPY --from=deps-rootfs / /
RUN ldconfig

View File

@@ -8,6 +8,7 @@ apt-get -qq install --no-install-recommends -y \
apt-transport-https \
gnupg \
wget \
lbzip2 \
procps vainfo \
unzip locales tzdata libxml2 xz-utils \
python3.9 \
@@ -45,7 +46,7 @@ if [[ "${TARGETARCH}" == "amd64" ]]; then
wget -qO btbn-ffmpeg.tar.xz "https://github.com/NickM-27/FFmpeg-Builds/releases/download/autobuild-2022-07-31-12-37/ffmpeg-n5.1-2-g915ef932a3-linux64-gpl-5.1.tar.xz"
tar -xf btbn-ffmpeg.tar.xz -C /usr/lib/ffmpeg/5.0 --strip-components 1
rm -rf btbn-ffmpeg.tar.xz /usr/lib/ffmpeg/5.0/doc /usr/lib/ffmpeg/5.0/bin/ffplay
wget -qO btbn-ffmpeg.tar.xz "https://github.com/BtbN/FFmpeg-Builds/releases/download/autobuild-2024-09-30-15-36/ffmpeg-n7.1-linux64-gpl-7.1.tar.xz"
wget -qO btbn-ffmpeg.tar.xz "https://github.com/NickM-27/FFmpeg-Builds/releases/download/autobuild-2024-09-19-12-51/ffmpeg-n7.0.2-18-g3e6cec1286-linux64-gpl-7.0.tar.xz"
tar -xf btbn-ffmpeg.tar.xz -C /usr/lib/ffmpeg/7.0 --strip-components 1
rm -rf btbn-ffmpeg.tar.xz /usr/lib/ffmpeg/7.0/doc /usr/lib/ffmpeg/7.0/bin/ffplay
fi
@@ -57,7 +58,7 @@ if [[ "${TARGETARCH}" == "arm64" ]]; then
wget -qO btbn-ffmpeg.tar.xz "https://github.com/NickM-27/FFmpeg-Builds/releases/download/autobuild-2022-07-31-12-37/ffmpeg-n5.1-2-g915ef932a3-linuxarm64-gpl-5.1.tar.xz"
tar -xf btbn-ffmpeg.tar.xz -C /usr/lib/ffmpeg/5.0 --strip-components 1
rm -rf btbn-ffmpeg.tar.xz /usr/lib/ffmpeg/5.0/doc /usr/lib/ffmpeg/5.0/bin/ffplay
wget -qO btbn-ffmpeg.tar.xz "https://github.com/BtbN/FFmpeg-Builds/releases/download/autobuild-2024-09-30-15-36/ffmpeg-n7.1-linuxarm64-gpl-7.1.tar.xz"
wget -qO btbn-ffmpeg.tar.xz "https://github.com/NickM-27/FFmpeg-Builds/releases/download/autobuild-2024-09-19-12-51/ffmpeg-n7.0.2-18-g3e6cec1286-linuxarm64-gpl-7.0.tar.xz"
tar -xf btbn-ffmpeg.tar.xz -C /usr/lib/ffmpeg/7.0 --strip-components 1
rm -rf btbn-ffmpeg.tar.xz /usr/lib/ffmpeg/7.0/doc /usr/lib/ffmpeg/7.0/bin/ffplay
fi
@@ -76,6 +77,9 @@ if [[ "${TARGETARCH}" == "amd64" ]]; then
apt-get -qq install --no-install-recommends --no-install-suggests -y \
i965-va-driver-shaders
# intel packages use zst compression so we need to update dpkg
apt-get install -y dpkg
rm -f /etc/apt/sources.list.d/debian-bookworm.list
# use the intel apt repo for intel packages

View File

@@ -1,3 +0,0 @@
# ONNX
onnxruntime-openvino == 1.19.* ; platform_machine == 'x86_64'
onnxruntime == 1.19.* ; platform_machine == 'aarch64'

View File

@@ -7,7 +7,7 @@ slowapi == 0.1.9
imutils == 0.5.*
joserfc == 1.0.*
pathvalidate == 3.2.*
markupsafe == 3.0.*
markupsafe == 2.1.*
mypy == 1.6.1
numpy == 1.26.*
onvif_zeep == 0.2.12
@@ -30,11 +30,12 @@ norfair == 2.2.*
setproctitle == 1.3.*
ws4py == 0.5.*
unidecode == 1.3.*
# OpenVino (ONNX installed in wheels-post)
# OpenVino & ONNX
openvino == 2024.3.*
onnxruntime-openvino == 1.19.* ; platform_machine == 'x86_64'
onnxruntime == 1.19.* ; platform_machine == 'aarch64'
# Embeddings
transformers == 4.45.*
onnx_clip == 4.0.*
# Generative AI
google-generativeai == 0.8.*
ollama == 0.3.*

View File

@@ -3,7 +3,7 @@ id: genai
title: Generative AI
---
Generative AI can be used to automatically generate descriptions based on the thumbnails of your tracked objects. This helps with [Semantic Search](/configuration/semantic_search) in Frigate by providing detailed text descriptions as a basis of the search query.
Generative AI can be used to automatically generate descriptive text based on the thumbnails of your tracked objects. This helps with [Semantic Search](/configuration/semantic_search) in Frigate to provide more context about your tracked objects.
Semantic Search must be enabled to use Generative AI. Descriptions are accessed via the _Explore_ view in the Frigate UI by clicking on a tracked object's thumbnail.
@@ -29,11 +29,15 @@ cameras:
## Ollama
[Ollama](https://ollama.com/) allows you to self-host large language models and keep everything running locally. It provides a nice API over [llama.cpp](https://github.com/ggerganov/llama.cpp). It is highly recommended to host this server on a machine with an Nvidia graphics card, or on an Apple silicon Mac for best performance. Most of the 7b parameter 4-bit vision models will fit inside 8GB of VRAM. There is also a [docker container](https://hub.docker.com/r/ollama/ollama) available.
[Ollama](https://ollama.com/) allows you to self-host large language models and keep everything running locally. It provides a nice API over [llama.cpp](https://github.com/ggerganov/llama.cpp). It is highly recommended to host this server on a machine with an Nvidia graphics card, or on an Apple silicon Mac for best performance. CPU inference is not recommended.
Most of the 7b parameter 4-bit vision models will fit inside 8GB of VRAM. There is also a [docker container](https://hub.docker.com/r/ollama/ollama) available.
Parallel requests also come with some caveats. See the [Ollama documentation](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-does-ollama-handle-concurrent-requests).
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their model library](https://ollama.com/library). At the time of writing, this includes `llava`, `llava-llama3`, `llava-phi3`, and `moondream`.
You must use a vision capable model with Frigate. Current model variants can be found [in their model library](https://ollama.com/library). At the time of writing, this includes `llava`, `llava-llama3`, `llava-phi3`, and `moondream`. Note that Frigate will not automatically download the model you specify in your config; you must download the model to your local instance of Ollama first.
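Since the model must already be present locally, it can help to pre-pull it before enabling GenAI. A minimal sketch using the `ollama` Python client, assuming a local Ollama server is already running and reachable on its default port (the CLI equivalent is `ollama pull llava`):

```python
# Hedged sketch: pre-pull a vision model so it is available to Frigate.
# The model name is an example; any vision-capable model works.
import ollama

ollama.pull("llava")  # no-op if the model is already downloaded
```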
:::note
@@ -122,12 +126,18 @@ genai:
api_key: "{FRIGATE_OPENAI_API_KEY}"
```
## Usage and Best Practices
Frigate's thumbnail search excels at identifying specific details about tracked objects; for example, using an "image caption" approach to find a "person wearing a yellow vest," "a white dog running across the lawn," or "a red car on a residential street." To enhance this further, Frigate's default prompts are designed to ask your AI provider about the intent behind the object's actions, rather than just describing its appearance.
While generating simple descriptions of detected objects is useful, understanding intent provides a deeper layer of insight. Instead of just recognizing "what" is in a scene, Frigate's default prompts aim to infer "why" it might be there or "what" it could do next. Descriptions tell you what's happening, but intent gives context. For instance, a person walking toward a door might seem like a visitor, but if they're moving quickly after hours, you can infer a potential break-in attempt. Detecting a person loitering near a door at night can trigger an alert sooner than simply noting "a person standing by the door," helping you respond based on the situation's context.
## Custom Prompts
Frigate sends multiple frames from the tracked object along with a prompt to your Generative AI provider asking it to generate a description. The default prompt is as follows:
```
Describe the {label} in the sequence of images with as much detail as possible. Do not describe the background.
Analyze the sequence of images containing the {label}. Focus on the likely intent or behavior of the {label} based on its actions and movement, rather than describing its appearance or the surroundings. Consider what the {label} is doing, why, and what it might do next.
```
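The `{label}` placeholder (and `{camera}` in camera-level prompts) is substituted before the prompt is sent to the provider. A hypothetical illustration of that substitution:

```python
# Illustrative only; the real substitution logic lives in Frigate's genai module.
prompt = (
    "Analyze the sequence of images containing the {label}. "
    "Consider what the {label} is doing, why, and what it might do next."
)
print(prompt.format(label="person"))  # fills in the tracked object's label
```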
:::tip
@@ -144,10 +154,10 @@ genai:
provider: ollama
base_url: http://localhost:11434
model: llava
prompt: "Describe the {label} in these images from the {camera} security camera."
prompt: "Analyze the {label} in these images from the {camera} security camera. Focus on the actions, behavior, and potential intent of the {label}, rather than just describing its appearance."
object_prompts:
person: "Describe the main person in these images (gender, age, clothing, activity, etc). Do not include where the activity is occurring (sidewalk, concrete, driveway, etc)."
car: "Label the primary vehicle in these images with just the name of the company if it is a delivery vehicle, or the color make and model."
person: "Examine the main person in these images. What are they doing and what might their actions suggest about their intent (e.g., approaching a door, leaving an area, standing still)? Do not describe the surroundings or static details."
car: "Observe the primary vehicle in these images. Focus on its movement, direction, or purpose (e.g., parking, approaching, circling). If it's a delivery vehicle, mention the company."
```
Prompts can also be overridden at the camera level to provide a more detailed prompt to the model about your specific camera, if you desire. By default, descriptions will be generated for all tracked objects and all zones. But you can also optionally specify `objects` and `required_zones` to only generate descriptions for certain tracked objects or zones.
@@ -159,10 +169,10 @@ cameras:
front_door:
genai:
use_snapshot: True
prompt: "Describe the {label} in these images from the {camera} security camera at the front door of a house, aimed outward toward the street."
prompt: "Analyze the {label} in these images from the {camera} security camera at the front door. Focus on the actions and potential intent of the {label}."
object_prompts:
person: "Describe the main person in these images (gender, age, clothing, activity, etc). Do not include where the activity is occurring (sidewalk, concrete, driveway, etc). If delivering a package, include the company the package is from."
cat: "Describe the cat in these images (color, size, tail). Indicate whether or not the cat is by the flower pots. If the cat is chasing a mouse, make up a name for the mouse."
person: "Examine the person in these images. What are they doing, and how might their actions suggest their purpose (e.g., delivering something, approaching, leaving)? If they are carrying or interacting with a package, include details about its source or destination."
cat: "Observe the cat in these images. Focus on its movement and intent (e.g., wandering, hunting, interacting with objects). If the cat is near the flower pots or engaging in any specific actions, mention it."
objects:
- person
- cat

View File

@@ -518,6 +518,9 @@ semantic_search:
enabled: False
# Optional: Re-index embeddings database from historical tracked objects (default: shown below)
reindex: False
# Optional: Set the model size used for embeddings. (default: shown below)
# NOTE: small model runs on CPU and large model runs on GPU
model_size: "small"
# Optional: Configuration for AI generated tracked object descriptions
# NOTE: Semantic Search must be enabled for this to do anything.

View File

@@ -5,10 +5,18 @@ title: Using Semantic Search
Semantic Search in Frigate allows you to find tracked objects within your review items using either the image itself, a user-defined text description, or an automatically generated one. This feature works by creating _embeddings_ — numerical vector representations — for both the images and text descriptions of your tracked objects. By comparing these embeddings, Frigate assesses their similarities to deliver relevant search results.
Frigate has support for two models to create embeddings, both of which run locally: [OpenAI CLIP](https://openai.com/research/clip) and [all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). Embeddings are then saved to Frigate's database.
Frigate has support for [Jina AI's CLIP model](https://huggingface.co/jinaai/jina-clip-v1) to create embeddings, which runs locally. Embeddings are then saved to Frigate's database.
Semantic Search is accessed via the _Explore_ view in the Frigate UI.
## Minimum System Requirements
Semantic Search works by running a large AI model locally on your system. Small or underpowered systems like a Raspberry Pi will not run Semantic Search reliably or at all.
A minimum of 8GB of RAM is required to use Semantic Search. A GPU is not strictly required but will provide a significant performance increase over CPU-only systems.
For best performance, 16GB or more of RAM and a dedicated GPU are recommended.
## Configuration
Semantic search is disabled by default, and must be enabled in your config file before it can be used. Semantic Search is a global configuration setting.
@@ -27,18 +35,34 @@ If you are enabling the Search feature for the first time, be advised that Friga
:::
### OpenAI CLIP
### Jina AI CLIP
This model is able to embed both images and text into the same vector space, which allows `image -> image` and `text -> image` similarity searches. Frigate uses this model on tracked objects to encode the thumbnail image and store it in the database. When searching for tracked objects via text in the search box, Frigate will perform a `text -> image` similarity search against this embedding. When clicking "Find Similar" in the tracked object detail pane, Frigate will perform an `image -> image` similarity search to retrieve the closest matching thumbnails.
The vision model is able to embed both images and text into the same vector space, which allows `image -> image` and `text -> image` similarity searches. Frigate uses this model on tracked objects to encode the thumbnail image and store it in the database. When searching for tracked objects via text in the search box, Frigate will perform a `text -> image` similarity search against this embedding. When clicking "Find Similar" in the tracked object detail pane, Frigate will perform an `image -> image` similarity search to retrieve the closest matching thumbnails.
### all-MiniLM-L6-v2
The text model is used to embed tracked object descriptions and perform searches against them. Descriptions can be created, viewed, and modified on the Search page when clicking on the gray tracked object chip at the top left of each review item. See [the Generative AI docs](/configuration/genai.md) for more information on how to automatically generate tracked object descriptions.
This is a sentence embedding model that has been fine tuned on over 1 billion sentence pairs. This model is used to embed tracked object descriptions and perform searches against them. Descriptions can be created, viewed, and modified on the Search page when clicking on the gray tracked object chip at the top left of each review item. See [the Generative AI docs](/configuration/genai.md) for more information on how to automatically generate tracked object descriptions.
Differently weighted CLIP models are available and can be selected by setting the `model_size` config option:
## Usage
:::tip
The CLIP models are downloaded in ONNX format, which means they will be accelerated using GPU hardware when available. This depends on the Docker build that is used. See [the object detector docs](../configuration/object_detectors.md) for more information.
:::
```yaml
semantic_search:
enabled: True
model_size: small
```
- Configuring the `large` model employs the full Jina model and will automatically run on the GPU if applicable.
- Configuring the `small` model employs a quantized version of the model that uses much less RAM and runs faster on CPU with a very negligible difference in embedding quality.
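A hedged illustration of the idea behind the two options: the configured size roughly determines which ONNX execution providers are attempted first. The ordering below is an assumption for illustration, not Frigate's exact selection logic:

```python
def choose_providers(model_size: str) -> list[str]:
    # "large" tries GPU-capable providers first; "small" stays on CPU.
    if model_size == "large":
        return [
            "CUDAExecutionProvider",
            "OpenVINOExecutionProvider",
            "CPUExecutionProvider",
        ]
    return ["CPUExecutionProvider"]

print(choose_providers("small"))  # ['CPUExecutionProvider']
```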
## Usage and Best Practices
1. Semantic search is used in conjunction with the other filters available on the Search page. Use a combination of traditional filtering and semantic search for the best results.
2. The comparison between text and image embedding distances generally means that results matching `description` will appear first, even if a `thumbnail` embedding may be a better match. Play with the "Search Type" filter to help find what you are looking for.
3. Make your search language and tone closely match your descriptions. If you are using thumbnail search, phrase your query as an image caption.
4. Semantic search on thumbnails tends to return better results when matching large subjects that take up most of the frame. Small things like "cat" tend to not work well.
5. Experiment! Find a tracked object you want to test and start typing keywords to see what works for you.
2. Use the thumbnail search type when searching for particular objects in the scene. Use the description search type when attempting to discern the intent of your object.
3. Because of how the AI models Frigate uses have been trained, the comparison between text and image embedding distances generally means that with multi-modal (`thumbnail` and `description`) searches, results matching `description` will appear first, even if a `thumbnail` embedding may be a better match. Play with the "Search Type" setting to help find what you are looking for. Note that if you are generating descriptions for specific objects or zones only, this may cause search results to prioritize the objects with descriptions even if the ones without them are more relevant.
4. Make your search language and tone closely match what you're looking for. If you are using thumbnail search, **phrase your query as an image caption**. Searching for "red car" may not work as well as "red sedan driving down a residential street on a sunny day".
5. Semantic search on thumbnails tends to return better results when matching large subjects that take up most of the frame. Small things like "cat" tend to not work well.
6. Experiment! Find a tracked object you want to test and start typing keywords and phrases to see what works for you.

View File

@@ -250,10 +250,7 @@ The community supported docker image tags for the current stable version are:
- `stable-tensorrt-jp5` - Frigate build optimized for nvidia Jetson devices running Jetpack 5
- `stable-tensorrt-jp4` - Frigate build optimized for nvidia Jetson devices running Jetpack 4.6
- `stable-rk` - Frigate build for SBCs with Rockchip SoC
- `stable-rocm` - Frigate build for [AMD GPUs and iGPUs](../configuration/object_detectors.md#amdrocm-gpu-detector), all drivers
- `stable-rocm-gfx900` - AMD gfx900 driver only
- `stable-rocm-gfx1030` - AMD gfx1030 driver only
- `stable-rocm-gfx1100` - AMD gfx1100 driver only
- `stable-rocm` - Frigate build for [AMD GPUs](../configuration/object_detectors.md#amdrocm-gpu-detector)
- `stable-h8l` - Frigate build for the Hailo-8L M.2 PCIe Raspberry Pi 5 hat
## Home Assistant Addon

View File

@@ -357,6 +357,7 @@ def create_user(request: Request, body: AppPostUsersBody):
{
User.username: body.username,
User.password_hash: password_hash,
User.notification_tokens: [],
}
).execute()
return JSONResponse(content={"username": body.username})

View File

@@ -11,9 +11,7 @@ class EventsSubLabelBody(BaseModel):
class EventsDescriptionBody(BaseModel):
description: Union[str, None] = Field(
title="The description of the event", min_length=1
)
description: Union[str, None] = Field(title="The description of the event")
class EventsCreateBody(BaseModel):

View File

@@ -35,7 +35,7 @@ class EventsQueryParams(BaseModel):
class EventsSearchQueryParams(BaseModel):
query: Optional[str] = None
event_id: Optional[str] = None
search_type: Optional[str] = "thumbnail,description"
search_type: Optional[str] = "thumbnail"
include_thumbnails: Optional[int] = 1
limit: Optional[int] = 50
cameras: Optional[str] = "all"
@@ -44,7 +44,12 @@ class EventsSearchQueryParams(BaseModel):
after: Optional[float] = None
before: Optional[float] = None
time_range: Optional[str] = DEFAULT_TIME_RANGE
has_clip: Optional[bool] = None
has_snapshot: Optional[bool] = None
timezone: Optional[str] = "utc"
min_score: Optional[float] = None
max_score: Optional[float] = None
sort: Optional[str] = None
class EventsSummaryQueryParams(BaseModel):

View File

@@ -259,66 +259,61 @@ def events(params: EventsQueryParams = Depends()):
@router.get("/events/explore")
def events_explore(limit: int = 10):
subquery = Event.select(
Event.id,
Event.camera,
Event.label,
Event.zones,
Event.start_time,
Event.end_time,
Event.has_clip,
Event.has_snapshot,
Event.plus_id,
Event.retain_indefinitely,
Event.sub_label,
Event.top_score,
Event.false_positive,
Event.box,
Event.data,
fn.rank()
.over(partition_by=[Event.label], order_by=[Event.start_time.desc()])
.alias("rank"),
fn.COUNT(Event.id).over(partition_by=[Event.label]).alias("event_count"),
).alias("subquery")
# get distinct labels for all events
distinct_labels = Event.select(Event.label).distinct().order_by(Event.label)
query = (
Event.select(
subquery.c.id,
subquery.c.camera,
subquery.c.label,
subquery.c.zones,
subquery.c.start_time,
subquery.c.end_time,
subquery.c.has_clip,
subquery.c.has_snapshot,
subquery.c.plus_id,
subquery.c.retain_indefinitely,
subquery.c.sub_label,
subquery.c.top_score,
subquery.c.false_positive,
subquery.c.box,
subquery.c.data,
subquery.c.event_count,
)
.from_(subquery)
.where(subquery.c.rank <= limit)
.order_by(subquery.c.event_count.desc(), subquery.c.start_time.desc())
.dicts()
)
label_counts = {}
events = list(query.iterator())
def event_generator():
for label_obj in distinct_labels.iterator():
label = label_obj.label
processed_events = [
{k: v for k, v in event.items() if k != "data"}
| {
"data": {
k: v
for k, v in event["data"].items()
if k in ["type", "score", "top_score", "description"]
# get most recent events for this label
label_events = (
Event.select()
.where(Event.label == label)
.order_by(Event.start_time.desc())
.limit(limit)
.iterator()
)
# count total events for this label
label_counts[label] = Event.select().where(Event.label == label).count()
yield from label_events
def process_events():
for event in event_generator():
processed_event = {
"id": event.id,
"camera": event.camera,
"label": event.label,
"zones": event.zones,
"start_time": event.start_time,
"end_time": event.end_time,
"has_clip": event.has_clip,
"has_snapshot": event.has_snapshot,
"plus_id": event.plus_id,
"retain_indefinitely": event.retain_indefinitely,
"sub_label": event.sub_label,
"top_score": event.top_score,
"false_positive": event.false_positive,
"box": event.box,
"data": {
k: v
for k, v in event.data.items()
if k in ["type", "score", "top_score", "description"]
},
"event_count": label_counts[event.label],
}
}
for event in events
]
yield processed_event
# convert iterator to list and sort
processed_events = sorted(
process_events(),
key=lambda x: (x["event_count"], x["start_time"]),
reverse=True,
)
return JSONResponse(content=processed_events)
@@ -348,6 +343,7 @@ def events_search(request: Request, params: EventsSearchQueryParams = Depends())
search_type = params.search_type
include_thumbnails = params.include_thumbnails
limit = params.limit
sort = params.sort
# Filters
cameras = params.cameras
@@ -355,7 +351,11 @@ def events_search(request: Request, params: EventsSearchQueryParams = Depends())
zones = params.zones
after = params.after
before = params.before
min_score = params.min_score
max_score = params.max_score
time_range = params.time_range
has_clip = params.has_clip
has_snapshot = params.has_snapshot
# for similarity search
event_id = params.event_id
@@ -430,6 +430,20 @@ def events_search(request: Request, params: EventsSearchQueryParams = Depends())
if before:
event_filters.append((Event.start_time < before))
if has_clip is not None:
event_filters.append((Event.has_clip == has_clip))
if has_snapshot is not None:
event_filters.append((Event.has_snapshot == has_snapshot))
if min_score is not None and max_score is not None:
event_filters.append((Event.data["score"].between(min_score, max_score)))
else:
if min_score is not None:
event_filters.append((Event.data["score"] >= min_score))
if max_score is not None:
event_filters.append((Event.data["score"] <= max_score))
if time_range != DEFAULT_TIME_RANGE:
tz_name = params.timezone
hour_modifier, minute_modifier, _ = get_tz_modifiers(tz_name)
@@ -472,13 +486,8 @@ def events_search(request: Request, params: EventsSearchQueryParams = Depends())
status_code=404,
)
thumb_result = context.embeddings.search_thumbnail(search_event)
thumb_ids = dict(
zip(
[result[0] for result in thumb_result],
context.thumb_stats.normalize([result[1] for result in thumb_result]),
)
)
thumb_result = context.search_thumbnail(search_event)
thumb_ids = {result[0]: result[1] for result in thumb_result}
search_results = {
event_id: {"distance": distance, "source": "thumbnail"}
for event_id, distance in thumb_ids.items()
@@ -486,15 +495,18 @@ def events_search(request: Request, params: EventsSearchQueryParams = Depends())
else:
search_types = search_type.split(",")
# only save stats for multi-modal searches
save_stats = "thumbnail" in search_types and "description" in search_types
if "thumbnail" in search_types:
thumb_result = context.embeddings.search_thumbnail(query)
thumb_result = context.search_thumbnail(query)
thumb_distances = context.thumb_stats.normalize(
[result[1] for result in thumb_result], save_stats
)
thumb_ids = dict(
zip(
[result[0] for result in thumb_result],
context.thumb_stats.normalize(
[result[1] for result in thumb_result]
),
)
zip([result[0] for result in thumb_result], thumb_distances)
)
search_results.update(
{
@@ -504,13 +516,14 @@ def events_search(request: Request, params: EventsSearchQueryParams = Depends())
)
if "description" in search_types:
desc_result = context.embeddings.search_description(query)
desc_ids = dict(
zip(
[result[0] for result in desc_result],
context.desc_stats.normalize([result[1] for result in desc_result]),
)
desc_result = context.search_description(query)
desc_distances = context.desc_stats.normalize(
[result[1] for result in desc_result], save_stats
)
desc_ids = dict(zip([result[0] for result in desc_result], desc_distances))
for event_id, distance in desc_ids.items():
if (
event_id not in search_results
@@ -555,11 +568,19 @@ def events_search(request: Request, params: EventsSearchQueryParams = Depends())
processed_events.append(processed_event)
# Sort by search distance if search_results are available, otherwise by start_time
# Sort by search distance if search_results are available, otherwise by start_time as default
if search_results:
processed_events.sort(key=lambda x: x.get("search_distance", float("inf")))
else:
processed_events.sort(key=lambda x: x["start_time"], reverse=True)
if sort == "score_asc":
processed_events.sort(key=lambda x: x["score"])
elif sort == "score_desc":
processed_events.sort(key=lambda x: x["score"], reverse=True)
elif sort == "date_asc":
processed_events.sort(key=lambda x: x["start_time"])
else:
# "date_desc" default
processed_events.sort(key=lambda x: x["start_time"], reverse=True)
# Limit the number of events returned
processed_events = processed_events[:limit]
@@ -927,27 +948,19 @@ def set_description(
new_description = body.description
if new_description is None or len(new_description) == 0:
return JSONResponse(
content=(
{
"success": False,
"message": "description cannot be empty",
}
),
status_code=400,
)
event.data["description"] = new_description
event.save()
# If semantic search is enabled, update the index
if request.app.frigate_config.semantic_search.enabled:
context: EmbeddingsContext = request.app.embeddings
context.embeddings.upsert_description(
event_id=event_id,
description=new_description,
)
if len(new_description) > 0:
context.update_description(
event_id,
new_description,
)
else:
context.db.delete_embeddings_description(event_ids=[event_id])
response_message = (
f"Event {event_id} description is now blank"
@@ -1033,8 +1046,8 @@ def delete_event(request: Request, event_id: str):
# If semantic search is enabled, update the index
if request.app.frigate_config.semantic_search.enabled:
context: EmbeddingsContext = request.app.embeddings
context.embeddings.delete_thumbnail(id=[event_id])
context.embeddings.delete_description(id=[event_id])
context.db.delete_embeddings_thumbnail(event_ids=[event_id])
context.db.delete_embeddings_description(event_ids=[event_id])
return JSONResponse(
content=({"success": True, "message": "Event " + event_id + " deleted"}),
status_code=200,
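For reference, the multi-modal merge in the search endpoint above boils down to keeping, per event, whichever source produced the smaller normalized distance. A standalone sketch with made-up distances (the real values come from the embeddings search and stats normalization):

```python
# Sketch of merging thumbnail and description results: an event keeps
# the source with the smaller (better) normalized distance.
thumb_ids = {"evt1": 0.42, "evt2": 0.55}  # made-up distances
desc_ids = {"evt2": 0.38, "evt3": 0.61}

search_results = {
    event_id: {"distance": distance, "source": "thumbnail"}
    for event_id, distance in thumb_ids.items()
}
for event_id, distance in desc_ids.items():
    if (
        event_id not in search_results
        or distance < search_results[event_id]["distance"]
    ):
        search_results[event_id] = {"distance": distance, "source": "description"}

print(search_results)  # evt2 resolves to the closer description match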

View File

@@ -7,6 +7,7 @@ import os
import subprocess as sp
import time
from datetime import datetime, timedelta, timezone
from pathlib import Path as FilePath
from urllib.parse import unquote
import cv2
@@ -450,8 +451,27 @@ def recording_clip(
camera_name: str,
start_ts: float,
end_ts: float,
download: bool = False,
):
def run_download(ffmpeg_cmd: list[str], file_path: str):
with sp.Popen(
ffmpeg_cmd,
stderr=sp.PIPE,
stdout=sp.PIPE,
text=False,
) as ffmpeg:
while True:
data = ffmpeg.stdout.read(1024)
if data is not None:
yield data
else:
if ffmpeg.returncode and ffmpeg.returncode != 0:
logger.error(
f"Failed to generate clip, ffmpeg logs: {ffmpeg.stderr.read()}"
)
else:
FilePath(file_path).unlink(missing_ok=True)
break
recordings = (
Recordings.select(
Recordings.path,
@@ -467,18 +487,18 @@ def recording_clip(
.order_by(Recordings.start_time.asc())
)
playlist_lines = []
clip: Recordings
for clip in recordings:
playlist_lines.append(f"file '{clip.path}'")
# if this is the starting clip, add an inpoint
if clip.start_time < start_ts:
playlist_lines.append(f"inpoint {int(start_ts - clip.start_time)}")
# if this is the ending clip, add an outpoint
if clip.end_time > end_ts:
playlist_lines.append(f"outpoint {int(end_ts - clip.start_time)}")
file_name = sanitize_filename(f"clip_{camera_name}_{start_ts}-{end_ts}.mp4")
file_name = sanitize_filename(f"playlist_{camera_name}_{start_ts}-{end_ts}.txt")
file_path = f"/tmp/cache/{file_name}"
with open(file_path, "w") as file:
clip: Recordings
for clip in recordings:
file.write(f"file '{clip.path}'\n")
# if this is the starting clip, add an inpoint
if clip.start_time < start_ts:
file.write(f"inpoint {int(start_ts - clip.start_time)}\n")
# if this is the ending clip, add an outpoint
if clip.end_time > end_ts:
file.write(f"outpoint {int(end_ts - clip.start_time)}\n")
if len(file_name) > 1000:
return JSONResponse(
@@ -489,67 +509,32 @@ def recording_clip(
status_code=403,
)
path = os.path.join(CLIPS_DIR, f"cache/{file_name}")
config: FrigateConfig = request.app.frigate_config
if not os.path.exists(path):
ffmpeg_cmd = [
config.ffmpeg.ffmpeg_path,
"-hide_banner",
"-y",
"-protocol_whitelist",
"pipe,file",
"-f",
"concat",
"-safe",
"0",
"-i",
"/dev/stdin",
"-c",
"copy",
"-movflags",
"+faststart",
path,
]
p = sp.run(
ffmpeg_cmd,
input="\n".join(playlist_lines),
encoding="ascii",
capture_output=True,
)
ffmpeg_cmd = [
config.ffmpeg.ffmpeg_path,
"-hide_banner",
"-y",
"-protocol_whitelist",
"pipe,file",
"-f",
"concat",
"-safe",
"0",
"-i",
file_path,
"-c",
"copy",
"-movflags",
"frag_keyframe+empty_moov",
"-f",
"mp4",
"pipe:",
]
if p.returncode != 0:
logger.error(p.stderr)
return JSONResponse(
content={
"success": False,
"message": "Could not create clip from recordings",
},
status_code=500,
)
else:
logger.debug(
f"Ignoring subsequent request for {path} as it already exists in the cache."
)
headers = {
"Content-Description": "File Transfer",
"Cache-Control": "no-cache",
"Content-Type": "video/mp4",
"Content-Length": str(os.path.getsize(path)),
# nginx: https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_ignore_headers
"X-Accel-Redirect": f"/clips/cache/{file_name}",
}
if download:
headers["Content-Disposition"] = "attachment; filename=%s" % file_name
return FileResponse(
path,
return StreamingResponse(
run_download(ffmpeg_cmd, file_path),
media_type="video/mp4",
filename=file_name,
headers=headers,
)
@@ -1028,7 +1013,7 @@ def event_snapshot_clean(request: Request, event_id: str, download: bool = False
@router.get("/events/{event_id}/clip.mp4")
def event_clip(request: Request, event_id: str, download: bool = False):
def event_clip(request: Request, event_id: str):
try:
event: Event = Event.get(Event.id == event_id)
except DoesNotExist:
@@ -1048,7 +1033,7 @@ def event_clip(request: Request, event_id: str, download: bool = False):
end_ts = (
datetime.now().timestamp() if event.end_time is None else event.end_time
)
return recording_clip(request, event.camera, event.start_time, end_ts, download)
return recording_clip(request, event.camera, event.start_time, end_ts)
headers = {
"Content-Description": "File Transfer",
@@ -1059,9 +1044,6 @@ def event_clip(request: Request, event_id: str, download: bool = False):
"X-Accel-Redirect": f"/clips/{file_name}",
}
if download:
headers["Content-Disposition"] = "attachment; filename=%s" % file_name
return FileResponse(
clip_path,
media_type="video/mp4",
@@ -1546,11 +1528,11 @@ def label_snapshot(request: Request, camera_name: str, label: str):
)
try:
event = event_query.get()
return event_snapshot(request, event.id)
event: Event = event_query.get()
return event_snapshot(request, event.id, MediaEventsSnapshotQueryParams())
except DoesNotExist:
frame = np.zeros((720, 1280, 3), np.uint8)
ret, jpg = cv2.imencode(".jpg", frame, [int(cv2.IMWRITE_JPEG_QUALITY), 70])
_, jpg = cv2.imencode(".jpg", frame, [int(cv2.IMWRITE_JPEG_QUALITY), 70])
return Response(
jpg.tobytes,

View File

@@ -581,12 +581,12 @@ class FrigateApp:
self.init_recording_manager()
self.init_review_segment_manager()
self.init_go2rtc()
self.start_detectors()
self.init_embeddings_manager()
self.bind_database()
self.check_db_data_migrations()
self.init_inter_process_communicator()
self.init_dispatcher()
self.start_detectors()
self.init_embeddings_manager()
self.init_embeddings_client()
self.start_video_output_processor()
self.start_ptz_autotracker()
@@ -699,7 +699,7 @@ class FrigateApp:
# Save embeddings stats to disk
if self.embeddings:
self.embeddings.save_stats()
self.embeddings.stop()
# Stop Communicators
self.inter_process_communicator.stop()

View File

@@ -15,6 +15,7 @@ from frigate.const import (
INSERT_PREVIEW,
REQUEST_REGION_GRID,
UPDATE_CAMERA_ACTIVITY,
UPDATE_EMBEDDINGS_REINDEX_PROGRESS,
UPDATE_EVENT_DESCRIPTION,
UPDATE_MODEL_STATE,
UPSERT_REVIEW_SEGMENT,
@@ -63,6 +64,9 @@ class Dispatcher:
self.onvif = onvif
self.ptz_metrics = ptz_metrics
self.comms = communicators
self.camera_activity = {}
self.model_state = {}
self.embeddings_reindex = {}
self._camera_settings_handlers: dict[str, Callable] = {
"audio": self._on_audio_command,
@@ -84,37 +88,25 @@ class Dispatcher:
for comm in self.comms:
comm.subscribe(self._receive)
self.camera_activity = {}
self.model_state = {}
def _receive(self, topic: str, payload: str) -> Optional[Any]:
"""Handle receiving of payload from communicators."""
if topic.endswith("set"):
def handle_camera_command(command_type, camera_name, command, payload):
try:
# example /cam_name/detect/set payload=ON|OFF
if topic.count("/") == 2:
camera_name = topic.split("/")[-3]
command = topic.split("/")[-2]
if command_type == "set":
self._camera_settings_handlers[command](camera_name, payload)
elif topic.count("/") == 1:
command = topic.split("/")[-2]
self._global_settings_handlers[command](payload)
except IndexError:
logger.error(f"Received invalid set command: {topic}")
return
elif topic.endswith("ptz"):
try:
# example /cam_name/ptz payload=MOVE_UP|MOVE_DOWN|STOP...
camera_name = topic.split("/")[-2]
self._on_ptz_command(camera_name, payload)
except IndexError:
logger.error(f"Received invalid ptz command: {topic}")
return
elif topic == "restart":
elif command_type == "ptz":
self._on_ptz_command(camera_name, payload)
except KeyError:
logger.error(f"Invalid command type or handler: {command_type}")
def handle_restart():
restart_frigate()
elif topic == INSERT_MANY_RECORDINGS:
def handle_insert_many_recordings():
Recordings.insert_many(payload).execute()
elif topic == REQUEST_REGION_GRID:
def handle_request_region_grid():
camera = payload
grid = get_camera_regions_grid(
camera,
@@ -122,24 +114,25 @@ class Dispatcher:
max(self.config.model.width, self.config.model.height),
)
return grid
elif topic == INSERT_PREVIEW:
def handle_insert_preview():
Previews.insert(payload).execute()
elif topic == UPSERT_REVIEW_SEGMENT:
(
ReviewSegment.insert(payload)
.on_conflict(
conflict_target=[ReviewSegment.id],
update=payload,
)
.execute()
)
elif topic == CLEAR_ONGOING_REVIEW_SEGMENTS:
ReviewSegment.update(end_time=datetime.datetime.now().timestamp()).where(
ReviewSegment.end_time == None
def handle_upsert_review_segment():
ReviewSegment.insert(payload).on_conflict(
conflict_target=[ReviewSegment.id],
update=payload,
).execute()
elif topic == UPDATE_CAMERA_ACTIVITY:
def handle_clear_ongoing_review_segments():
ReviewSegment.update(end_time=datetime.datetime.now().timestamp()).where(
ReviewSegment.end_time.is_null(True)
).execute()
def handle_update_camera_activity():
self.camera_activity = payload
elif topic == UPDATE_EVENT_DESCRIPTION:
def handle_update_event_description():
event: Event = Event.get(Event.id == payload["id"])
event.data["description"] = payload["description"]
event.save()
@@ -147,15 +140,31 @@ class Dispatcher:
"event_update",
json.dumps({"id": event.id, "description": event.data["description"]}),
)
elif topic == UPDATE_MODEL_STATE:
model = payload["model"]
state = payload["state"]
self.model_state[model] = ModelStatusTypesEnum[state]
self.publish("model_state", json.dumps(self.model_state))
elif topic == "modelState":
model_state = self.model_state.copy()
self.publish("model_state", json.dumps(model_state))
elif topic == "onConnect":
def handle_update_model_state():
if payload:
model = payload["model"]
state = payload["state"]
self.model_state[model] = ModelStatusTypesEnum[state]
self.publish("model_state", json.dumps(self.model_state))
def handle_model_state():
self.publish("model_state", json.dumps(self.model_state.copy()))
def handle_update_embeddings_reindex_progress():
self.embeddings_reindex = payload
self.publish(
"embeddings_reindex_progress",
json.dumps(payload),
)
def handle_embeddings_reindex_progress():
self.publish(
"embeddings_reindex_progress",
json.dumps(self.embeddings_reindex.copy()),
)
def handle_on_connect():
camera_status = self.camera_activity.copy()
for camera in camera_status.keys():
@@ -170,6 +179,51 @@ class Dispatcher:
}
self.publish("camera_activity", json.dumps(camera_status))
self.publish("model_state", json.dumps(self.model_state.copy()))
self.publish(
"embeddings_reindex_progress",
json.dumps(self.embeddings_reindex.copy()),
)
# Dictionary mapping topic to handlers
topic_handlers = {
INSERT_MANY_RECORDINGS: handle_insert_many_recordings,
REQUEST_REGION_GRID: handle_request_region_grid,
INSERT_PREVIEW: handle_insert_preview,
UPSERT_REVIEW_SEGMENT: handle_upsert_review_segment,
CLEAR_ONGOING_REVIEW_SEGMENTS: handle_clear_ongoing_review_segments,
UPDATE_CAMERA_ACTIVITY: handle_update_camera_activity,
UPDATE_EVENT_DESCRIPTION: handle_update_event_description,
UPDATE_MODEL_STATE: handle_update_model_state,
UPDATE_EMBEDDINGS_REINDEX_PROGRESS: handle_update_embeddings_reindex_progress,
"restart": handle_restart,
"embeddingsReindexProgress": handle_embeddings_reindex_progress,
"modelState": handle_model_state,
"onConnect": handle_on_connect,
}
if topic.endswith("set") or topic.endswith("ptz"):
try:
parts = topic.split("/")
if len(parts) == 3 and topic.endswith("set"):
# example /cam_name/detect/set payload=ON|OFF
camera_name = parts[-3]
command = parts[-2]
handle_camera_command("set", camera_name, command, payload)
elif len(parts) == 2 and topic.endswith("set"):
command = parts[-2]
self._global_settings_handlers[command](payload)
elif len(parts) == 2 and topic.endswith("ptz"):
# example /cam_name/ptz payload=MOVE_UP|MOVE_DOWN|STOP...
camera_name = parts[-2]
handle_camera_command("ptz", camera_name, "", payload)
except IndexError:
logger.error(
f"Received invalid {topic.split('/')[-1]} command: {topic}"
)
return
elif topic in topic_handlers:
return topic_handlers[topic]()
else:
self.publish(topic, payload, retain=False)
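The refactor above replaces a long `elif` chain with a dispatch table: each topic maps to a closure over the dispatcher's state. A minimal standalone sketch of the pattern, with hypothetical topic names:

```python
# Minimal sketch of the dispatch-table pattern used in the refactor.
def make_dispatcher():
    model_state = {}

    def handle_update_model_state(payload):
        model_state.update(payload)

    def handle_model_state(_payload):
        return dict(model_state)

    handlers = {
        "updateModelState": handle_update_model_state,
        "modelState": handle_model_state,
    }

    def dispatch(topic, payload=None):
        if topic in handlers:
            return handlers[topic](payload)
        return None  # unknown topics fall through

    return dispatch

dispatch = make_dispatcher()
dispatch("updateModelState", {"jina-clip-v1": "downloaded"})
print(dispatch("modelState"))  # {'jina-clip-v1': 'downloaded'}
```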

View File

@@ -0,0 +1,65 @@
"""Facilitates communication between processes."""
from enum import Enum
from typing import Callable
import zmq
SOCKET_REP_REQ = "ipc:///tmp/cache/embeddings"
class EmbeddingsRequestEnum(Enum):
embed_description = "embed_description"
embed_thumbnail = "embed_thumbnail"
generate_search = "generate_search"
class EmbeddingsResponder:
def __init__(self) -> None:
self.context = zmq.Context()
self.socket = self.context.socket(zmq.REP)
self.socket.bind(SOCKET_REP_REQ)
def check_for_request(self, process: Callable) -> None:
while True: # load all messages that are queued
has_message, _, _ = zmq.select([self.socket], [], [], 0.1)
if not has_message:
break
try:
(topic, value) = self.socket.recv_json(flags=zmq.NOBLOCK)
response = process(topic, value)
if response is not None:
self.socket.send_json(response)
else:
self.socket.send_json([])
except zmq.ZMQError:
break
def stop(self) -> None:
self.socket.close()
self.context.destroy()
class EmbeddingsRequestor:
"""Simplifies sending data to EmbeddingsResponder and getting a reply."""
def __init__(self) -> None:
self.context = zmq.Context()
self.socket = self.context.socket(zmq.REQ)
self.socket.connect(SOCKET_REP_REQ)
def send_data(self, topic: str, data: any) -> str:
"""Sends data and then waits for reply."""
try:
self.socket.send_json((topic, data))
return self.socket.recv_json()
except zmq.ZMQError:
return ""
def stop(self) -> None:
self.socket.close()
self.context.destroy()
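
The REQ/REP pair above is a synchronous round trip: the requestor blocks until the responder replies. A self-contained sketch of the same pattern (the endpoint name below is hypothetical; Frigate binds ipc:///tmp/cache/embeddings):

import threading
import zmq

ENDPOINT = "ipc:///tmp/demo_req_rep"  # hypothetical endpoint for illustration

def responder() -> None:
    sock = zmq.Context.instance().socket(zmq.REP)
    sock.bind(ENDPOINT)
    topic, value = sock.recv_json()  # blocks until a request arrives
    sock.send_json({"topic": topic, "chars": len(value)})
    sock.close()

threading.Thread(target=responder, daemon=True).start()

req = zmq.Context.instance().socket(zmq.REQ)
req.connect(ENDPOINT)
req.send_json(("generate_search", "red car in driveway"))
print(req.recv_json())  # {'topic': 'generate_search', 'chars': 19}
req.close()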

View File

@@ -39,7 +39,7 @@ class EventMetadataSubscriber(Subscriber):
super().__init__(topic)
def check_for_update(
self, timeout: float = None
self, timeout: float = 1
) -> Optional[tuple[EventMetadataTypeEnum, str, RegenerateDescriptionEnum]]:
return super().check_for_update(timeout)

View File

@@ -65,8 +65,11 @@ class InterProcessRequestor:
def send_data(self, topic: str, data: any) -> any:
"""Sends data and then waits for reply."""
self.socket.send_json((topic, data))
return self.socket.recv_json()
try:
self.socket.send_json((topic, data))
return self.socket.recv_json()
except zmq.ZMQError:
return ""
def stop(self) -> None:
self.socket.close()

View File

@@ -23,7 +23,7 @@ class GenAICameraConfig(BaseModel):
default=False, title="Use snapshots for generating descriptions."
)
prompt: str = Field(
default="Describe the {label} in the sequence of images with as much detail as possible. Do not describe the background.",
default="Analyze the sequence of images containing the {label}. Focus on the likely intent or behavior of the {label} based on its actions and movement, rather than describing its appearance or the surroundings. Consider what the {label} is doing, why, and what it might do next.",
title="Default caption prompt.",
)
object_prompts: dict[str, str] = Field(
@@ -51,7 +51,7 @@ class GenAICameraConfig(BaseModel):
class GenAIConfig(FrigateBaseModel):
enabled: bool = Field(default=False, title="Enable GenAI.")
prompt: str = Field(
default="Describe the {label} in the sequence of images with as much detail as possible. Do not describe the background.",
default="Analyze the sequence of images containing the {label}. Focus on the likely intent or behavior of the {label} based on its actions and movement, rather than describing its appearance or the surroundings. Consider what the {label} is doing, why, and what it might do next.",
title="Default caption prompt.",
)
object_prompts: dict[str, str] = Field(

View File

@@ -12,3 +12,6 @@ class SemanticSearchConfig(FrigateBaseModel):
reindex: Optional[bool] = Field(
default=False, title="Reindex all detections on startup."
)
model_size: str = Field(
default="small", title="The size of the embeddings model used."
)

View File

@@ -17,7 +17,21 @@ PLUS_API_HOST = "https://api.frigate.video"
DEFAULT_ATTRIBUTE_LABEL_MAP = {
"person": ["amazon", "face"],
"car": ["amazon", "fedex", "license_plate", "ups"],
"car": [
"amazon",
"an_post",
"dhl",
"dpd",
"fedex",
"gls",
"license_plate",
"nzpost",
"postnl",
"postnord",
"purolator",
"ups",
"usps",
],
}
LABEL_CONSOLIDATION_MAP = {
"car": 0.8,
@@ -85,6 +99,7 @@ CLEAR_ONGOING_REVIEW_SEGMENTS = "clear_ongoing_review_segments"
UPDATE_CAMERA_ACTIVITY = "update_camera_activity"
UPDATE_EVENT_DESCRIPTION = "update_event_description"
UPDATE_MODEL_STATE = "update_model_state"
UPDATE_EMBEDDINGS_REINDEX_PROGRESS = "handle_embeddings_reindex_progress"
# Stats Values

View File

@@ -20,3 +20,34 @@ class SqliteVecQueueDatabase(SqliteQueueDatabase):
conn.enable_load_extension(True)
conn.load_extension(self.sqlite_vec_path)
conn.enable_load_extension(False)
def delete_embeddings_thumbnail(self, event_ids: list[str]) -> None:
ids = ",".join(["?" for _ in event_ids])
self.execute_sql(f"DELETE FROM vec_thumbnails WHERE id IN ({ids})", event_ids)
def delete_embeddings_description(self, event_ids: list[str]) -> None:
ids = ",".join(["?" for _ in event_ids])
self.execute_sql(f"DELETE FROM vec_descriptions WHERE id IN ({ids})", event_ids)
def drop_embeddings_tables(self) -> None:
self.execute_sql("""
DROP TABLE vec_descriptions;
""")
self.execute_sql("""
DROP TABLE vec_thumbnails;
""")
def create_embeddings_tables(self) -> None:
"""Create vec0 virtual table for embeddings"""
self.execute_sql("""
CREATE VIRTUAL TABLE IF NOT EXISTS vec_thumbnails USING vec0(
id TEXT PRIMARY KEY,
thumbnail_embedding FLOAT[768] distance_metric=cosine
);
""")
self.execute_sql("""
CREATE VIRTUAL TABLE IF NOT EXISTS vec_descriptions USING vec0(
id TEXT PRIMARY KEY,
description_embedding FLOAT[768] distance_metric=cosine
);
""")

View File

@@ -59,6 +59,7 @@ class ModelConfig(BaseModel):
_merged_labelmap: Optional[Dict[int, str]] = PrivateAttr()
_colormap: Dict[int, Tuple[int, int, int]] = PrivateAttr()
_all_attributes: list[str] = PrivateAttr()
_all_attribute_logos: list[str] = PrivateAttr()
_model_hash: str = PrivateAttr()
@property
@@ -73,6 +74,10 @@ class ModelConfig(BaseModel):
def all_attributes(self) -> list[str]:
return self._all_attributes
@property
def all_attribute_logos(self) -> list[str]:
return self._all_attribute_logos
@property
def model_hash(self) -> str:
return self._model_hash
@@ -93,6 +98,9 @@ class ModelConfig(BaseModel):
unique_attributes.update(attributes)
self._all_attributes = list(unique_attributes)
self._all_attribute_logos = list(
unique_attributes - set(["face", "license_plate"])
)
def check_and_load_plus_model(
self, plus_api: PlusApi, detector: str = None
@@ -140,6 +148,9 @@ class ModelConfig(BaseModel):
unique_attributes.update(attributes)
self._all_attributes = list(unique_attributes)
self._all_attribute_logos = list(
unique_attributes - set(["face", "license_plate"])
)
self._merged_labelmap = {
**{int(key): val for key, val in model_info["labelMap"].items()},
@@ -157,10 +168,14 @@ class ModelConfig(BaseModel):
self._model_hash = file_hash.hexdigest()
def create_colormap(self, enabled_labels: set[str]) -> None:
"""Get a list of colors for enabled labels."""
colors = generate_color_palette(len(enabled_labels))
self._colormap = {label: color for label, color in zip(enabled_labels, colors)}
"""Get a list of colors for enabled labels that aren't attributes."""
enabled_trackable_labels = list(
filter(lambda label: label not in self._all_attributes, enabled_labels)
)
colors = generate_color_palette(len(enabled_trackable_labels))
self._colormap = {
label: color for label, color in zip(enabled_trackable_labels, colors)
}
model_config = ConfigDict(extra="forbid", protected_namespaces=())

View File

@@ -3,6 +3,7 @@ import os
import numpy as np
import openvino as ov
import openvino.properties as props
from pydantic import Field
from typing_extensions import Literal
@@ -34,6 +35,8 @@ class OvDetector(DetectionApi):
logger.error(f"OpenVino model file {detector_config.model.path} not found.")
raise FileNotFoundError
os.makedirs("/config/model_cache/openvino", exist_ok=True)
self.ov_core.set_property({props.cache_dir: "/config/model_cache/openvino"})
self.interpreter = self.ov_core.compile_model(
model=detector_config.model.path, device_name=detector_config.device
)

View File

@@ -7,17 +7,18 @@ import os
import signal
import threading
from types import FrameType
from typing import Optional
from typing import Optional, Union
from setproctitle import setproctitle
from frigate.comms.embeddings_updater import EmbeddingsRequestEnum, EmbeddingsRequestor
from frigate.config import FrigateConfig
from frigate.const import CONFIG_DIR
from frigate.db.sqlitevecq import SqliteVecQueueDatabase
from frigate.models import Event
from frigate.util.builtin import serialize
from frigate.util.services import listen
from .embeddings import Embeddings
from .maintainer import EmbeddingMaintainer
from .util import ZScoreNormalization
@@ -55,12 +56,6 @@ def manage_embeddings(config: FrigateConfig) -> None:
models = [Event]
db.bind(models)
embeddings = Embeddings(db)
# Check if we need to re-index events
if config.semantic_search.reindex:
embeddings.reindex()
maintainer = EmbeddingMaintainer(
db,
config,
@@ -71,9 +66,10 @@ def manage_embeddings(config: FrigateConfig) -> None:
class EmbeddingsContext:
def __init__(self, db: SqliteVecQueueDatabase):
self.embeddings = Embeddings(db)
self.db = db
self.thumb_stats = ZScoreNormalization()
self.desc_stats = ZScoreNormalization(scale_factor=3, bias=-2.5)
self.desc_stats = ZScoreNormalization()
self.requestor = EmbeddingsRequestor()
# load stats from disk
try:
@@ -84,7 +80,7 @@ class EmbeddingsContext:
except FileNotFoundError:
pass
def save_stats(self):
def stop(self):
"""Write the stats to disk as JSON on exit."""
contents = {
"thumb_stats": self.thumb_stats.to_dict(),
@@ -92,3 +88,109 @@ class EmbeddingsContext:
}
with open(os.path.join(CONFIG_DIR, ".search_stats.json"), "w") as f:
json.dump(contents, f)
self.requestor.stop()
def search_thumbnail(
self, query: Union[Event, str], event_ids: list[str] = None
) -> list[tuple[str, float]]:
if isinstance(query, Event):
cursor = self.db.execute_sql(
"""
SELECT thumbnail_embedding FROM vec_thumbnails WHERE id = ?
""",
[query.id],
)
row = cursor.fetchone() if cursor else None
if row:
query_embedding = row[0]
else:
# If no embedding found, generate it and return it
data = self.requestor.send_data(
EmbeddingsRequestEnum.embed_thumbnail.value,
{"id": str(query.id), "thumbnail": str(query.thumbnail)},
)
if not data:
return []
query_embedding = serialize(data)
else:
data = self.requestor.send_data(
EmbeddingsRequestEnum.generate_search.value, query
)
if not data:
return []
query_embedding = serialize(data)
sql_query = """
SELECT
id,
distance
FROM vec_thumbnails
WHERE thumbnail_embedding MATCH ?
AND k = 100
"""
# Add the IN clause if event_ids is provided and not empty
# this is the only filter supported by sqlite-vec as of 0.1.3
# but it seems to be broken in this version
if event_ids:
sql_query += " AND id IN ({})".format(",".join("?" * len(event_ids)))
# order by distance DESC is not implemented in this version of sqlite-vec
# when it's implemented, we can use cosine similarity
sql_query += " ORDER BY distance"
parameters = [query_embedding] + event_ids if event_ids else [query_embedding]
results = self.db.execute_sql(sql_query, parameters).fetchall()
return results
def search_description(
self, query_text: str, event_ids: list[str] = None
) -> list[tuple[str, float]]:
data = self.requestor.send_data(
EmbeddingsRequestEnum.generate_search.value, query_text
)
if not data:
return []
query_embedding = serialize(data)
# Prepare the base SQL query
sql_query = """
SELECT
id,
distance
FROM vec_descriptions
WHERE description_embedding MATCH ?
AND k = 100
"""
# Add the IN clause if event_ids is provided and not empty
# this is the only filter supported by sqlite-vec as of 0.1.3
# but it seems to be broken in this version
if event_ids:
sql_query += " AND id IN ({})".format(",".join("?" * len(event_ids)))
# order by distance DESC is not implemented in this version of sqlite-vec
# when it's implemented, we can use cosine similarity
sql_query += " ORDER BY distance"
parameters = [query_embedding] + event_ids if event_ids else [query_embedding]
results = self.db.execute_sql(sql_query, parameters).fetchall()
return results
def update_description(self, event_id: str, description: str) -> None:
self.requestor.send_data(
EmbeddingsRequestEnum.embed_description.value,
{"id": event_id, "description": description},
)

View File

@@ -3,21 +3,26 @@
import base64
import io
import logging
import struct
import os
import time
from typing import List, Tuple, Union
from numpy import ndarray
from PIL import Image
from playhouse.shortcuts import model_to_dict
from frigate.comms.inter_process import InterProcessRequestor
from frigate.const import UPDATE_MODEL_STATE
from frigate.config.semantic_search import SemanticSearchConfig
from frigate.const import (
CONFIG_DIR,
UPDATE_EMBEDDINGS_REINDEX_PROGRESS,
UPDATE_MODEL_STATE,
)
from frigate.db.sqlitevecq import SqliteVecQueueDatabase
from frigate.models import Event
from frigate.types import ModelStatusTypesEnum
from frigate.util.builtin import serialize
from .functions.clip import ClipEmbedding
from .functions.minilm_l6_v2 import MiniLMEmbedding
from .functions.onnx import GenericONNXEmbedding
logger = logging.getLogger(__name__)
@@ -53,31 +58,26 @@ def get_metadata(event: Event) -> dict:
)
def serialize(vector: List[float]) -> bytes:
"""Serializes a list of floats into a compact "raw bytes" format"""
return struct.pack("%sf" % len(vector), *vector)
def deserialize(bytes_data: bytes) -> List[float]:
"""Deserializes a compact "raw bytes" format into a list of floats"""
return list(struct.unpack("%sf" % (len(bytes_data) // 4), bytes_data))
class Embeddings:
"""SQLite-vec embeddings database."""
def __init__(self, db: SqliteVecQueueDatabase) -> None:
def __init__(
self, config: SemanticSearchConfig, db: SqliteVecQueueDatabase
) -> None:
self.config = config
self.db = db
self.requestor = InterProcessRequestor()
# Create tables if they don't exist
self._create_tables()
self.db.create_embeddings_tables()
models = [
"sentence-transformers/all-MiniLM-L6-v2-model.onnx",
"sentence-transformers/all-MiniLM-L6-v2-tokenizer",
"clip-clip_image_model_vitb32.onnx",
"clip-clip_text_model_vitb32.onnx",
"jinaai/jina-clip-v1-text_model_fp16.onnx",
"jinaai/jina-clip-v1-tokenizer",
"jinaai/jina-clip-v1-vision_model_fp16.onnx"
if config.model_size == "large"
else "jinaai/jina-clip-v1-vision_model_quantized.onnx",
"jinaai/jina-clip-v1-preprocessor_config.json",
]
for model in models:
@@ -89,35 +89,44 @@ class Embeddings:
},
)
self.clip_embedding = ClipEmbedding(
preferred_providers=["CPUExecutionProvider"]
)
self.minilm_embedding = MiniLMEmbedding(
preferred_providers=["CPUExecutionProvider"],
self.text_embedding = GenericONNXEmbedding(
model_name="jinaai/jina-clip-v1",
model_file="text_model_fp16.onnx",
tokenizer_file="tokenizer",
download_urls={
"text_model_fp16.onnx": "https://huggingface.co/jinaai/jina-clip-v1/resolve/main/onnx/text_model_fp16.onnx",
},
model_size=config.model_size,
model_type="text",
requestor=self.requestor,
device="CPU",
)
def _create_tables(self):
# Create vec0 virtual table for thumbnail embeddings
self.db.execute_sql("""
CREATE VIRTUAL TABLE IF NOT EXISTS vec_thumbnails USING vec0(
id TEXT PRIMARY KEY,
thumbnail_embedding FLOAT[512]
);
""")
model_file = (
"vision_model_fp16.onnx"
if self.config.model_size == "large"
else "vision_model_quantized.onnx"
)
# Create vec0 virtual table for description embeddings
self.db.execute_sql("""
CREATE VIRTUAL TABLE IF NOT EXISTS vec_descriptions USING vec0(
id TEXT PRIMARY KEY,
description_embedding FLOAT[384]
);
""")
download_urls = {
model_file: f"https://huggingface.co/jinaai/jina-clip-v1/resolve/main/onnx/{model_file}",
"preprocessor_config.json": "https://huggingface.co/jinaai/jina-clip-v1/resolve/main/preprocessor_config.json",
}
def upsert_thumbnail(self, event_id: str, thumbnail: bytes):
self.vision_embedding = GenericONNXEmbedding(
model_name="jinaai/jina-clip-v1",
model_file=model_file,
download_urls=download_urls,
model_size=config.model_size,
model_type="vision",
requestor=self.requestor,
device="GPU" if config.model_size == "large" else "CPU",
)
def upsert_thumbnail(self, event_id: str, thumbnail: bytes) -> ndarray:
# Convert thumbnail bytes to PIL Image
image = Image.open(io.BytesIO(thumbnail)).convert("RGB")
# Generate embedding using CLIP
embedding = self.clip_embedding([image])[0]
embedding = self.vision_embedding([image])[0]
self.db.execute_sql(
"""
@@ -129,10 +138,31 @@ class Embeddings:
return embedding
def upsert_description(self, event_id: str, description: str):
# Generate embedding using MiniLM
embedding = self.minilm_embedding([description])[0]
def batch_upsert_thumbnail(self, event_thumbs: dict[str, bytes]) -> list[ndarray]:
images = [
Image.open(io.BytesIO(thumb)).convert("RGB")
for thumb in event_thumbs.values()
]
ids = list(event_thumbs.keys())
embeddings = self.vision_embedding(images)
items = []
for i in range(len(ids)):
items.append(ids[i])
items.append(serialize(embeddings[i]))
self.db.execute_sql(
"""
INSERT OR REPLACE INTO vec_thumbnails(id, thumbnail_embedding)
VALUES {}
""".format(", ".join(["(?, ?)"] * len(ids))),
items,
)
return embeddings
def upsert_description(self, event_id: str, description: str) -> ndarray:
embedding = self.text_embedding([description])[0]
self.db.execute_sql(
"""
INSERT OR REPLACE INTO vec_descriptions(id, description_embedding)
@@ -143,117 +173,69 @@ class Embeddings:
return embedding
def delete_thumbnail(self, event_ids: List[str]) -> None:
ids = ",".join(["?" for _ in event_ids])
def batch_upsert_description(self, event_descriptions: dict[str, str]) -> list[ndarray]:
# upsert embeddings one by one to avoid token limit
embeddings = []
for desc in event_descriptions.values():
embeddings.append(self.text_embedding([desc])[0])
ids = list(event_descriptions.keys())
items = []
for i in range(len(ids)):
items.append(ids[i])
items.append(serialize(embeddings[i]))
self.db.execute_sql(
f"DELETE FROM vec_thumbnails WHERE id IN ({ids})", event_ids
"""
INSERT OR REPLACE INTO vec_descriptions(id, description_embedding)
VALUES {}
""".format(", ".join(["(?, ?)"] * len(ids))),
items,
)
def delete_description(self, event_ids: List[str]) -> None:
ids = ",".join(["?" for _ in event_ids])
self.db.execute_sql(
f"DELETE FROM vec_descriptions WHERE id IN ({ids})", event_ids
)
def search_thumbnail(
self, query: Union[Event, str], event_ids: List[str] = None
) -> List[Tuple[str, float]]:
if query.__class__ == Event:
cursor = self.db.execute_sql(
"""
SELECT thumbnail_embedding FROM vec_thumbnails WHERE id = ?
""",
[query.id],
)
row = cursor.fetchone() if cursor else None
if row:
query_embedding = deserialize(
row[0]
) # Deserialize the thumbnail embedding
else:
# If no embedding found, generate it and return it
thumbnail = base64.b64decode(query.thumbnail)
query_embedding = self.upsert_thumbnail(query.id, thumbnail)
else:
query_embedding = self.clip_embedding([query])[0]
sql_query = """
SELECT
id,
distance
FROM vec_thumbnails
WHERE thumbnail_embedding MATCH ?
AND k = 100
"""
# Add the IN clause if event_ids is provided and not empty
# this is the only filter supported by sqlite-vec as of 0.1.3
# but it seems to be broken in this version
if event_ids:
sql_query += " AND id IN ({})".format(",".join("?" * len(event_ids)))
# order by distance DESC is not implemented in this version of sqlite-vec
# when it's implemented, we can use cosine similarity
sql_query += " ORDER BY distance"
parameters = (
[serialize(query_embedding)] + event_ids
if event_ids
else [serialize(query_embedding)]
)
results = self.db.execute_sql(sql_query, parameters).fetchall()
return results
def search_description(
self, query_text: str, event_ids: List[str] = None
) -> List[Tuple[str, float]]:
query_embedding = self.minilm_embedding([query_text])[0]
# Prepare the base SQL query
sql_query = """
SELECT
id,
distance
FROM vec_descriptions
WHERE description_embedding MATCH ?
AND k = 100
"""
# Add the IN clause if event_ids is provided and not empty
# this is the only filter supported by sqlite-vec as of 0.1.3
# but it seems to be broken in this version
if event_ids:
sql_query += " AND id IN ({})".format(",".join("?" * len(event_ids)))
# order by distance DESC is not implemented in this version of sqlite-vec
# when it's implemented, we can use cosine similarity
sql_query += " ORDER BY distance"
parameters = (
[serialize(query_embedding)] + event_ids
if event_ids
else [serialize(query_embedding)]
)
results = self.db.execute_sql(sql_query, parameters).fetchall()
return results
return embeddings
def reindex(self) -> None:
logger.info("Indexing event embeddings...")
logger.info("Indexing tracked object embeddings...")
self.db.drop_embeddings_tables()
logger.debug("Dropped embeddings tables.")
self.db.create_embeddings_tables()
logger.debug("Created embeddings tables.")
# Delete the saved stats file
if os.path.exists(os.path.join(CONFIG_DIR, ".search_stats.json")):
os.remove(os.path.join(CONFIG_DIR, ".search_stats.json"))
st = time.time()
# Get total count of events to process
total_events = (
Event.select()
.where(
((Event.has_clip == True) | (Event.has_snapshot == True))
& Event.thumbnail.is_null(False)
)
.count()
)
batch_size = 32
current_page = 1
totals = {
"thumb": 0,
"desc": 0,
"thumbnails": 0,
"descriptions": 0,
"processed_objects": total_events - 1 if total_events < batch_size else 0,
"total_objects": total_events,
"time_remaining": 0 if total_events < batch_size else -1,
"status": "indexing",
}
batch_size = 100
current_page = 1
self.requestor.send_data(UPDATE_EMBEDDINGS_REINDEX_PROGRESS, totals)
events = (
Event.select()
.where(
@@ -266,14 +248,45 @@ class Embeddings:
while len(events) > 0:
event: Event
batch_thumbs = {}
batch_descs = {}
for event in events:
thumbnail = base64.b64decode(event.thumbnail)
self.upsert_thumbnail(event.id, thumbnail)
totals["thumb"] += 1
if description := event.data.get("description", "").strip():
totals["desc"] += 1
self.upsert_description(event.id, description)
batch_thumbs[event.id] = base64.b64decode(event.thumbnail)
totals["thumbnails"] += 1
if description := event.data.get("description", "").strip():
batch_descs[event.id] = description
totals["descriptions"] += 1
totals["processed_objects"] += 1
# run batch embedding
self.batch_upsert_thumbnail(batch_thumbs)
if batch_descs:
self.batch_upsert_description(batch_descs)
# report progress every batch so we don't spam the logs
progress = (totals["processed_objects"] / total_events) * 100
logger.debug(
"Processed %d/%d events (%.2f%% complete) | Thumbnails: %d, Descriptions: %d",
totals["processed_objects"],
total_events,
progress,
totals["thumbnails"],
totals["descriptions"],
)
# Calculate time remaining
elapsed_time = time.time() - st
avg_time_per_event = elapsed_time / totals["processed_objects"]
remaining_events = total_events - totals["processed_objects"]
time_remaining = avg_time_per_event * remaining_events
totals["time_remaining"] = int(time_remaining)
self.requestor.send_data(UPDATE_EMBEDDINGS_REINDEX_PROGRESS, totals)
# Move to the next page
current_page += 1
events = (
Event.select()
@@ -287,7 +300,10 @@ class Embeddings:
logger.info(
"Embedded %d thumbnails and %d descriptions in %s seconds",
totals["thumb"],
totals["desc"],
time.time() - st,
totals["thumbnails"],
totals["descriptions"],
round(time.time() - st, 1),
)
totals["status"] = "completed"
self.requestor.send_data(UPDATE_EMBEDDINGS_REINDEX_PROGRESS, totals)

View File

@@ -1,166 +0,0 @@
import logging
import os
from typing import List, Optional, Union
import numpy as np
import onnxruntime as ort
from onnx_clip import OnnxClip, Preprocessor, Tokenizer
from PIL import Image
from frigate.const import MODEL_CACHE_DIR, UPDATE_MODEL_STATE
from frigate.types import ModelStatusTypesEnum
from frigate.util.downloader import ModelDownloader
logger = logging.getLogger(__name__)
class Clip(OnnxClip):
"""Override load models to use pre-downloaded models from cache directory."""
def __init__(
self,
model: str = "ViT-B/32",
batch_size: Optional[int] = None,
providers: List[str] = ["CPUExecutionProvider"],
):
"""
Instantiates the model and required encoding classes.
Args:
model: The model to utilize. Currently ViT-B/32 and RN50 are
allowed.
batch_size: If set, splits the lists in `get_image_embeddings`
and `get_text_embeddings` into batches of this size before
passing them to the model. The embeddings are then concatenated
back together before being returned. This is necessary when
passing large amounts of data (perhaps ~100 or more).
"""
allowed_models = ["ViT-B/32", "RN50"]
if model not in allowed_models:
raise ValueError(f"`model` must be in {allowed_models}. Got {model}.")
if model == "ViT-B/32":
self.embedding_size = 512
elif model == "RN50":
self.embedding_size = 1024
self.image_model, self.text_model = self._load_models(model, providers)
self._tokenizer = Tokenizer()
self._preprocessor = Preprocessor()
self._batch_size = batch_size
@staticmethod
def _load_models(
model: str,
providers: List[str],
) -> tuple[ort.InferenceSession, ort.InferenceSession]:
"""
Load models from cache directory.
"""
if model == "ViT-B/32":
IMAGE_MODEL_FILE = "clip_image_model_vitb32.onnx"
TEXT_MODEL_FILE = "clip_text_model_vitb32.onnx"
elif model == "RN50":
IMAGE_MODEL_FILE = "clip_image_model_rn50.onnx"
TEXT_MODEL_FILE = "clip_text_model_rn50.onnx"
else:
raise ValueError(f"Unexpected model {model}. No `.onnx` file found.")
models = []
for model_file in [IMAGE_MODEL_FILE, TEXT_MODEL_FILE]:
path = os.path.join(MODEL_CACHE_DIR, "clip", model_file)
models.append(Clip._load_model(path, providers))
return models[0], models[1]
@staticmethod
def _load_model(path: str, providers: List[str]):
if os.path.exists(path):
return ort.InferenceSession(path, providers=providers)
else:
logger.warning(f"CLIP model file {path} not found.")
return None
class ClipEmbedding:
"""Embedding function for CLIP model."""
def __init__(
self,
model: str = "ViT-B/32",
silent: bool = False,
preferred_providers: List[str] = ["CPUExecutionProvider"],
):
self.model_name = model
self.silent = silent
self.preferred_providers = preferred_providers
self.model_files = self._get_model_files()
self.model = None
self.downloader = ModelDownloader(
model_name="clip",
download_path=os.path.join(MODEL_CACHE_DIR, "clip"),
file_names=self.model_files,
download_func=self._download_model,
silent=self.silent,
)
self.downloader.ensure_model_files()
def _get_model_files(self):
if self.model_name == "ViT-B/32":
return ["clip_image_model_vitb32.onnx", "clip_text_model_vitb32.onnx"]
elif self.model_name == "RN50":
return ["clip_image_model_rn50.onnx", "clip_text_model_rn50.onnx"]
else:
raise ValueError(
f"Unexpected model {self.model_name}. No `.onnx` file found."
)
def _download_model(self, path: str):
s3_url = (
f"https://lakera-clip.s3.eu-west-1.amazonaws.com/{os.path.basename(path)}"
)
try:
ModelDownloader.download_from_url(s3_url, path, self.silent)
self.downloader.requestor.send_data(
UPDATE_MODEL_STATE,
{
"model": f"{self.model_name}-{os.path.basename(path)}",
"state": ModelStatusTypesEnum.downloaded,
},
)
except Exception:
self.downloader.requestor.send_data(
UPDATE_MODEL_STATE,
{
"model": f"{self.model_name}-{os.path.basename(path)}",
"state": ModelStatusTypesEnum.error,
},
)
def _load_model(self):
if self.model is None:
self.downloader.wait_for_download()
self.model = Clip(self.model_name, providers=self.preferred_providers)
def __call__(self, input: Union[List[str], List[Image.Image]]) -> List[np.ndarray]:
self._load_model()
if (
self.model is None
or self.model.image_model is None
or self.model.text_model is None
):
logger.info(
"CLIP model is not fully loaded. Please wait for the download to complete."
)
return []
embeddings = []
for item in input:
if isinstance(item, Image.Image):
result = self.model.get_image_embeddings([item])
embeddings.append(result[0])
elif isinstance(item, str):
result = self.model.get_text_embeddings([item])
embeddings.append(result[0])
else:
raise ValueError(f"Unsupported input type: {type(item)}")
return embeddings

View File

@@ -1,107 +0,0 @@
import logging
import os
from typing import List
import numpy as np
import onnxruntime as ort
# importing this without pytorch or others causes a warning
# https://github.com/huggingface/transformers/issues/27214
# suppressed by setting env TRANSFORMERS_NO_ADVISORY_WARNINGS=1
from transformers import AutoTokenizer
from frigate.const import MODEL_CACHE_DIR, UPDATE_MODEL_STATE
from frigate.types import ModelStatusTypesEnum
from frigate.util.downloader import ModelDownloader
logger = logging.getLogger(__name__)
class MiniLMEmbedding:
"""Embedding function for ONNX MiniLM-L6 model."""
DOWNLOAD_PATH = f"{MODEL_CACHE_DIR}/all-MiniLM-L6-v2"
MODEL_NAME = "sentence-transformers/all-MiniLM-L6-v2"
IMAGE_MODEL_FILE = "model.onnx"
TOKENIZER_FILE = "tokenizer"
def __init__(self, preferred_providers=["CPUExecutionProvider"]):
self.preferred_providers = preferred_providers
self.tokenizer = None
self.session = None
self.downloader = ModelDownloader(
model_name=self.MODEL_NAME,
download_path=self.DOWNLOAD_PATH,
file_names=[self.IMAGE_MODEL_FILE, self.TOKENIZER_FILE],
download_func=self._download_model,
)
self.downloader.ensure_model_files()
def _download_model(self, path: str):
try:
if os.path.basename(path) == self.IMAGE_MODEL_FILE:
s3_url = f"https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2/resolve/main/onnx/{self.IMAGE_MODEL_FILE}"
ModelDownloader.download_from_url(s3_url, path)
elif os.path.basename(path) == self.TOKENIZER_FILE:
logger.info("Downloading MiniLM tokenizer")
tokenizer = AutoTokenizer.from_pretrained(
self.MODEL_NAME, clean_up_tokenization_spaces=True
)
tokenizer.save_pretrained(path)
self.downloader.requestor.send_data(
UPDATE_MODEL_STATE,
{
"model": f"{self.MODEL_NAME}-{os.path.basename(path)}",
"state": ModelStatusTypesEnum.downloaded,
},
)
except Exception:
self.downloader.requestor.send_data(
UPDATE_MODEL_STATE,
{
"model": f"{self.MODEL_NAME}-{os.path.basename(path)}",
"state": ModelStatusTypesEnum.error,
},
)
def _load_model_and_tokenizer(self):
if self.tokenizer is None or self.session is None:
self.downloader.wait_for_download()
self.tokenizer = self._load_tokenizer()
self.session = self._load_model(
os.path.join(self.DOWNLOAD_PATH, self.IMAGE_MODEL_FILE),
self.preferred_providers,
)
def _load_tokenizer(self):
tokenizer_path = os.path.join(self.DOWNLOAD_PATH, self.TOKENIZER_FILE)
return AutoTokenizer.from_pretrained(
tokenizer_path, clean_up_tokenization_spaces=True
)
def _load_model(self, path: str, providers: List[str]):
if os.path.exists(path):
return ort.InferenceSession(path, providers=providers)
else:
logger.warning(f"MiniLM model file {path} not found.")
return None
def __call__(self, texts: List[str]) -> List[np.ndarray]:
self._load_model_and_tokenizer()
if self.session is None or self.tokenizer is None:
logger.error("MiniLM model or tokenizer is not loaded.")
return []
inputs = self.tokenizer(
texts, padding=True, truncation=True, return_tensors="np"
)
input_names = [input.name for input in self.session.get_inputs()]
onnx_inputs = {name: inputs[name] for name in input_names if name in inputs}
outputs = self.session.run(None, onnx_inputs)
embeddings = outputs[0].mean(axis=1)
return [embedding for embedding in embeddings]

View File

@@ -0,0 +1,200 @@
import logging
import os
import warnings
from io import BytesIO
from typing import Dict, List, Optional, Union
import numpy as np
import requests
from PIL import Image
# importing this without pytorch or others causes a warning
# https://github.com/huggingface/transformers/issues/27214
# suppressed by setting env TRANSFORMERS_NO_ADVISORY_WARNINGS=1
from transformers import AutoFeatureExtractor, AutoTokenizer
from transformers.utils.logging import disable_progress_bar
from frigate.comms.inter_process import InterProcessRequestor
from frigate.const import MODEL_CACHE_DIR, UPDATE_MODEL_STATE
from frigate.types import ModelStatusTypesEnum
from frigate.util.downloader import ModelDownloader
from frigate.util.model import ONNXModelRunner
warnings.filterwarnings(
"ignore",
category=FutureWarning,
message="The class CLIPFeatureExtractor is deprecated",
)
# disables the progress bar for downloading tokenizers and feature extractors
disable_progress_bar()
logger = logging.getLogger(__name__)
class GenericONNXEmbedding:
"""Generic embedding function for ONNX models (text and vision)."""
def __init__(
self,
model_name: str,
model_file: str,
download_urls: Dict[str, str],
model_size: str,
model_type: str,
requestor: InterProcessRequestor,
tokenizer_file: Optional[str] = None,
device: str = "AUTO",
):
self.model_name = model_name
self.model_file = model_file
self.tokenizer_file = tokenizer_file
self.requestor = requestor
self.download_urls = download_urls
self.model_type = model_type # 'text' or 'vision'
self.model_size = model_size
self.device = device
self.download_path = os.path.join(MODEL_CACHE_DIR, self.model_name)
self.tokenizer = None
self.feature_extractor = None
self.runner = None
files_names = list(self.download_urls.keys()) + (
[self.tokenizer_file] if self.tokenizer_file else []
)
if not all(
os.path.exists(os.path.join(self.download_path, n)) for n in files_names
):
logger.debug(f"starting model download for {self.model_name}")
self.downloader = ModelDownloader(
model_name=self.model_name,
download_path=self.download_path,
file_names=files_names,
download_func=self._download_model,
)
self.downloader.ensure_model_files()
else:
self.downloader = None
ModelDownloader.mark_files_state(
self.requestor,
self.model_name,
files_names,
ModelStatusTypesEnum.downloaded,
)
self._load_model_and_tokenizer()
logger.debug(f"models are already downloaded for {self.model_name}")
def _download_model(self, path: str):
try:
file_name = os.path.basename(path)
if file_name in self.download_urls:
ModelDownloader.download_from_url(self.download_urls[file_name], path)
elif file_name == self.tokenizer_file and self.model_type == "text":
if not os.path.exists(path + "/" + self.model_name):
logger.info(f"Downloading {self.model_name} tokenizer")
tokenizer = AutoTokenizer.from_pretrained(
self.model_name,
trust_remote_code=True,
cache_dir=f"{MODEL_CACHE_DIR}/{self.model_name}/tokenizer",
clean_up_tokenization_spaces=True,
)
tokenizer.save_pretrained(path)
self.downloader.requestor.send_data(
UPDATE_MODEL_STATE,
{
"model": f"{self.model_name}-{file_name}",
"state": ModelStatusTypesEnum.downloaded,
},
)
except Exception:
self.downloader.requestor.send_data(
UPDATE_MODEL_STATE,
{
"model": f"{self.model_name}-{file_name}",
"state": ModelStatusTypesEnum.error,
},
)
def _load_model_and_tokenizer(self):
if self.runner is None:
if self.downloader:
self.downloader.wait_for_download()
if self.model_type == "text":
self.tokenizer = self._load_tokenizer()
else:
self.feature_extractor = self._load_feature_extractor()
self.runner = ONNXModelRunner(
os.path.join(self.download_path, self.model_file),
self.device,
self.model_size,
)
def _load_tokenizer(self):
tokenizer_path = os.path.join(f"{MODEL_CACHE_DIR}/{self.model_name}/tokenizer")
return AutoTokenizer.from_pretrained(
self.model_name,
cache_dir=tokenizer_path,
trust_remote_code=True,
clean_up_tokenization_spaces=True,
)
def _load_feature_extractor(self):
return AutoFeatureExtractor.from_pretrained(
f"{MODEL_CACHE_DIR}/{self.model_name}",
)
def _process_image(self, image):
if isinstance(image, str):
if image.startswith("http"):
response = requests.get(image)
image = Image.open(BytesIO(response.content)).convert("RGB")
return image
def __call__(
self, inputs: Union[List[str], List[Image.Image]]
) -> List[np.ndarray]:
self._load_model_and_tokenizer()
if self.runner is None or (
self.tokenizer is None and self.feature_extractor is None
):
logger.error(
f"{self.model_name} model or tokenizer/feature extractor is not loaded."
)
return []
if self.model_type == "text":
max_length = max(len(self.tokenizer.encode(text)) for text in inputs)
processed_inputs = [
self.tokenizer(
text,
padding="max_length",
truncation=True,
max_length=max_length,
return_tensors="np",
)
for text in inputs
]
else:
processed_images = [self._process_image(img) for img in inputs]
processed_inputs = [
self.feature_extractor(images=image, return_tensors="np")
for image in processed_images
]
input_names = self.runner.get_input_names()
onnx_inputs = {name: [] for name in input_names}
input: dict[str, any]
for input in processed_inputs:
for key, value in input.items():
if key in input_names:
onnx_inputs[key].append(value[0])
for key in input_names:
if onnx_inputs.get(key):
onnx_inputs[key] = np.stack(onnx_inputs[key])
else:
logger.warning(f"Expected input '{key}' not found in onnx_inputs")
embeddings = self.runner.run(onnx_inputs)[0]
return [embedding for embedding in embeddings]
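
A hypothetical usage sketch of the class above; the constructor arguments mirror the text-model instantiation in embeddings.py, and a real call still requires Frigate's environment with the model files downloaded into the cache directory:

# Hypothetical usage; assumes Frigate's runtime and downloaded model files.
embedder = GenericONNXEmbedding(
    model_name="jinaai/jina-clip-v1",
    model_file="text_model_fp16.onnx",
    tokenizer_file="tokenizer",
    download_urls={
        "text_model_fp16.onnx": "https://huggingface.co/jinaai/jina-clip-v1/resolve/main/onnx/text_model_fp16.onnx",
    },
    model_size="small",
    model_type="text",
    requestor=InterProcessRequestor(),
    device="CPU",
)
vectors = embedder(["person walking a dog"])  # -> list of numpy embedding arrays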

View File

@@ -12,6 +12,7 @@ import numpy as np
from peewee import DoesNotExist
from playhouse.sqliteq import SqliteQueueDatabase
from frigate.comms.embeddings_updater import EmbeddingsRequestEnum, EmbeddingsResponder
from frigate.comms.event_metadata_updater import (
EventMetadataSubscriber,
EventMetadataTypeEnum,
@@ -23,12 +24,15 @@ from frigate.const import CLIPS_DIR, UPDATE_EVENT_DESCRIPTION
from frigate.events.types import EventTypeEnum
from frigate.genai import get_genai_client
from frigate.models import Event
from frigate.util.builtin import serialize
from frigate.util.image import SharedMemoryFrameManager, calculate_region
from .embeddings import Embeddings
logger = logging.getLogger(__name__)
MAX_THUMBNAILS = 10
class EmbeddingMaintainer(threading.Thread):
"""Handle embedding queue and post event updates."""
@@ -39,15 +43,20 @@ class EmbeddingMaintainer(threading.Thread):
config: FrigateConfig,
stop_event: MpEvent,
) -> None:
threading.Thread.__init__(self)
self.name = "embeddings_maintainer"
super().__init__(name="embeddings_maintainer")
self.config = config
self.embeddings = Embeddings(db)
self.embeddings = Embeddings(config.semantic_search, db)
# Check if we need to re-index events
if config.semantic_search.reindex:
self.embeddings.reindex()
self.event_subscriber = EventUpdateSubscriber()
self.event_end_subscriber = EventEndSubscriber()
self.event_metadata_subscriber = EventMetadataSubscriber(
EventMetadataTypeEnum.regenerate_description
)
self.embeddings_responder = EmbeddingsResponder()
self.frame_manager = SharedMemoryFrameManager()
# create communication for updating event descriptions
self.requestor = InterProcessRequestor()
@@ -58,6 +67,7 @@ class EmbeddingMaintainer(threading.Thread):
def run(self) -> None:
"""Maintain a SQLite-vec database for semantic search."""
while not self.stop_event.is_set():
self._process_requests()
self._process_updates()
self._process_finalized()
self._process_event_metadata()
@@ -65,12 +75,40 @@ class EmbeddingMaintainer(threading.Thread):
self.event_subscriber.stop()
self.event_end_subscriber.stop()
self.event_metadata_subscriber.stop()
self.embeddings_responder.stop()
self.requestor.stop()
logger.info("Exiting embeddings maintenance...")
def _process_requests(self) -> None:
"""Process embeddings requests"""
def _handle_request(topic: str, data: str) -> str:
try:
if topic == EmbeddingsRequestEnum.embed_description.value:
return serialize(
self.embeddings.upsert_description(
data["id"], data["description"]
),
pack=False,
)
elif topic == EmbeddingsRequestEnum.embed_thumbnail.value:
thumbnail = base64.b64decode(data["thumbnail"])
return serialize(
self.embeddings.upsert_thumbnail(data["id"], thumbnail),
pack=False,
)
elif topic == EmbeddingsRequestEnum.generate_search.value:
return serialize(
self.embeddings.text_embedding([data])[0], pack=False
)
except Exception as e:
logger.error(f"Unable to handle embeddings request {e}")
self.embeddings_responder.check_for_request(_handle_request)
def _process_updates(self) -> None:
"""Process event updates"""
update = self.event_subscriber.check_for_update()
update = self.event_subscriber.check_for_update(timeout=0.1)
if update is None:
return
@@ -81,6 +119,15 @@ class EmbeddingMaintainer(threading.Thread):
return
camera_config = self.config.cameras[camera]
# no need to save our own thumbnails if genai is not enabled
# or if the object has become stationary
if (
not camera_config.genai.enabled
or self.genai_client is None
or data["stationary"]
):
return
if data["id"] not in self.tracked_events:
self.tracked_events[data["id"]] = []
@@ -91,7 +138,14 @@ class EmbeddingMaintainer(threading.Thread):
if yuv_frame is not None:
data["thumbnail"] = self._create_thumbnail(yuv_frame, data["box"])
# Limit the number of thumbnails saved
if len(self.tracked_events[data["id"]]) >= MAX_THUMBNAILS:
# Always keep the first thumbnail for the event
self.tracked_events[data["id"]].pop(1)
self.tracked_events[data["id"]].append(data)
self.frame_manager.close(frame_id)
except FileNotFoundError:
pass
@@ -99,7 +153,7 @@ class EmbeddingMaintainer(threading.Thread):
def _process_finalized(self) -> None:
"""Process the end of an event."""
while True:
ended = self.event_end_subscriber.check_for_update()
ended = self.event_end_subscriber.check_for_update(timeout=0.1)
if ended is None:
break
@@ -136,9 +190,6 @@ class EmbeddingMaintainer(threading.Thread):
or set(event.zones) & set(camera_config.genai.required_zones)
)
):
logger.debug(
f"Description generation for {event}, has_snapshot: {event.has_snapshot}"
)
if event.has_snapshot and camera_config.genai.use_snapshot:
with open(
os.path.join(CLIPS_DIR, f"{event.camera}-{event.id}.jpg"),
@@ -192,7 +243,7 @@ class EmbeddingMaintainer(threading.Thread):
def _process_event_metadata(self):
# Check for regenerate description requests
(topic, event_id, source) = self.event_metadata_subscriber.check_for_update(
timeout=1
timeout=0.1
)
if topic is None:
@@ -226,7 +277,7 @@ class EmbeddingMaintainer(threading.Thread):
camera_config = self.config.cameras[event.camera]
description = self.genai_client.generate_description(
camera_config, thumbnails, event.label
camera_config, thumbnails, event
)
if not description:

View File

@@ -20,10 +20,11 @@ class ZScoreNormalization:
@property
def stddev(self):
return math.sqrt(self.variance)
return math.sqrt(self.variance) if self.variance > 0 else 0.0
def normalize(self, distances: list[float]):
self._update(distances)
def normalize(self, distances: list[float], save_stats: bool):
if save_stats:
self._update(distances)
if self.stddev == 0:
return distances
return [

View File
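
In the ZScoreNormalization change above, stddev now guards against zero variance and normalize only folds new distances into the running stats when save_stats is set. A self-contained sketch of the underlying idea, running z-score normalization (Welford's online update is an assumption here, not necessarily Frigate's exact bookkeeping):

import math

class RunningZScore:
    def __init__(self) -> None:
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0

    def _update(self, values: list[float]) -> None:
        # Welford's online algorithm for mean and variance
        for v in values:
            self.n += 1
            delta = v - self.mean
            self.mean += delta / self.n
            self.m2 += delta * (v - self.mean)

    @property
    def stddev(self) -> float:
        variance = self.m2 / self.n if self.n > 0 else 0.0
        return math.sqrt(variance) if variance > 0 else 0.0

    def normalize(self, distances: list[float], save_stats: bool) -> list[float]:
        if save_stats:
            self._update(distances)
        if self.stddev == 0:
            return distances
        return [(d - self.mean) / self.stddev for d in distances]

norm = RunningZScore()
print(norm.normalize([0.4, 0.6, 0.8], save_stats=True))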

@@ -8,11 +8,9 @@ from enum import Enum
from multiprocessing.synchronize import Event as MpEvent
from pathlib import Path
from playhouse.sqliteq import SqliteQueueDatabase
from frigate.config import FrigateConfig
from frigate.const import CLIPS_DIR
from frigate.embeddings.embeddings import Embeddings
from frigate.db.sqlitevecq import SqliteVecQueueDatabase
from frigate.models import Event, Timeline
logger = logging.getLogger(__name__)
@@ -25,7 +23,7 @@ class EventCleanupType(str, Enum):
class EventCleanup(threading.Thread):
def __init__(
self, config: FrigateConfig, stop_event: MpEvent, db: SqliteQueueDatabase
self, config: FrigateConfig, stop_event: MpEvent, db: SqliteVecQueueDatabase
):
super().__init__(name="event_cleanup")
self.config = config
@@ -35,9 +33,6 @@ class EventCleanup(threading.Thread):
self.removed_camera_labels: list[str] = None
self.camera_labels: dict[str, dict[str, any]] = {}
if self.config.semantic_search.enabled:
self.embeddings = Embeddings(self.db)
def get_removed_camera_labels(self) -> list[Event]:
"""Get a list of distinct labels for removed cameras."""
if self.removed_camera_labels is None:
@@ -234,8 +229,8 @@ class EventCleanup(threading.Thread):
Event.delete().where(Event.id << chunk).execute()
if self.config.semantic_search.enabled:
self.embeddings.delete_description(chunk)
self.embeddings.delete_thumbnail(chunk)
self.db.delete_embeddings_description(event_ids=chunk)
self.db.delete_embeddings_thumbnail(event_ids=chunk)
logger.debug(f"Deleted {len(events_to_delete)} embeddings")
logger.info("Exiting event cleanup...")

View File

@@ -4,7 +4,10 @@ import importlib
import os
from typing import Optional
from playhouse.shortcuts import model_to_dict
from frigate.config import CameraConfig, GenAIConfig, GenAIProviderEnum
from frigate.models import Event
PROVIDERS = {}
@@ -31,12 +34,13 @@ class GenAIClient:
self,
camera_config: CameraConfig,
thumbnails: list[bytes],
label: str,
event: Event,
) -> Optional[str]:
"""Generate a description for the frame."""
prompt = camera_config.genai.object_prompts.get(
label, camera_config.genai.prompt
)
event.label,
camera_config.genai.prompt,
).format(**model_to_dict(event))
return self._send(prompt, thumbnails)
def _init_provider(self):

View File
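
In the generate_description change above, the prompt template is formatted with model_to_dict(event), so any column of the Event record can appear as a placeholder. An illustration with made-up field values (str.format simply ignores unused keys):

# Illustration of the prompt templating above; field values are made up.
event_fields = {"label": "person", "camera": "front_door", "id": "1729-abc"}
prompt = (
    "Analyze the sequence of images containing the {label}. "
    "Consider what the {label} is doing, why, and what it might do next."
).format(**event_fields)
# unused keys like "camera" and "id" are ignored by str.format
print(prompt)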

@@ -21,12 +21,20 @@ class OllamaClient(GenAIClient):
def _init_provider(self):
"""Initialize the client."""
client = ApiClient(host=self.genai_config.base_url, timeout=self.timeout)
response = client.pull(self.genai_config.model)
if response["status"] != "success":
logger.error("Failed to pull %s model from Ollama", self.genai_config.model)
try:
client = ApiClient(host=self.genai_config.base_url, timeout=self.timeout)
# ensure the model is available locally
response = client.show(self.genai_config.model)
if response.get("error"):
logger.error(
"Ollama error: %s",
response["error"],
)
return None
return client
except Exception as e:
logger.warning("Error initializing Ollama: %s", str(e))
return None
return client
def _send(self, prompt: str, images: list[bytes]) -> Optional[str]:
"""Submit a request to Ollama"""

View File

@@ -1,4 +1,3 @@
import base64
import datetime
import json
import logging
@@ -7,7 +6,6 @@ import queue
import threading
from collections import Counter, defaultdict
from multiprocessing.synchronize import Event as MpEvent
from statistics import median
from typing import Callable
import cv2
@@ -18,7 +16,6 @@ from frigate.comms.dispatcher import Dispatcher
from frigate.comms.events_updater import EventEndSubscriber, EventUpdatePublisher
from frigate.comms.inter_process import InterProcessRequestor
from frigate.config import (
CameraConfig,
FrigateConfig,
MqttConfig,
RecordConfig,
@@ -28,458 +25,18 @@ from frigate.config import (
from frigate.const import CLIPS_DIR, UPDATE_CAMERA_ACTIVITY
from frigate.events.types import EventStateEnum, EventTypeEnum
from frigate.ptz.autotrack import PtzAutoTrackerThread
from frigate.track.tracked_object import TrackedObject
from frigate.util.image import (
SharedMemoryFrameManager,
area,
calculate_region,
draw_box_with_label,
draw_timestamp,
is_better_thumbnail,
is_label_printable,
)
logger = logging.getLogger(__name__)
def on_edge(box, frame_shape):
if (
box[0] == 0
or box[1] == 0
or box[2] == frame_shape[1] - 1
or box[3] == frame_shape[0] - 1
):
return True
def has_better_attr(current_thumb, new_obj, attr_label) -> bool:
max_new_attr = max(
[0]
+ [area(a["box"]) for a in new_obj["attributes"] if a["label"] == attr_label]
)
max_current_attr = max(
[0]
+ [
area(a["box"])
for a in current_thumb["attributes"]
if a["label"] == attr_label
]
)
# if the new thumb has a larger area for this attribute
return max_new_attr > max_current_attr
def is_better_thumbnail(label, current_thumb, new_obj, frame_shape) -> bool:
# larger is better
# cutoff images are less ideal, but they should also be smaller?
# better scores are obviously better too
# check face on person
if label == "person":
if has_better_attr(current_thumb, new_obj, "face"):
return True
# if the current thumb has a face attr, don't update unless it gets better
if any([a["label"] == "face" for a in current_thumb["attributes"]]):
return False
# check license_plate on car
if label == "car":
if has_better_attr(current_thumb, new_obj, "license_plate"):
return True
# if the current thumb has a license_plate attr, don't update unless it gets better
if any([a["label"] == "license_plate" for a in current_thumb["attributes"]]):
return False
# if the new_thumb is on an edge, and the current thumb is not
if on_edge(new_obj["box"], frame_shape) and not on_edge(
current_thumb["box"], frame_shape
):
return False
# if the score is better by more than 5%
if new_obj["score"] > current_thumb["score"] + 0.05:
return True
# if the area is 10% larger
if new_obj["area"] > current_thumb["area"] * 1.1:
return True
return False
class TrackedObject:
def __init__(
self,
camera,
colormap,
camera_config: CameraConfig,
frame_cache,
obj_data: dict[str, any],
):
# set the score history then remove as it is not part of object state
self.score_history = obj_data["score_history"]
del obj_data["score_history"]
self.obj_data = obj_data
self.camera = camera
self.colormap = colormap
self.camera_config = camera_config
self.frame_cache = frame_cache
self.zone_presence: dict[str, int] = {}
self.zone_loitering: dict[str, int] = {}
self.current_zones = []
self.entered_zones = []
self.attributes = defaultdict(float)
self.false_positive = True
self.has_clip = False
self.has_snapshot = False
self.top_score = self.computed_score = 0.0
self.thumbnail_data = None
self.last_updated = 0
self.last_published = 0
self.frame = None
self.active = True
self.previous = self.to_dict()
def _is_false_positive(self):
# once a true positive, always a true positive
if not self.false_positive:
return False
threshold = self.camera_config.objects.filters[self.obj_data["label"]].threshold
return self.computed_score < threshold
def compute_score(self):
"""get median of scores for object."""
return median(self.score_history)
def update(self, current_frame_time: float, obj_data, has_valid_frame: bool):
thumb_update = False
significant_change = False
autotracker_update = False
# if the object is not in the current frame, add a 0.0 to the score history
if obj_data["frame_time"] != current_frame_time:
self.score_history.append(0.0)
else:
self.score_history.append(obj_data["score"])
# only keep the last 10 scores
if len(self.score_history) > 10:
self.score_history = self.score_history[-10:]
# calculate if this is a false positive
self.computed_score = self.compute_score()
if self.computed_score > self.top_score:
self.top_score = self.computed_score
self.false_positive = self._is_false_positive()
self.active = self.is_active()
if not self.false_positive and has_valid_frame:
# determine if this frame is a better thumbnail
if self.thumbnail_data is None or is_better_thumbnail(
self.obj_data["label"],
self.thumbnail_data,
obj_data,
self.camera_config.frame_shape,
):
self.thumbnail_data = {
"frame_time": current_frame_time,
"box": obj_data["box"],
"area": obj_data["area"],
"region": obj_data["region"],
"score": obj_data["score"],
"attributes": obj_data["attributes"],
}
thumb_update = True
# check zones
current_zones = []
bottom_center = (obj_data["centroid"][0], obj_data["box"][3])
# check each zone
for name, zone in self.camera_config.zones.items():
# if the zone is not for this object type, skip
if len(zone.objects) > 0 and obj_data["label"] not in zone.objects:
continue
contour = zone.contour
zone_score = self.zone_presence.get(name, 0) + 1
# check if the object is in the zone
if cv2.pointPolygonTest(contour, bottom_center, False) >= 0:
# if the object passed the filters once, dont apply again
if name in self.current_zones or not zone_filtered(self, zone.filters):
# an object is only considered present in a zone if it has a zone inertia of 3+
if zone_score >= zone.inertia:
loitering_score = self.zone_loitering.get(name, 0) + 1
# loitering time is configured as seconds, convert to count of frames
if loitering_score >= (
self.camera_config.zones[name].loitering_time
* self.camera_config.detect.fps
):
current_zones.append(name)
if name not in self.entered_zones:
self.entered_zones.append(name)
else:
self.zone_loitering[name] = loitering_score
else:
self.zone_presence[name] = zone_score
else:
# once an object has a zone inertia of 3+ it is not checked anymore
if 0 < zone_score < zone.inertia:
self.zone_presence[name] = zone_score - 1
# maintain attributes
for attr in obj_data["attributes"]:
if self.attributes[attr["label"]] < attr["score"]:
self.attributes[attr["label"]] = attr["score"]
# populate the sub_label for object with highest scoring logo
if self.obj_data["label"] in ["car", "package", "person"]:
recognized_logos = {
k: self.attributes[k]
for k in ["ups", "fedex", "amazon"]
if k in self.attributes
}
if len(recognized_logos) > 0:
max_logo = max(recognized_logos, key=recognized_logos.get)
# don't overwrite sub label if it is already set
if (
self.obj_data.get("sub_label") is None
or self.obj_data["sub_label"][0] == max_logo
):
self.obj_data["sub_label"] = (max_logo, recognized_logos[max_logo])
# check for significant change
if not self.false_positive:
# if the zones changed, signal an update
if set(self.current_zones) != set(current_zones):
significant_change = True
# if the position changed, signal an update
if self.obj_data["position_changes"] != obj_data["position_changes"]:
significant_change = True
if self.obj_data["attributes"] != obj_data["attributes"]:
significant_change = True
# if the state changed between stationary and active
if self.previous["active"] != self.active:
significant_change = True
# update at least once per minute
if self.obj_data["frame_time"] - self.previous["frame_time"] > 60:
significant_change = True
# update autotrack at most 3 times per second per object
if self.obj_data["frame_time"] - self.previous["frame_time"] >= (1 / 3):
autotracker_update = True
self.obj_data.update(obj_data)
self.current_zones = current_zones
return (thumb_update, significant_change, autotracker_update)
def to_dict(self, include_thumbnail: bool = False):
event = {
"id": self.obj_data["id"],
"camera": self.camera,
"frame_time": self.obj_data["frame_time"],
"snapshot": self.thumbnail_data,
"label": self.obj_data["label"],
"sub_label": self.obj_data.get("sub_label"),
"top_score": self.top_score,
"false_positive": self.false_positive,
"start_time": self.obj_data["start_time"],
"end_time": self.obj_data.get("end_time", None),
"score": self.obj_data["score"],
"box": self.obj_data["box"],
"area": self.obj_data["area"],
"ratio": self.obj_data["ratio"],
"region": self.obj_data["region"],
"active": self.active,
"stationary": not self.active,
"motionless_count": self.obj_data["motionless_count"],
"position_changes": self.obj_data["position_changes"],
"current_zones": self.current_zones.copy(),
"entered_zones": self.entered_zones.copy(),
"has_clip": self.has_clip,
"has_snapshot": self.has_snapshot,
"attributes": self.attributes,
"current_attributes": self.obj_data["attributes"],
}
if include_thumbnail:
event["thumbnail"] = base64.b64encode(self.get_thumbnail()).decode("utf-8")
return event
def is_active(self):
return not self.is_stationary()
def is_stationary(self):
return (
self.obj_data["motionless_count"]
> self.camera_config.detect.stationary.threshold
)
def get_thumbnail(self):
if (
self.thumbnail_data is None
or self.thumbnail_data["frame_time"] not in self.frame_cache
):
ret, jpg = cv2.imencode(".jpg", np.zeros((175, 175, 3), np.uint8))
jpg_bytes = self.get_jpg_bytes(
timestamp=False, bounding_box=False, crop=True, height=175
)
if jpg_bytes:
return jpg_bytes
else:
ret, jpg = cv2.imencode(".jpg", np.zeros((175, 175, 3), np.uint8))
return jpg.tobytes()
def get_clean_png(self):
if self.thumbnail_data is None:
return None
try:
best_frame = cv2.cvtColor(
self.frame_cache[self.thumbnail_data["frame_time"]],
cv2.COLOR_YUV2BGR_I420,
)
except KeyError:
logger.warning(
f"Unable to create clean png because frame {self.thumbnail_data['frame_time']} is not in the cache"
)
return None
ret, png = cv2.imencode(".png", best_frame)
if ret:
return png.tobytes()
else:
return None
def get_jpg_bytes(
self, timestamp=False, bounding_box=False, crop=False, height=None, quality=70
):
if self.thumbnail_data is None:
return None
try:
best_frame = cv2.cvtColor(
self.frame_cache[self.thumbnail_data["frame_time"]],
cv2.COLOR_YUV2BGR_I420,
)
except KeyError:
logger.warning(
f"Unable to create jpg because frame {self.thumbnail_data['frame_time']} is not in the cache"
)
return None
if bounding_box:
thickness = 2
color = self.colormap[self.obj_data["label"]]
# draw the bounding boxes on the frame
box = self.thumbnail_data["box"]
draw_box_with_label(
best_frame,
box[0],
box[1],
box[2],
box[3],
self.obj_data["label"],
f"{int(self.thumbnail_data['score']*100)}% {int(self.thumbnail_data['area'])}",
thickness=thickness,
color=color,
)
# draw any attributes
for attribute in self.thumbnail_data["attributes"]:
box = attribute["box"]
draw_box_with_label(
best_frame,
box[0],
box[1],
box[2],
box[3],
attribute["label"],
f"{attribute['score']:.0%}",
thickness=thickness,
color=color,
)
if crop:
box = self.thumbnail_data["box"]
box_size = 300
region = calculate_region(
best_frame.shape,
box[0],
box[1],
box[2],
box[3],
box_size,
multiplier=1.1,
)
best_frame = best_frame[region[1] : region[3], region[0] : region[2]]
if height:
width = int(height * best_frame.shape[1] / best_frame.shape[0])
best_frame = cv2.resize(
best_frame, dsize=(width, height), interpolation=cv2.INTER_AREA
)
if timestamp:
color = self.camera_config.timestamp_style.color
draw_timestamp(
best_frame,
self.thumbnail_data["frame_time"],
self.camera_config.timestamp_style.format,
font_effect=self.camera_config.timestamp_style.effect,
font_thickness=self.camera_config.timestamp_style.thickness,
font_color=(color.blue, color.green, color.red),
position=self.camera_config.timestamp_style.position,
)
ret, jpg = cv2.imencode(
".jpg", best_frame, [int(cv2.IMWRITE_JPEG_QUALITY), quality]
)
if ret:
return jpg.tobytes()
else:
return None
def zone_filtered(obj: TrackedObject, object_config):
object_name = obj.obj_data["label"]
if object_name in object_config:
obj_settings = object_config[object_name]
# if the min area is larger than the
# detected object, don't add it to detected objects
if obj_settings.min_area > obj.obj_data["area"]:
return True
# if the detected object is larger than the
# max area, don't add it to detected objects
if obj_settings.max_area < obj.obj_data["area"]:
return True
# if the score is lower than the threshold, skip
if obj_settings.threshold > obj.computed_score:
return True
# if the object is not proportionally wide enough
if obj_settings.min_ratio > obj.obj_data["ratio"]:
return True
# if the object is proportionally too wide
if obj_settings.max_ratio < obj.obj_data["ratio"]:
return True
return False
# Maintains the state of a camera
class CameraState:
def __init__(
@@ -696,8 +253,7 @@ class CameraState:
for id in new_ids:
new_obj = tracked_objects[id] = TrackedObject(
self.name,
self.config.model.colormap,
self.config.model,
self.camera_config,
self.frame_cache,
current_detections[id],

View File

@@ -32,6 +32,7 @@ from frigate.const import (
CONFIG_DIR,
)
from frigate.ptz.onvif import OnvifController
from frigate.track.tracked_object import TrackedObject
from frigate.util.builtin import update_yaml_file
from frigate.util.image import SharedMemoryFrameManager, intersection_over_union
@@ -214,7 +215,7 @@ class PtzAutoTracker:
):
self._autotracker_setup(camera_config, camera)
def _autotracker_setup(self, camera_config, camera):
def _autotracker_setup(self, camera_config: CameraConfig, camera: str):
logger.debug(f"{camera}: Autotracker init")
self.object_types[camera] = camera_config.onvif.autotracking.track
@@ -852,7 +853,7 @@ class PtzAutoTracker:
logger.debug(f"{camera}: Valid velocity ")
return True, velocities.flatten()
def _get_distance_threshold(self, camera, obj):
def _get_distance_threshold(self, camera: str, obj: TrackedObject):
# Returns true if Euclidean distance from object to center of frame is
# less than 10% of the larger dimension (width or height) of the frame,
# multiplied by a scaling factor for object size.
@@ -888,7 +889,9 @@ class PtzAutoTracker:
return distance_threshold
def _should_zoom_in(self, camera, obj, box, predicted_time, debug_zooming=False):
def _should_zoom_in(
self, camera: str, obj: TrackedObject, box, predicted_time, debug_zooming=False
):
# returns True if we should zoom in, False if we should zoom out, None to do nothing
camera_config = self.config.cameras[camera]
camera_width = camera_config.frame_shape[1]
@@ -1019,7 +1022,7 @@ class PtzAutoTracker:
# Don't zoom at all
return None
def _autotrack_move_ptz(self, camera, obj):
def _autotrack_move_ptz(self, camera: str, obj: TrackedObject):
camera_config = self.config.cameras[camera]
camera_width = camera_config.frame_shape[1]
camera_height = camera_config.frame_shape[0]
@@ -1090,7 +1093,12 @@ class PtzAutoTracker:
self._enqueue_move(camera, obj.obj_data["frame_time"], 0, 0, zoom)
def _get_zoom_amount(
self, camera, obj, predicted_box, predicted_movement_time, debug_zoom=True
self,
camera: str,
obj: TrackedObject,
predicted_box,
predicted_movement_time,
debug_zoom=True,
):
camera_config = self.config.cameras[camera]
@@ -1186,13 +1194,13 @@ class PtzAutoTracker:
return zoom
def is_autotracking(self, camera):
def is_autotracking(self, camera: str):
return self.tracked_object[camera] is not None
def autotracked_object_region(self, camera):
def autotracked_object_region(self, camera: str):
return self.tracked_object[camera]["region"]
def autotrack_object(self, camera, obj):
def autotrack_object(self, camera: str, obj: TrackedObject):
camera_config = self.config.cameras[camera]
if camera_config.onvif.autotracking.enabled:
@@ -1208,7 +1216,7 @@ class PtzAutoTracker:
if (
# new object
self.tracked_object[camera] is None
and obj.camera == camera
and obj.camera_config.name == camera
and obj.obj_data["label"] in self.object_types[camera]
and set(obj.entered_zones) & set(self.required_zones[camera])
and not obj.previous["false_positive"]
@@ -1267,7 +1275,7 @@ class PtzAutoTracker:
# If it's within bounds, start tracking that object.
# Should we check region (maybe too broad) or expand the previous object's box a bit and check that?
self.tracked_object[camera] is None
and obj.camera == camera
and obj.camera_config.name == camera
and obj.obj_data["label"] in self.object_types[camera]
and not obj.previous["false_positive"]
and not obj.false_positive
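For reference, a minimal sketch of the distance rule described in the _get_distance_threshold comment above; the size_scale argument is a hypothetical stand-in for Frigate's object-size scaling factor, which is not part of this hunk:

import math

def within_distance_threshold(
    centroid: tuple[int, int],
    frame_shape: tuple[int, int],  # (height, width)
    size_scale: float = 1.0,  # stand-in for the object-size scaling factor
) -> bool:
    # Euclidean distance from the object to the center of the frame
    frame_center = (frame_shape[1] / 2, frame_shape[0] / 2)
    distance = math.hypot(
        centroid[0] - frame_center[0], centroid[1] - frame_center[1]
    )
    # threshold: 10% of the larger frame dimension, scaled by object size
    return distance < 0.1 * max(frame_shape) * size_scale

print(within_distance_threshold((1000, 560), (1080, 1920)))  # True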

View File

@@ -142,6 +142,8 @@ class RecordingMaintainer(threading.Thread):
)
)
)
# see if the recording mover is too slow and segments need to be deleted
if processed_segment_count > keep_count:
logger.warning(
f"Unable to keep up with recording segments in cache for {camera}. Keeping the {keep_count} most recent segments out of {processed_segment_count} and discarding the rest..."
@@ -153,6 +155,21 @@ class RecordingMaintainer(threading.Thread):
self.end_time_cache.pop(cache_path, None)
grouped_recordings[camera] = grouped_recordings[camera][-keep_count:]
# see if detection has failed and unprocessed segments need to be deleted
unprocessed_segment_count = (
len(grouped_recordings[camera]) - processed_segment_count
)
if unprocessed_segment_count > keep_count:
logger.warning(
f"Too many unprocessed recording segments in cache for {camera}. This likely indicates an issue with the detect stream, keeping the {keep_count} most recent segments out of {unprocessed_segment_count} and discarding the rest..."
)
to_remove = grouped_recordings[camera][:-keep_count]
for rec in to_remove:
cache_path = rec["cache_path"]
Path(cache_path).unlink(missing_ok=True)
self.end_time_cache.pop(cache_path, None)
grouped_recordings[camera] = grouped_recordings[camera][-keep_count:]
tasks = []
for camera, recordings in grouped_recordings.items():
# clear out all the object recording info for old frames
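Both warning paths in this hunk reduce to the same trimming step; a minimal standalone sketch (the dict shape follows the cache_path key used above):

from pathlib import Path

def trim_cached_segments(recordings: list[dict], keep_count: int) -> list[dict]:
    # delete everything but the keep_count most recent segments from disk
    for rec in recordings[:-keep_count]:
        Path(rec["cache_path"]).unlink(missing_ok=True)
    # and keep only those most recent entries in memory
    return recordings[-keep_count:]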

View File

@@ -167,7 +167,7 @@ class ReviewSegmentMaintainer(threading.Thread):
# clear ongoing review segments from last instance
self.requestor.send_data(CLEAR_ONGOING_REVIEW_SEGMENTS, "")
def new_segment(
def _publish_segment_start(
self,
segment: PendingReviewSegment,
) -> None:
@@ -186,7 +186,7 @@ class ReviewSegmentMaintainer(threading.Thread):
),
)
def update_segment(
def _publish_segment_update(
self,
segment: PendingReviewSegment,
camera_config: CameraConfig,
@@ -211,7 +211,7 @@ class ReviewSegmentMaintainer(threading.Thread):
),
)
def end_segment(
def _publish_segment_end(
self,
segment: PendingReviewSegment,
prev_data: dict[str, any],
@@ -239,10 +239,16 @@ class ReviewSegmentMaintainer(threading.Thread):
) -> None:
"""Validate if existing review segment should continue."""
camera_config = self.config.cameras[segment.camera]
active_objects = get_active_objects(frame_time, camera_config, objects)
# get active objects + objects loitering in loitering zones
active_objects = get_active_objects(
frame_time, camera_config, objects
) + get_loitering_objects(frame_time, camera_config, objects)
prev_data = segment.get_data(False)
has_activity = False
if len(active_objects) > 0:
has_activity = True
should_update = False
if frame_time > segment.last_update:
@@ -295,13 +301,14 @@ class ReviewSegmentMaintainer(threading.Thread):
logger.debug(f"Failed to get frame {frame_id} from SHM")
return
self.update_segment(
self._publish_segment_update(
segment, camera_config, yuv_frame, active_objects, prev_data
)
self.frame_manager.close(frame_id)
except FileNotFoundError:
return
else:
if not has_activity:
if not segment.has_frame:
try:
frame_id = f"{camera_config.name}{frame_time}"
@@ -315,16 +322,18 @@ class ReviewSegmentMaintainer(threading.Thread):
segment.save_full_frame(camera_config, yuv_frame)
self.frame_manager.close(frame_id)
self.update_segment(segment, camera_config, None, [], prev_data)
self._publish_segment_update(
segment, camera_config, None, [], prev_data
)
except FileNotFoundError:
return
if segment.severity == SeverityEnum.alert and frame_time > (
segment.last_update + THRESHOLD_ALERT_ACTIVITY
):
self.end_segment(segment, prev_data)
self._publish_segment_end(segment, prev_data)
elif frame_time > (segment.last_update + THRESHOLD_DETECTION_ACTIVITY):
self.end_segment(segment, prev_data)
self._publish_segment_end(segment, prev_data)
def check_if_new_segment(
self,
@@ -418,7 +427,7 @@ class ReviewSegmentMaintainer(threading.Thread):
camera_config, yuv_frame, active_objects
)
self.frame_manager.close(frame_id)
self.new_segment(self.active_review_segments[camera])
self._publish_segment_start(self.active_review_segments[camera])
except FileNotFoundError:
return
@@ -609,3 +618,24 @@ def get_active_objects(
)
) # object must be in the alerts or detections label list
]
def get_loitering_objects(
frame_time: float, camera_config: CameraConfig, all_objects: list[TrackedObject]
) -> list[TrackedObject]:
"""get loitering objects for detection."""
return [
o
for o in all_objects
if o["pending_loitering"] # object must be pending loitering
and o["position_changes"] > 0 # object must have moved at least once
and o["frame_time"] == frame_time # object must be detected in this frame
and not o["false_positive"] # object must not be a false positive
and (
o["label"] in camera_config.review.alerts.labels
or (
camera_config.review.detections.labels is None
or o["label"] in camera_config.review.detections.labels
)
) # object must be in the alerts or detections label list
]
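A toy illustration of the loitering filter above, using plain dicts shaped like the tracked-object dicts it receives (the label set is a stand-in for the camera's review config):

frame_time = 100.0
alert_labels = {"person"}
objects = [
    {"label": "person", "pending_loitering": True, "position_changes": 2,
     "frame_time": 100.0, "false_positive": False},
    {"label": "person", "pending_loitering": False, "position_changes": 2,
     "frame_time": 100.0, "false_positive": False},
]
loitering = [
    o for o in objects
    if o["pending_loitering"]
    and o["position_changes"] > 0
    and o["frame_time"] == frame_time
    and not o["false_positive"]
    and o["label"] in alert_labels
]
print(len(loitering))  # 1 -- only the pending-loitering object passes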

View File

@@ -1,11 +1,11 @@
import unittest
from frigate.track.object_attribute import ObjectAttribute
from frigate.track.tracked_object import TrackedObjectAttribute
class TestAttribute(unittest.TestCase):
def test_overlapping_object_selection(self) -> None:
attribute = ObjectAttribute(
attribute = TrackedObjectAttribute(
(
"amazon",
0.80078125,

View File

@@ -1,44 +0,0 @@
"""Object attribute."""
from frigate.util.object import area, box_inside
class ObjectAttribute:
def __init__(self, raw_data: tuple) -> None:
self.label = raw_data[0]
self.score = raw_data[1]
self.box = raw_data[2]
self.area = raw_data[3]
self.ratio = raw_data[4]
self.region = raw_data[5]
def get_tracking_data(self) -> dict[str, any]:
"""Return data saved to the object."""
return {
"label": self.label,
"score": self.score,
"box": self.box,
}
def find_best_object(self, objects: list[dict[str, any]]) -> str:
"""Find the best attribute for each object and return its ID."""
best_object_area = None
best_object_id = None
for obj in objects:
if not box_inside(obj["box"], self.box):
continue
object_area = area(obj["box"])
# if multiple objects have the same attribute then they
# are overlapping, it is most likely that the smaller object
# is the one with the attribute
if best_object_area is None:
best_object_area = object_area
best_object_id = obj["id"]
elif object_area < best_object_area:
best_object_area = object_area
best_object_id = obj["id"]
return best_object_id

View File

@@ -0,0 +1,447 @@
"""Object attribute."""
import base64
import logging
from collections import defaultdict
from statistics import median
import cv2
import numpy as np
from frigate.config import (
CameraConfig,
ModelConfig,
)
from frigate.util.image import (
area,
calculate_region,
draw_box_with_label,
draw_timestamp,
is_better_thumbnail,
)
from frigate.util.object import box_inside
logger = logging.getLogger(__name__)
class TrackedObject:
def __init__(
self,
model_config: ModelConfig,
camera_config: CameraConfig,
frame_cache,
obj_data: dict[str, any],
):
# set the score history then remove as it is not part of object state
self.score_history = obj_data["score_history"]
del obj_data["score_history"]
self.obj_data = obj_data
self.colormap = model_config.colormap
self.logos = model_config.all_attribute_logos
self.camera_config = camera_config
self.frame_cache = frame_cache
self.zone_presence: dict[str, int] = {}
self.zone_loitering: dict[str, int] = {}
self.current_zones = []
self.entered_zones = []
self.attributes = defaultdict(float)
self.false_positive = True
self.has_clip = False
self.has_snapshot = False
self.top_score = self.computed_score = 0.0
self.thumbnail_data = None
self.last_updated = 0
self.last_published = 0
self.frame = None
self.active = True
self.pending_loitering = False
self.previous = self.to_dict()
def _is_false_positive(self):
# once a true positive, always a true positive
if not self.false_positive:
return False
threshold = self.camera_config.objects.filters[self.obj_data["label"]].threshold
return self.computed_score < threshold
def compute_score(self):
"""get median of scores for object."""
return median(self.score_history)
def update(self, current_frame_time: float, obj_data, has_valid_frame: bool):
thumb_update = False
significant_change = False
autotracker_update = False
# if the object is not in the current frame, add a 0.0 to the score history
if obj_data["frame_time"] != current_frame_time:
self.score_history.append(0.0)
else:
self.score_history.append(obj_data["score"])
# only keep the last 10 scores
if len(self.score_history) > 10:
self.score_history = self.score_history[-10:]
# calculate if this is a false positive
self.computed_score = self.compute_score()
if self.computed_score > self.top_score:
self.top_score = self.computed_score
self.false_positive = self._is_false_positive()
self.active = self.is_active()
if not self.false_positive and has_valid_frame:
# determine if this frame is a better thumbnail
if self.thumbnail_data is None or is_better_thumbnail(
self.obj_data["label"],
self.thumbnail_data,
obj_data,
self.camera_config.frame_shape,
):
self.thumbnail_data = {
"frame_time": current_frame_time,
"box": obj_data["box"],
"area": obj_data["area"],
"region": obj_data["region"],
"score": obj_data["score"],
"attributes": obj_data["attributes"],
}
thumb_update = True
# check zones
current_zones = []
bottom_center = (obj_data["centroid"][0], obj_data["box"][3])
in_loitering_zone = False
# check each zone
for name, zone in self.camera_config.zones.items():
# if the zone is not for this object type, skip
if len(zone.objects) > 0 and obj_data["label"] not in zone.objects:
continue
contour = zone.contour
zone_score = self.zone_presence.get(name, 0) + 1
# check if the object is in the zone
if cv2.pointPolygonTest(contour, bottom_center, False) >= 0:
# if the object passed the filters once, don't apply them again
if name in self.current_zones or not zone_filtered(self, zone.filters):
# an object is only considered present in a zone if it has a zone inertia of 3+
if zone_score >= zone.inertia:
# if the zone has loitering time, update loitering status
if zone.loitering_time > 0:
in_loitering_zone = True
loitering_score = self.zone_loitering.get(name, 0) + 1
# loitering time is configured as seconds, convert to count of frames
if loitering_score >= (
self.camera_config.zones[name].loitering_time
* self.camera_config.detect.fps
):
current_zones.append(name)
if name not in self.entered_zones:
self.entered_zones.append(name)
else:
self.zone_loitering[name] = loitering_score
else:
self.zone_presence[name] = zone_score
else:
# once an object has a zone inertia of 3+ it is not checked anymore
if 0 < zone_score < zone.inertia:
self.zone_presence[name] = zone_score - 1
# update loitering status
self.pending_loitering = in_loitering_zone
# maintain attributes
for attr in obj_data["attributes"]:
if self.attributes[attr["label"]] < attr["score"]:
self.attributes[attr["label"]] = attr["score"]
# populate the sub_label for object with highest scoring logo
if self.obj_data["label"] in ["car", "package", "person"]:
recognized_logos = {
k: self.attributes[k] for k in self.logos if k in self.attributes
}
if len(recognized_logos) > 0:
max_logo = max(recognized_logos, key=recognized_logos.get)
# don't overwrite an existing sub label unless it's the same logo (just refresh its score)
if (
self.obj_data.get("sub_label") is None
or self.obj_data["sub_label"][0] == max_logo
):
self.obj_data["sub_label"] = (max_logo, recognized_logos[max_logo])
# check for significant change
if not self.false_positive:
# if the zones changed, signal an update
if set(self.current_zones) != set(current_zones):
significant_change = True
# if the position changed, signal an update
if self.obj_data["position_changes"] != obj_data["position_changes"]:
significant_change = True
if self.obj_data["attributes"] != obj_data["attributes"]:
significant_change = True
# if the state changed between stationary and active
if self.previous["active"] != self.active:
significant_change = True
# update at least once per minute
if self.obj_data["frame_time"] - self.previous["frame_time"] > 60:
significant_change = True
# update autotrack at most 3 objects per second
if self.obj_data["frame_time"] - self.previous["frame_time"] >= (1 / 3):
autotracker_update = True
self.obj_data.update(obj_data)
self.current_zones = current_zones
return (thumb_update, significant_change, autotracker_update)
def to_dict(self, include_thumbnail: bool = False):
event = {
"id": self.obj_data["id"],
"camera": self.camera_config.name,
"frame_time": self.obj_data["frame_time"],
"snapshot": self.thumbnail_data,
"label": self.obj_data["label"],
"sub_label": self.obj_data.get("sub_label"),
"top_score": self.top_score,
"false_positive": self.false_positive,
"start_time": self.obj_data["start_time"],
"end_time": self.obj_data.get("end_time", None),
"score": self.obj_data["score"],
"box": self.obj_data["box"],
"area": self.obj_data["area"],
"ratio": self.obj_data["ratio"],
"region": self.obj_data["region"],
"active": self.active,
"stationary": not self.active,
"motionless_count": self.obj_data["motionless_count"],
"position_changes": self.obj_data["position_changes"],
"current_zones": self.current_zones.copy(),
"entered_zones": self.entered_zones.copy(),
"has_clip": self.has_clip,
"has_snapshot": self.has_snapshot,
"attributes": self.attributes,
"current_attributes": self.obj_data["attributes"],
"pending_loitering": self.pending_loitering,
}
if include_thumbnail:
event["thumbnail"] = base64.b64encode(self.get_thumbnail()).decode("utf-8")
return event
def is_active(self):
return not self.is_stationary()
def is_stationary(self):
return (
self.obj_data["motionless_count"]
> self.camera_config.detect.stationary.threshold
)
def get_thumbnail(self):
if (
self.thumbnail_data is None
or self.thumbnail_data["frame_time"] not in self.frame_cache
):
ret, jpg = cv2.imencode(".jpg", np.zeros((175, 175, 3), np.uint8))
jpg_bytes = self.get_jpg_bytes(
timestamp=False, bounding_box=False, crop=True, height=175
)
if jpg_bytes:
return jpg_bytes
else:
ret, jpg = cv2.imencode(".jpg", np.zeros((175, 175, 3), np.uint8))
return jpg.tobytes()
def get_clean_png(self):
if self.thumbnail_data is None:
return None
try:
best_frame = cv2.cvtColor(
self.frame_cache[self.thumbnail_data["frame_time"]],
cv2.COLOR_YUV2BGR_I420,
)
except KeyError:
logger.warning(
f"Unable to create clean png because frame {self.thumbnail_data['frame_time']} is not in the cache"
)
return None
ret, png = cv2.imencode(".png", best_frame)
if ret:
return png.tobytes()
else:
return None
def get_jpg_bytes(
self, timestamp=False, bounding_box=False, crop=False, height=None, quality=70
):
if self.thumbnail_data is None:
return None
try:
best_frame = cv2.cvtColor(
self.frame_cache[self.thumbnail_data["frame_time"]],
cv2.COLOR_YUV2BGR_I420,
)
except KeyError:
logger.warning(
f"Unable to create jpg because frame {self.thumbnail_data['frame_time']} is not in the cache"
)
return None
if bounding_box:
thickness = 2
color = self.colormap[self.obj_data["label"]]
# draw the bounding boxes on the frame
box = self.thumbnail_data["box"]
draw_box_with_label(
best_frame,
box[0],
box[1],
box[2],
box[3],
self.obj_data["label"],
f"{int(self.thumbnail_data['score']*100)}% {int(self.thumbnail_data['area'])}",
thickness=thickness,
color=color,
)
# draw any attributes
for attribute in self.thumbnail_data["attributes"]:
box = attribute["box"]
draw_box_with_label(
best_frame,
box[0],
box[1],
box[2],
box[3],
attribute["label"],
f"{attribute['score']:.0%}",
thickness=thickness,
color=color,
)
if crop:
box = self.thumbnail_data["box"]
box_size = 300
region = calculate_region(
best_frame.shape,
box[0],
box[1],
box[2],
box[3],
box_size,
multiplier=1.1,
)
best_frame = best_frame[region[1] : region[3], region[0] : region[2]]
if height:
width = int(height * best_frame.shape[1] / best_frame.shape[0])
best_frame = cv2.resize(
best_frame, dsize=(width, height), interpolation=cv2.INTER_AREA
)
if timestamp:
color = self.camera_config.timestamp_style.color
draw_timestamp(
best_frame,
self.thumbnail_data["frame_time"],
self.camera_config.timestamp_style.format,
font_effect=self.camera_config.timestamp_style.effect,
font_thickness=self.camera_config.timestamp_style.thickness,
font_color=(color.blue, color.green, color.red),
position=self.camera_config.timestamp_style.position,
)
ret, jpg = cv2.imencode(
".jpg", best_frame, [int(cv2.IMWRITE_JPEG_QUALITY), quality]
)
if ret:
return jpg.tobytes()
else:
return None
def zone_filtered(obj: TrackedObject, object_config):
object_name = obj.obj_data["label"]
if object_name in object_config:
obj_settings = object_config[object_name]
# if the min area is larger than the
# detected object, don't add it to detected objects
if obj_settings.min_area > obj.obj_data["area"]:
return True
# if the detected object is larger than the
# max area, don't add it to detected objects
if obj_settings.max_area < obj.obj_data["area"]:
return True
# if the score is lower than the threshold, skip
if obj_settings.threshold > obj.computed_score:
return True
# if the object is not proportionally wide enough
if obj_settings.min_ratio > obj.obj_data["ratio"]:
return True
# if the object is proportionally too wide
if obj_settings.max_ratio < obj.obj_data["ratio"]:
return True
return False
class TrackedObjectAttribute:
def __init__(self, raw_data: tuple) -> None:
self.label = raw_data[0]
self.score = raw_data[1]
self.box = raw_data[2]
self.area = raw_data[3]
self.ratio = raw_data[4]
self.region = raw_data[5]
def get_tracking_data(self) -> dict[str, any]:
"""Return data saved to the object."""
return {
"label": self.label,
"score": self.score,
"box": self.box,
}
def find_best_object(self, objects: list[dict[str, any]]) -> str:
"""Find the best attribute for each object and return its ID."""
best_object_area = None
best_object_id = None
for obj in objects:
if not box_inside(obj["box"], self.box):
continue
object_area = area(obj["box"])
# if multiple objects have the same attribute then they
# are overlapping, it is most likely that the smaller object
# is the one with the attribute
if best_object_area is None:
best_object_area = object_area
best_object_id = obj["id"]
elif object_area < best_object_area:
best_object_area = object_area
best_object_id = obj["id"]
return best_object_id
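A self-contained sketch of the smallest-enclosing-object rule in find_best_object; area and box_inside are simplified stand-ins for the frigate.util helpers, assuming (x1, y1, x2, y2) boxes:

def area(box):
    return (box[2] - box[0]) * (box[3] - box[1])

def box_inside(outer, inner):
    # true when inner is fully contained within outer
    return (
        inner[0] >= outer[0] and inner[1] >= outer[1]
        and inner[2] <= outer[2] and inner[3] <= outer[3]
    )

attribute_box = (40, 40, 60, 60)
objects = [
    {"id": "big", "box": (0, 0, 200, 200)},
    {"id": "small", "box": (30, 30, 80, 80)},
]
best = min(
    (o for o in objects if box_inside(o["box"], attribute_box)),
    key=lambda o: area(o["box"]),
)
print(best["id"])  # "small" -- the smaller of the overlapping objects wins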

View File

@@ -8,10 +8,11 @@ import multiprocessing as mp
import queue
import re
import shlex
import struct
import urllib.parse
from collections.abc import Mapping
from pathlib import Path
from typing import Any, Optional, Tuple
from typing import Any, Optional, Tuple, Union
import numpy as np
import pytz
@@ -182,16 +183,11 @@ def update_yaml_from_url(file_path, url):
update_yaml_file(file_path, key_path, new_value_list)
else:
value = new_value_list[0]
if "," in value:
# Skip conversion if we're a mask or zone string
update_yaml_file(file_path, key_path, value)
else:
try:
value = ast.literal_eval(value)
except (ValueError, SyntaxError):
pass
update_yaml_file(file_path, key_path, value)
try:
# no need to convert if we have a mask/zone string
value = ast.literal_eval(value) if "," not in value else value
except (ValueError, SyntaxError):
pass
update_yaml_file(file_path, key_path, value)
@@ -342,3 +338,32 @@ def generate_color_palette(n):
colors.append(interpolate(color1, color2, factor))
return colors
def serialize(
vector: Union[list[float], np.ndarray, float], pack: bool = True
) -> bytes:
"""Serializes a list of floats, numpy array, or single float into a compact "raw bytes" format"""
if isinstance(vector, np.ndarray):
# Convert numpy array to list of floats
vector = vector.flatten().tolist()
elif isinstance(vector, (float, np.float32, np.float64)):
# Handle single float values
vector = [vector]
elif not isinstance(vector, list):
raise TypeError(
f"Input must be a list of floats, a numpy array, or a single float. Got {type(vector)}"
)
try:
if pack:
return struct.pack("%sf" % len(vector), *vector)
else:
return vector
except struct.error as e:
raise ValueError(f"Failed to pack vector: {e}. Vector: {vector}")
def deserialize(bytes_data: bytes) -> list[float]:
"""Deserializes a compact "raw bytes" format into a list of floats"""
return list(struct.unpack("%sf" % (len(bytes_data) // 4), bytes_data))
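A round-trip check of the two helpers above, using the same struct format independently of Frigate:

import struct

vec = [0.25, 0.5, 0.75]
packed = struct.pack("%sf" % len(vec), *vec)  # what serialize(vec) produces
restored = list(struct.unpack("%sf" % (len(packed) // 4), packed))
print(len(packed), restored)  # 12 [0.25, 0.5, 0.75] -- 4 bytes per float32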

View File

@@ -19,6 +19,13 @@ class FileLock:
self.path = path
self.lock_file = f"{path}.lock"
# we have not acquired the lock yet so it should not exist
if os.path.exists(self.lock_file):
try:
os.remove(self.lock_file)
except Exception:
pass
def acquire(self):
parent_dir = os.path.dirname(self.lock_file)
os.makedirs(parent_dir, exist_ok=True)
@@ -56,14 +63,12 @@ class ModelDownloader:
self.download_complete = threading.Event()
def ensure_model_files(self):
for file in self.file_names:
self.requestor.send_data(
UPDATE_MODEL_STATE,
{
"model": f"{self.model_name}-{file}",
"state": ModelStatusTypesEnum.downloading,
},
)
self.mark_files_state(
self.requestor,
self.model_name,
self.file_names,
ModelStatusTypesEnum.downloading,
)
self.download_thread = threading.Thread(
target=self._download_models,
name=f"_download_model_{self.model_name}",
@@ -92,6 +97,7 @@ class ModelDownloader:
},
)
self.requestor.stop()
self.download_complete.set()
@staticmethod
@@ -119,5 +125,21 @@ class ModelDownloader:
if not silent:
logger.info(f"Downloading complete: {url}")
@staticmethod
def mark_files_state(
requestor: InterProcessRequestor,
model_name: str,
files: list[str],
state: ModelStatusTypesEnum,
) -> None:
for file_name in files:
requestor.send_data(
UPDATE_MODEL_STATE,
{
"model": f"{model_name}-{file_name}",
"state": state,
},
)
def wait_for_download(self):
self.download_complete.wait()
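The consolidation above boils down to one helper publishing the same UPDATE_MODEL_STATE payload for every file; a sketch where send stands in for requestor.send_data and the model/file names are illustrative:

def mark_files_state(send, model_name: str, files: list[str], state: str) -> None:
    # publish one state message per model file
    for file_name in files:
        send({"model": f"{model_name}-{file_name}", "state": state})

mark_files_state(print, "example-model", ["model.onnx", "tokenizer"], "downloading")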

View File

@@ -36,6 +36,72 @@ def transliterate_to_latin(text: str) -> str:
return unidecode(text)
def on_edge(box, frame_shape):
if (
box[0] == 0
or box[1] == 0
or box[2] == frame_shape[1] - 1
or box[3] == frame_shape[0] - 1
):
return True
def has_better_attr(current_thumb, new_obj, attr_label) -> bool:
max_new_attr = max(
[0]
+ [area(a["box"]) for a in new_obj["attributes"] if a["label"] == attr_label]
)
max_current_attr = max(
[0]
+ [
area(a["box"])
for a in current_thumb["attributes"]
if a["label"] == attr_label
]
)
# true if the new object has a larger matching attr box than the current thumb
return max_new_attr > max_current_attr
def is_better_thumbnail(label, current_thumb, new_obj, frame_shape) -> bool:
# larger is better
# cutoff images are less ideal, but they should also be smaller?
# better scores are obviously better too
# check face on person
if label == "person":
if has_better_attr(current_thumb, new_obj, "face"):
return True
# if the current thumb has a face attr, don't update unless it gets better
if any([a["label"] == "face" for a in current_thumb["attributes"]]):
return False
# check license_plate on car
if label == "car":
if has_better_attr(current_thumb, new_obj, "license_plate"):
return True
# if the current thumb has a license_plate attr, don't update unless it gets better
if any([a["label"] == "license_plate" for a in current_thumb["attributes"]]):
return False
# if the new_thumb is on an edge, and the current thumb is not
if on_edge(new_obj["box"], frame_shape) and not on_edge(
current_thumb["box"], frame_shape
):
return False
# if the score is better by more than 5%
if new_obj["score"] > current_thumb["score"] + 0.05:
return True
# if the area is 10% larger
if new_obj["area"] > current_thumb["area"] * 1.1:
return True
return False
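Taken in isolation, the two numeric fallback rules above behave like this (toy thumbnail dicts with empty attribute lists, so the attr checks don't apply):

def better_by_score_or_area(current: dict, new: dict) -> bool:
    if new["score"] > current["score"] + 0.05:  # score better by more than 5%
        return True
    if new["area"] > current["area"] * 1.1:  # area more than 10% larger
        return True
    return False

current = {"score": 0.80, "area": 1000}
print(better_by_score_or_area(current, {"score": 0.83, "area": 1050}))  # False
print(better_by_score_or_area(current, {"score": 0.86, "area": 1050}))  # True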
def draw_timestamp(
frame,
timestamp,

View File

@@ -1,39 +1,118 @@
"""Model Utils"""
import os
from typing import Any
import onnxruntime as ort
try:
import openvino as ov
except ImportError:
# openvino is not included
pass
def get_ort_providers(
force_cpu: bool = False, openvino_device: str = "AUTO"
force_cpu: bool = False, openvino_device: str = "AUTO", requires_fp16: bool = False
) -> tuple[list[str], list[dict[str, any]]]:
if force_cpu:
return (["CPUExecutionProvider"], [{}])
return (
["CPUExecutionProvider"],
[
{
"enable_cpu_mem_arena": False,
}
],
)
providers = ort.get_available_providers()
providers = []
options = []
for provider in providers:
if provider == "TensorrtExecutionProvider":
os.makedirs("/config/model_cache/tensorrt/ort/trt-engines", exist_ok=True)
for provider in ort.get_available_providers():
if provider == "CUDAExecutionProvider":
providers.append(provider)
options.append(
{
"trt_timing_cache_enable": True,
"trt_engine_cache_enable": True,
"trt_timing_cache_path": "/config/model_cache/tensorrt/ort",
"trt_engine_cache_path": "/config/model_cache/tensorrt/ort/trt-engines",
"arena_extend_strategy": "kSameAsRequested",
}
)
elif provider == "TensorrtExecutionProvider":
# TensorrtExecutionProvider uses too much memory without options to control it
pass
elif provider == "OpenVINOExecutionProvider":
os.makedirs("/config/model_cache/openvino/ort", exist_ok=True)
providers.append(provider)
options.append(
{
"arena_extend_strategy": "kSameAsRequested",
"cache_dir": "/config/model_cache/openvino/ort",
"device_type": openvino_device,
}
)
elif provider == "CPUExecutionProvider":
providers.append(provider)
options.append(
{
"enable_cpu_mem_arena": False,
}
)
else:
providers.append(provider)
options.append({})
return (providers, options)
class ONNXModelRunner:
"""Run onnx models optimally based on available hardware."""
def __init__(self, model_path: str, device: str, requires_fp16: bool = False):
self.model_path = model_path
self.ort: ort.InferenceSession = None
self.ov: ov.Core = None
providers, options = get_ort_providers(device == "CPU", device, requires_fp16)
if "OpenVINOExecutionProvider" in providers:
# use OpenVINO directly
self.type = "ov"
self.ov = ov.Core()
self.ov.set_property(
{ov.properties.cache_dir: "/config/model_cache/openvino"}
)
self.interpreter = self.ov.compile_model(
model=model_path, device_name=device
)
else:
# Use ONNXRuntime
self.type = "ort"
self.ort = ort.InferenceSession(
model_path,
providers=providers,
provider_options=options,
)
def get_input_names(self) -> list[str]:
if self.type == "ov":
input_names = []
for input in self.interpreter.inputs:
input_names.extend(input.names)
return input_names
elif self.type == "ort":
return [input.name for input in self.ort.get_inputs()]
def run(self, input: dict[str, Any]) -> Any:
if self.type == "ov":
infer_request = self.interpreter.create_infer_request()
input_tensor = list(input.values())
if len(input_tensor) == 1:
input_tensor = ov.Tensor(array=input_tensor[0])
else:
input_tensor = ov.Tensor(array=input_tensor)
infer_request.infer(input_tensor)
return [infer_request.get_output_tensor().data]
elif self.type == "ort":
return self.ort.run(None, input)
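Assuming this module lives at frigate.util.model (the path is not shown in this diff), the CPU fallback can be inspected directly; the memory arena is now disabled there:

from frigate.util.model import get_ort_providers  # import path is an assumption

providers, options = get_ort_providers(force_cpu=True)
print(providers)  # ['CPUExecutionProvider']
print(options)    # [{'enable_cpu_mem_arena': False}]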

View File

@@ -318,10 +318,11 @@ def get_intel_gpu_stats() -> dict[str, str]:
if video_frame is not None:
video[key].append(float(video_frame))
results["gpu"] = (
f"{round(((sum(render['global']) / len(render['global'])) + (sum(video['global']) / len(video['global']))) / 2, 2)}%"
)
results["mem"] = "-%"
if render["global"]:
results["gpu"] = (
f"{round(((sum(render['global']) / len(render['global'])) + (sum(video['global']) / len(video['global']))) / 2, 2)}%"
)
results["mem"] = "-%"
if len(render.keys()) > 1:
results["clients"] = {}

View File

@@ -27,7 +27,7 @@ from frigate.object_detection import RemoteObjectDetector
from frigate.ptz.autotrack import ptz_moving_at_frame_time
from frigate.track import ObjectTracker
from frigate.track.norfair_tracker import NorfairTracker
from frigate.track.object_attribute import ObjectAttribute
from frigate.track.tracked_object import TrackedObjectAttribute
from frigate.util.builtin import EventsPerSecond, get_tomorrow_at_time
from frigate.util.image import (
FrameManager,
@@ -734,10 +734,10 @@ def process_frames(
object_tracker.update_frame_times(frame_time)
# group the attribute detections based on what label they apply to
attribute_detections: dict[str, list[ObjectAttribute]] = {}
attribute_detections: dict[str, list[TrackedObjectAttribute]] = {}
for label, attribute_labels in model_config.attributes_map.items():
attribute_detections[label] = [
ObjectAttribute(d)
TrackedObjectAttribute(d)
for d in consolidated_detections
if d[0] in attribute_labels
]
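A toy version of the grouping above: raw detections are tuples whose first element is the label, and attributes_map decides which labels count as attributes of each object type (the map contents here are illustrative):

attributes_map = {"person": ["face"], "car": ["license_plate", "amazon"]}
consolidated_detections = [
    ("person", 0.9, (0, 0, 10, 10)),
    ("face", 0.8, (2, 2, 4, 4)),
    ("amazon", 0.8, (1, 1, 3, 3)),
]

attribute_detections = {
    label: [d for d in consolidated_detections if d[0] in attribute_labels]
    for label, attribute_labels in attributes_map.items()
}
print(attribute_detections["person"])  # [('face', 0.8, (2, 2, 4, 4))]
print(attribute_detections["car"])     # [('amazon', 0.8, (1, 1, 3, 3))]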

View File

@@ -2,6 +2,7 @@ import { baseUrl } from "./baseUrl";
import { useCallback, useEffect, useState } from "react";
import useWebSocket, { ReadyState } from "react-use-websocket";
import {
EmbeddingsReindexProgressType,
FrigateCameraState,
FrigateEvent,
FrigateReview,
@@ -302,6 +303,42 @@ export function useModelState(
return { payload: data ? data[model] : undefined };
}
export function useEmbeddingsReindexProgress(
revalidateOnFocus: boolean = true,
): {
payload: EmbeddingsReindexProgressType;
} {
const {
value: { payload },
send: sendCommand,
} = useWs("embeddings_reindex_progress", "embeddingsReindexProgress");
const data = useDeepMemo(JSON.parse(payload as string));
useEffect(() => {
let listener = undefined;
if (revalidateOnFocus) {
sendCommand("embeddingsReindexProgress");
listener = () => {
if (document.visibilityState == "visible") {
sendCommand("embeddingsReindexProgress");
}
};
addEventListener("visibilitychange", listener);
}
return () => {
if (listener) {
removeEventListener("visibilitychange", listener);
}
};
// we know that these deps are correct
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [revalidateOnFocus]);
return { payload: data };
}
export function useMotionActivity(camera: string): { payload: string } {
const {
value: { payload },

View File

@@ -1,3 +1,4 @@
import { useEmbeddingsReindexProgress } from "@/api/ws";
import {
StatusBarMessagesContext,
StatusMessage,
@@ -41,6 +42,23 @@ export default function Statusbar() {
});
}, [potentialProblems, addMessage, clearMessages]);
const { payload: reindexState } = useEmbeddingsReindexProgress();
useEffect(() => {
if (reindexState) {
if (reindexState.status == "indexing") {
clearMessages("embeddings-reindex");
addMessage(
"embeddings-reindex",
`Reindexing embeddings (${Math.floor((reindexState.processed_objects / reindexState.total_objects) * 100)}% complete)`,
);
}
if (reindexState.status === "completed") {
clearMessages("embeddings-reindex");
}
}
}, [reindexState, addMessage, clearMessages]);
return (
<div className="absolute bottom-0 left-0 right-0 z-10 flex h-8 w-full items-center justify-between border-t border-secondary-highlight bg-background_alt px-4 dark:text-secondary-foreground">
<div className="flex h-full items-center gap-2">

View File

@@ -0,0 +1,65 @@
import { useState } from "react";
import { Button } from "@/components/ui/button";
import { toast } from "sonner";
import ActivityIndicator from "../indicators/activity-indicator";
import { FaDownload } from "react-icons/fa";
import { formatUnixTimestampToDateTime } from "@/utils/dateUtil";
type DownloadVideoButtonProps = {
source: string;
camera: string;
startTime: number;
};
export function DownloadVideoButton({
source,
camera,
startTime,
}: DownloadVideoButtonProps) {
const [isDownloading, setIsDownloading] = useState(false);
const formattedDate = formatUnixTimestampToDateTime(startTime, {
strftime_fmt: "%D-%T",
time_style: "medium",
date_style: "medium",
});
const filename = `${camera}_${formattedDate}.mp4`;
const handleDownloadStart = () => {
setIsDownloading(true);
toast.success("Your review item video has started downloading.", {
position: "top-center",
});
};
const handleDownloadEnd = () => {
setIsDownloading(false);
toast.success("Download completed successfully.", {
position: "top-center",
});
};
return (
<div className="flex justify-center">
<Button
asChild
disabled={isDownloading}
className="flex items-center gap-2"
size="sm"
>
<a
href={source}
download={filename}
onClick={handleDownloadStart}
onBlur={handleDownloadEnd}
>
{isDownloading ? (
<ActivityIndicator className="size-4" />
) : (
<FaDownload className="size-4 text-secondary-foreground" />
)}
</a>
</Button>
</div>
);
}

View File

@@ -34,6 +34,7 @@ import { toast } from "sonner";
import useKeyboardListener from "@/hooks/use-keyboard-listener";
import { Tooltip, TooltipContent, TooltipTrigger } from "../ui/tooltip";
import { capitalizeFirstLetter } from "@/utils/stringUtil";
import { buttonVariants } from "../ui/button";
type ReviewCardProps = {
event: ReviewSegment;
@@ -228,7 +229,10 @@ export default function ReviewCard({
<AlertDialogCancel onClick={() => setOptionsOpen(false)}>
Cancel
</AlertDialogCancel>
<AlertDialogAction className="bg-destructive" onClick={onDelete}>
<AlertDialogAction
className={buttonVariants({ variant: "destructive" })}
onClick={onDelete}
>
Delete
</AlertDialogAction>
</AlertDialogFooter>
@@ -295,7 +299,10 @@ export default function ReviewCard({
<AlertDialogCancel onClick={() => setOptionsOpen(false)}>
Cancel
</AlertDialogCancel>
<AlertDialogAction className="bg-destructive" onClick={onDelete}>
<AlertDialogAction
className={buttonVariants({ variant: "destructive" })}
onClick={onDelete}
>
Delete
</AlertDialogAction>
</AlertDialogFooter>

View File

@@ -1,50 +1,56 @@
import { useCallback } from "react";
import { useCallback, useMemo } from "react";
import { useApiHost } from "@/api";
import { getIconForLabel } from "@/utils/iconUtil";
import TimeAgo from "../dynamic/TimeAgo";
import useSWR from "swr";
import { FrigateConfig } from "@/types/frigateConfig";
import { isIOS, isSafari } from "react-device-detect";
import Chip from "@/components/indicators/Chip";
import { useFormattedTimestamp } from "@/hooks/use-date-utils";
import useImageLoaded from "@/hooks/use-image-loaded";
import { Tooltip, TooltipContent, TooltipTrigger } from "../ui/tooltip";
import ImageLoadingIndicator from "../indicators/ImageLoadingIndicator";
import ActivityIndicator from "../indicators/activity-indicator";
import { capitalizeFirstLetter } from "@/utils/stringUtil";
import { SearchResult } from "@/types/search";
import useContextMenu from "@/hooks/use-contextmenu";
import { cn } from "@/lib/utils";
import { TooltipPortal } from "@radix-ui/react-tooltip";
type SearchThumbnailProps = {
searchResult: SearchResult;
findSimilar: () => void;
onClick: (searchResult: SearchResult) => void;
};
export default function SearchThumbnail({
searchResult,
findSimilar,
onClick,
}: SearchThumbnailProps) {
const apiHost = useApiHost();
const { data: config } = useSWR<FrigateConfig>("config");
const [imgRef, imgLoaded, onImgLoad] = useImageLoaded();
useContextMenu(imgRef, findSimilar);
// interactions
const handleOnClick = useCallback(() => {
onClick(searchResult);
}, [searchResult, onClick]);
// date
const objectLabel = useMemo(() => {
if (
!config ||
!searchResult.sub_label ||
!config.model.attributes_map[searchResult.label]
) {
return searchResult.label;
}
const formattedDate = useFormattedTimestamp(
searchResult.start_time,
config?.ui.time_format == "24hour" ? "%b %-d, %H:%M" : "%b %-d, %I:%M %p",
config?.ui.timezone,
);
if (
config.model.attributes_map[searchResult.label].includes(
searchResult.sub_label,
)
) {
return searchResult.sub_label;
}
return `${searchResult.label}-verified`;
}, [config, searchResult]);
return (
<div className="relative size-full cursor-pointer" onClick={handleOnClick}>
@@ -80,17 +86,23 @@ export default function SearchThumbnail({
<TooltipTrigger asChild>
<div className="mx-3 pb-1 text-sm text-white">
<Chip
className={`z-0 flex items-start justify-between space-x-1 bg-gray-500 bg-gradient-to-br from-gray-400 to-gray-500`}
className={`z-0 flex items-center justify-between gap-1 space-x-1 bg-gray-500 bg-gradient-to-br from-gray-400 to-gray-500 text-xs`}
onClick={() => onClick(searchResult)}
>
{getIconForLabel(searchResult.label, "size-3 text-white")}
{getIconForLabel(objectLabel, "size-3 text-white")}
{Math.round(
(searchResult.data.score ??
searchResult.data.top_score ??
searchResult.top_score) * 100,
)}
%
</Chip>
</div>
</TooltipTrigger>
</div>
<TooltipPortal>
<TooltipContent className="capitalize">
{[...new Set([searchResult.label])]
{[objectLabel]
.filter(
(item) => item !== undefined && !item.includes("-verified"),
)
@@ -103,18 +115,7 @@ export default function SearchThumbnail({
</Tooltip>
</div>
<div className="rounded-t-l pointer-events-none absolute inset-x-0 top-0 z-10 h-[30%] w-full bg-gradient-to-b from-black/60 to-transparent"></div>
<div className="rounded-b-l pointer-events-none absolute inset-x-0 bottom-0 z-10 h-[20%] w-full bg-gradient-to-t from-black/60 to-transparent">
<div className="mx-3 flex h-full items-end justify-between pb-1 text-sm text-white">
{searchResult.end_time ? (
<TimeAgo time={searchResult.start_time * 1000} dense />
) : (
<div>
<ActivityIndicator size={24} />
</div>
)}
{formattedDate}
</div>
</div>
<div className="rounded-b-l pointer-events-none absolute inset-x-0 bottom-0 z-10 flex h-[20%] items-end bg-gradient-to-t from-black/60 to-transparent"></div>
</div>
</div>
);

View File

@@ -0,0 +1,229 @@
import { useCallback, useState } from "react";
import TimeAgo from "../dynamic/TimeAgo";
import useSWR from "swr";
import { FrigateConfig } from "@/types/frigateConfig";
import { useFormattedTimestamp } from "@/hooks/use-date-utils";
import { Tooltip, TooltipContent, TooltipTrigger } from "../ui/tooltip";
import ActivityIndicator from "../indicators/activity-indicator";
import { SearchResult } from "@/types/search";
import {
DropdownMenu,
DropdownMenuContent,
DropdownMenuItem,
DropdownMenuTrigger,
} from "@/components/ui/dropdown-menu";
import {
AlertDialog,
AlertDialogAction,
AlertDialogCancel,
AlertDialogContent,
AlertDialogDescription,
AlertDialogFooter,
AlertDialogHeader,
AlertDialogTitle,
} from "../ui/alert-dialog";
import { LuCamera, LuDownload, LuMoreVertical, LuTrash2 } from "react-icons/lu";
import FrigatePlusIcon from "@/components/icons/FrigatePlusIcon";
import { FrigatePlusDialog } from "../overlay/dialog/FrigatePlusDialog";
import { Event } from "@/types/event";
import { FaArrowsRotate } from "react-icons/fa6";
import { baseUrl } from "@/api/baseUrl";
import axios from "axios";
import { toast } from "sonner";
import { MdImageSearch } from "react-icons/md";
import { isMobileOnly } from "react-device-detect";
import { buttonVariants } from "../ui/button";
import { cn } from "@/lib/utils";
type SearchThumbnailProps = {
searchResult: SearchResult;
columns: number;
findSimilar: () => void;
refreshResults: () => void;
showObjectLifecycle: () => void;
};
export default function SearchThumbnailFooter({
searchResult,
columns,
findSimilar,
refreshResults,
showObjectLifecycle,
}: SearchThumbnailProps) {
const { data: config } = useSWR<FrigateConfig>("config");
// interactions
const [showFrigatePlus, setShowFrigatePlus] = useState(false);
const [deleteDialogOpen, setDeleteDialogOpen] = useState(false);
const handleDelete = useCallback(() => {
axios
.delete(`events/${searchResult.id}`)
.then((resp) => {
if (resp.status == 200) {
toast.success("Tracked object deleted successfully.", {
position: "top-center",
});
refreshResults();
}
})
.catch(() => {
toast.error("Failed to delete tracked object.", {
position: "top-center",
});
});
}, [searchResult, refreshResults]);
// date
const formattedDate = useFormattedTimestamp(
searchResult.start_time,
config?.ui.time_format == "24hour" ? "%b %-d, %H:%M" : "%b %-d, %I:%M %p",
config?.ui.timezone,
);
return (
<>
<AlertDialog
open={deleteDialogOpen}
onOpenChange={() => setDeleteDialogOpen(!deleteDialogOpen)}
>
<AlertDialogContent>
<AlertDialogHeader>
<AlertDialogTitle>Confirm Delete</AlertDialogTitle>
</AlertDialogHeader>
<AlertDialogDescription>
Are you sure you want to delete this tracked object?
</AlertDialogDescription>
<AlertDialogFooter>
<AlertDialogCancel>Cancel</AlertDialogCancel>
<AlertDialogAction
className={buttonVariants({ variant: "destructive" })}
onClick={handleDelete}
>
Delete
</AlertDialogAction>
</AlertDialogFooter>
</AlertDialogContent>
</AlertDialog>
<FrigatePlusDialog
upload={
showFrigatePlus ? (searchResult as unknown as Event) : undefined
}
onClose={() => setShowFrigatePlus(false)}
onEventUploaded={() => {
searchResult.plus_id = "submitted";
}}
/>
<div
className={cn(
"flex w-full flex-row items-center justify-between",
columns > 4 &&
"items-start sm:flex-col sm:gap-2 lg:flex-row lg:items-center lg:gap-1",
)}
>
<div className="flex flex-col items-start text-xs text-primary-variant">
{searchResult.end_time ? (
<TimeAgo time={searchResult.start_time * 1000} dense />
) : (
<div>
<ActivityIndicator size={14} />
</div>
)}
{formattedDate}
</div>
<div className="flex flex-row items-center justify-end gap-6 md:gap-4">
{!isMobileOnly &&
config?.plus?.enabled &&
searchResult.has_snapshot &&
searchResult.end_time &&
!searchResult.plus_id && (
<Tooltip>
<TooltipTrigger>
<FrigatePlusIcon
className="size-5 cursor-pointer text-primary-variant hover:text-primary"
onClick={() => setShowFrigatePlus(true)}
/>
</TooltipTrigger>
<TooltipContent>Submit to Frigate+</TooltipContent>
</Tooltip>
)}
{config?.semantic_search?.enabled && (
<Tooltip>
<TooltipTrigger>
<MdImageSearch
className="size-5 cursor-pointer text-primary-variant hover:text-primary"
onClick={findSimilar}
/>
</TooltipTrigger>
<TooltipContent>Find similar</TooltipContent>
</Tooltip>
)}
<DropdownMenu>
<DropdownMenuTrigger>
<LuMoreVertical className="size-5 cursor-pointer text-primary-variant hover:text-primary" />
</DropdownMenuTrigger>
<DropdownMenuContent align={"end"}>
{searchResult.has_clip && (
<DropdownMenuItem>
<a
className="justify_start flex items-center"
href={`${baseUrl}api/events/${searchResult.id}/clip.mp4`}
download={`${searchResult.camera}_${searchResult.label}.mp4`}
>
<LuDownload className="mr-2 size-4" />
<span>Download video</span>
</a>
</DropdownMenuItem>
)}
{searchResult.has_snapshot && (
<DropdownMenuItem>
<a
className="justify_start flex items-center"
href={`${baseUrl}api/events/${searchResult.id}/snapshot.jpg`}
download={`${searchResult.camera}_${searchResult.label}.jpg`}
>
<LuCamera className="mr-2 size-4" />
<span>Download snapshot</span>
</a>
</DropdownMenuItem>
)}
<DropdownMenuItem
className="cursor-pointer"
onClick={showObjectLifecycle}
>
<FaArrowsRotate className="mr-2 size-4" />
<span>View object lifecycle</span>
</DropdownMenuItem>
{isMobileOnly &&
config?.plus?.enabled &&
searchResult.has_snapshot &&
searchResult.end_time &&
!searchResult.plus_id && (
<DropdownMenuItem
className="cursor-pointer"
onClick={() => setShowFrigatePlus(true)}
>
<FrigatePlusIcon className="mr-2 size-4 cursor-pointer text-primary" />
<span>Submit to Frigate+</span>
</DropdownMenuItem>
)}
<DropdownMenuItem
className="cursor-pointer"
onClick={() => setDeleteDialogOpen(true)}
>
<LuTrash2 className="mr-2 size-4" />
<span>Delete</span>
</DropdownMenuItem>
</DropdownMenuContent>
</DropdownMenu>
</div>
</div>
</>
);
}

View File

@@ -3,7 +3,7 @@ import { isDesktop, isMobile } from "react-device-detect";
import useSWR from "swr";
import { MdHome } from "react-icons/md";
import { usePersistedOverlayState } from "@/hooks/use-overlay-state";
import { Button } from "../ui/button";
import { Button, buttonVariants } from "../ui/button";
import { useCallback, useMemo, useState } from "react";
import { Tooltip, TooltipContent, TooltipTrigger } from "../ui/tooltip";
import { LuPencil, LuPlus } from "react-icons/lu";
@@ -518,7 +518,10 @@ export function CameraGroupRow({
</AlertDialogDescription>
<AlertDialogFooter>
<AlertDialogCancel>Cancel</AlertDialogCancel>
<AlertDialogAction onClick={onDeleteGroup}>
<AlertDialogAction
className={buttonVariants({ variant: "destructive" })}
onClick={onDeleteGroup}
>
Delete
</AlertDialogAction>
</AlertDialogFooter>
@@ -643,6 +646,11 @@ export function CameraGroupEdit({
setIsLoading(true);
let renamingQuery = "";
if (editingGroup && editingGroup[0] !== values.name) {
renamingQuery = `camera_groups.${editingGroup[0]}&`;
}
const order =
editingGroup === undefined
? currentGroups.length + 1
@@ -655,9 +663,12 @@ export function CameraGroupEdit({
.join("");
axios
.put(`config/set?${orderQuery}&${iconQuery}${cameraQueries}`, {
requires_restart: 0,
})
.put(
`config/set?${renamingQuery}${orderQuery}&${iconQuery}${cameraQueries}`,
{
requires_restart: 0,
},
)
.then((res) => {
if (res.status === 200) {
toast.success(`Camera group (${values.name}) has been saved.`, {
@@ -712,7 +723,6 @@ export function CameraGroupEdit({
<Input
className="text-md w-full border border-input bg-background p-2 hover:bg-accent hover:text-accent-foreground dark:[color-scheme:dark]"
placeholder="Enter a name..."
disabled={editingGroup !== undefined}
{...field}
/>
</FormControl>

View File

@@ -69,6 +69,70 @@ export function CamerasFilterButton({
</Button>
);
const content = (
<CamerasFilterContent
allCameras={allCameras}
groups={groups}
currentCameras={currentCameras}
setCurrentCameras={setCurrentCameras}
setOpen={setOpen}
updateCameraFilter={updateCameraFilter}
/>
);
if (isMobile) {
return (
<Drawer
open={open}
onOpenChange={(open) => {
if (!open) {
setCurrentCameras(selectedCameras);
}
setOpen(open);
}}
>
<DrawerTrigger asChild>{trigger}</DrawerTrigger>
<DrawerContent className="max-h-[75dvh] overflow-hidden">
{content}
</DrawerContent>
</Drawer>
);
}
return (
<DropdownMenu
modal={false}
open={open}
onOpenChange={(open) => {
if (!open) {
setCurrentCameras(selectedCameras);
}
setOpen(open);
}}
>
<DropdownMenuTrigger asChild>{trigger}</DropdownMenuTrigger>
<DropdownMenuContent>{content}</DropdownMenuContent>
</DropdownMenu>
);
}
type CamerasFilterContentProps = {
allCameras: string[];
currentCameras: string[] | undefined;
groups: [string, CameraGroupConfig][];
setCurrentCameras: (cameras: string[] | undefined) => void;
setOpen: (open: boolean) => void;
updateCameraFilter: (cameras: string[] | undefined) => void;
};
export function CamerasFilterContent({
allCameras,
currentCameras,
groups,
setCurrentCameras,
setOpen,
updateCameraFilter,
}: CamerasFilterContentProps) {
return (
<>
{isMobile && (
<>
@@ -158,40 +222,4 @@ export function CamerasFilterButton({
</div>
</>
);
if (isMobile) {
return (
<Drawer
open={open}
onOpenChange={(open) => {
if (!open) {
setCurrentCameras(selectedCameras);
}
setOpen(open);
}}
>
<DrawerTrigger asChild>{trigger}</DrawerTrigger>
<DrawerContent className="max-h-[75dvh] overflow-hidden">
{content}
</DrawerContent>
</Drawer>
);
}
return (
<DropdownMenu
modal={false}
open={open}
onOpenChange={(open) => {
if (!open) {
setCurrentCameras(selectedCameras);
}
setOpen(open);
}}
>
<DropdownMenuTrigger asChild>{trigger}</DropdownMenuTrigger>
<DropdownMenuContent>{content}</DropdownMenuContent>
</DropdownMenu>
);
}

View File

@@ -1,7 +1,7 @@
import { FaCircleCheck } from "react-icons/fa6";
import { useCallback, useState } from "react";
import axios from "axios";
import { Button } from "../ui/button";
import { Button, buttonVariants } from "../ui/button";
import { isDesktop } from "react-device-detect";
import { FaCompactDisc } from "react-icons/fa";
import { HiTrash } from "react-icons/hi";
@@ -79,7 +79,10 @@ export default function ReviewActionGroup({
</AlertDialogDescription>
<AlertDialogFooter>
<AlertDialogCancel>Cancel</AlertDialogCancel>
<AlertDialogAction className="bg-destructive" onClick={onDelete}>
<AlertDialogAction
className={buttonVariants({ variant: "destructive" })}
onClick={onDelete}
>
Delete
</AlertDialogAction>
</AlertDialogFooter>

View File

@@ -241,6 +241,8 @@ export default function ReviewFilterGroup({
mode="none"
setMode={() => {}}
setRange={() => {}}
showExportPreview={false}
setShowExportPreview={() => {}}
/>
)}
</div>

View File

@@ -1,5 +1,4 @@
import { Button } from "../ui/button";
import { Popover, PopoverContent, PopoverTrigger } from "../ui/popover";
import useSWR from "swr";
import { FrigateConfig } from "@/types/frigateConfig";
import { useCallback, useEffect, useMemo, useState } from "react";
@@ -10,25 +9,19 @@ import { Switch } from "../ui/switch";
import { Label } from "../ui/label";
import FilterSwitch from "./FilterSwitch";
import { FilterList } from "@/types/filter";
import { CalendarRangeFilterButton } from "./CalendarFilterButton";
import { CamerasFilterButton } from "./CamerasFilterButton";
import {
DEFAULT_SEARCH_FILTERS,
SearchFilter,
SearchFilters,
SearchSource,
DEFAULT_TIME_RANGE_AFTER,
DEFAULT_TIME_RANGE_BEFORE,
} from "@/types/search";
import { DateRange } from "react-day-picker";
import { cn } from "@/lib/utils";
import SubFilterIcon from "../icons/SubFilterIcon";
import { FaLocationDot } from "react-icons/fa6";
import { MdLabel } from "react-icons/md";
import SearchSourceIcon from "../icons/SearchSourceIcon";
import PlatformAwareDialog from "../overlay/dialog/PlatformAwareDialog";
import { FaArrowRight, FaClock } from "react-icons/fa";
import { useFormattedHour } from "@/hooks/use-date-utils";
import SearchFilterDialog from "../overlay/dialog/SearchFilterDialog";
import { CalendarRangeFilterButton } from "./CalendarFilterButton";
type SearchFilterGroupProps = {
className: string;
@@ -79,8 +72,6 @@ export default function SearchFilterGroup({
return [...labels].sort();
}, [config, filterList, filter]);
const { data: allSubLabels } = useSWR(["sub_labels", { split_joined: 1 }]);
const allZones = useMemo<string[]>(() => {
if (filterList?.zones) {
return filterList.zones;
@@ -159,6 +150,15 @@ export default function SearchFilterGroup({
}}
/>
)}
{filters.includes("general") && (
<GeneralFilterButton
allLabels={filterValues.labels}
selectedLabels={filter?.labels}
updateLabelFilter={(newLabels) => {
onUpdateFilter({ ...filter, labels: newLabels });
}}
/>
)}
{filters.includes("date") && (
<CalendarRangeFilterButton
range={
@@ -173,54 +173,12 @@ export default function SearchFilterGroup({
updateSelectedRange={onUpdateSelectedRange}
/>
)}
{filters.includes("time") && (
<TimeRangeFilterButton
config={config}
timeRange={filter?.time_range}
updateTimeRange={(time_range) =>
onUpdateFilter({ ...filter, time_range })
}
/>
)}
{filters.includes("zone") && allZones.length > 0 && (
<ZoneFilterButton
allZones={filterValues.zones}
selectedZones={filter?.zones}
updateZoneFilter={(newZones) =>
onUpdateFilter({ ...filter, zones: newZones })
}
/>
)}
{filters.includes("general") && (
<GeneralFilterButton
allLabels={filterValues.labels}
selectedLabels={filter?.labels}
updateLabelFilter={(newLabels) => {
onUpdateFilter({ ...filter, labels: newLabels });
}}
/>
)}
{filters.includes("sub") && (
<SubFilterButton
allSubLabels={allSubLabels}
selectedSubLabels={filter?.sub_labels}
updateSubLabelFilter={(newSubLabels) =>
onUpdateFilter({ ...filter, sub_labels: newSubLabels })
}
/>
)}
{config?.semantic_search?.enabled &&
filters.includes("source") &&
!filter?.search_type?.includes("similarity") && (
<SearchTypeButton
selectedSearchSources={
filter?.search_type ?? ["thumbnail", "description"]
}
updateSearchSourceFilter={(newSearchSource) =>
onUpdateFilter({ ...filter, search_type: newSearchSource })
}
/>
)}
<SearchFilterDialog
config={config}
filter={filter}
filterValues={filterValues}
onUpdateFilter={onUpdateFilter}
/>
</div>
);
}
@@ -295,7 +253,11 @@ function GeneralFilterButton({
<PlatformAwareDialog
trigger={trigger}
content={content}
contentClassName={isDesktop ? "" : "max-h-[75dvh] overflow-hidden p-4"}
contentClassName={
isDesktop
? "scrollbar-container h-auto max-h-[80dvh] overflow-y-auto"
: "max-h-[75dvh] overflow-hidden p-4"
}
open={open}
onOpenChange={(open) => {
if (!open) {
@@ -326,7 +288,7 @@ export function GeneralFilterContent({
}: GeneralFilterContentProps) {
return (
<>
<div className="scrollbar-container h-auto max-h-[80dvh] overflow-y-auto overflow-x-hidden">
<div className="overflow-x-hidden">
<div className="mb-5 mt-2.5 flex items-center justify-between">
<Label
className="mx-2 cursor-pointer text-primary"
@@ -397,681 +359,3 @@ export function GeneralFilterContent({
</>
);
}
type TimeRangeFilterButtonProps = {
config?: FrigateConfig;
timeRange?: string;
updateTimeRange: (range: string | undefined) => void;
};
function TimeRangeFilterButton({
config,
timeRange,
updateTimeRange,
}: TimeRangeFilterButtonProps) {
const [open, setOpen] = useState(false);
const [startOpen, setStartOpen] = useState(false);
const [endOpen, setEndOpen] = useState(false);
const [afterHour, beforeHour] = useMemo(() => {
if (!timeRange || !timeRange.includes(",")) {
return [DEFAULT_TIME_RANGE_AFTER, DEFAULT_TIME_RANGE_BEFORE];
}
return timeRange.split(",");
}, [timeRange]);
const [selectedAfterHour, setSelectedAfterHour] = useState(afterHour);
const [selectedBeforeHour, setSelectedBeforeHour] = useState(beforeHour);
// format based on locale
const formattedAfter = useFormattedHour(config, afterHour);
const formattedBefore = useFormattedHour(config, beforeHour);
const formattedSelectedAfter = useFormattedHour(config, selectedAfterHour);
const formattedSelectedBefore = useFormattedHour(config, selectedBeforeHour);
useEffect(() => {
setSelectedAfterHour(afterHour);
setSelectedBeforeHour(beforeHour);
// only refresh when state changes
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [timeRange]);
const trigger = (
<Button
size="sm"
variant={timeRange ? "select" : "default"}
className="flex items-center gap-2 capitalize"
>
<FaClock
className={`${timeRange ? "text-selected-foreground" : "text-secondary-foreground"}`}
/>
<div
className={`${timeRange ? "text-selected-foreground" : "text-primary"}`}
>
{timeRange ? `${formattedAfter} - ${formattedBefore}` : "All Times"}
</div>
</Button>
);
const content = (
<div className="scrollbar-container h-auto max-h-[80dvh] overflow-y-auto overflow-x-hidden">
<div className="my-5 flex flex-row items-center justify-center gap-2">
<Popover
open={startOpen}
onOpenChange={(open) => {
if (!open) {
setStartOpen(false);
}
}}
>
<PopoverTrigger asChild>
<Button
className={`text-primary ${isDesktop ? "" : "text-xs"} `}
variant={startOpen ? "select" : "default"}
size="sm"
onClick={() => {
setStartOpen(true);
setEndOpen(false);
}}
>
{formattedSelectedAfter}
</Button>
</PopoverTrigger>
<PopoverContent className="flex flex-row items-center justify-center">
<input
className="text-md mx-4 w-full border border-input bg-background p-1 text-secondary-foreground hover:bg-accent hover:text-accent-foreground dark:[color-scheme:dark]"
id="startTime"
type="time"
value={selectedAfterHour}
step="60"
onChange={(e) => {
const clock = e.target.value;
const [hour, minute, _] = clock.split(":");
setSelectedAfterHour(`${hour}:${minute}`);
}}
/>
</PopoverContent>
</Popover>
<FaArrowRight className="size-4 text-primary" />
<Popover
open={endOpen}
onOpenChange={(open) => {
if (!open) {
setEndOpen(false);
}
}}
>
<PopoverTrigger asChild>
<Button
className={`text-primary ${isDesktop ? "" : "text-xs"}`}
variant={endOpen ? "select" : "default"}
size="sm"
onClick={() => {
setEndOpen(true);
setStartOpen(false);
}}
>
{formattedSelectedBefore}
</Button>
</PopoverTrigger>
<PopoverContent className="flex flex-col items-center">
<input
className="text-md mx-4 w-full border border-input bg-background p-1 text-secondary-foreground hover:bg-accent hover:text-accent-foreground dark:[color-scheme:dark]"
id="endTime"
type="time"
value={
selectedBeforeHour == "24:00" ? "23:59" : selectedBeforeHour
}
step="60"
onChange={(e) => {
const clock = e.target.value;
const [hour, minute, _] = clock.split(":");
setSelectedBeforeHour(`${hour}:${minute}`);
}}
/>
</PopoverContent>
</Popover>
</div>
<DropdownMenuSeparator />
<div className="flex items-center justify-evenly p-2">
<Button
variant="select"
onClick={() => {
if (
selectedAfterHour == DEFAULT_TIME_RANGE_AFTER &&
selectedBeforeHour == DEFAULT_TIME_RANGE_BEFORE
) {
updateTimeRange(undefined);
} else {
updateTimeRange(`${selectedAfterHour},${selectedBeforeHour}`);
}
setOpen(false);
}}
>
Apply
</Button>
<Button
onClick={() => {
setSelectedAfterHour(DEFAULT_TIME_RANGE_AFTER);
setSelectedBeforeHour(DEFAULT_TIME_RANGE_BEFORE);
updateTimeRange(undefined);
}}
>
Reset
</Button>
</div>
</div>
);
return (
<PlatformAwareDialog
trigger={trigger}
content={content}
open={open}
onOpenChange={(open) => {
setOpen(open);
}}
/>
);
}
type ZoneFilterButtonProps = {
allZones: string[];
selectedZones?: string[];
updateZoneFilter: (zones: string[] | undefined) => void;
};
function ZoneFilterButton({
allZones,
selectedZones,
updateZoneFilter,
}: ZoneFilterButtonProps) {
const [open, setOpen] = useState(false);
const [currentZones, setCurrentZones] = useState<string[] | undefined>(
selectedZones,
);
const buttonText = useMemo(() => {
if (isMobile) {
return "Zones";
}
if (!selectedZones || selectedZones.length == 0) {
return "All Zones";
}
if (selectedZones.length == 1) {
return selectedZones[0];
}
return `${selectedZones.length} Zones`;
}, [selectedZones]);
// ui
useEffect(() => {
setCurrentZones(selectedZones);
// only refresh when state changes
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [selectedZones]);
const trigger = (
<Button
size="sm"
variant={selectedZones?.length ? "select" : "default"}
className="flex items-center gap-2 capitalize"
>
<FaLocationDot
className={`${selectedZones?.length ? "text-selected-foreground" : "text-secondary-foreground"}`}
/>
<div
className={`${selectedZones?.length ? "text-selected-foreground" : "text-primary"}`}
>
{buttonText}
</div>
</Button>
);
const content = (
<ZoneFilterContent
allZones={allZones}
selectedZones={selectedZones}
currentZones={currentZones}
setCurrentZones={setCurrentZones}
updateZoneFilter={updateZoneFilter}
onClose={() => setOpen(false)}
/>
);
return (
<PlatformAwareDialog
trigger={trigger}
content={content}
open={open}
onOpenChange={(open) => {
if (!open) {
setCurrentZones(selectedZones);
}
setOpen(open);
}}
/>
);
}
type ZoneFilterContentProps = {
allZones?: string[];
selectedZones?: string[];
currentZones?: string[];
updateZoneFilter?: (zones: string[] | undefined) => void;
setCurrentZones?: (zones: string[] | undefined) => void;
onClose: () => void;
};
export function ZoneFilterContent({
allZones,
selectedZones,
currentZones,
updateZoneFilter,
setCurrentZones,
onClose,
}: ZoneFilterContentProps) {
return (
<>
<div className="scrollbar-container h-auto max-h-[80dvh] overflow-y-auto overflow-x-hidden">
{allZones && setCurrentZones && (
<>
{isDesktop && <DropdownMenuSeparator />}
<div className="mb-5 mt-2.5 flex items-center justify-between">
<Label
className="mx-2 cursor-pointer text-primary"
htmlFor="allZones"
>
All Zones
</Label>
<Switch
className="ml-1"
id="allZones"
checked={currentZones == undefined}
onCheckedChange={(isChecked) => {
if (isChecked) {
setCurrentZones(undefined);
}
}}
/>
</div>
<div className="my-2.5 flex flex-col gap-2.5">
{allZones.map((item) => (
<FilterSwitch
key={item}
label={item.replaceAll("_", " ")}
isChecked={currentZones?.includes(item) ?? false}
onCheckedChange={(isChecked) => {
if (isChecked) {
const updatedZones = currentZones
? [...currentZones]
: [];
updatedZones.push(item);
setCurrentZones(updatedZones);
} else {
const updatedZones = currentZones
? [...currentZones]
: [];
// cannot deselect the last item
if (updatedZones.length > 1) {
updatedZones.splice(updatedZones.indexOf(item), 1);
setCurrentZones(updatedZones);
}
}
}}
/>
))}
</div>
</>
)}
</div>
{isDesktop && <DropdownMenuSeparator />}
<div className="flex items-center justify-evenly p-2">
<Button
variant="select"
onClick={() => {
if (updateZoneFilter && selectedZones != currentZones) {
updateZoneFilter(currentZones);
}
onClose();
}}
>
Apply
</Button>
<Button
onClick={() => {
setCurrentZones?.(undefined);
updateZoneFilter?.(undefined);
}}
>
Reset
</Button>
</div>
</>
);
}
type SubFilterButtonProps = {
allSubLabels: string[];
selectedSubLabels: string[] | undefined;
updateSubLabelFilter: (labels: string[] | undefined) => void;
};
function SubFilterButton({
allSubLabels,
selectedSubLabels,
updateSubLabelFilter,
}: SubFilterButtonProps) {
const [open, setOpen] = useState(false);
const [currentSubLabels, setCurrentSubLabels] = useState<
string[] | undefined
>(selectedSubLabels);
const buttonText = useMemo(() => {
if (isMobile) {
return "Sub Labels";
}
if (!selectedSubLabels || selectedSubLabels.length == 0) {
return "All Sub Labels";
}
if (selectedSubLabels.length == 1) {
return selectedSubLabels[0];
}
return `${selectedSubLabels.length} Sub Labels`;
}, [selectedSubLabels]);
const trigger = (
<Button
size="sm"
variant={selectedSubLabels?.length ? "select" : "default"}
className="flex items-center gap-2 capitalize"
>
<SubFilterIcon
className={`${selectedSubLabels?.length ? "text-selected-foreground" : "text-secondary-foreground"}`}
/>
<div
className={`${selectedSubLabels?.length ? "text-selected-foreground" : "text-primary"}`}
>
{buttonText}
</div>
</Button>
);
const content = (
<SubFilterContent
allSubLabels={allSubLabels}
selectedSubLabels={selectedSubLabels}
currentSubLabels={currentSubLabels}
setCurrentSubLabels={setCurrentSubLabels}
updateSubLabelFilter={updateSubLabelFilter}
onClose={() => setOpen(false)}
/>
);
return (
<PlatformAwareDialog
trigger={trigger}
content={content}
open={open}
onOpenChange={(open) => {
if (!open) {
setCurrentSubLabels(selectedSubLabels);
}
setOpen(open);
}}
/>
);
}
type SubFilterContentProps = {
allSubLabels: string[];
selectedSubLabels: string[] | undefined;
currentSubLabels: string[] | undefined;
updateSubLabelFilter: (labels: string[] | undefined) => void;
setCurrentSubLabels: (labels: string[] | undefined) => void;
onClose: () => void;
};
export function SubFilterContent({
allSubLabels,
selectedSubLabels,
currentSubLabels,
updateSubLabelFilter,
setCurrentSubLabels,
onClose,
}: SubFilterContentProps) {
return (
<>
<div className="scrollbar-container h-auto max-h-[80dvh] overflow-y-auto overflow-x-hidden">
<div className="mb-5 mt-2.5 flex items-center justify-between">
<Label
className="mx-2 cursor-pointer text-primary"
htmlFor="allLabels"
>
All Sub Labels
</Label>
<Switch
className="ml-1"
id="allLabels"
checked={currentSubLabels == undefined}
onCheckedChange={(isChecked) => {
if (isChecked) {
setCurrentSubLabels(undefined);
}
}}
/>
</div>
<div className="my-2.5 flex flex-col gap-2.5">
{allSubLabels.map((item) => (
<FilterSwitch
key={item}
label={item.replaceAll("_", " ")}
isChecked={currentSubLabels?.includes(item) ?? false}
onCheckedChange={(isChecked) => {
if (isChecked) {
const updatedLabels = currentSubLabels
? [...currentSubLabels]
: [];
updatedLabels.push(item);
setCurrentSubLabels(updatedLabels);
} else {
const updatedLabels = currentSubLabels
? [...currentSubLabels]
: [];
// cannot deselect the last item
if (updatedLabels.length > 1) {
updatedLabels.splice(updatedLabels.indexOf(item), 1);
setCurrentSubLabels(updatedLabels);
}
}
}}
/>
))}
</div>
</div>
{isDesktop && <DropdownMenuSeparator />}
<div className="flex items-center justify-evenly p-2">
<Button
variant="select"
onClick={() => {
if (selectedSubLabels != currentSubLabels) {
updateSubLabelFilter(currentSubLabels);
}
onClose();
}}
>
Apply
</Button>
<Button
onClick={() => {
updateSubLabelFilter(undefined);
}}
>
Reset
</Button>
</div>
</>
);
}
type SearchTypeButtonProps = {
selectedSearchSources: SearchSource[] | undefined;
updateSearchSourceFilter: (sources: SearchSource[] | undefined) => void;
};
function SearchTypeButton({
selectedSearchSources,
updateSearchSourceFilter,
}: SearchTypeButtonProps) {
const [open, setOpen] = useState(false);
const buttonText = useMemo(() => {
if (isMobile) {
return "Sources";
}
if (
!selectedSearchSources ||
selectedSearchSources.length == 0 ||
selectedSearchSources.length == 2
) {
return "All Search Sources";
}
if (selectedSearchSources.length == 1) {
return selectedSearchSources[0];
}
return `${selectedSearchSources.length} Search Sources`;
}, [selectedSearchSources]);
const trigger = (
<Button
size="sm"
variant={selectedSearchSources?.length != 2 ? "select" : "default"}
className="flex items-center gap-2 capitalize"
>
<SearchSourceIcon
className={`${selectedSearchSources?.length != 2 ? "text-selected-foreground" : "text-secondary-foreground"}`}
/>
<div
className={`${selectedSearchSources?.length != 2 ? "text-selected-foreground" : "text-primary"}`}
>
{buttonText}
</div>
</Button>
);
const content = (
<SearchTypeContent
selectedSearchSources={selectedSearchSources}
updateSearchSourceFilter={updateSearchSourceFilter}
onClose={() => setOpen(false)}
/>
);
return (
<PlatformAwareDialog
trigger={trigger}
content={content}
open={open}
onOpenChange={setOpen}
/>
);
}
type SearchTypeContentProps = {
selectedSearchSources: SearchSource[] | undefined;
updateSearchSourceFilter: (sources: SearchSource[] | undefined) => void;
onClose: () => void;
};
export function SearchTypeContent({
selectedSearchSources,
updateSearchSourceFilter,
onClose,
}: SearchTypeContentProps) {
const [currentSearchSources, setCurrentSearchSources] = useState<
SearchSource[] | undefined
>(selectedSearchSources);
return (
<>
<div className="scrollbar-container h-auto max-h-[80dvh] overflow-y-auto overflow-x-hidden">
<div className="my-2.5 flex flex-col gap-2.5">
<FilterSwitch
label="Thumbnail Image"
isChecked={currentSearchSources?.includes("thumbnail") ?? false}
onCheckedChange={(isChecked) => {
const updatedSources = currentSearchSources
? [...currentSearchSources]
: [];
if (isChecked) {
updatedSources.push("thumbnail");
setCurrentSearchSources(updatedSources);
} else {
if (updatedSources.length > 1) {
const index = updatedSources.indexOf("thumbnail");
if (index !== -1) updatedSources.splice(index, 1);
setCurrentSearchSources(updatedSources);
}
}
}}
/>
<FilterSwitch
label="Description"
isChecked={currentSearchSources?.includes("description") ?? false}
onCheckedChange={(isChecked) => {
const updatedSources = currentSearchSources
? [...currentSearchSources]
: [];
if (isChecked) {
updatedSources.push("description");
setCurrentSearchSources(updatedSources);
} else {
if (updatedSources.length > 1) {
const index = updatedSources.indexOf("description");
if (index !== -1) updatedSources.splice(index, 1);
setCurrentSearchSources(updatedSources);
}
}
}}
/>
</div>
{isDesktop && <DropdownMenuSeparator />}
<div className="flex items-center justify-evenly p-2">
<Button
variant="select"
onClick={() => {
if (selectedSearchSources != currentSearchSources) {
updateSearchSourceFilter(currentSearchSources);
}
onClose();
}}
>
Apply
</Button>
<Button
onClick={() => {
updateSearchSourceFilter(undefined);
setCurrentSearchSources(["thumbnail", "description"]);
}}
>
Reset
</Button>
</div>
</div>
</>
);
}
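
Note: the time_range filter used throughout these components is serialized as a single "after,before" string in 24-hour HH:MM. A minimal sketch of the round-trip, assuming the defaults cover the full day ("00:00"/"24:00", consistent with the 24:00-to-23:59 display special case above); the helper names are ours:

// Hypothetical standalone helpers mirroring the component logic above.
const DEFAULT_TIME_RANGE_AFTER = "00:00";
const DEFAULT_TIME_RANGE_BEFORE = "24:00";

function parseTimeRange(timeRange?: string): [string, string] {
  // Fall back to the full day when the filter is unset or malformed.
  if (!timeRange || !timeRange.includes(",")) {
    return [DEFAULT_TIME_RANGE_AFTER, DEFAULT_TIME_RANGE_BEFORE];
  }
  const [after, before] = timeRange.split(",");
  return [after, before];
}

function serializeTimeRange(after: string, before: string): string | undefined {
  // The buttons above clear the filter entirely when both ends are defaults.
  if (after === DEFAULT_TIME_RANGE_AFTER && before === DEFAULT_TIME_RANGE_BEFORE) {
    return undefined;
  }
  return `${after},${before}`;
}

serializeTimeRange("08:00", "17:30"); // "08:00,17:30"
parseTimeRange(undefined); // ["00:00", "24:00"]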

View File

@@ -8,6 +8,7 @@ import {
AlertDialogHeader,
AlertDialogTitle,
} from "@/components/ui/alert-dialog";
import { buttonVariants } from "../ui/button";
type DeleteSearchDialogProps = {
isOpen: boolean;
@@ -35,7 +36,7 @@ export function DeleteSearchDialog({
<AlertDialogCancel onClick={onClose}>Cancel</AlertDialogCancel>
<AlertDialogAction
onClick={onConfirm}
className="bg-destructive text-white"
className={buttonVariants({ variant: "destructive" })}
>
Delete
</AlertDialogAction>
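
Note: replacing the hard-coded "bg-destructive text-white" with buttonVariants keeps the delete action visually in sync with the shared Button component. A sketch of the shadcn/ui convention this relies on (the class strings are illustrative, not the project's exact ones):

// buttonVariants is a cva() helper that returns a class string, letting
// non-Button elements (here, AlertDialogAction) reuse Button styling.
import { cva } from "class-variance-authority";

export const buttonVariants = cva(
  "inline-flex items-center justify-center rounded-md text-sm font-medium",
  {
    variants: {
      variant: {
        default: "bg-primary text-primary-foreground",
        destructive: "bg-destructive text-destructive-foreground",
      },
    },
    defaultVariants: { variant: "default" },
  },
);

// Usage as in the dialog above:
buttonVariants({ variant: "destructive" });
// => "inline-flex items-center ... bg-destructive text-destructive-foreground"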

View File

@@ -2,11 +2,11 @@ import React, { useState, useRef, useEffect, useCallback } from "react";
import {
LuX,
LuFilter,
LuImage,
LuChevronDown,
LuChevronUp,
LuTrash2,
LuStar,
LuSearch,
} from "react-icons/lu";
import {
FilterType,
@@ -43,6 +43,7 @@ import {
import { toast } from "sonner";
import useSWR from "swr";
import { FrigateConfig } from "@/types/frigateConfig";
import { MdImageSearch } from "react-icons/md";
type InputWithTagsProps = {
inputFocused: boolean;
@@ -161,8 +162,12 @@ export default function InputWithTags({
.map((word) => word.trim())
.lastIndexOf(words.filter((word) => word.trim() !== "").pop() || "");
const currentWord = words[lastNonEmptyWordIndex];
if (words.at(-1) === "") {
return current_suggestions;
}
return current_suggestions.filter((suggestion) =>
suggestion.toLowerCase().includes(currentWord.toLowerCase()),
suggestion.toLowerCase().startsWith(currentWord),
);
},
[inputValue, suggestions, currentFilterType],
@@ -196,10 +201,13 @@ export default function InputWithTags({
allSuggestions[type as FilterType]?.includes(value) ||
type == "before" ||
type == "after" ||
type == "time_range"
type == "time_range" ||
type == "min_score" ||
type == "max_score"
) {
const newFilters = { ...filters };
let timestamp = 0;
let score = 0;
switch (type) {
case "before":
@@ -239,6 +247,40 @@ export default function InputWithTags({
newFilters[type] = timestamp / 1000;
}
break;
case "min_score":
case "max_score":
score = parseInt(value);
if (score >= 0) {
// Check for conflicts between min_score and max_score
if (
type === "min_score" &&
filters.max_score !== undefined &&
score > filters.max_score * 100
) {
toast.error(
"The 'min_score' must be less than or equal to the 'max_score'.",
{
position: "top-center",
},
);
return;
}
if (
type === "max_score" &&
filters.min_score !== undefined &&
score < filters.min_score * 100
) {
toast.error(
"The 'max_score' must be greater than or equal to the 'min_score'.",
{
position: "top-center",
},
);
return;
}
newFilters[type] = score / 100;
}
break;
case "time_range":
newFilters[type] = value;
break;
@@ -254,6 +296,14 @@ export default function InputWithTags({
);
}
break;
case "has_snapshot":
newFilters.has_snapshot = value == "yes" ? 1 : 0;
break;
case "has_clip":
newFilters.has_clip = value == "yes" ? 1 : 0;
break;
case "event_id":
newFilters.event_id = value;
break;
@@ -297,6 +347,10 @@ export default function InputWithTags({
} - ${
config?.ui.time_format === "24hour" ? endTime : convertTo12Hour(endTime)
}`;
} else if (filterType === "min_score" || filterType === "max_score") {
return Math.round(Number(filterValues) * 100).toString() + "%";
} else if (filterType === "has_clip" || filterType === "has_snapshot") {
return filterValues ? "Yes" : "No";
} else {
return filterValues as string;
}
@@ -315,7 +369,11 @@ export default function InputWithTags({
isValidTimeRange(
trimmedValue.replace("-", ","),
config?.ui.time_format,
))
)) ||
((filterType === "min_score" || filterType === "max_score") &&
!isNaN(Number(trimmedValue)) &&
Number(trimmedValue) >= 50 &&
Number(trimmedValue) <= 100)
) {
createFilter(
filterType,
@@ -397,6 +455,11 @@ export default function InputWithTags({
setIsSimilaritySearch(false);
}, [setFilters, resetSuggestions, setSearch, setInputFocused]);
const handleClearSimilarity = useCallback(() => {
removeFilter("event_id", filters.event_id!);
removeFilter("search_type", "similarity");
}, [removeFilter, filters]);
const handleInputBlur = useCallback(
(e: React.FocusEvent) => {
if (
@@ -504,7 +567,7 @@ export default function InputWithTags({
onFocus={handleInputFocus}
onBlur={handleInputBlur}
onKeyDown={handleInputKeyDown}
className="text-md h-9 pr-24"
className="text-md h-9 pr-32"
placeholder="Search..."
/>
<div className="absolute right-3 top-0 flex h-full flex-row items-center justify-center gap-5">
@@ -539,7 +602,7 @@ export default function InputWithTags({
{isSimilaritySearch && (
<Tooltip>
<TooltipTrigger className="cursor-default">
<LuImage
<MdImageSearch
aria-label="Similarity search active"
className="size-4 text-selected"
/>
@@ -631,14 +694,26 @@ export default function InputWithTags({
inputFocused ? "visible" : "hidden",
)}
>
{(Object.keys(filters).length > 0 || isSimilaritySearch) && (
{!currentFilterType && inputValue && (
<CommandGroup heading="Search">
<CommandItem
className="cursor-pointer"
onSelect={() => handleSearch(inputValue)}
>
<LuSearch className="mr-2 h-4 w-4" />
Search for "{inputValue}"
</CommandItem>
</CommandGroup>
)}
{(Object.keys(filters).filter((key) => key !== "query").length > 0 ||
isSimilaritySearch) && (
<CommandGroup heading="Active Filters">
<div className="my-2 flex flex-wrap gap-2 px-2">
{isSimilaritySearch && (
<span className="inline-flex items-center whitespace-nowrap rounded-full bg-blue-100 px-2 py-0.5 text-sm text-blue-800">
Similarity Search
<button
onClick={handleClearInput}
onClick={handleClearSimilarity}
className="ml-1 focus:outline-none"
aria-label="Clear similarity search"
>
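
Note: the min_score/max_score handling above converts between the whole percents users type (validated to 50-100) and the fractions stored on the filter (0.5-1.0). A condensed sketch of that validation, with a hypothetical setScore helper standing in for createFilter:

// Users type whole percents; filters store fractions.
type ScoreFilters = { min_score?: number; max_score?: number };

function setScore(
  filters: ScoreFilters,
  type: "min_score" | "max_score",
  value: string,
): ScoreFilters | string {
  const score = parseInt(value, 10);
  if (isNaN(score) || score < 50 || score > 100) {
    return "Scores must be between 50 and 100.";
  }
  // Reject empty ranges, mirroring the toast errors above.
  if (
    type === "min_score" &&
    filters.max_score !== undefined &&
    score > filters.max_score * 100
  ) {
    return "The 'min_score' must be less than or equal to the 'max_score'.";
  }
  if (
    type === "max_score" &&
    filters.min_score !== undefined &&
    score < filters.min_score * 100
  ) {
    return "The 'max_score' must be greater than or equal to the 'min_score'.";
  }
  return { ...filters, [type]: score / 100 };
}

setScore({ max_score: 0.9 }, "min_score", "75"); // { max_score: 0.9, min_score: 0.75 }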

View File

@@ -1,29 +1,101 @@
import { createContext, useContext, useEffect, useState } from "react";
import { createPortal } from "react-dom";
import { motion, AnimatePresence } from "framer-motion";
import { IoMdArrowRoundBack } from "react-icons/io";
import { cn } from "@/lib/utils";
import { isPWA } from "@/utils/isPWA";
import { ReactNode, useEffect, useState } from "react";
import { Button } from "../ui/button";
import { IoMdArrowRoundBack } from "react-icons/io";
import { motion, AnimatePresence } from "framer-motion";
import { Button } from "@/components/ui/button";
type MobilePageProps = {
children: ReactNode;
const MobilePageContext = createContext<{
open: boolean;
onOpenChange: (open: boolean) => void;
} | null>(null);
type MobilePageProps = {
children: React.ReactNode;
open?: boolean;
onOpenChange?: (open: boolean) => void;
};
export function MobilePage({ children, open, onOpenChange }: MobilePageProps) {
const [isVisible, setIsVisible] = useState(open);
export function MobilePage({
children,
open: controlledOpen,
onOpenChange,
}: MobilePageProps) {
const [uncontrolledOpen, setUncontrolledOpen] = useState(false);
const open = controlledOpen ?? uncontrolledOpen;
const setOpen = onOpenChange ?? setUncontrolledOpen;
return (
<MobilePageContext.Provider value={{ open, onOpenChange: setOpen }}>
{children}
</MobilePageContext.Provider>
);
}
type MobilePageTriggerProps = React.HTMLAttributes<HTMLDivElement>;
export function MobilePageTrigger({
children,
...props
}: MobilePageTriggerProps) {
const context = useContext(MobilePageContext);
if (!context)
throw new Error("MobilePageTrigger must be used within MobilePage");
return (
<div onClick={() => context.onOpenChange(true)} {...props}>
{children}
</div>
);
}
type MobilePagePortalProps = {
children: React.ReactNode;
container?: HTMLElement;
};
export function MobilePagePortal({
children,
container,
}: MobilePagePortalProps) {
const [mounted, setMounted] = useState(false);
useEffect(() => {
setMounted(true);
return () => setMounted(false);
}, []);
if (!mounted) return null;
return createPortal(children, container || document.body);
}
type MobilePageContentProps = {
children: React.ReactNode;
className?: string;
};
export function MobilePageContent({
children,
className,
}: MobilePageContentProps) {
const context = useContext(MobilePageContext);
if (!context)
throw new Error("MobilePageContent must be used within MobilePage");
const [isVisible, setIsVisible] = useState(context.open);
useEffect(() => {
if (context.open) {
setIsVisible(true);
}
}, [open]);
}, [context.open]);
const handleAnimationComplete = () => {
if (!open) {
if (!context.open) {
setIsVisible(false);
onOpenChange(false);
}
};
@@ -35,9 +107,10 @@ export function MobilePage({ children, open, onOpenChange }: MobilePageProps) {
"fixed inset-0 z-50 mb-12 bg-background",
isPWA && "mb-16",
"landscape:mb-14 landscape:md:mb-16",
className,
)}
initial={{ x: "100%" }}
animate={{ x: open ? 0 : "100%" }}
animate={{ x: context.open ? 0 : "100%" }}
exit={{ x: "100%" }}
transition={{ type: "spring", damping: 25, stiffness: 200 }}
onAnimationComplete={handleAnimationComplete}
@@ -49,37 +122,8 @@ export function MobilePage({ children, open, onOpenChange }: MobilePageProps) {
);
}
type MobileComponentProps = {
children: ReactNode;
className?: string;
};
export function MobilePageContent({
children,
className,
...props
}: MobileComponentProps) {
return (
<div className={cn("size-full", className)} {...props}>
{children}
</div>
);
}
export function MobilePageDescription({
children,
className,
...props
}: MobileComponentProps) {
return (
<p className={cn("text-sm text-muted-foreground", className)} {...props}>
{children}
</p>
);
}
interface MobilePageHeaderProps extends React.HTMLAttributes<HTMLDivElement> {
onClose: () => void;
onClose?: () => void;
}
export function MobilePageHeader({
@@ -88,6 +132,18 @@ export function MobilePageHeader({
onClose,
...props
}: MobilePageHeaderProps) {
const context = useContext(MobilePageContext);
if (!context)
throw new Error("MobilePageHeader must be used within MobilePage");
const handleClose = () => {
if (onClose) {
onClose();
} else {
context.onOpenChange(false);
}
};
return (
<div
className={cn(
@@ -99,7 +155,7 @@ export function MobilePageHeader({
<Button
className="absolute left-0 rounded-lg"
size="sm"
onClick={onClose}
onClick={handleClose}
>
<IoMdArrowRoundBack className="size-5 text-secondary-foreground" />
</Button>
@@ -108,14 +164,19 @@ export function MobilePageHeader({
);
}
export function MobilePageTitle({
children,
type MobilePageTitleProps = React.HTMLAttributes<HTMLHeadingElement>;
export function MobilePageTitle({ className, ...props }: MobilePageTitleProps) {
return <h2 className={cn("text-lg font-semibold", className)} {...props} />;
}
type MobilePageDescriptionProps = React.HTMLAttributes<HTMLParagraphElement>;
export function MobilePageDescription({
className,
...props
}: MobileComponentProps) {
}: MobilePageDescriptionProps) {
return (
<h2 className={cn("text-lg font-semibold", className)} {...props}>
{children}
</h2>
<p className={cn("text-sm text-muted-foreground", className)} {...props} />
);
}
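
Note: after this refactor, MobilePage is a controlled-or-uncontrolled compound component driven by context, mirroring shadcn/ui dialogs. A minimal usage sketch (the settings content is a placeholder):

// The trigger and header need no wiring because MobilePage provides
// open state via MobilePageContext.
import {
  MobilePage,
  MobilePageContent,
  MobilePageHeader,
  MobilePagePortal,
  MobilePageTitle,
  MobilePageTrigger,
} from "@/components/mobile/MobilePage";

export function ExampleMobileSettings() {
  return (
    <MobilePage>
      <MobilePageTrigger>
        <button>Open settings</button>
      </MobilePageTrigger>
      <MobilePagePortal>
        <MobilePageContent>
          <MobilePageHeader>
            <MobilePageTitle>Settings</MobilePageTitle>
          </MobilePageHeader>
          {/* placeholder body */}
          <div>Settings content</div>
        </MobilePageContent>
      </MobilePagePortal>
    </MobilePage>
  );
}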

View File

@@ -2,6 +2,7 @@ import { useCallback, useMemo, useState } from "react";
import {
Dialog,
DialogContent,
DialogDescription,
DialogFooter,
DialogHeader,
DialogTitle,
@@ -22,10 +23,13 @@ import { FrigateConfig } from "@/types/frigateConfig";
import { Popover, PopoverContent, PopoverTrigger } from "../ui/popover";
import { TimezoneAwareCalendar } from "./ReviewActivityCalendar";
import { SelectSeparator } from "../ui/select";
import { isDesktop, isIOS } from "react-device-detect";
import { isDesktop, isIOS, isMobile } from "react-device-detect";
import { Drawer, DrawerContent, DrawerTrigger } from "../ui/drawer";
import SaveExportOverlay from "./SaveExportOverlay";
import { getUTCOffset } from "@/utils/dateUtil";
import { baseUrl } from "@/api/baseUrl";
import { cn } from "@/lib/utils";
import { GenericVideoPlayer } from "../player/GenericVideoPlayer";
const EXPORT_OPTIONS = [
"1",
@@ -44,8 +48,10 @@ type ExportDialogProps = {
currentTime: number;
range?: TimeRange;
mode: ExportMode;
showPreview: boolean;
setRange: (range: TimeRange | undefined) => void;
setMode: (mode: ExportMode) => void;
setShowPreview: (showPreview: boolean) => void;
};
export default function ExportDialog({
camera,
@@ -53,10 +59,13 @@ export default function ExportDialog({
currentTime,
range,
mode,
showPreview,
setRange,
setMode,
setShowPreview,
}: ExportDialogProps) {
const [name, setName] = useState("");
const onStartExport = useCallback(() => {
if (!range) {
toast.error("No valid time range selected", { position: "top-center" });
@@ -109,9 +118,16 @@ export default function ExportDialog({
return (
<>
<ExportPreviewDialog
camera={camera}
range={range}
showPreview={showPreview}
setShowPreview={setShowPreview}
/>
<SaveExportOverlay
className="pointer-events-none absolute left-1/2 top-8 z-50 -translate-x-1/2"
show={mode == "timeline"}
onPreview={() => setShowPreview(true)}
onSave={() => onStartExport()}
onCancel={() => setMode("none")}
/>
@@ -525,3 +541,44 @@ function CustomTimeSelector({
</div>
);
}
type ExportPreviewDialogProps = {
camera: string;
range?: TimeRange;
showPreview: boolean;
setShowPreview: (showPreview: boolean) => void;
};
export function ExportPreviewDialog({
camera,
range,
showPreview,
setShowPreview,
}: ExportPreviewDialogProps) {
if (!range) {
return null;
}
const source = `${baseUrl}vod/${camera}/start/${range.after}/end/${range.before}/index.m3u8`;
return (
<Dialog open={showPreview} onOpenChange={setShowPreview}>
<DialogContent
className={cn(
"scrollbar-container overflow-y-auto",
isDesktop &&
"max-h-[95dvh] sm:max-w-xl md:max-w-4xl lg:max-w-4xl xl:max-w-7xl",
isMobile && "px-4",
)}
>
<DialogHeader>
<DialogTitle>Preview Export</DialogTitle>
<DialogDescription className="sr-only">
Preview Export
</DialogDescription>
</DialogHeader>
<GenericVideoPlayer source={source} />
</DialogContent>
</Dialog>
);
}
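
Note: the preview dialog simply points GenericVideoPlayer at Frigate's VOD endpoint for the selected range. A sketch of the URL construction, assuming TimeRange carries epoch-second timestamps (host and camera values are illustrative):

type TimeRange = { after: number; before: number };

function exportPreviewSource(
  baseUrl: string,
  camera: string,
  range: TimeRange,
): string {
  // Matches the `source` template string in ExportPreviewDialog above.
  return `${baseUrl}vod/${camera}/start/${range.after}/end/${range.before}/index.m3u8`;
}

exportPreviewSource("http://frigate.local:5000/", "front_door", {
  after: 1729500000,
  before: 1729500300,
});
// => "http://frigate.local:5000/vod/front_door/start/1729500000/end/1729500300/index.m3u8"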

View File

@@ -3,7 +3,7 @@ import { Drawer, DrawerContent, DrawerTrigger } from "../ui/drawer";
import { Button } from "../ui/button";
import { FaArrowDown, FaCalendarAlt, FaCog, FaFilter } from "react-icons/fa";
import { TimeRange } from "@/types/timeline";
import { ExportContent } from "./ExportDialog";
import { ExportContent, ExportPreviewDialog } from "./ExportDialog";
import { ExportMode } from "@/types/filter";
import ReviewActivityCalendar from "./ReviewActivityCalendar";
import { SelectSeparator } from "../ui/select";
@@ -34,12 +34,14 @@ type MobileReviewSettingsDrawerProps = {
currentTime: number;
range?: TimeRange;
mode: ExportMode;
showExportPreview: boolean;
reviewSummary?: ReviewSummary;
allLabels: string[];
allZones: string[];
onUpdateFilter: (filter: ReviewFilter) => void;
setRange: (range: TimeRange | undefined) => void;
setMode: (mode: ExportMode) => void;
setShowExportPreview: (showPreview: boolean) => void;
};
export default function MobileReviewSettingsDrawer({
features = DEFAULT_DRAWER_FEATURES,
@@ -50,12 +52,14 @@ export default function MobileReviewSettingsDrawer({
currentTime,
range,
mode,
showExportPreview,
reviewSummary,
allLabels,
allZones,
onUpdateFilter,
setRange,
setMode,
setShowExportPreview,
}: MobileReviewSettingsDrawerProps) {
const [drawerMode, setDrawerMode] = useState<DrawerMode>("none");
@@ -282,6 +286,13 @@ export default function MobileReviewSettingsDrawer({
show={mode == "timeline"}
onSave={() => onStartExport()}
onCancel={() => setMode("none")}
onPreview={() => setShowExportPreview(true)}
/>
<ExportPreviewDialog
camera={camera}
range={range}
showPreview={showExportPreview}
setShowPreview={setShowExportPreview}
/>
<Drawer
modal={!(isIOS && drawerMode == "export")}

View File

@@ -1,4 +1,4 @@
import { LuX } from "react-icons/lu";
import { LuVideo, LuX } from "react-icons/lu";
import { Button } from "../ui/button";
import { FaCompactDisc } from "react-icons/fa";
import { cn } from "@/lib/utils";
@@ -6,12 +6,14 @@ import { cn } from "@/lib/utils";
type SaveExportOverlayProps = {
className: string;
show: boolean;
onPreview: () => void;
onSave: () => void;
onCancel: () => void;
};
export default function SaveExportOverlay({
className,
show,
onPreview,
onSave,
onCancel,
}: SaveExportOverlayProps) {
@@ -24,6 +26,22 @@ export default function SaveExportOverlay({
"mx-auto mt-5 text-center",
)}
>
<Button
className="flex items-center gap-1 text-primary"
size="sm"
onClick={onCancel}
>
<LuX />
Cancel
</Button>
<Button
className="flex items-center gap-1"
size="sm"
onClick={onPreview}
>
<LuVideo />
Preview Export
</Button>
<Button
className="flex items-center gap-1"
variant="select"
@@ -33,14 +51,6 @@ export default function SaveExportOverlay({
<FaCompactDisc />
Save Export
</Button>
<Button
className="flex items-center gap-1 text-primary"
size="sm"
onClick={onCancel}
>
<LuX />
Cancel
</Button>
</div>
</div>
);

View File

@@ -383,7 +383,7 @@ export default function ObjectLifecycle({
{eventSequence.map((item, index) => (
<CarouselItem key={index}>
<Card className="p-1 text-sm md:p-2" key={index}>
<CardContent className="flex flex-row items-center gap-3 p-1 md:p-6">
<CardContent className="flex flex-row items-center gap-3 p-1 md:p-2">
<div className="flex flex-1 flex-row items-center justify-start p-3 pl-1">
<div
className="rounded-lg p-2"

View File

@@ -38,6 +38,8 @@ import {
MobilePageTitle,
} from "@/components/mobile/MobilePage";
import { useOverlayState } from "@/hooks/use-overlay-state";
import { DownloadVideoButton } from "@/components/button/DownloadVideoButton";
import { TooltipPortal } from "@radix-ui/react-tooltip";
type ReviewDetailDialogProps = {
review?: ReviewSegment;
@@ -143,7 +145,7 @@ export default function ReviewDetailDialog({
<Description className="sr-only">Review item details</Description>
<div
className={cn(
"absolute",
"absolute flex gap-2 lg:flex-col",
isDesktop && "right-1 top-8",
isMobile && "right-0 top-3",
)}
@@ -159,7 +161,21 @@ export default function ReviewDetailDialog({
<FaShareAlt className="size-4 text-secondary-foreground" />
</Button>
</TooltipTrigger>
<TooltipContent>Share this review item</TooltipContent>
<TooltipPortal>
<TooltipContent>Share this review item</TooltipContent>
</TooltipPortal>
</Tooltip>
<Tooltip>
<TooltipTrigger>
<DownloadVideoButton
source={`${baseUrl}api/${review.camera}/start/${review.start_time}/end/${review.end_time || Date.now() / 1000}/clip.mp4`}
camera={review.camera}
startTime={review.start_time}
/>
</TooltipTrigger>
<TooltipPortal>
<TooltipContent>Download</TooltipContent>
</TooltipPortal>
</Tooltip>
</div>
</Header>
@@ -180,7 +196,7 @@ export default function ReviewDetailDialog({
</div>
</div>
<div className="flex w-full flex-col items-center gap-2">
<div className="flex w-full flex-col gap-1.5">
<div className="flex w-full flex-col gap-1.5 lg:pr-8">
<div className="text-sm text-primary/40">Objects</div>
<div className="scrollbar-container flex max-h-32 flex-col items-start gap-2 overflow-y-auto text-sm capitalize">
{events?.map((event) => {

View File

@@ -6,7 +6,7 @@ import { useFormattedTimestamp } from "@/hooks/use-date-utils";
import { getIconForLabel } from "@/utils/iconUtil";
import { useApiHost } from "@/api";
import { Button } from "../../ui/button";
import { useCallback, useEffect, useMemo, useRef, useState } from "react";
import { useCallback, useEffect, useMemo, useState } from "react";
import axios from "axios";
import { toast } from "sonner";
import { Textarea } from "../../ui/textarea";
@@ -21,7 +21,6 @@ import {
DialogTitle,
} from "@/components/ui/dialog";
import { Event } from "@/types/event";
import HlsVideoPlayer from "@/components/player/HlsVideoPlayer";
import { baseUrl } from "@/api/baseUrl";
import { cn } from "@/lib/utils";
import ActivityIndicator from "@/components/indicators/activity-indicator";
@@ -62,8 +61,13 @@ import { TransformComponent, TransformWrapper } from "react-zoom-pan-pinch";
import { Card, CardContent } from "@/components/ui/card";
import useImageLoaded from "@/hooks/use-image-loaded";
import ImageLoadingIndicator from "@/components/indicators/ImageLoadingIndicator";
import { useResizeObserver } from "@/hooks/resize-observer";
import { VideoResolutionType } from "@/types/live";
import { GenericVideoPlayer } from "@/components/player/GenericVideoPlayer";
import {
Popover,
PopoverContent,
PopoverTrigger,
} from "@/components/ui/popover";
import { LuInfo } from "react-icons/lu";
const SEARCH_TABS = [
"details",
@@ -71,16 +75,20 @@ const SEARCH_TABS = [
"video",
"object lifecycle",
] as const;
type SearchTab = (typeof SEARCH_TABS)[number];
export type SearchTab = (typeof SEARCH_TABS)[number];
type SearchDetailDialogProps = {
search?: SearchResult;
page: SearchTab;
setSearch: (search: SearchResult | undefined) => void;
setSearchPage: (page: SearchTab) => void;
setSimilarity?: () => void;
};
export default function SearchDetailDialog({
search,
page,
setSearch,
setSearchPage,
setSimilarity,
}: SearchDetailDialogProps) {
const { data: config } = useSWR<FrigateConfig>("config", {
@@ -89,15 +97,20 @@ export default function SearchDetailDialog({
// tabs
const [page, setPage] = useState<SearchTab>("details");
const [pageToggle, setPageToggle] = useOptimisticState(page, setPage, 100);
const [pageToggle, setPageToggle] = useOptimisticState(
page,
setSearchPage,
100,
);
// dialog and mobile page
const [isOpen, setIsOpen] = useState(search != undefined);
useEffect(() => {
setIsOpen(search != undefined);
if (search) {
setIsOpen(search != undefined);
}
}, [search]);
const searchTabs = useMemo(() => {
@@ -117,12 +130,6 @@ export default function SearchDetailDialog({
views.splice(index, 1);
}
// TODO implement
//if (!config.semantic_search.enabled) {
// const index = views.indexOf("similar-calendar");
// views.splice(index, 1);
// }
return views;
}, [config, search]);
@@ -132,9 +139,9 @@ export default function SearchDetailDialog({
}
if (!searchTabs.includes(pageToggle)) {
setPage("details");
setSearchPage("details");
}
}, [pageToggle, searchTabs]);
}, [pageToggle, searchTabs, setSearchPage]);
if (!search) {
return;
@@ -151,8 +158,8 @@ export default function SearchDetailDialog({
return (
<Overlay
open={isOpen}
onOpenChange={(open) => {
if (!open) {
onOpenChange={() => {
if (search) {
setSearch(undefined);
}
}}
@@ -278,7 +285,7 @@ function ObjectDetailsTab({
return 0;
}
const value = search.score ?? search.data.top_score;
const value = search.data.top_score;
return Math.round(value * 100);
}, [search]);
@@ -368,7 +375,24 @@ function ObjectDetailsTab({
</div>
</div>
<div className="flex flex-col gap-1.5">
<div className="text-sm text-primary/40">Score</div>
<div className="text-sm text-primary/40">
<div className="flex flex-row items-center gap-1">
Top Score
<Popover>
<PopoverTrigger asChild>
<div className="cursor-pointer p-0">
<LuInfo className="size-4" />
<span className="sr-only">Info</span>
</div>
</PopoverTrigger>
<PopoverContent className="w-80">
The top score is the highest median score for the tracked
object, so this may differ from the score shown on the
search result thumbnail.
</PopoverContent>
</Popover>
</div>
</div>
<div className="text-sm">
{score}%{subLabelScore && ` (${subLabelScore}%)`}
</div>
@@ -398,17 +422,19 @@ function ObjectDetailsTab({
draggable={false}
src={`${apiHost}api/events/${search.id}/thumbnail.jpg`}
/>
<Button
onClick={() => {
setSearch(undefined);
{config?.semantic_search.enabled && (
<Button
onClick={() => {
setSearch(undefined);
if (setSimilarity) {
setSimilarity();
}
}}
>
Find Similar
</Button>
if (setSimilarity) {
setSimilarity();
}
}}
>
Find Similar
</Button>
)}
</div>
</div>
<div className="flex flex-col gap-1.5">
@@ -536,57 +562,59 @@ function ObjectSnapshotTab({
/>
)}
</TransformComponent>
<Card className="p-1 text-sm md:p-2">
<CardContent className="flex flex-col items-center justify-between gap-3 p-2 md:flex-row">
<div className={cn("flex flex-col space-y-3")}>
<div
className={
"text-lg font-semibold leading-none tracking-tight"
}
>
Submit To Frigate+
</div>
<div className="text-sm text-muted-foreground">
Objects in locations you want to avoid are not false
positives. Submitting them as false positives will confuse
the model.
</div>
</div>
<div className="flex flex-row justify-center gap-2 md:justify-end">
{state == "reviewing" && (
<>
<Button
className="bg-success"
onClick={() => {
setState("uploading");
onSubmitToPlus(false);
}}
>
This is a {search?.label}
</Button>
<Button
className="text-white"
variant="destructive"
onClick={() => {
setState("uploading");
onSubmitToPlus(true);
}}
>
This is not a {search?.label}
</Button>
</>
)}
{state == "uploading" && <ActivityIndicator />}
{state == "submitted" && (
<div className="flex flex-row items-center justify-center gap-2">
<FaCheckCircle className="text-success" />
Submitted
{search.plus_id !== "not_enabled" && search.end_time && (
<Card className="p-1 text-sm md:p-2">
<CardContent className="flex flex-col items-center justify-between gap-3 p-2 md:flex-row">
<div className={cn("flex flex-col space-y-3")}>
<div
className={
"text-lg font-semibold leading-none tracking-tight"
}
>
Submit To Frigate+
</div>
)}
</div>
</CardContent>
</Card>
<div className="text-sm text-muted-foreground">
Objects in locations you want to avoid are not false
positives. Submitting them as false positives will confuse
the model.
</div>
</div>
<div className="flex flex-row justify-center gap-2 md:justify-end">
{state == "reviewing" && (
<>
<Button
className="bg-success"
onClick={() => {
setState("uploading");
onSubmitToPlus(false);
}}
>
This is a {search?.label}
</Button>
<Button
className="text-white"
variant="destructive"
onClick={() => {
setState("uploading");
onSubmitToPlus(true);
}}
>
This is not a {search?.label}
</Button>
</>
)}
{state == "uploading" && <ActivityIndicator />}
{state == "submitted" && (
<div className="flex flex-row items-center justify-center gap-2">
<FaCheckCircle className="text-success" />
Submitted
</div>
)}
</div>
</CardContent>
</Card>
)}
</div>
</TransformWrapper>
</div>
@@ -597,99 +625,45 @@ function ObjectSnapshotTab({
type VideoTabProps = {
search: SearchResult;
};
function VideoTab({ search }: VideoTabProps) {
const [isLoading, setIsLoading] = useState(true);
const videoRef = useRef<HTMLVideoElement | null>(null);
const endTime = useMemo(() => search.end_time ?? Date.now() / 1000, [search]);
export function VideoTab({ search }: VideoTabProps) {
const navigate = useNavigate();
const { data: reviewItem } = useSWR<ReviewSegment>([
`review/event/${search.id}`,
]);
const endTime = useMemo(() => search.end_time ?? Date.now() / 1000, [search]);
const containerRef = useRef<HTMLDivElement | null>(null);
const [{ width: containerWidth, height: containerHeight }] =
useResizeObserver(containerRef);
const [videoResolution, setVideoResolution] = useState<VideoResolutionType>({
width: 0,
height: 0,
});
const videoAspectRatio = useMemo(() => {
return videoResolution.width / videoResolution.height || 16 / 9;
}, [videoResolution]);
const containerAspectRatio = useMemo(() => {
return containerWidth / containerHeight || 16 / 9;
}, [containerWidth, containerHeight]);
const videoDimensions = useMemo(() => {
if (!containerWidth || !containerHeight)
return { width: "100%", height: "100%" };
if (containerAspectRatio > videoAspectRatio) {
const height = containerHeight;
const width = height * videoAspectRatio;
return { width: `${width}px`, height: `${height}px` };
} else {
const width = containerWidth;
const height = width / videoAspectRatio;
return { width: `${width}px`, height: `${height}px` };
}
}, [containerWidth, containerHeight, videoAspectRatio, containerAspectRatio]);
const source = `${baseUrl}vod/${search.camera}/start/${search.start_time}/end/${endTime}/index.m3u8`;
return (
<div ref={containerRef} className="relative flex h-full w-full flex-col">
<div className="relative flex flex-grow items-center justify-center">
{(isLoading || !reviewItem) && (
<ActivityIndicator className="absolute left-1/2 top-1/2 z-10 -translate-x-1/2 -translate-y-1/2" />
)}
<GenericVideoPlayer source={source}>
{reviewItem && (
<div
className="relative flex items-center justify-center"
style={videoDimensions}
>
<HlsVideoPlayer
videoRef={videoRef}
currentSource={`${baseUrl}vod/${search.camera}/start/${search.start_time}/end/${endTime}/index.m3u8`}
hotKeys
visible
frigateControls={false}
fullscreen={false}
supportsFullscreen={false}
onPlaying={() => setIsLoading(false)}
setFullResolution={setVideoResolution}
/>
{!isLoading && reviewItem && (
<div
className={cn(
"absolute top-2 z-10 flex items-center",
isIOS ? "right-8" : "right-2",
)}
>
<Tooltip>
<TooltipTrigger>
<Chip
className="cursor-pointer rounded-md bg-gray-500 bg-gradient-to-br from-gray-400 to-gray-500"
onClick={() => {
if (reviewItem?.id) {
const params = new URLSearchParams({
id: reviewItem.id,
}).toString();
navigate(`/review?${params}`);
}
}}
>
<FaHistory className="size-4 text-white" />
</Chip>
</TooltipTrigger>
<TooltipContent side="left">View in History</TooltipContent>
</Tooltip>
</div>
className={cn(
"absolute top-2 z-10 flex items-center",
isIOS ? "right-8" : "right-2",
)}
>
<Tooltip>
<TooltipTrigger>
<Chip
className="cursor-pointer rounded-md bg-gray-500 bg-gradient-to-br from-gray-400 to-gray-500"
onClick={() => {
if (reviewItem?.id) {
const params = new URLSearchParams({
id: reviewItem.id,
}).toString();
navigate(`/review?${params}`);
}
}}
>
<FaHistory className="size-4 text-white" />
</Chip>
</TooltipTrigger>
<TooltipContent side="left">View in History</TooltipContent>
</Tooltip>
</div>
</div>
</div>
)}
</GenericVideoPlayer>
);
}
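
Note: the refactored VideoTab delegates loading state and aspect-ratio sizing to GenericVideoPlayer and passes its overlay (the history chip) as children. A stripped-down sketch of the pattern, assuming children are rendered inside the player's positioned container:

import { GenericVideoPlayer } from "@/components/player/GenericVideoPlayer";

function ClipWithOverlay({ source }: { source: string }) {
  return (
    <GenericVideoPlayer source={source}>
      {/* overlays can be positioned absolutely over the video */}
      <div className="absolute right-2 top-2 z-10">overlay content</div>
    </GenericVideoPlayer>
  );
}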

View File

@@ -1,9 +1,25 @@
import {
MobilePage,
MobilePageContent,
MobilePageHeader,
MobilePagePortal,
MobilePageTitle,
MobilePageTrigger,
} from "@/components/mobile/MobilePage";
import { Drawer, DrawerContent, DrawerTrigger } from "@/components/ui/drawer";
import {
Popover,
PopoverContent,
PopoverTrigger,
} from "@/components/ui/popover";
import {
Sheet,
SheetContent,
SheetDescription,
SheetHeader,
SheetTitle,
SheetTrigger,
} from "@/components/ui/sheet";
import { isMobile } from "react-device-detect";
type PlatformAwareDialogProps = {
@@ -42,3 +58,62 @@ export default function PlatformAwareDialog({
</Popover>
);
}
type PlatformAwareSheetProps = {
trigger: JSX.Element;
title?: string | JSX.Element;
content: JSX.Element;
triggerClassName?: string;
titleClassName?: string;
contentClassName?: string;
open: boolean;
onOpenChange: (open: boolean) => void;
};
export function PlatformAwareSheet({
trigger,
title,
content,
triggerClassName = "",
titleClassName = "",
contentClassName = "",
open,
onOpenChange,
}: PlatformAwareSheetProps) {
if (isMobile) {
return (
<MobilePage open={open} onOpenChange={onOpenChange}>
<MobilePageTrigger onClick={() => onOpenChange(true)}>
{trigger}
</MobilePageTrigger>
<MobilePagePortal>
<MobilePageContent className="h-full overflow-hidden">
<MobilePageHeader
className="mx-2"
onClose={() => onOpenChange(false)}
>
<MobilePageTitle>More Filters</MobilePageTitle>
</MobilePageHeader>
<div className={contentClassName}>{content}</div>
</MobilePageContent>
</MobilePagePortal>
</MobilePage>
);
}
return (
<Sheet open={open} onOpenChange={onOpenChange} modal={false}>
<SheetTrigger asChild className={triggerClassName}>
{trigger}
</SheetTrigger>
<SheetContent className={contentClassName}>
<SheetHeader>
<SheetTitle className={title ? titleClassName : "sr-only"}>
{title ?? ""}
</SheetTitle>
<SheetDescription className="sr-only">Information</SheetDescription>
</SheetHeader>
{content}
</SheetContent>
</Sheet>
);
}
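
Note: PlatformAwareSheet is fully controlled; the caller owns the open state and supplies prebuilt trigger and content elements, getting a MobilePage on mobile and a non-modal Sheet on desktop. A minimal usage sketch:

import { useState } from "react";
// Path relative to the caller, as in SearchFilterDialog below.
import { PlatformAwareSheet } from "./PlatformAwareDialog";

function MoreFiltersButton() {
  const [open, setOpen] = useState(false);
  return (
    <PlatformAwareSheet
      trigger={<button>More Filters</button>}
      content={<div>filter controls here</div>}
      contentClassName="scrollbar-container h-full overflow-auto px-4"
      open={open}
      onOpenChange={setOpen}
    />
  );
}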

View File

@@ -0,0 +1,640 @@
import { FaArrowRight, FaFilter } from "react-icons/fa";
import { useEffect, useMemo, useState } from "react";
import { PlatformAwareSheet } from "./PlatformAwareDialog";
import { Button } from "@/components/ui/button";
import useSWR from "swr";
import {
DEFAULT_TIME_RANGE_AFTER,
DEFAULT_TIME_RANGE_BEFORE,
SearchFilter,
SearchSource,
} from "@/types/search";
import { FrigateConfig } from "@/types/frigateConfig";
import {
Popover,
PopoverContent,
PopoverTrigger,
} from "@/components/ui/popover";
import { isDesktop, isMobileOnly } from "react-device-detect";
import { useFormattedHour } from "@/hooks/use-date-utils";
import FilterSwitch from "@/components/filter/FilterSwitch";
import { Switch } from "@/components/ui/switch";
import { Label } from "@/components/ui/label";
import { DropdownMenuSeparator } from "@/components/ui/dropdown-menu";
import { cn } from "@/lib/utils";
import { DualThumbSlider } from "@/components/ui/slider";
import { Input } from "@/components/ui/input";
import { Checkbox } from "@/components/ui/checkbox";
import { ToggleGroup, ToggleGroupItem } from "@/components/ui/toggle-group";
type SearchFilterDialogProps = {
config?: FrigateConfig;
filter?: SearchFilter;
filterValues: {
cameras: string[];
labels: string[];
zones: string[];
search_type: SearchSource[];
};
onUpdateFilter: (filter: SearchFilter) => void;
};
export default function SearchFilterDialog({
config,
filter,
filterValues,
onUpdateFilter,
}: SearchFilterDialogProps) {
// data
const [currentFilter, setCurrentFilter] = useState(filter ?? {});
const { data: allSubLabels } = useSWR(["sub_labels", { split_joined: 1 }]);
useEffect(() => {
if (filter) {
setCurrentFilter(filter);
}
}, [filter]);
// state
const [open, setOpen] = useState(false);
const moreFiltersSelected = useMemo(
() =>
currentFilter &&
(currentFilter.time_range ||
(currentFilter.min_score ?? 0) > 0.5 ||
(currentFilter.has_snapshot ?? 0) === 1 ||
(currentFilter.has_clip ?? 0) === 1 ||
(currentFilter.max_score ?? 1) < 1 ||
(currentFilter.zones?.length ?? 0) > 0 ||
(currentFilter.sub_labels?.length ?? 0) > 0),
[currentFilter],
);
const trigger = (
<Button
className="flex items-center gap-2"
size="sm"
variant={moreFiltersSelected ? "select" : "default"}
>
<FaFilter
className={cn(
moreFiltersSelected ? "text-white" : "text-secondary-foreground",
)}
/>
More Filters
</Button>
);
const content = (
<div className="space-y-3">
<TimeRangeFilterContent
config={config}
timeRange={currentFilter.time_range}
updateTimeRange={(newRange) =>
setCurrentFilter({ ...currentFilter, time_range: newRange })
}
/>
<ZoneFilterContent
allZones={filterValues.zones}
zones={currentFilter.zones}
updateZones={(newZones) =>
setCurrentFilter({ ...currentFilter, zones: newZones })
}
/>
<SubFilterContent
allSubLabels={allSubLabels}
subLabels={currentFilter.sub_labels}
setSubLabels={(newSubLabels) =>
setCurrentFilter({ ...currentFilter, sub_labels: newSubLabels })
}
/>
<ScoreFilterContent
minScore={currentFilter.min_score}
maxScore={currentFilter.max_score}
setScoreRange={(min, max) =>
setCurrentFilter({ ...currentFilter, min_score: min, max_score: max })
}
/>
<SnapshotClipFilterContent
hasSnapshot={
currentFilter.has_snapshot !== undefined
? currentFilter.has_snapshot === 1
: undefined
}
hasClip={
currentFilter.has_clip !== undefined
? currentFilter.has_clip === 1
: undefined
}
setSnapshotClip={(snapshot, clip) =>
setCurrentFilter({
...currentFilter,
has_snapshot:
snapshot !== undefined ? (snapshot ? 1 : 0) : undefined,
has_clip: clip !== undefined ? (clip ? 1 : 0) : undefined,
})
}
/>
{isDesktop && <DropdownMenuSeparator />}
<div className="flex items-center justify-evenly p-2">
<Button
variant="select"
onClick={() => {
if (currentFilter != filter) {
onUpdateFilter(currentFilter);
}
setOpen(false);
}}
>
Apply
</Button>
<Button
onClick={() => {
setCurrentFilter((prevFilter) => ({
...prevFilter,
time_range: undefined,
zones: undefined,
sub_labels: undefined,
search_type: ["thumbnail", "description"],
min_score: undefined,
max_score: undefined,
has_snapshot: undefined,
has_clip: undefined,
}));
}}
>
Reset
</Button>
</div>
</div>
);
return (
<PlatformAwareSheet
trigger={trigger}
content={content}
contentClassName={cn(
"w-auto lg:min-w-[275px] scrollbar-container h-full overflow-auto px-4",
isMobileOnly && "pb-20",
)}
open={open}
onOpenChange={(open) => {
if (!open) {
setCurrentFilter(filter ?? {});
}
setOpen(open);
}}
/>
);
}
type TimeRangeFilterContentProps = {
config?: FrigateConfig;
timeRange?: string;
updateTimeRange: (range: string | undefined) => void;
};
function TimeRangeFilterContent({
config,
timeRange,
updateTimeRange,
}: TimeRangeFilterContentProps) {
const [startOpen, setStartOpen] = useState(false);
const [endOpen, setEndOpen] = useState(false);
const [afterHour, beforeHour] = useMemo(() => {
if (!timeRange || !timeRange.includes(",")) {
return [DEFAULT_TIME_RANGE_AFTER, DEFAULT_TIME_RANGE_BEFORE];
}
return timeRange.split(",");
}, [timeRange]);
const [selectedAfterHour, setSelectedAfterHour] = useState(afterHour);
const [selectedBeforeHour, setSelectedBeforeHour] = useState(beforeHour);
// format based on locale
const formattedSelectedAfter = useFormattedHour(config, selectedAfterHour);
const formattedSelectedBefore = useFormattedHour(config, selectedBeforeHour);
useEffect(() => {
setSelectedAfterHour(afterHour);
setSelectedBeforeHour(beforeHour);
// only refresh when state changes
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [timeRange]);
useEffect(() => {
if (
selectedAfterHour == DEFAULT_TIME_RANGE_AFTER &&
selectedBeforeHour == DEFAULT_TIME_RANGE_BEFORE
) {
updateTimeRange(undefined);
} else {
updateTimeRange(`${selectedAfterHour},${selectedBeforeHour}`);
}
// only refresh when state changes
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [selectedAfterHour, selectedBeforeHour]);
return (
<div className="overflow-x-hidden">
<div className="text-lg">Time Range</div>
<div className="mt-3 flex flex-row items-center justify-center gap-2">
<Popover
open={startOpen}
onOpenChange={(open) => {
if (!open) {
setStartOpen(false);
}
}}
>
<PopoverTrigger asChild>
<Button
className={`text-primary ${isDesktop ? "" : "text-xs"} `}
variant={startOpen ? "select" : "default"}
size="sm"
onClick={() => {
setStartOpen(true);
setEndOpen(false);
}}
>
{formattedSelectedAfter}
</Button>
</PopoverTrigger>
<PopoverContent className="flex flex-row items-center justify-center">
<input
className="text-md mx-4 w-full border border-input bg-background p-1 text-secondary-foreground hover:bg-accent hover:text-accent-foreground dark:[color-scheme:dark]"
id="startTime"
type="time"
value={selectedAfterHour}
step="60"
onChange={(e) => {
const clock = e.target.value;
const [hour, minute, _] = clock.split(":");
setSelectedAfterHour(`${hour}:${minute}`);
}}
/>
</PopoverContent>
</Popover>
<FaArrowRight className="size-4 text-primary" />
<Popover
open={endOpen}
onOpenChange={(open) => {
if (!open) {
setEndOpen(false);
}
}}
>
<PopoverTrigger asChild>
<Button
className={`text-primary ${isDesktop ? "" : "text-xs"}`}
variant={endOpen ? "select" : "default"}
size="sm"
onClick={() => {
setEndOpen(true);
setStartOpen(false);
}}
>
{formattedSelectedBefore}
</Button>
</PopoverTrigger>
<PopoverContent className="flex flex-col items-center">
<input
className="text-md mx-4 w-full border border-input bg-background p-1 text-secondary-foreground hover:bg-accent hover:text-accent-foreground dark:[color-scheme:dark]"
id="endTime"
type="time"
value={
selectedBeforeHour == "24:00" ? "23:59" : selectedBeforeHour
}
step="60"
onChange={(e) => {
const clock = e.target.value;
const [hour, minute, _] = clock.split(":");
setSelectedBeforeHour(`${hour}:${minute}`);
}}
/>
</PopoverContent>
</Popover>
</div>
</div>
);
}
type ZoneFilterContentProps = {
allZones?: string[];
zones?: string[];
updateZones: (zones: string[] | undefined) => void;
};
export function ZoneFilterContent({
allZones,
zones,
updateZones,
}: ZoneFilterContentProps) {
return (
<>
<div className="overflow-x-hidden">
<DropdownMenuSeparator className="mb-3" />
<div className="text-lg">Zones</div>
{allZones && (
<>
<div className="mb-5 mt-2.5 flex items-center justify-between">
<Label
className="mx-2 cursor-pointer text-primary"
htmlFor="allZones"
>
All Zones
</Label>
<Switch
className="ml-1"
id="allZones"
checked={zones == undefined}
onCheckedChange={(isChecked) => {
if (isChecked) {
updateZones(undefined);
}
}}
/>
</div>
<div className="mt-2.5 flex flex-col gap-2.5">
{allZones.map((item) => (
<FilterSwitch
key={item}
label={item.replaceAll("_", " ")}
isChecked={zones?.includes(item) ?? false}
onCheckedChange={(isChecked) => {
if (isChecked) {
const updatedZones = zones ? [...zones] : [];
updatedZones.push(item);
updateZones(updatedZones);
} else {
const updatedZones = zones ? [...zones] : [];
// cannot deselect the last item
if (updatedZones.length > 1) {
updatedZones.splice(updatedZones.indexOf(item), 1);
updateZones(updatedZones);
}
}
}}
/>
))}
</div>
</>
)}
</div>
</>
);
}
type SubFilterContentProps = {
allSubLabels: string[];
subLabels: string[] | undefined;
setSubLabels: (labels: string[] | undefined) => void;
};
export function SubFilterContent({
allSubLabels,
subLabels,
setSubLabels,
}: SubFilterContentProps) {
return (
<div className="overflow-x-hidden">
<DropdownMenuSeparator className="mb-3" />
<div className="text-lg">Sub Labels</div>
<div className="mb-5 mt-2.5 flex items-center justify-between">
<Label className="mx-2 cursor-pointer text-primary" htmlFor="allLabels">
All Sub Labels
</Label>
<Switch
className="ml-1"
id="allLabels"
checked={subLabels == undefined}
onCheckedChange={(isChecked) => {
if (isChecked) {
setSubLabels(undefined);
}
}}
/>
</div>
<div className="mt-2.5 flex flex-col gap-2.5">
{allSubLabels.map((item) => (
<FilterSwitch
key={item}
label={item.replaceAll("_", " ")}
isChecked={subLabels?.includes(item) ?? false}
onCheckedChange={(isChecked) => {
if (isChecked) {
const updatedLabels = subLabels ? [...subLabels] : [];
updatedLabels.push(item);
setSubLabels(updatedLabels);
} else {
const updatedLabels = subLabels ? [...subLabels] : [];
// cannot deselect the last item
if (updatedLabels.length > 1) {
updatedLabels.splice(updatedLabels.indexOf(item), 1);
setSubLabels(updatedLabels);
}
}
}}
/>
))}
</div>
</div>
);
}
type ScoreFilterContentProps = {
minScore: number | undefined;
maxScore: number | undefined;
setScoreRange: (min: number | undefined, max: number | undefined) => void;
};
export function ScoreFilterContent({
minScore,
maxScore,
setScoreRange,
}: ScoreFilterContentProps) {
return (
<div className="overflow-x-hidden">
<DropdownMenuSeparator className="mb-3" />
<div className="mb-3 text-lg">Score</div>
<div className="flex items-center gap-1">
<Input
className="w-14 text-center"
inputMode="numeric"
value={Math.round((minScore ?? 0.5) * 100)}
onChange={(e) => {
const value = e.target.value;
if (value) {
setScoreRange(parseInt(value) / 100.0, maxScore ?? 1.0);
}
}}
/>
<DualThumbSlider
className="mx-2 w-full"
min={0.5}
max={1.0}
step={0.01}
value={[minScore ?? 0.5, maxScore ?? 1.0]}
onValueChange={([min, max]) => setScoreRange(min, max)}
/>
<Input
className="w-14 text-center"
inputMode="numeric"
value={Math.round((maxScore ?? 1.0) * 100)}
onChange={(e) => {
const value = e.target.value;
if (value) {
setScoreRange(minScore ?? 0.5, parseInt(value) / 100.0);
}
}}
/>
</div>
</div>
);
}
type SnapshotClipContentProps = {
hasSnapshot: boolean | undefined;
hasClip: boolean | undefined;
setSnapshotClip: (
snapshot: boolean | undefined,
clip: boolean | undefined,
) => void;
};
function SnapshotClipFilterContent({
hasSnapshot,
hasClip,
setSnapshotClip,
}: SnapshotClipContentProps) {
const [isSnapshotFilterActive, setIsSnapshotFilterActive] = useState(
hasSnapshot !== undefined,
);
const [isClipFilterActive, setIsClipFilterActive] = useState(
hasClip !== undefined,
);
useEffect(() => {
setIsSnapshotFilterActive(hasSnapshot !== undefined);
}, [hasSnapshot]);
useEffect(() => {
setIsClipFilterActive(hasClip !== undefined);
}, [hasClip]);
return (
<div className="overflow-x-hidden">
<DropdownMenuSeparator className="mb-3" />
<div className="mb-3 text-lg">Features</div>
<div className="my-2.5 space-y-1">
<div className="flex items-center justify-between">
<div className="flex items-center space-x-2">
<Checkbox
id="snapshot-filter"
className="size-5 text-white accent-white data-[state=checked]:bg-selected data-[state=checked]:text-white"
checked={isSnapshotFilterActive}
onCheckedChange={(checked) => {
setIsSnapshotFilterActive(checked as boolean);
if (checked) {
setSnapshotClip(true, hasClip);
} else {
setSnapshotClip(undefined, hasClip);
}
}}
/>
<Label
htmlFor="snapshot-filter"
className="cursor-pointer text-sm font-medium leading-none"
>
Has a snapshot
</Label>
</div>
<ToggleGroup
type="single"
value={
hasSnapshot === undefined ? undefined : hasSnapshot ? "yes" : "no"
}
onValueChange={(value) => {
if (value === "yes") setSnapshotClip(true, hasClip);
else if (value === "no") setSnapshotClip(false, hasClip);
}}
disabled={!isSnapshotFilterActive}
>
<ToggleGroupItem
value="yes"
aria-label="Yes"
className="data-[state=on]:bg-selected data-[state=on]:text-white data-[state=on]:hover:bg-selected data-[state=on]:hover:text-white"
>
Yes
</ToggleGroupItem>
<ToggleGroupItem
value="no"
aria-label="No"
className="data-[state=on]:bg-selected data-[state=on]:text-white data-[state=on]:hover:bg-selected data-[state=on]:hover:text-white"
>
No
</ToggleGroupItem>
</ToggleGroup>
</div>
<div className="flex items-center justify-between">
<div className="flex items-center space-x-2">
<Checkbox
className="size-5 text-white accent-white data-[state=checked]:bg-selected data-[state=checked]:text-white"
id="clip-filter"
checked={isClipFilterActive}
onCheckedChange={(checked) => {
setIsClipFilterActive(checked as boolean);
if (checked) {
setSnapshotClip(hasSnapshot, true);
} else {
setSnapshotClip(hasSnapshot, undefined);
}
}}
/>
<Label
htmlFor="clip-filter"
className="cursor-pointer text-sm font-medium leading-none"
>
Has a video clip
</Label>
</div>
<ToggleGroup
type="single"
value={hasClip === undefined ? undefined : hasClip ? "yes" : "no"}
onValueChange={(value) => {
if (value === "yes") setSnapshotClip(hasSnapshot, true);
else if (value === "no") setSnapshotClip(hasSnapshot, false);
}}
disabled={!isClipFilterActive}
>
<ToggleGroupItem
value="yes"
aria-label="Yes"
className="data-[state=on]:bg-selected data-[state=on]:text-white data-[state=on]:hover:bg-selected data-[state=on]:hover:text-white"
>
Yes
</ToggleGroupItem>
<ToggleGroupItem
value="no"
aria-label="No"
className="data-[state=on]:bg-selected data-[state=on]:text-white data-[state=on]:hover:bg-selected data-[state=on]:hover:text-white"
>
No
</ToggleGroupItem>
</ToggleGroup>
</div>
</div>
</div>
);
}
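Each feature filter above is tri-state: undefined (filter inactive), true ("yes"), and false ("no"). The checkbox arms the filter and the toggle group picks the value. A compact sketch of that mapping; the toUiState name is hypothetical:
// Hypothetical mapping from the filter value to the checkbox / toggle-group
// state used by SnapshotClipFilterContent.
type TriState = boolean | undefined;
function toUiState(value: TriState): { active: boolean; toggle?: "yes" | "no" } {
  if (value === undefined) return { active: false }; // filter off
  return { active: true, toggle: value ? "yes" : "no" };
}
// toUiState(undefined) -> { active: false }
// toUiState(true)      -> { active: true, toggle: "yes" }
// toUiState(false)     -> { active: true, toggle: "no" }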

View File

@@ -0,0 +1,52 @@
import React, { useState, useRef } from "react";
import { useVideoDimensions } from "@/hooks/use-video-dimensions";
import HlsVideoPlayer from "./HlsVideoPlayer";
import ActivityIndicator from "../indicators/activity-indicator";
type GenericVideoPlayerProps = {
source: string;
onPlaying?: () => void;
children?: React.ReactNode;
};
export function GenericVideoPlayer({
source,
onPlaying,
children,
}: GenericVideoPlayerProps) {
const [isLoading, setIsLoading] = useState(true);
const videoRef = useRef<HTMLVideoElement | null>(null);
const containerRef = useRef<HTMLDivElement | null>(null);
const { videoDimensions, setVideoResolution } =
useVideoDimensions(containerRef);
return (
<div ref={containerRef} className="relative flex h-full w-full flex-col">
<div className="relative flex flex-grow items-center justify-center">
{isLoading && (
<ActivityIndicator className="absolute left-1/2 top-1/2 z-10 -translate-x-1/2 -translate-y-1/2" />
)}
<div
className="relative flex items-center justify-center"
style={videoDimensions}
>
<HlsVideoPlayer
videoRef={videoRef}
currentSource={source}
hotKeys
visible
frigateControls={false}
fullscreen={false}
supportsFullscreen={false}
onPlaying={() => {
setIsLoading(false);
onPlaying?.();
}}
setFullResolution={setVideoResolution}
/>
{!isLoading && children}
</div>
</div>
</div>
);
}
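For context, a minimal usage sketch of the new GenericVideoPlayer; the import path follows the file layout above, but the source URL is hypothetical. Children render as an overlay only after the first playing event clears the loading spinner:
import { GenericVideoPlayer } from "@/components/player/GenericVideoPlayer";

export function ExampleClip() {
  return (
    <GenericVideoPlayer
      source="/vod/event/example-id/index.m3u8" // hypothetical HLS source
      onPlaying={() => console.log("clip started")}
    >
      {/* rendered on top of the video once loading finishes */}
      <div className="absolute bottom-2 left-2 text-white">Overlay</div>
    </GenericVideoPlayer>
  );
}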

View File

@@ -20,7 +20,7 @@ import {
FormMessage,
} from "@/components/ui/form";
import { useCallback, useEffect, useMemo } from "react";
import { ATTRIBUTE_LABELS, FrigateConfig } from "@/types/frigateConfig";
import { FrigateConfig } from "@/types/frigateConfig";
import useSWR from "swr";
import { zodResolver } from "@hookform/resolvers/zod";
import { useForm } from "react-hook-form";
@@ -37,6 +37,7 @@ import axios from "axios";
import { toast } from "sonner";
import { Toaster } from "../ui/sonner";
import ActivityIndicator from "../indicators/activity-indicator";
import { getAttributeLabels } from "@/utils/iconUtil";
type ObjectMaskEditPaneProps = {
polygons?: Polygon[];
@@ -367,6 +368,14 @@ type ZoneObjectSelectorProps = {
export function ZoneObjectSelector({ camera }: ZoneObjectSelectorProps) {
const { data: config } = useSWR<FrigateConfig>("config");
const attributeLabels = useMemo(() => {
if (!config) {
return [];
}
return getAttributeLabels(config);
}, [config]);
const cameraConfig = useMemo(() => {
if (config && camera) {
return config.cameras[camera];
@@ -382,20 +391,20 @@ export function ZoneObjectSelector({ camera }: ZoneObjectSelectorProps) {
Object.values(config.cameras).forEach((camera) => {
camera.objects.track.forEach((label) => {
if (!ATTRIBUTE_LABELS.includes(label)) {
if (!attributeLabels.includes(label)) {
labels.add(label);
}
});
});
cameraConfig.objects.track.forEach((label) => {
if (!ATTRIBUTE_LABELS.includes(label)) {
if (!attributeLabels.includes(label)) {
labels.add(label);
}
});
return [...labels].sort();
}, [config, cameraConfig]);
}, [config, cameraConfig, attributeLabels]);
return (
<>

View File

@@ -35,6 +35,7 @@ import { FrigateConfig } from "@/types/frigateConfig";
import { reviewQueries } from "@/utils/zoneEdutUtil";
import IconWrapper from "../ui/icon-wrapper";
import { StatusBarMessagesContext } from "@/context/statusbar-provider";
import { buttonVariants } from "../ui/button";
type PolygonItemProps = {
polygon: Polygon;
@@ -257,7 +258,10 @@ export default function PolygonItem({
</AlertDialogDescription>
<AlertDialogFooter>
<AlertDialogCancel>Cancel</AlertDialogCancel>
<AlertDialogAction onClick={handleDelete}>
<AlertDialogAction
className={buttonVariants({ variant: "destructive" })}
onClick={handleDelete}
>
Delete
</AlertDialogAction>
</AlertDialogFooter>

View File

@@ -0,0 +1,199 @@
import { Button } from "../ui/button";
import { useState } from "react";
import { isDesktop, isMobileOnly } from "react-device-detect";
import { cn } from "@/lib/utils";
import PlatformAwareDialog from "../overlay/dialog/PlatformAwareDialog";
import { FaCog } from "react-icons/fa";
import { Slider } from "../ui/slider";
import {
Select,
SelectContent,
SelectGroup,
SelectItem,
SelectTrigger,
} from "@/components/ui/select";
import { DropdownMenuSeparator } from "../ui/dropdown-menu";
import FilterSwitch from "../filter/FilterSwitch";
import { SearchFilter, SearchSource } from "@/types/search";
import useSWR from "swr";
import { FrigateConfig } from "@/types/frigateConfig";
type SearchSettingsProps = {
className?: string;
columns: number;
defaultView: string;
filter?: SearchFilter;
setColumns: (columns: number) => void;
setDefaultView: (view: string) => void;
onUpdateFilter: (filter: SearchFilter) => void;
};
export default function SearchSettings({
className,
columns,
setColumns,
defaultView,
filter,
setDefaultView,
onUpdateFilter,
}: SearchSettingsProps) {
const { data: config } = useSWR<FrigateConfig>("config");
const [open, setOpen] = useState(false);
const [searchSources, setSearchSources] = useState<SearchSource[]>([
"thumbnail",
]);
const trigger = (
<Button className="flex items-center gap-2" size="sm">
<FaCog className="text-secondary-foreground" />
Settings
</Button>
);
const content = (
<div className={cn(className, "my-3 space-y-5 py-3 md:mt-0 md:py-0")}>
<div className="space-y-4">
<div className="space-y-0.5">
<div className="text-md">Default View</div>
<div className="space-y-1 text-xs text-muted-foreground">
When no filters are selected, display a summary of the most recent
tracked objects per label, or display an unfiltered grid.
</div>
</div>
<Select
value={defaultView}
onValueChange={(value) => setDefaultView(value)}
>
<SelectTrigger className="w-full">
{defaultView == "summary" ? "Summary" : "Unfiltered Grid"}
</SelectTrigger>
<SelectContent>
<SelectGroup>
{["summary", "grid"].map((value) => (
<SelectItem
key={value}
className="cursor-pointer"
value={value}
>
{value == "summary" ? "Summary" : "Unfiltered Grid"}
</SelectItem>
))}
</SelectGroup>
</SelectContent>
</Select>
</div>
{!isMobileOnly && (
<>
<DropdownMenuSeparator />
<div className="flex w-full flex-col space-y-4">
<div className="space-y-0.5">
<div className="text-md">Grid Columns</div>
<div className="space-y-1 text-xs text-muted-foreground">
Select the number of columns in the grid view.
</div>
</div>
<div className="flex items-center space-x-4">
<Slider
value={[columns]}
onValueChange={([value]) => setColumns(value)}
max={6}
min={2}
step={1}
className="flex-grow"
/>
<span className="w-9 text-center text-sm font-medium">
{columns}
</span>
</div>
</div>
</>
)}
{config?.semantic_search?.enabled && (
<SearchTypeContent
searchSources={searchSources}
setSearchSources={(sources) => {
setSearchSources(sources as SearchSource[]);
onUpdateFilter({ ...filter, search_type: sources });
}}
/>
)}
</div>
);
return (
<PlatformAwareDialog
trigger={trigger}
content={content}
contentClassName={
isDesktop
? "scrollbar-container h-auto max-h-[80dvh] overflow-y-auto"
: "max-h-[75dvh] overflow-hidden p-4"
}
open={open}
onOpenChange={(open) => {
setOpen(open);
}}
/>
);
}
type SearchTypeContentProps = {
searchSources: SearchSource[] | undefined;
setSearchSources: (sources: SearchSource[] | undefined) => void;
};
export function SearchTypeContent({
searchSources,
setSearchSources,
}: SearchTypeContentProps) {
return (
<>
<div className="overflow-x-hidden">
<DropdownMenuSeparator className="mb-3" />
<div className="space-y-0.5">
<div className="text-md">Search Source</div>
<div className="space-y-1 text-xs text-muted-foreground">
Choose whether to search the thumbnails or descriptions of your
tracked objects.
</div>
</div>
<div className="mt-2.5 flex flex-col gap-2.5">
<FilterSwitch
label="Thumbnail Image"
isChecked={searchSources?.includes("thumbnail") ?? false}
onCheckedChange={(isChecked) => {
const updatedSources = searchSources ? [...searchSources] : [];
if (isChecked) {
updatedSources.push("thumbnail");
setSearchSources(updatedSources);
} else {
if (updatedSources.length > 1) {
const index = updatedSources.indexOf("thumbnail");
if (index !== -1) updatedSources.splice(index, 1);
setSearchSources(updatedSources);
}
}
}}
/>
<FilterSwitch
label="Description"
isChecked={searchSources?.includes("description") ?? false}
onCheckedChange={(isChecked) => {
const updatedSources = searchSources ? [...searchSources] : [];
if (isChecked) {
updatedSources.push("description");
setSearchSources(updatedSources);
} else {
if (updatedSources.length > 1) {
const index = updatedSources.indexOf("description");
if (index !== -1) updatedSources.splice(index, 1);
setSearchSources(updatedSources);
}
}
}}
/>
</div>
</div>
</>
);
}

View File

@@ -12,7 +12,7 @@ import {
} from "@/components/ui/form";
import { Input } from "@/components/ui/input";
import { useCallback, useEffect, useMemo, useState } from "react";
import { ATTRIBUTE_LABELS, FrigateConfig } from "@/types/frigateConfig";
import { FrigateConfig } from "@/types/frigateConfig";
import useSWR from "swr";
import { zodResolver } from "@hookform/resolvers/zod";
import { useForm } from "react-hook-form";
@@ -28,6 +28,7 @@ import { Toaster } from "@/components/ui/sonner";
import { toast } from "sonner";
import { flattenPoints, interpolatePoints } from "@/utils/canvasUtil";
import ActivityIndicator from "../indicators/activity-indicator";
import { getAttributeLabels } from "@/utils/iconUtil";
type ZoneEditPaneProps = {
polygons?: Polygon[];
@@ -505,6 +506,14 @@ export function ZoneObjectSelector({
}: ZoneObjectSelectorProps) {
const { data: config } = useSWR<FrigateConfig>("config");
const attributeLabels = useMemo(() => {
if (!config) {
return [];
}
return getAttributeLabels(config);
}, [config]);
const cameraConfig = useMemo(() => {
if (config && camera) {
return config.cameras[camera];
@@ -519,7 +528,7 @@ export function ZoneObjectSelector({
const labels = new Set<string>();
cameraConfig.objects.track.forEach((label) => {
if (!ATTRIBUTE_LABELS.includes(label)) {
if (!attributeLabels.includes(label)) {
labels.add(label);
}
});
@@ -527,7 +536,7 @@ export function ZoneObjectSelector({
if (zoneName) {
if (cameraConfig.zones[zoneName]) {
cameraConfig.zones[zoneName].objects.forEach((label) => {
if (!ATTRIBUTE_LABELS.includes(label)) {
if (!attributeLabels.includes(label)) {
labels.add(label);
}
});
@@ -535,7 +544,7 @@ export function ZoneObjectSelector({
}
return [...labels].sort() || [];
}, [config, cameraConfig, zoneName]);
}, [config, cameraConfig, attributeLabels, zoneName]);
const [currentLabels, setCurrentLabels] = useState<string[] | undefined>(
selectedLabels,

View File

@@ -0,0 +1,108 @@
import { cn } from "@/lib/utils";
interface Props {
max: number;
value: number;
min: number;
gaugePrimaryColor: string;
gaugeSecondaryColor: string;
className?: string;
}
export default function AnimatedCircularProgressBar({
max = 100,
min = 0,
value = 0,
gaugePrimaryColor,
gaugeSecondaryColor,
className,
}: Props) {
const circumference = 2 * Math.PI * 45;
const percentPx = circumference / 100;
const currentPercent = Math.floor(((value - min) / (max - min)) * 100);
return (
<div
className={cn("relative size-40 text-2xl font-semibold", className)}
style={
{
"--circle-size": "100px",
"--circumference": circumference,
"--percent-to-px": `${percentPx}px`,
"--gap-percent": "5",
"--offset-factor": "0",
"--transition-length": "1s",
"--transition-step": "200ms",
"--delay": "0s",
"--percent-to-deg": "3.6deg",
transform: "translateZ(0)",
} as React.CSSProperties
}
>
<svg
fill="none"
className="size-full"
strokeWidth="2"
viewBox="0 0 100 100"
>
{currentPercent <= 90 && currentPercent >= 0 && (
<circle
cx="50"
cy="50"
r="45"
strokeWidth="10"
strokeDashoffset="0"
strokeLinecap="round"
strokeLinejoin="round"
className="opacity-100"
style={
{
stroke: gaugeSecondaryColor,
"--stroke-percent": 90 - currentPercent,
"--offset-factor-secondary": "calc(1 - var(--offset-factor))",
strokeDasharray:
"calc(var(--stroke-percent) * var(--percent-to-px)) var(--circumference)",
transform:
"rotate(calc(1turn - 90deg - (var(--gap-percent) * var(--percent-to-deg) * var(--offset-factor-secondary)))) scaleY(-1)",
transition: "all var(--transition-length) ease var(--delay)",
transformOrigin:
"calc(var(--circle-size) / 2) calc(var(--circle-size) / 2)",
} as React.CSSProperties
}
/>
)}
<circle
cx="50"
cy="50"
r="45"
strokeWidth="10"
strokeDashoffset="0"
strokeLinecap="round"
strokeLinejoin="round"
className="opacity-100"
style={
{
stroke: gaugePrimaryColor,
"--stroke-percent": currentPercent,
strokeDasharray:
"calc(var(--stroke-percent) * var(--percent-to-px)) var(--circumference)",
transition:
"var(--transition-length) ease var(--delay),stroke var(--transition-length) ease var(--delay)",
transitionProperty: "stroke-dasharray,transform",
transform:
"rotate(calc(-90deg + var(--gap-percent) * var(--offset-factor) * var(--percent-to-deg)))",
transformOrigin:
"calc(var(--circle-size) / 2) calc(var(--circle-size) / 2)",
} as React.CSSProperties
}
/>
</svg>
<span
data-current-value={currentPercent}
className="duration-[var(--transition-length)] delay-[var(--delay)] absolute inset-0 m-auto size-fit ease-linear animate-in fade-in"
>
{currentPercent}%
</span>
</div>
);
}
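The gauge math above deserves a worked example: with r = 45 the circumference is 2π·45 ≈ 282.74, so each percent maps to ≈ 2.83px of arc, and strokeDasharray draws that arc followed by a full-circumference gap. Standalone numbers, not part of the component:
// Worked numbers for the gauge (r = 45, viewBox 100x100).
const circumference = 2 * Math.PI * 45; // ≈ 282.74
const percentPx = circumference / 100; // ≈ 2.83px of arc per percent
// For value = 75 on a 0-100 gauge:
const currentPercent = Math.floor(((75 - 0) / (100 - 0)) * 100); // 75
const dashLength = currentPercent * percentPx; // ≈ 212px drawn, remainder is gap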

View File

@@ -3,7 +3,7 @@ import {
useInitialCameraState,
useMotionActivity,
} from "@/api/ws";
import { ATTRIBUTE_LABELS, CameraConfig } from "@/types/frigateConfig";
import { CameraConfig, FrigateConfig } from "@/types/frigateConfig";
import { MotionData, ReviewSegment } from "@/types/review";
import { useCallback, useEffect, useMemo, useState } from "react";
import { useTimelineUtils } from "./use-timeline-utils";
@@ -11,6 +11,8 @@ import { ObjectType } from "@/types/ws";
import useDeepMemo from "./use-deep-memo";
import { isEqual } from "lodash";
import { useAutoFrigateStats } from "./use-stats";
import useSWR from "swr";
import { getAttributeLabels } from "@/utils/iconUtil";
type useCameraActivityReturn = {
activeTracking: boolean;
@@ -23,6 +25,16 @@ export function useCameraActivity(
camera: CameraConfig,
revalidateOnFocus: boolean = true,
): useCameraActivityReturn {
const { data: config } = useSWR<FrigateConfig>("config", {
revalidateOnFocus: false,
});
const attributeLabels = useMemo(() => {
if (!config) {
return [];
}
return getAttributeLabels(config);
}, [config]);
const [objects, setObjects] = useState<ObjectType[]>([]);
// init camera activity
@@ -99,7 +111,7 @@ export function useCameraActivity(
if (updatedEvent.after.sub_label) {
const sub_label = updatedEvent.after.sub_label[0];
if (ATTRIBUTE_LABELS.includes(sub_label)) {
if (attributeLabels.includes(sub_label)) {
label = sub_label;
} else {
label = `${label}-verified`;
@@ -113,7 +125,7 @@ export function useCameraActivity(
}
handleSetObjects(newObjects);
}, [camera, updatedEvent, objects, handleSetObjects]);
}, [attributeLabels, camera, updatedEvent, objects, handleSetObjects]);
// determine if camera is offline

View File

@@ -0,0 +1,45 @@
import { useState, useMemo } from "react";
import { useResizeObserver } from "./resize-observer";
export type VideoResolutionType = {
width: number;
height: number;
};
export function useVideoDimensions(
containerRef: React.RefObject<HTMLDivElement>,
) {
const [{ width: containerWidth, height: containerHeight }] =
useResizeObserver(containerRef);
const [videoResolution, setVideoResolution] = useState<VideoResolutionType>({
width: 0,
height: 0,
});
const videoAspectRatio = useMemo(() => {
return videoResolution.width / videoResolution.height || 16 / 9;
}, [videoResolution]);
const containerAspectRatio = useMemo(() => {
return containerWidth / containerHeight || 16 / 9;
}, [containerWidth, containerHeight]);
const videoDimensions = useMemo(() => {
if (!containerWidth || !containerHeight)
return { width: "100%", height: "100%" };
if (containerAspectRatio > videoAspectRatio) {
const height = containerHeight;
const width = height * videoAspectRatio;
return { width: `${width}px`, height: `${height}px` };
} else {
const width = containerWidth;
const height = width / videoAspectRatio;
return { width: `${width}px`, height: `${height}px` };
}
}, [containerWidth, containerHeight, videoAspectRatio, containerAspectRatio]);
return {
videoDimensions,
setVideoResolution,
};
}
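The hook letterboxes the video inside its container: whichever aspect ratio is wider pins that axis. A worked example under assumed dimensions:
// Assume a 1000x500 container (aspect 2.0) and a 16:9 (≈ 1.78) video.
// The container is wider than the video, so height is pinned:
const height = 500;
const width = height * (16 / 9); // ≈ 888.9 -> { width: "888.88...px", height: "500px" }
// A taller container, e.g. 800x900 (aspect ≈ 0.89), pins width instead:
// width = 800, height = 800 / (16 / 9) = 450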

View File

@@ -220,7 +220,7 @@ function ConfigEditor() {
</div>
{error && (
<div className="mt-2 max-h-[30%] overflow-auto whitespace-pre-wrap border-2 border-muted bg-background_alt p-4 text-sm text-danger md:max-h-full">
<div className="mt-2 max-h-[30%] overflow-auto whitespace-pre-wrap border-2 border-muted bg-background_alt p-4 text-sm text-danger md:max-h-[40%]">
{error}
</div>
)}

View File

@@ -1,12 +1,20 @@
import { useEventUpdate, useModelState } from "@/api/ws";
import {
useEmbeddingsReindexProgress,
useEventUpdate,
useModelState,
} from "@/api/ws";
import ActivityIndicator from "@/components/indicators/activity-indicator";
import AnimatedCircularProgressBar from "@/components/ui/circular-progress-bar";
import { useApiFilterArgs } from "@/hooks/use-api-filter";
import { useTimezone } from "@/hooks/use-date-utils";
import { usePersistence } from "@/hooks/use-persistence";
import { FrigateConfig } from "@/types/frigateConfig";
import { SearchFilter, SearchQuery, SearchResult } from "@/types/search";
import { ModelState } from "@/types/ws";
import { formatSecondsToDuration } from "@/utils/dateUtil";
import SearchView from "@/views/search/SearchView";
import { useCallback, useEffect, useMemo, useState } from "react";
import { isMobileOnly } from "react-device-detect";
import { LuCheck, LuExternalLink, LuX } from "react-icons/lu";
import { TbExclamationCircle } from "react-icons/tb";
import { Link } from "react-router-dom";
@@ -22,6 +30,23 @@ export default function Explore() {
revalidateOnFocus: false,
});
// grid
const [columnCount, setColumnCount] = usePersistence("exploreGridColumns", 4);
const gridColumns = useMemo(() => {
if (isMobileOnly) {
return 2;
}
return columnCount ?? 4;
}, [columnCount]);
// default layout
const [defaultView, setDefaultView, defaultViewLoaded] = usePersistence(
"exploreDefaultView",
"summary",
);
const timezone = useTimezone(config);
const [search, setSearch] = useState("");
@@ -59,7 +84,11 @@ export default function Explore() {
const searchQuery: SearchQuery = useMemo(() => {
// no search parameters
if (searchSearchParams && Object.keys(searchSearchParams).length === 0) {
return null;
if (defaultView == "grid") {
return ["events", {}];
} else {
return null;
}
}
// parameters, but no search term and not similarity
@@ -80,6 +109,10 @@ export default function Explore() {
after: searchSearchParams["after"],
time_range: searchSearchParams["time_range"],
search_type: searchSearchParams["search_type"],
min_score: searchSearchParams["min_score"],
max_score: searchSearchParams["max_score"],
has_snapshot: searchSearchParams["has_snapshot"],
has_clip: searchSearchParams["has_clip"],
limit:
Object.keys(searchSearchParams).length == 0 ? API_LIMIT : undefined,
timezone,
@@ -106,12 +139,16 @@ export default function Explore() {
after: searchSearchParams["after"],
time_range: searchSearchParams["time_range"],
search_type: searchSearchParams["search_type"],
min_score: searchSearchParams["min_score"],
max_score: searchSearchParams["max_score"],
has_snapshot: searchSearchParams["has_snapshot"],
has_clip: searchSearchParams["has_clip"],
event_id: searchSearchParams["event_id"],
timezone,
include_thumbnails: 0,
},
];
}, [searchTerm, searchSearchParams, similaritySearch, timezone]);
}, [searchTerm, searchSearchParams, similaritySearch, timezone, defaultView]);
// paging
@@ -140,7 +177,7 @@ export default function Explore() {
const { data, size, setSize, isValidating, mutate } = useSWRInfinite<
SearchResult[]
>(getKey, {
revalidateFirstPage: true,
revalidateFirstPage: false,
revalidateOnFocus: true,
revalidateAll: false,
});
@@ -177,38 +214,60 @@ export default function Explore() {
const eventUpdate = useEventUpdate();
useEffect(() => {
mutate();
if (eventUpdate) {
mutate();
}
// mutate / revalidate when event description updates come in
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [eventUpdate]);
// embeddings reindex progress
const { payload: reindexState } = useEmbeddingsReindexProgress();
const embeddingsReindexing = useMemo(() => {
if (reindexState) {
switch (reindexState.status) {
case "indexing":
return true;
case "completed":
return false;
default:
return undefined;
}
}
}, [reindexState]);
// model states
const { payload: minilmModelState } = useModelState(
"sentence-transformers/all-MiniLM-L6-v2-model.onnx",
const { payload: textModelState } = useModelState(
"jinaai/jina-clip-v1-text_model_fp16.onnx",
);
const { payload: minilmTokenizerState } = useModelState(
"sentence-transformers/all-MiniLM-L6-v2-tokenizer",
const { payload: textTokenizerState } = useModelState(
"jinaai/jina-clip-v1-tokenizer",
);
const { payload: clipImageModelState } = useModelState(
"clip-clip_image_model_vitb32.onnx",
);
const { payload: clipTextModelState } = useModelState(
"clip-clip_text_model_vitb32.onnx",
const modelFile =
config?.semantic_search.model_size === "large"
? "jinaai/jina-clip-v1-vision_model_fp16.onnx"
: "jinaai/jina-clip-v1-vision_model_quantized.onnx";
const { payload: visionModelState } = useModelState(modelFile);
const { payload: visionFeatureExtractorState } = useModelState(
"jinaai/jina-clip-v1-preprocessor_config.json",
);
const allModelsLoaded = useMemo(() => {
return (
minilmModelState === "downloaded" &&
minilmTokenizerState === "downloaded" &&
clipImageModelState === "downloaded" &&
clipTextModelState === "downloaded"
textModelState === "downloaded" &&
textTokenizerState === "downloaded" &&
visionModelState === "downloaded" &&
visionFeatureExtractorState === "downloaded"
);
}, [
minilmModelState,
minilmTokenizerState,
clipImageModelState,
clipTextModelState,
textModelState,
textTokenizerState,
visionModelState,
visionFeatureExtractorState,
]);
const renderModelStateIcon = (modelState: ModelState) => {
@@ -225,10 +284,13 @@ export default function Explore() {
};
if (
!minilmModelState ||
!minilmTokenizerState ||
!clipImageModelState ||
!clipTextModelState
!defaultViewLoaded ||
(config?.semantic_search.enabled &&
(!reindexState ||
!textModelState ||
!textTokenizerState ||
!visionModelState ||
!visionFeatureExtractorState))
) {
return (
<ActivityIndicator className="absolute left-1/2 top-1/2 -translate-x-1/2 -translate-y-1/2" />
@@ -237,58 +299,114 @@ export default function Explore() {
return (
<>
{!allModelsLoaded ? (
{config?.semantic_search.enabled &&
(!allModelsLoaded || embeddingsReindexing) ? (
<div className="absolute inset-0 left-1/2 top-1/2 flex h-96 w-96 -translate-x-1/2 -translate-y-1/2">
<div className="flex flex-col items-center justify-center space-y-3 rounded-lg bg-background/50 p-5">
<div className="flex max-w-96 flex-col items-center justify-center space-y-3 rounded-lg bg-background/50 p-5">
<div className="my-5 flex flex-col items-center gap-2 text-xl">
<TbExclamationCircle className="mb-3 size-10" />
<div>Search Unavailable</div>
</div>
<div className="max-w-96 text-center">
Frigate is downloading the necessary embeddings models to support
semantic searching. This may take several minutes depending on the
speed of your network connection.
</div>
<div className="flex w-96 flex-col gap-2 py-5">
<div className="flex flex-row items-center justify-center gap-2">
{renderModelStateIcon(clipImageModelState)}
CLIP image model
</div>
<div className="flex flex-row items-center justify-center gap-2">
{renderModelStateIcon(clipTextModelState)}
CLIP text model
</div>
<div className="flex flex-row items-center justify-center gap-2">
{renderModelStateIcon(minilmModelState)}
MiniLM sentence model
</div>
<div className="flex flex-row items-center justify-center gap-2">
{renderModelStateIcon(minilmTokenizerState)}
MiniLM tokenizer
</div>
</div>
{(minilmModelState === "error" ||
clipImageModelState === "error" ||
clipTextModelState === "error") && (
<div className="my-3 max-w-96 text-center text-danger">
An error has occurred. Check Frigate logs.
</div>
{embeddingsReindexing && allModelsLoaded && (
<>
<div className="text-center text-primary-variant">
Search can be used after tracked object embeddings have
finished reindexing.
</div>
<div className="pt-5 text-center">
<AnimatedCircularProgressBar
min={0}
max={reindexState.total_objects}
value={reindexState.processed_objects}
gaugePrimaryColor="hsl(var(--selected))"
gaugeSecondaryColor="hsl(var(--secondary))"
/>
</div>
<div className="flex w-96 flex-col gap-2 py-5">
{reindexState.time_remaining !== null && (
<div className="mb-3 flex flex-col items-center justify-center gap-1">
<div className="text-primary-variant">
{reindexState.time_remaining === -1
? "Starting up..."
: "Estimated time remaining:"}
</div>
{reindexState.time_remaining >= 0 &&
(formatSecondsToDuration(reindexState.time_remaining) ||
"Finishing shortly")}
</div>
)}
<div className="flex flex-row items-center justify-center gap-3">
<span className="text-primary-variant">
Thumbnails embedded:
</span>
{reindexState.thumbnails}
</div>
<div className="flex flex-row items-center justify-center gap-3">
<span className="text-primary-variant">
Descriptions embedded:
</span>
{reindexState.descriptions}
</div>
<div className="flex flex-row items-center justify-center gap-3">
<span className="text-primary-variant">
Tracked objects processed:
</span>
{reindexState.processed_objects} /{" "}
{reindexState.total_objects}
</div>
</div>
</>
)}
{!allModelsLoaded && (
<>
<div className="text-center text-primary-variant">
Frigate is downloading the necessary embeddings models to
support semantic searching. This may take several minutes
depending on the speed of your network connection.
</div>
<div className="flex w-96 flex-col gap-2 py-5">
<div className="flex flex-row items-center justify-center gap-2">
{renderModelStateIcon(visionModelState)}
Vision model
</div>
<div className="flex flex-row items-center justify-center gap-2">
{renderModelStateIcon(visionFeatureExtractorState)}
Vision model feature extractor
</div>
<div className="flex flex-row items-center justify-center gap-2">
{renderModelStateIcon(textModelState)}
Text model
</div>
<div className="flex flex-row items-center justify-center gap-2">
{renderModelStateIcon(textTokenizerState)}
Text tokenizer
</div>
</div>
{(textModelState === "error" ||
textTokenizerState === "error" ||
visionModelState === "error" ||
visionFeatureExtractorState === "error") && (
<div className="my-3 max-w-96 text-center text-danger">
An error has occurred. Check Frigate logs.
</div>
)}
<div className="text-center text-primary-variant">
You may want to reindex the embeddings of your tracked objects
once the models are downloaded.
</div>
<div className="flex items-center text-primary-variant">
<Link
to="https://docs.frigate.video/configuration/semantic_search"
target="_blank"
rel="noopener noreferrer"
className="inline"
>
Read the documentation{" "}
<LuExternalLink className="ml-2 inline-flex size-3" />
</Link>
</div>
</>
)}
<div className="max-w-96 text-center">
You may want to reindex the embeddings of your tracked objects
once the models are downloaded.
</div>
<div className="flex max-w-96 items-center text-primary-variant">
<Link
to="https://docs.frigate.video/configuration/semantic_search"
target="_blank"
rel="noopener noreferrer"
className="inline"
>
Read the documentation{" "}
<LuExternalLink className="ml-2 inline-flex size-3" />
</Link>
</div>
</div>
</div>
) : (
@@ -298,6 +416,9 @@ export default function Explore() {
searchFilter={searchFilter}
searchResults={searchResults}
isLoading={(isLoadingInitialData || isLoadingMore) ?? true}
hasMore={!isReachingEnd}
columns={gridColumns}
defaultView={defaultView}
setSearch={setSearch}
setSimilaritySearch={(search) => {
setSearchFilter({
@@ -308,8 +429,10 @@ export default function Explore() {
}}
setSearchFilter={setSearchFilter}
onUpdateFilter={setSearchFilter}
setColumns={setColumnCount}
setDefaultView={setDefaultView}
loadMore={loadMore}
hasMore={!isReachingEnd}
refresh={mutate}
/>
)}
</>

View File

@@ -29,16 +29,18 @@ import { ZoneMaskFilterButton } from "@/components/filter/ZoneMaskFilter";
import { PolygonType } from "@/types/canvas";
import { ScrollArea, ScrollBar } from "@/components/ui/scroll-area";
import scrollIntoView from "scroll-into-view-if-needed";
import GeneralSettingsView from "@/views/settings/GeneralSettingsView";
import CameraSettingsView from "@/views/settings/CameraSettingsView";
import ObjectSettingsView from "@/views/settings/ObjectSettingsView";
import MotionTunerView from "@/views/settings/MotionTunerView";
import MasksAndZonesView from "@/views/settings/MasksAndZonesView";
import AuthenticationView from "@/views/settings/AuthenticationView";
import NotificationView from "@/views/settings/NotificationsSettingsView";
import SearchSettingsView from "@/views/settings/SearchSettingsView";
import UiSettingsView from "@/views/settings/UiSettingsView";
const allSettingsViews = [
"general",
"UI settings",
"search settings",
"camera settings",
"masks / zones",
"motion tuner",
@@ -49,7 +51,7 @@ const allSettingsViews = [
type SettingsType = (typeof allSettingsViews)[number];
export default function Settings() {
const [page, setPage] = useState<SettingsType>("general");
const [page, setPage] = useState<SettingsType>("UI settings");
const [pageToggle, setPageToggle] = useOptimisticState(page, setPage, 100);
const tabsRef = useRef<HTMLDivElement | null>(null);
@@ -140,7 +142,7 @@ export default function Settings() {
{Object.values(settingsViews).map((item) => (
<ToggleGroupItem
key={item}
className={`flex scroll-mx-10 items-center justify-between gap-2 ${page == "general" ? "last:mr-20" : ""} ${pageToggle == item ? "" : "*:text-muted-foreground"}`}
className={`flex scroll-mx-10 items-center justify-between gap-2 ${page == "UI settings" ? "last:mr-20" : ""} ${pageToggle == item ? "" : "*:text-muted-foreground"}`}
value={item}
data-nav-item={item}
aria-label={`Select ${item}`}
@@ -172,7 +174,10 @@ export default function Settings() {
)}
</div>
<div className="mt-2 flex h-full w-full flex-col items-start md:h-dvh md:pb-24">
{page == "general" && <GeneralSettingsView />}
{page == "UI settings" && <UiSettingsView />}
{page == "search settings" && (
<SearchSettingsView setUnsavedChanges={setUnsavedChanges} />
)}
{page == "debug" && (
<ObjectSettingsView selectedCamera={selectedCamera} />
)}

View File

@@ -19,13 +19,7 @@ export interface BirdseyeConfig {
width: number;
}
export const ATTRIBUTE_LABELS = [
"amazon",
"face",
"fedex",
"license_plate",
"ups",
];
export type SearchModelSize = "small" | "large";
export interface CameraConfig {
audio: {
@@ -340,6 +334,7 @@ export interface FrigateConfig {
path: string | null;
width: number;
colormap: { [key: string]: [number, number, number] };
attributes_map: { [key: string]: string[] };
};
motion: Record<string, unknown> | null;
@@ -417,6 +412,8 @@ export interface FrigateConfig {
semantic_search: {
enabled: boolean;
reindex: boolean;
model_size: SearchModelSize;
};
snapshots: {

View File

@@ -35,6 +35,7 @@ export type SearchResult = {
zones: string[];
search_source: SearchSource;
search_distance: number;
top_score: number; // for old events
data: {
top_score: number;
score: number;
@@ -56,6 +57,10 @@ export type SearchFilter = {
zones?: string[];
before?: number;
after?: number;
min_score?: number;
max_score?: number;
has_snapshot?: number;
has_clip?: number;
time_range?: string;
search_type?: SearchSource[];
event_id?: string;
@@ -71,6 +76,8 @@ export type SearchQueryParams = {
zones?: string[];
before?: string;
after?: string;
min_score?: number;
max_score?: number;
search_type?: string;
limit?: number;
in_progress?: number;

View File

@@ -62,4 +62,13 @@ export type ModelState =
| "downloaded"
| "error";
export type EmbeddingsReindexProgressType = {
thumbnails: number;
descriptions: number;
processed_objects: number;
total_objects: number;
time_remaining: number;
status: string;
};
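For reference, a hypothetical payload matching EmbeddingsReindexProgressType, with invented values; the Explore page renders a time_remaining of -1 as "Starting up...":
// Hypothetical example payload from useEmbeddingsReindexProgress().
const reindexState: EmbeddingsReindexProgressType = {
  thumbnails: 1200, // thumbnail embeddings written so far
  descriptions: 340, // description embeddings written so far
  processed_objects: 1540,
  total_objects: 5000,
  time_remaining: 95, // seconds; -1 while starting up
  status: "indexing", // "completed" once reindexing finishes
};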
export type ToggleableSetting = "ON" | "OFF";

View File

@@ -229,6 +229,23 @@ export const getDurationFromTimestamps = (
return duration;
};
/**
* Formats a number of seconds as a human-readable duration string.
* @param seconds - number of seconds to convert into hours, minutes and seconds
* @returns string - formatted duration in hours, minutes and seconds
*/
export const formatSecondsToDuration = (seconds: number): string => {
if (isNaN(seconds) || seconds < 0) {
return "Invalid duration";
}
const duration = intervalToDuration({ start: 0, end: seconds * 1000 });
return formatDuration(duration, {
format: ["hours", "minutes", "seconds"],
delimiter: ", ",
});
};
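A few illustrative calls; the wording comes from date-fns formatDuration with the options above. Note that 0 seconds formats to an empty string, which is why the reindex card falls back to "Finishing shortly":
formatSecondsToDuration(95); // "1 minute, 35 seconds"
formatSecondsToDuration(3700); // "1 hour, 1 minute, 40 seconds"
formatSecondsToDuration(0); // "" (falsy, so callers show a fallback)
formatSecondsToDuration(-5); // "Invalid duration"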
/**
* Adapted from https://stackoverflow.com/a/29268535 this takes a timezone string and
* returns the offset of that timezone from UTC in minutes.

View File

@@ -1,4 +1,5 @@
import { IconName } from "@/components/icons/IconPicker";
import { FrigateConfig } from "@/types/frigateConfig";
import { BsPersonWalking } from "react-icons/bs";
import {
FaAmazon,
@@ -7,20 +8,48 @@ import {
FaCarSide,
FaCat,
FaCheckCircle,
FaDhl,
FaDog,
FaFedex,
FaFire,
FaFootballBall,
FaHockeyPuck,
FaHorse,
FaMotorcycle,
FaMouse,
FaRegTrashAlt,
FaUmbrella,
FaUps,
FaUsps,
} from "react-icons/fa";
import { GiDeer, GiHummingbird, GiPolarBear, GiSailboat } from "react-icons/gi";
import {
GiDeer,
GiFox,
GiGoat,
GiHummingbird,
GiPolarBear,
GiPostStamp,
GiRabbit,
GiRaccoonHead,
GiSailboat,
} from "react-icons/gi";
import { LuBox, LuLassoSelect } from "react-icons/lu";
import * as LuIcons from "react-icons/lu";
import { MdRecordVoiceOver } from "react-icons/md";
export function getAttributeLabels(config?: FrigateConfig) {
if (!config) {
return [];
}
const labels = new Set<string>();
Object.values(config.model.attributes_map).forEach((values) =>
values.forEach((label) => labels.add(label)),
);
return [...labels];
}
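A short sketch of how callers replace the removed ATTRIBUTE_LABELS constant with the new helper; config and cameraConfig are assumed in scope as in ZoneObjectSelector, and the attributes_map values are hypothetical:
// attributes_map might look like:
// { person: ["face"], car: ["amazon", "fedex", "license_plate", "ups"] }
const attributeLabels = getAttributeLabels(config); // flattened, de-duplicated
const trackedOnly = cameraConfig.objects.track.filter(
  (label) => !attributeLabels.includes(label),
);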
export function isValidIconName(value: string): value is IconName {
return Object.keys(LuIcons).includes(value as IconName);
}
@@ -53,8 +82,12 @@ export function getIconForLabel(label: string, className?: string) {
case "bark":
case "dog":
return <FaDog key={label} className={className} />;
case "fire_alarm":
return <FaFire key={label} className={className} />;
case "fox":
return <GiFox key={label} className={className} />;
case "goat":
return <GiGoat key={label} className={className} />;
case "horse":
return <FaHorse key={label} className={className} />;
case "motorcycle":
return <FaMotorcycle key={label} className={className} />;
case "mouse":
@@ -63,8 +96,20 @@ export function getIconForLabel(label: string, className?: string) {
return <LuBox key={label} className={className} />;
case "person":
return <BsPersonWalking key={label} className={className} />;
case "rabbit":
return <GiRabbit key={label} className={className} />;
case "raccoon":
return <GiRaccoonHead key={label} className={className} />;
case "robot_lawnmower":
return <FaHockeyPuck key={label} className={className} />;
case "sports_ball":
return <FaFootballBall key={label} className={className} />;
case "squirrel":
return <LuIcons.LuSquirrel key={label} className={className} />;
case "umbrella":
return <FaUmbrella key={label} className={className} />;
case "waste_bin":
return <FaRegTrashAlt key={label} className={className} />;
// audio
case "crying":
case "laughter":
@@ -72,9 +117,21 @@ export function getIconForLabel(label: string, className?: string) {
case "speech":
case "yell":
return <MdRecordVoiceOver key={label} className={className} />;
case "fire_alarm":
return <FaFire key={label} className={className} />;
// sub labels
case "amazon":
return <FaAmazon key={label} className={className} />;
case "an_post":
case "dpd":
case "gls":
case "nzpost":
case "postnl":
case "postnord":
case "purolator":
return <GiPostStamp key={label} className={className} />;
case "dhl":
return <FaDhl key={label} className={className} />;
case "fedex":
return <FaFedex key={label} className={className} />;
case "ups":

View File

@@ -1,5 +1,5 @@
import { useEffect, useMemo } from "react";
import { isIOS, isMobileOnly, isSafari } from "react-device-detect";
import { isDesktop, isIOS, isMobileOnly, isSafari } from "react-device-detect";
import useSWR from "swr";
import { useApiHost } from "@/api";
import { cn } from "@/lib/utils";
@@ -17,6 +17,7 @@ import useImageLoaded from "@/hooks/use-image-loaded";
import ActivityIndicator from "@/components/indicators/activity-indicator";
import { useEventUpdate } from "@/api/ws";
import { isEqual } from "lodash";
import TimeAgo from "@/components/dynamic/TimeAgo";
type ExploreViewProps = {
searchDetail: SearchResult | undefined;
@@ -197,6 +198,7 @@ function ExploreThumbnailImage({
className="absolute inset-0"
imgLoaded={imgLoaded}
/>
<img
ref={imgRef}
className={cn(
@@ -218,6 +220,17 @@ function ExploreThumbnailImage({
onImgLoad();
}}
/>
{isDesktop && (
<div className="absolute bottom-1 right-1 z-10 rounded-lg bg-black/50 px-2 py-1 text-xs text-white">
{event.end_time ? (
<TimeAgo time={event.start_time * 1000} dense />
) : (
<div>
<ActivityIndicator size={10} />
</div>
)}
</div>
)}
</>
);
}

View File

@@ -531,9 +531,37 @@ function PtzControlPanel({
);
useKeyboardListener(
["ArrowLeft", "ArrowRight", "ArrowUp", "ArrowDown", "+", "-"],
[
"ArrowLeft",
"ArrowRight",
"ArrowUp",
"ArrowDown",
"+",
"-",
"1",
"2",
"3",
"4",
"5",
"6",
"7",
"8",
"9",
],
(key, modifiers) => {
if (modifiers.repeat) {
if (modifiers.repeat || !key) {
return;
}
if (["1", "2", "3", "4", "5", "6", "7", "8", "9"].includes(key)) {
const presetNumber = parseInt(key);
if (
ptz &&
(ptz.presets?.length ?? 0) > 0 &&
presetNumber <= ptz.presets.length
) {
sendPtz(`preset_${ptz.presets[presetNumber - 1]}`);
}
return;
}
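To make the new digit hotkeys concrete, a worked example with invented preset names:
const presets = ["door", "driveway", "gate"]; // hypothetical ptz.presets
const presetNumber = parseInt("3"); // user pressed "3"
// presetNumber <= presets.length, so this sends: sendPtz("preset_gate")
// "4" through "9" are ignored because they exceed presets.length.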

View File

@@ -140,6 +140,7 @@ export function RecordingView({
const [exportMode, setExportMode] = useState<ExportMode>("none");
const [exportRange, setExportRange] = useState<TimeRange>();
const [showExportPreview, setShowExportPreview] = useState(false);
// move to next clip
@@ -412,6 +413,7 @@ export function RecordingView({
latestTime={timeRange.before}
mode={exportMode}
range={exportRange}
showPreview={showExportPreview}
setRange={(range) => {
setExportRange(range);
@@ -420,6 +422,7 @@ export function RecordingView({
}
}}
setMode={setExportMode}
setShowPreview={setShowExportPreview}
/>
)}
{isDesktop && (
@@ -473,11 +476,13 @@ export function RecordingView({
latestTime={timeRange.before}
mode={exportMode}
range={exportRange}
showExportPreview={showExportPreview}
allLabels={reviewFilterList.labels}
allZones={reviewFilterList.zones}
onUpdateFilter={updateFilter}
setRange={setExportRange}
setMode={setExportMode}
setShowExportPreview={setShowExportPreview}
/>
</div>
</div>

View File

@@ -1,20 +1,16 @@
import SearchThumbnail from "@/components/card/SearchThumbnail";
import SearchFilterGroup from "@/components/filter/SearchFilterGroup";
import ActivityIndicator from "@/components/indicators/activity-indicator";
import Chip from "@/components/indicators/Chip";
import SearchDetailDialog from "@/components/overlay/detail/SearchDetailDialog";
import SearchDetailDialog, {
SearchTab,
} from "@/components/overlay/detail/SearchDetailDialog";
import { Toaster } from "@/components/ui/sonner";
import {
Tooltip,
TooltipContent,
TooltipTrigger,
} from "@/components/ui/tooltip";
import { cn } from "@/lib/utils";
import { FrigateConfig } from "@/types/frigateConfig";
import { SearchFilter, SearchResult, SearchSource } from "@/types/search";
import { useCallback, useEffect, useMemo, useRef, useState } from "react";
import { isDesktop, isMobileOnly } from "react-device-detect";
import { LuColumns, LuImage, LuSearchX, LuText } from "react-icons/lu";
import { isMobileOnly } from "react-device-detect";
import { LuImage, LuSearchX, LuText } from "react-icons/lu";
import useSWR from "swr";
import ExploreView from "../explore/ExploreView";
import useKeyboardListener, {
@@ -25,14 +21,15 @@ import InputWithTags from "@/components/input/InputWithTags";
import { ScrollArea, ScrollBar } from "@/components/ui/scroll-area";
import { isEqual } from "lodash";
import { formatDateToLocaleString } from "@/utils/dateUtil";
import { TooltipPortal } from "@radix-ui/react-tooltip";
import { Slider } from "@/components/ui/slider";
import SearchThumbnailFooter from "@/components/card/SearchThumbnailFooter";
import SearchSettings from "@/components/settings/SearchSettings";
import {
Popover,
PopoverContent,
PopoverTrigger,
} from "@/components/ui/popover";
import { usePersistence } from "@/hooks/use-persistence";
Tooltip,
TooltipContent,
TooltipTrigger,
} from "@/components/ui/tooltip";
import Chip from "@/components/indicators/Chip";
import { TooltipPortal } from "@radix-ui/react-tooltip";
type SearchViewProps = {
search: string;
@@ -40,12 +37,17 @@ type SearchViewProps = {
searchFilter?: SearchFilter;
searchResults?: SearchResult[];
isLoading: boolean;
hasMore: boolean;
columns: number;
defaultView?: string;
setSearch: (search: string) => void;
setSimilaritySearch: (search: SearchResult) => void;
setSearchFilter: (filter: SearchFilter) => void;
onUpdateFilter: (filter: SearchFilter) => void;
loadMore: () => void;
hasMore: boolean;
refresh: () => void;
setColumns: (columns: number) => void;
setDefaultView: (name: string) => void;
};
export default function SearchView({
search,
@@ -53,12 +55,17 @@ export default function SearchView({
searchFilter,
searchResults,
isLoading,
hasMore,
columns,
defaultView = "summary",
setSearch,
setSimilaritySearch,
setSearchFilter,
onUpdateFilter,
loadMore,
hasMore,
refresh,
setColumns,
setDefaultView,
}: SearchViewProps) {
const contentRef = useRef<HTMLDivElement | null>(null);
const { data: config } = useSWR<FrigateConfig>("config", {
@@ -67,18 +74,17 @@ export default function SearchView({
// grid
const [columnCount, setColumnCount] = usePersistence("exploreGridColumns", 4);
const effectiveColumnCount = useMemo(() => columnCount ?? 4, [columnCount]);
const gridClassName = cn("grid w-full gap-2 px-1 gap-2 lg:gap-4 md:mx-2", {
"sm:grid-cols-2": effectiveColumnCount <= 2,
"sm:grid-cols-3": effectiveColumnCount === 3,
"sm:grid-cols-4": effectiveColumnCount === 4,
"sm:grid-cols-5": effectiveColumnCount === 5,
"sm:grid-cols-6": effectiveColumnCount === 6,
"sm:grid-cols-7": effectiveColumnCount === 7,
"sm:grid-cols-8": effectiveColumnCount >= 8,
});
const gridClassName = cn(
"grid w-full gap-2 px-1 gap-2 lg:gap-4 md:mx-2",
isMobileOnly && "grid-cols-2",
{
"sm:grid-cols-2": columns <= 2,
"sm:grid-cols-3": columns === 3,
"sm:grid-cols-4": columns === 4,
"sm:grid-cols-5": columns === 5,
"sm:grid-cols-6": columns === 6,
},
);
// suggestions values
@@ -145,6 +151,10 @@ export default function SearchView({
: ["12:00AM-11:59PM"],
before: [formatDateToLocaleString()],
after: [formatDateToLocaleString(-5)],
min_score: ["50"],
max_score: ["100"],
has_clip: ["yes", "no"],
has_snapshot: ["yes", "no"],
}),
[config, allLabels, allZones, allSubLabels],
);
@@ -161,16 +171,40 @@ export default function SearchView({
// detail
const [searchDetail, setSearchDetail] = useState<SearchResult>();
const [page, setPage] = useState<SearchTab>("details");
// search interaction
const [selectedIndex, setSelectedIndex] = useState<number | null>(null);
const itemRefs = useRef<(HTMLDivElement | null)[]>([]);
const onSelectSearch = useCallback((item: SearchResult, index: number) => {
setSearchDetail(item);
setSelectedIndex(index);
}, []);
const onSelectSearch = useCallback(
(item: SearchResult, index: number, page: SearchTab = "details") => {
setPage(page);
setSearchDetail(item);
setSelectedIndex(index);
},
[],
);
useEffect(() => {
setSelectedIndex(0);
}, [searchTerm, searchFilter]);
// confidence score
const zScoreToConfidence = (score: number) => {
// Normalizing is not needed for similarity searches
// Sigmoid function for normalized: 1 / (1 + e^x)
// Cosine for similarity
if (searchFilter) {
const notNormalized = searchFilter?.search_type?.includes("similarity");
const confidence = notNormalized ? 1 - score : 1 / (1 + Math.exp(score));
return Math.round(confidence * 100);
}
};
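Worked numbers for the new confidence mapping: similarity searches return cosine distances, so confidence is simply 1 - distance, while other searches pass the z-scored distance through the sigmoid:
// similarity search, distance 0.25: 1 - 0.25 = 0.75 -> 75%
Math.round((1 - 0.25) * 100); // 75
// thumbnail/description search, z-score -2: 1 / (1 + e^(-2)) ≈ 0.881 -> 88%
Math.round((1 / (1 + Math.exp(-2))) * 100); // 88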
// update search detail when results change
@@ -187,15 +221,6 @@ export default function SearchView({
}
}, [searchResults, searchDetail]);
// confidence score - probably needs tweaking
const zScoreToConfidence = (score: number) => {
// Sigmoid function: 1 / (1 + e^x)
const confidence = 1 / (1 + Math.exp(score));
return Math.round(confidence * 100);
};
const hasExistingSearch = useMemo(
() => searchResults != undefined || searchFilter != undefined,
[searchResults, searchFilter],
@@ -304,7 +329,9 @@ export default function SearchView({
<Toaster closeButton={true} />
<SearchDetailDialog
search={searchDetail}
page={page}
setSearch={setSearchDetail}
setSearchPage={setPage}
setSimilarity={
searchDetail && (() => setSimilaritySearch(searchDetail))
}
@@ -335,7 +362,7 @@ export default function SearchView({
{hasExistingSearch && (
<ScrollArea className="w-full whitespace-nowrap lg:ml-[35%]">
<div className="flex flex-row">
<div className="flex flex-row gap-2">
<SearchFilterGroup
className={cn(
"w-full justify-between md:justify-start lg:justify-end",
@@ -343,6 +370,14 @@ export default function SearchView({
filter={searchFilter}
onUpdateFilter={onUpdateFilter}
/>
<SearchSettings
columns={columns}
setColumns={setColumns}
defaultView={defaultView}
setDefaultView={setDefaultView}
filter={searchFilter}
onUpdateFilter={onUpdateFilter}
/>
<ScrollBar orientation="horizontal" className="h-0" />
</div>
</ScrollArea>
@@ -378,16 +413,15 @@ export default function SearchView({
key={value.id}
ref={(item) => (itemRefs.current[index] = item)}
data-start={value.start_time}
className="review-item relative rounded-lg"
className="review-item relative flex flex-col rounded-lg"
>
<div
className={cn(
"aspect-square size-full overflow-hidden rounded-lg",
"aspect-square w-full overflow-hidden rounded-t-lg border",
)}
>
<SearchThumbnail
searchResult={value}
findSimilar={() => setSimilaritySearch(value)}
onClick={() => onSelectSearch(value, index)}
/>
{(searchTerm ||
@@ -399,11 +433,10 @@ export default function SearchView({
className={`flex select-none items-center justify-between space-x-1 bg-gray-500 bg-gradient-to-br from-gray-400 to-gray-500 text-xs capitalize text-white`}
>
{value.search_source == "thumbnail" ? (
<LuImage className="mr-1 size-3" />
<LuImage className="size-3" />
) : (
<LuText className="mr-1 size-3" />
<LuText className="size-3" />
)}
{zScoreToConfidence(value.search_distance)}%
</Chip>
</TooltipTrigger>
<TooltipPortal>
@@ -419,6 +452,21 @@ export default function SearchView({
<div
className={`review-item-ring pointer-events-none absolute inset-0 z-10 size-full rounded-lg outline outline-[3px] -outline-offset-[2.8px] ${selected ? `shadow-selected outline-selected` : "outline-transparent duration-500"}`}
/>
<div className="flex w-full grow items-center justify-between rounded-b-lg border border-t-0 bg-card p-3 text-card-foreground">
<SearchThumbnailFooter
searchResult={value}
columns={columns}
findSimilar={() => {
if (config?.semantic_search.enabled) {
setSimilaritySearch(value);
}
}}
refreshResults={refresh}
showObjectLifecycle={() =>
onSelectSearch(value, index, "object lifecycle")
}
/>
</div>
</div>
);
})}
@@ -430,53 +478,13 @@ export default function SearchView({
<div className="flex h-12 w-full justify-center">
{hasMore && isLoading && <ActivityIndicator />}
</div>
{isDesktop && columnCount && (
<div
className={cn(
"fixed bottom-12 right-3 z-50 flex flex-row gap-2 lg:bottom-9",
)}
>
<Popover>
<Tooltip>
<TooltipTrigger asChild>
<PopoverTrigger asChild>
<div className="cursor-pointer rounded-lg bg-secondary text-secondary-foreground opacity-75 transition-all duration-300 hover:bg-muted hover:opacity-100">
<LuColumns className="size-5 md:m-[6px]" />
</div>
</PopoverTrigger>
</TooltipTrigger>
<TooltipContent>Adjust Grid Columns</TooltipContent>
</Tooltip>
<PopoverContent className="mr-2 w-80">
<div className="space-y-4">
<div className="font-medium leading-none">
Grid Columns
</div>
<div className="flex items-center space-x-4">
<Slider
value={[effectiveColumnCount]}
onValueChange={([value]) => setColumnCount(value)}
max={8}
min={2}
step={1}
className="flex-grow"
/>
<span className="w-9 text-center text-sm font-medium">
{effectiveColumnCount}
</span>
</div>
</div>
</PopoverContent>
</Popover>
</div>
)}
</>
)}
</div>
{searchFilter &&
Object.keys(searchFilter).length === 0 &&
!searchTerm && (
!searchTerm &&
defaultView == "summary" && (
<div className="scrollbar-container flex size-full flex-col overflow-y-auto">
<ExploreView
searchDetail={searchDetail}

View File

@@ -11,11 +11,17 @@ import { usePersistence } from "@/hooks/use-persistence";
import { Skeleton } from "@/components/ui/skeleton";
import { useCameraActivity } from "@/hooks/use-camera-activity";
import { Tabs, TabsContent, TabsList, TabsTrigger } from "@/components/ui/tabs";
import {
Popover,
PopoverContent,
PopoverTrigger,
} from "@/components/ui/popover";
import { ObjectType } from "@/types/ws";
import useDeepMemo from "@/hooks/use-deep-memo";
import { Card } from "@/components/ui/card";
import { getIconForLabel } from "@/utils/iconUtil";
import { capitalizeFirstLetter } from "@/utils/stringUtil";
import { LuInfo } from "react-icons/lu";
type ObjectSettingsViewProps = {
selectedCamera?: string;
@@ -35,6 +41,30 @@ export default function ObjectSettingsView({
param: "bbox",
title: "Bounding boxes",
description: "Show bounding boxes around tracked objects",
info: (
<>
<p className="mb-2">
<strong>Object Bounding Box Colors</strong>
</p>
<ul className="list-disc space-y-1 pl-5">
<li>
At startup, different colors will be assigned to each object label
</li>
<li>
A thin dark blue line indicates that the object is not detected at
this point in time
</li>
<li>
A thin gray line indicates that the object is detected as
stationary
</li>
<li>
A thick line indicates that the object is the subject of
autotracking (when enabled)
</li>
</ul>
</>
),
},
{
param: "timestamp",
@@ -55,12 +85,34 @@ export default function ObjectSettingsView({
param: "motion",
title: "Motion boxes",
description: "Show boxes around areas where motion is detected",
info: (
<>
<p className="mb-2">
<strong>Motion Boxes</strong>
</p>
<p>
Red boxes will be overlaid on areas of the frame where motion is
currently being detected
</p>
</>
),
},
{
param: "regions",
title: "Regions",
description:
"Show a box of the region of interest sent to the object detector",
info: (
<>
<p className="mb-2">
<strong>Region Boxes</strong>
</p>
<p>
Bright green boxes will be overlaid on areas of interest in the
frame that are being sent to the object detector.
</p>
</>
),
},
];
@@ -145,19 +197,34 @@ export default function ObjectSettingsView({
<div className="flex w-full flex-col space-y-6">
<div className="mt-2 space-y-6">
<div className="my-2.5 flex flex-col gap-2.5">
{DEBUG_OPTIONS.map(({ param, title, description }) => (
{DEBUG_OPTIONS.map(({ param, title, description, info }) => (
<div
key={param}
className="flex w-full flex-row items-center justify-between"
>
<div className="mb-2 flex flex-col">
<Label
className="mb-2 w-full cursor-pointer capitalize text-primary"
htmlFor={param}
>
{title}
</Label>
<div className="text-xs text-muted-foreground">
<div className="flex items-center gap-2">
<Label
className="mb-0 cursor-pointer capitalize text-primary"
htmlFor={param}
>
{title}
</Label>
{info && (
<Popover>
<PopoverTrigger asChild>
<div className="cursor-pointer p-0">
<LuInfo className="size-4" />
<span className="sr-only">Info</span>
</div>
</PopoverTrigger>
<PopoverContent className="w-80">
{info}
</PopoverContent>
</Popover>
)}
</div>
<div className="mt-1 text-xs text-muted-foreground">
{description}
</div>
</div>
@@ -240,7 +307,7 @@ function ObjectList(objects?: ObjectType[]) {
{getIconForLabel(obj.label, "size-5 text-white")}
</div>
<div className="ml-3 text-lg">
{capitalizeFirstLetter(obj.label)}
{capitalizeFirstLetter(obj.label.replaceAll("_", " "))}
</div>
</div>
<div className="flex w-8/12 flex-row items-end justify-end">

View File

@@ -0,0 +1,291 @@
import Heading from "@/components/ui/heading";
import { FrigateConfig, SearchModelSize } from "@/types/frigateConfig";
import useSWR from "swr";
import axios from "axios";
import ActivityIndicator from "@/components/indicators/activity-indicator";
import { useCallback, useContext, useEffect, useState } from "react";
import { Label } from "@/components/ui/label";
import { Button } from "@/components/ui/button";
import { Switch } from "@/components/ui/switch";
import { Toaster } from "@/components/ui/sonner";
import { toast } from "sonner";
import { Separator } from "@/components/ui/separator";
import { Link } from "react-router-dom";
import { LuExternalLink } from "react-icons/lu";
import { StatusBarMessagesContext } from "@/context/statusbar-provider";
import {
Select,
SelectContent,
SelectGroup,
SelectItem,
SelectTrigger,
} from "@/components/ui/select";
type SearchSettingsViewProps = {
setUnsavedChanges: React.Dispatch<React.SetStateAction<boolean>>;
};
type SearchSettings = {
enabled?: boolean;
reindex?: boolean;
model_size?: SearchModelSize;
};
export default function SearchSettingsView({
setUnsavedChanges,
}: SearchSettingsViewProps) {
const { data: config, mutate: updateConfig } =
useSWR<FrigateConfig>("config");
const [changedValue, setChangedValue] = useState(false);
const [isLoading, setIsLoading] = useState(false);
const { addMessage, removeMessage } = useContext(StatusBarMessagesContext)!;
const [searchSettings, setSearchSettings] = useState<SearchSettings>({
enabled: undefined,
reindex: undefined,
model_size: undefined,
});
const [origSearchSettings, setOrigSearchSettings] = useState<SearchSettings>({
enabled: undefined,
reindex: undefined,
model_size: undefined,
});
useEffect(() => {
if (config) {
if (searchSettings?.enabled == undefined) {
setSearchSettings({
enabled: config.semantic_search.enabled,
reindex: config.semantic_search.reindex,
model_size: config.semantic_search.model_size,
});
}
setOrigSearchSettings({
enabled: config.semantic_search.enabled,
reindex: config.semantic_search.reindex,
model_size: config.semantic_search.model_size,
});
}
// we know that these deps are correct
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [config]);
const handleSearchConfigChange = (newConfig: Partial<SearchSettings>) => {
setSearchSettings((prevConfig) => ({ ...prevConfig, ...newConfig }));
setUnsavedChanges(true);
setChangedValue(true);
};
const saveToConfig = useCallback(async () => {
setIsLoading(true);
axios
.put(
`config/set?semantic_search.enabled=${searchSettings.enabled ? "True" : "False"}&semantic_search.reindex=${searchSettings.reindex ? "True" : "False"}&semantic_search.model_size=${searchSettings.model_size}`,
{
requires_restart: 0,
},
)
.then((res) => {
if (res.status === 200) {
toast.success("Search settings have been saved.", {
position: "top-center",
});
setChangedValue(false);
updateConfig();
} else {
toast.error(`Failed to save config changes: ${res.statusText}`, {
position: "top-center",
});
}
})
.catch((error) => {
toast.error(
`Failed to save config changes: ${error.response.data.message}`,
{ position: "top-center" },
);
})
.finally(() => {
setIsLoading(false);
});
}, [
updateConfig,
searchSettings.enabled,
searchSettings.reindex,
searchSettings.model_size,
]);
const onCancel = useCallback(() => {
setSearchSettings(origSearchSettings);
setChangedValue(false);
removeMessage("search_settings", "search_settings");
}, [origSearchSettings, removeMessage]);
useEffect(() => {
if (changedValue) {
addMessage(
"search_settings",
`Unsaved search settings changes`,
undefined,
"search_settings",
);
} else {
removeMessage("search_settings", "search_settings");
}
// we know that these deps are correct
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [changedValue]);
useEffect(() => {
document.title = "Search Settings - Frigate";
}, []);
if (!config) {
return <ActivityIndicator />;
}
return (
<div className="flex size-full flex-col md:flex-row">
<Toaster position="top-center" closeButton={true} />
<div className="scrollbar-container order-last mb-10 mt-2 flex h-full w-full flex-col overflow-y-auto rounded-lg border-[1px] border-secondary-foreground bg-background_alt p-2 md:order-none md:mb-0 md:mr-2 md:mt-0">
<Heading as="h3" className="my-2">
Search Settings
</Heading>
<Separator className="my-2 flex bg-secondary" />
<Heading as="h4" className="my-2">
Semantic Search
</Heading>
<div className="max-w-6xl">
<div className="mb-5 mt-2 flex max-w-5xl flex-col gap-2 text-sm text-primary-variant">
<p>
Semantic Search in Frigate allows you to find tracked objects
within your review items using either the image itself, a
user-defined text description, or an automatically generated one.
</p>
<div className="flex items-center text-primary">
<Link
to="https://docs.frigate.video/configuration/semantic_search"
target="_blank"
rel="noopener noreferrer"
className="inline"
>
Read the Documentation
<LuExternalLink className="ml-2 inline-flex size-3" />
</Link>
</div>
</div>
</div>
<div className="flex w-full max-w-lg flex-col space-y-6">
<div className="flex flex-row items-center">
<Switch
id="enabled"
className="mr-3"
disabled={searchSettings.enabled === undefined}
checked={searchSettings.enabled === true}
onCheckedChange={(isChecked) => {
handleSearchConfigChange({ enabled: isChecked });
}}
/>
<div className="space-y-0.5">
<Label htmlFor="enabled">Enabled</Label>
</div>
</div>
<div className="flex flex-col">
<div className="flex flex-row items-center">
<Switch
id="reindex"
className="mr-3"
disabled={searchSettings.reindex === undefined}
checked={searchSettings.reindex === true}
onCheckedChange={(isChecked) => {
handleSearchConfigChange({ reindex: isChecked });
}}
/>
<div className="space-y-0.5">
<Label htmlFor="reindex">Re-Index On Startup</Label>
</div>
</div>
<div className="mt-3 text-sm text-muted-foreground">
Re-indexing will reprocess all thumbnails and descriptions (if
enabled) and apply the embeddings on each startup.{" "}
<em>Don't forget to disable the option after restarting!</em>
</div>
</div>
<div className="mt-2 flex flex-col space-y-6">
<div className="space-y-0.5">
<div className="text-md">Model Size</div>
<div className="space-y-1 text-sm text-muted-foreground">
<p>
The size of the model used for semantic search embeddings.
</p>
<ul className="list-disc pl-5 text-sm">
<li>
Using <em>small</em> employs a quantized version of the model
that uses less RAM and runs faster on CPU, with a negligible
difference in embedding quality.
</li>
<li>
Using <em>large</em> employs the full Jina model and will
automatically run on the GPU when one is available.
</li>
</ul>
</div>
</div>
<Select
value={searchSettings.model_size}
onValueChange={(value) =>
handleSearchConfigChange({
model_size: value as SearchModelSize,
})
}
>
<SelectTrigger className="w-20">
{searchSettings.model_size}
</SelectTrigger>
<SelectContent>
<SelectGroup>
{["small", "large"].map((size) => (
<SelectItem
key={size}
className="cursor-pointer"
value={size}
>
{size}
</SelectItem>
))}
</SelectGroup>
</SelectContent>
</Select>
</div>
</div>
<Separator className="my-2 flex bg-secondary" />
<div className="flex w-full flex-row items-center gap-2 pt-2 md:w-[25%]">
<Button className="flex flex-1" onClick={onCancel}>
Reset
</Button>
<Button
variant="select"
disabled={!changedValue || isLoading}
className="flex flex-1"
onClick={saveToConfig}
>
{isLoading ? (
<div className="flex flex-row items-center gap-2">
<ActivityIndicator />
<span>Saving...</span>
</div>
) : (
"Save"
)}
</Button>
</div>
</div>
</div>
);
}
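The saveToConfig callback above illustrates the pattern this view uses to persist settings: dotted config keys are flattened into query parameters on a PUT to config/set, and requires_restart: 0 asks Frigate to apply the change without a restart. A hedged, generic sketch of that pattern follows; the putConfig helper name is an assumption, not part of this diff:

import axios from "axios";

// Flatten dotted config keys into the query string, as saveToConfig
// does by hand above. URLSearchParams leaves "." and "_" unencoded,
// so the keys arrive at the endpoint intact.
async function putConfig(updates: Record<string, string>): Promise<void> {
  const params = new URLSearchParams(updates).toString();
  // requires_restart: 0 applies the change without restarting Frigate.
  await axios.put(`config/set?${params}`, { requires_restart: 0 });
}

// Equivalent to the request saveToConfig builds for these settings:
putConfig({
  "semantic_search.enabled": "True",
  "semantic_search.reindex": "False",
  "semantic_search.model_size": "small",
});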

View File

@@ -22,7 +22,7 @@ import {
const PLAYBACK_RATE_DEFAULT = isSafari ? [0.5, 1, 2] : [0.5, 1, 2, 4, 8, 16];
const WEEK_STARTS_ON = ["Sunday", "Monday"];
- export default function GeneralSettingsView() {
+ export default function UiSettingsView() {
const { data: config } = useSWR<FrigateConfig>("config");
const clearStoredLayouts = useCallback(() => {