Compare commits

...

37 Commits

Author SHA1 Message Date
dependabot[bot]
f39322566e Update scipy requirement from ==1.13.* to ==1.15.* in /docker/main
Updates the requirements on [scipy](https://github.com/scipy/scipy) to permit the latest version.
- [Release notes](https://github.com/scipy/scipy/releases)
- [Commits](https://github.com/scipy/scipy/compare/v1.13.0rc1...v1.15.0)

---
updated-dependencies:
- dependency-name: scipy
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-01-06 11:31:30 +00:00
Nicolas Mowen
e7ad38d827 Update model docs (#15779) 2025-01-02 10:04:16 -06:00
Josh Hawkins
a1ce9aacf2 Tracked object details pane bugfix (#15736)
* restore save button in tracked object details pane

* conditionally show save button
2024-12-30 08:23:25 -06:00
Nicolas Mowen
322b847356 Fix event cleanup (#15724) 2024-12-29 14:47:40 -06:00
Josh Hawkins
98338e4c7f Ensure object lifecycle ratio is re-normalized to camera aspect (#15717) 2024-12-28 13:37:39 -07:00
Josh Hawkins
171a89f37b Language consistency - use Explore instead of Search (#15709) 2024-12-27 17:38:43 -07:00
Josh Hawkins
8114b541a8 Sort camera group edit screen by ui config values (#15705) 2024-12-27 14:30:27 -06:00
Josh Hawkins
c48396c5c6 Fix crash when streams are undefined in go2rtc config password cleaning (#15695) 2024-12-27 08:36:21 -06:00
leccelecce
00371546a3 GenAI: add ability to save JPGs sent to provider (#15643)
* GenAI: add ability to save JPGs sent to provider

* Remove mention from GenAI docs

* Change config name to debug_save_thumbnails

* Change folder structure to clips/genai-requests/{event_id}/{1.jpg}
2024-12-23 07:05:34 -07:00
Nicolas Mowen
87e7b62c85 Remove duplicated rockchip build (#15641) 2024-12-22 13:31:14 -06:00
Nicolas Mowen
15ffe5c254 Fix trt (#15640) 2024-12-22 11:56:04 -07:00
Nicolas Mowen
a767dad3a1 Simplify TensorRT image (#15638) 2024-12-22 12:13:29 -06:00
Josh Hawkins
9387246f83 Add tooltips to ptz controls (#15633) 2024-12-21 17:57:22 -06:00
Nicolas Mowen
bed20de302 Update docs deps (#15617) 2024-12-20 10:37:02 -06:00
Nicolas Mowen
70fc5393b1 Make hailo wheels support any minor version (#15616) 2024-12-20 10:36:32 -06:00
dependabot[bot]
9b80dbe014 Bump actions/setup-python from 5.1.0 to 5.3.0 (#14584)
Bumps [actions/setup-python](https://github.com/actions/setup-python) from 5.1.0 to 5.3.0.
- [Release notes](https://github.com/actions/setup-python/releases)
- [Commits](https://github.com/actions/setup-python/compare/v5.1.0...v5.3.0)

---
updated-dependencies:
- dependency-name: actions/setup-python
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-12-20 09:16:21 -07:00
Josh Hawkins
78a013d63a Add "frame" to shm frame names to avoid camera name issues (#15615) 2024-12-20 08:46:40 -06:00
Gabriel de Biasi
ddfe8f3921 Fix #7944: Adds tls_insecure to the onvif configuration (#15603)
* Adds tls_insecure to the onvif configuration

* reformat using ruff
2024-12-19 12:54:33 -07:00
Nicolas Mowen
4af752028f Bug Fixes (#15598)
* Catch onvif command error

* fix review item pre and post capture

* Include severity in query
2024-12-19 09:46:14 -06:00
Nicolas Mowen
b149828c9f Catch OS error (#15590) 2024-12-18 17:45:08 -06:00
Josh Hawkins
3dc26e78ef Genai descriptions are not generated until tracked objects end (#15561) 2024-12-17 17:33:04 -06:00
Giorgio Ughini
d9ef8fa206 Fix always the same image is sent to GenAI (#15550)
* Fix always the same image is sent to GenAI

* Fix typo for bug where identical images are sent to GenAI

* Correct formatting
2024-12-17 07:44:00 -06:00
Josh Hawkins
292499aebc Improve review message again (#15538) 2024-12-16 09:18:34 -07:00
Josh Hawkins
717493e668 Improve handling of error conditions with ollama and snapshot regeneration (#15527) 2024-12-15 20:51:23 -06:00
Josh Hawkins
d49f958d4d Don't crop by region for genai snapshot for manual events (#15525) 2024-12-15 17:03:19 -06:00
Nicolas Mowen
33ee32865f Ensure that go2rtc streams are cleaned (#15524)
* Ensure that go2rtc streams are cleaned

* Formatting

* Handle go2rtc config correctly

* Set type
2024-12-15 16:56:24 -06:00
Josh Hawkins
17f8939f97 Add FAQ to explain why streams might work in VLC but not in Frigate (#15513)
* Add faq to explain why streams might work in VLC but not in Frigate

* fix go2rtc version number

* wording

* mention udp input args and preset
2024-12-14 13:58:39 -06:00
FL42
1b7fe9523d fix: use requests.Session() for DeepStack API (#15505) 2024-12-14 07:54:13 -07:00
Josh Hawkins
0763f56047 Update iframe interval recommendation (#15501)
* Update iframe interval recommendation

* clarify

* tweaks

* wording
2024-12-13 12:52:56 -07:00
Josh Hawkins
1ea282fba8 Improve the message for missing objects in review items (#15500) 2024-12-13 12:02:41 -07:00
Blake Blackshear
869fa2631e apply zizmor recommendations (#15490) 2024-12-13 07:34:09 -06:00
Nicolas Mowen
f336a91fee Cleanup handling of first object message (#15480) 2024-12-12 21:22:47 -06:00
Nicolas Mowen
d302b6e198 Cap storage bandwidth (#15473) 2024-12-12 14:46:00 -06:00
Nicolas Mowen
ed2e1f3f72 Remove debug cleanup change (#15468) 2024-12-12 07:46:06 -07:00
Nicolas Mowen
b4d82084a9 Fixes (#15465)
* Fix single event return

* Allow customizing if search is preserved for overlay state

* Remove timeout

* Cleanup

* Cleanup naming
2024-12-12 08:22:30 -06:00
Josh Hawkins
53b96dfb89 Improve semantic search docs (#15453) 2024-12-11 20:19:08 -06:00
Nicolas Mowen
0e3fb6cbdd Standardize handling of config files (#15451)
* Standardize handling of config files

* Formatting

* Remove unused
2024-12-11 18:46:42 -06:00
51 changed files with 5516 additions and 2360 deletions

View File

@@ -7,7 +7,7 @@ on:
- dev
- master
paths-ignore:
- 'docs/**'
- "docs/**"
# only run the latest commit to avoid cache overwrites
concurrency:
@@ -24,6 +24,8 @@ jobs:
steps:
- name: Check out code
uses: actions/checkout@v4
with:
persist-credentials: false
- name: Set up QEMU and Buildx
id: setup
uses: ./.github/actions/setup
@@ -45,6 +47,8 @@ jobs:
steps:
- name: Check out code
uses: actions/checkout@v4
with:
persist-credentials: false
- name: Set up QEMU and Buildx
id: setup
uses: ./.github/actions/setup
@@ -71,21 +75,14 @@ jobs:
rpi.tags=${{ steps.setup.outputs.image-name }}-rpi
*.cache-from=type=registry,ref=${{ steps.setup.outputs.cache-name }}-arm64
*.cache-to=type=registry,ref=${{ steps.setup.outputs.cache-name }}-arm64,mode=max
- name: Build and push Rockchip build
uses: docker/bake-action@v3
with:
push: true
targets: rk
files: docker/rockchip/rk.hcl
set: |
rk.tags=${{ steps.setup.outputs.image-name }}-rk
*.cache-from=type=gha
jetson_jp4_build:
runs-on: ubuntu-latest
name: Jetson Jetpack 4
steps:
- name: Check out code
uses: actions/checkout@v4
with:
persist-credentials: false
- name: Set up QEMU and Buildx
id: setup
uses: ./.github/actions/setup
@@ -112,6 +109,8 @@ jobs:
steps:
- name: Check out code
uses: actions/checkout@v4
with:
persist-credentials: false
- name: Set up QEMU and Buildx
id: setup
uses: ./.github/actions/setup
@@ -140,6 +139,8 @@ jobs:
steps:
- name: Check out code
uses: actions/checkout@v4
with:
persist-credentials: false
- name: Set up QEMU and Buildx
id: setup
uses: ./.github/actions/setup
@@ -165,6 +166,8 @@ jobs:
steps:
- name: Check out code
uses: actions/checkout@v4
with:
persist-credentials: false
- name: Set up QEMU and Buildx
id: setup
uses: ./.github/actions/setup
@@ -188,6 +191,8 @@ jobs:
steps:
- name: Check out code
uses: actions/checkout@v4
with:
persist-credentials: false
- name: Set up QEMU and Buildx
id: setup
uses: ./.github/actions/setup

View File

@@ -1,24 +0,0 @@
name: dependabot-auto-merge
on: pull_request
permissions:
contents: write
jobs:
dependabot-auto-merge:
runs-on: ubuntu-latest
if: github.actor == 'dependabot[bot]'
steps:
- name: Get Dependabot metadata
id: metadata
uses: dependabot/fetch-metadata@v2
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
- name: Enable auto-merge for Dependabot PRs
if: steps.metadata.outputs.dependency-type == 'direct:development' && (steps.metadata.outputs.update-type == 'version-update:semver-minor' || steps.metadata.outputs.update-type == 'version-update:semver-patch')
run: |
gh pr review --approve "$PR_URL"
gh pr merge --auto --squash "$PR_URL"
env:
PR_URL: ${{ github.event.pull_request.html_url }}
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

View File

@@ -3,7 +3,7 @@ name: On pull request
on:
pull_request:
paths-ignore:
- 'docs/**'
- "docs/**"
env:
DEFAULT_PYTHON: 3.9
@@ -19,6 +19,8 @@ jobs:
DOCKER_BUILDKIT: "1"
steps:
- uses: actions/checkout@v4
with:
persist-credentials: false
- uses: actions/setup-node@master
with:
node-version: 16.x
@@ -38,6 +40,8 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
with:
persist-credentials: false
- uses: actions/setup-node@master
with:
node-version: 16.x
@@ -52,6 +56,8 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
with:
persist-credentials: false
- uses: actions/setup-node@master
with:
node-version: 20.x
@@ -67,8 +73,10 @@ jobs:
steps:
- name: Check out the repository
uses: actions/checkout@v4
with:
persist-credentials: false
- name: Set up Python ${{ env.DEFAULT_PYTHON }}
uses: actions/setup-python@v5.1.0
uses: actions/setup-python@v5.3.0
with:
python-version: ${{ env.DEFAULT_PYTHON }}
- name: Install requirements
@@ -88,6 +96,8 @@ jobs:
steps:
- name: Check out code
uses: actions/checkout@v4
with:
persist-credentials: false
- uses: actions/setup-node@master
with:
node-version: 16.x

View File

@@ -11,6 +11,8 @@ jobs:
steps:
- uses: actions/checkout@v4
with:
persist-credentials: false
- id: lowercaseRepo
uses: ASzc/change-string-case-action@v6
with:
@@ -22,10 +24,13 @@ jobs:
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Create tag variables
env:
TAG: ${{ github.ref_name }}
LOWERCASE_REPO: ${{ steps.lowercaseRepo.outputs.lowercase }}
run: |
BUILD_TYPE=$([[ "${{ github.ref_name }}" =~ ^v[0-9]+\.[0-9]+\.[0-9]+$ ]] && echo "stable" || echo "beta")
BUILD_TYPE=$([[ "${TAG}" =~ ^v[0-9]+\.[0-9]+\.[0-9]+$ ]] && echo "stable" || echo "beta")
echo "BUILD_TYPE=${BUILD_TYPE}" >> $GITHUB_ENV
echo "BASE=ghcr.io/${{ steps.lowercaseRepo.outputs.lowercase }}" >> $GITHUB_ENV
echo "BASE=ghcr.io/${LOWERCASE_REPO}" >> $GITHUB_ENV
echo "BUILD_TAG=${GITHUB_SHA::7}" >> $GITHUB_ENV
echo "CLEAN_VERSION=$(echo ${GITHUB_REF##*/} | tr '[:upper:]' '[:lower:]' | sed 's/^[v]//')" >> $GITHUB_ENV
- name: Tag and push the main image

View File

@@ -23,7 +23,9 @@ jobs:
exempt-pr-labels: "pinned,security,dependencies"
operations-per-run: 120
- name: Print outputs
run: echo ${{ join(steps.stale.outputs.*, ',') }}
env:
STALE_OUTPUT: ${{ join(steps.stale.outputs.*, ',') }}
run: echo "$STALE_OUTPUT"
# clean_ghcr:
# name: Delete outdated dev container images
@@ -38,4 +40,3 @@ jobs:
# account-type: personal
# token: ${{ secrets.GITHUB_TOKEN }}
# token-type: github-token

View File

@@ -1,12 +1,12 @@
appdirs==1.4.4
argcomplete==2.0.0
contextlib2==0.6.0.post1
distlib==0.3.6
filelock==3.8.0
future==0.18.2
importlib-metadata==5.1.0
importlib-resources==5.1.2
netaddr==0.8.0
netifaces==0.10.9
verboselogs==1.7
virtualenv==20.17.0
appdirs==1.4.*
argcomplete==2.0.*
contextlib2==0.6.*
distlib==0.3.*
filelock==3.8.*
future==0.18.*
importlib-metadata==5.1.*
importlib-resources==5.1.*
netaddr==0.8.*
netifaces==0.10.*
verboselogs==1.7.*
virtualenv==20.17.*

View File

@@ -27,7 +27,7 @@ ruamel.yaml == 0.18.*
tzlocal == 5.2
requests == 2.32.*
types-requests == 2.32.*
scipy == 1.13.*
scipy == 1.15.*
norfair == 2.2.*
setproctitle == 1.3.*
ws4py == 0.5.*

View File

@@ -12,26 +12,11 @@ ARG TARGETARCH
COPY docker/tensorrt/requirements-amd64.txt /requirements-tensorrt.txt
RUN mkdir -p /trt-wheels && pip3 wheel --wheel-dir=/trt-wheels -r /requirements-tensorrt.txt
# Build CuDNN
FROM wget AS cudnn-deps
ARG COMPUTE_LEVEL
RUN apt-get update \
&& apt-get install -y git build-essential
RUN wget https://developer.download.nvidia.com/compute/cuda/repos/debian11/x86_64/cuda-keyring_1.1-1_all.deb \
&& dpkg -i cuda-keyring_1.1-1_all.deb \
&& apt-get update \
&& apt-get -y install cuda-toolkit \
&& rm -rf /var/lib/apt/lists/*
FROM tensorrt-base AS frigate-tensorrt
ENV TRT_VER=8.5.3
RUN --mount=type=bind,from=trt-wheels,source=/trt-wheels,target=/deps/trt-wheels \
pip3 install -U /deps/trt-wheels/*.whl && \
ldconfig
COPY --from=cudnn-deps /usr/local/cuda-12.6 /usr/local/cuda
ENV LD_LIBRARY_PATH=/usr/local/lib/python3.9/dist-packages/tensorrt:/usr/local/cuda/lib64:/usr/local/lib/python3.9/dist-packages/nvidia/cufft/lib
WORKDIR /opt/frigate/
@@ -42,7 +27,7 @@ FROM devcontainer AS devcontainer-trt
COPY --from=trt-deps /usr/local/lib/libyolo_layer.so /usr/local/lib/libyolo_layer.so
COPY --from=trt-deps /usr/local/src/tensorrt_demos /usr/local/src/tensorrt_demos
COPY --from=cudnn-deps /usr/local/cuda-12.6 /usr/local/cuda
COPY --from=trt-deps /usr/local/cuda-12.1 /usr/local/cuda
COPY docker/tensorrt/detector/rootfs/ /
COPY --from=trt-deps /usr/local/lib/libyolo_layer.so /usr/local/lib/libyolo_layer.so
RUN --mount=type=bind,from=trt-wheels,source=/trt-wheels,target=/deps/trt-wheels \

View File

@@ -24,6 +24,7 @@ ENV S6_CMD_WAIT_FOR_SERVICES_MAXTIME=0
COPY --from=trt-deps /usr/local/lib/libyolo_layer.so /usr/local/lib/libyolo_layer.so
COPY --from=trt-deps /usr/local/src/tensorrt_demos /usr/local/src/tensorrt_demos
COPY --from=trt-deps /usr/local/cuda-12.* /usr/local/cuda
COPY docker/tensorrt/detector/rootfs/ /
ENV YOLO_MODELS=""

View File

@@ -174,7 +174,7 @@ NOTE: The folder that is set for the config needs to be the folder that contains
### Custom go2rtc version
Frigate currently includes go2rtc v1.9.4, there may be certain cases where you want to run a different version of go2rtc.
Frigate currently includes go2rtc v1.9.2, there may be certain cases where you want to run a different version of go2rtc.
To do this:

View File

@@ -41,6 +41,7 @@ cameras:
...
onvif:
# Required: host of the camera being connected to.
# NOTE: HTTP is assumed by default; HTTPS is supported if you specify the scheme, ex: "https://0.0.0.0".
host: 0.0.0.0
# Optional: ONVIF port for device (default: shown below).
port: 8000
@@ -49,6 +50,8 @@ cameras:
user: admin
# Optional: password for login.
password: admin
# Optional: Skip TLS verification from the ONVIF server (default: shown below)
tls_insecure: False
# Optional: PTZ camera object autotracking. Keeps a moving object in
# the center of the frame by automatically moving the PTZ camera.
autotracking:

View File

@@ -5,6 +5,8 @@ title: Generative AI
Generative AI can be used to automatically generate descriptive text based on the thumbnails of your tracked objects. This helps with [Semantic Search](/configuration/semantic_search) in Frigate to provide more context about your tracked objects. Descriptions are accessed via the _Explore_ view in the Frigate UI by clicking on a tracked object's thumbnail.
Requests for a description are sent off automatically to your AI provider at the end of the tracked object's lifecycle. Descriptions can also be regenerated manually via the Frigate UI.
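
To illustrate the `debug_save_thumbnails` option introduced in this compare (#15643), here is a minimal camera-level sketch; the camera name is a placeholder, and the saved JPGs land under `clips/genai-requests/{event_id}/`:

```yaml
cameras:
  front_door: # placeholder camera name
    genai:
      enabled: True
      # new option from #15643: save the JPGs sent to the provider
      # under clips/genai-requests/{event_id}/ for debugging
      debug_save_thumbnails: True
```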
:::info
Semantic Search must be enabled to use Generative AI.

View File

@@ -23,7 +23,7 @@ If you are using go2rtc, you should adjust the following settings in your camera
- Video codec: **H.264** - provides the most compatible video codec with all Live view technologies and browsers. Avoid any kind of "smart codec" or "+" codec like _H.264+_ or _H.265+_, as these non-standard codecs remove keyframes (see below).
- Audio codec: **AAC** - provides the most compatible audio codec with all Live view technologies and browsers that support audio.
- I-frame interval (sometimes called the keyframe interval, the interframe space, or the GOP length): match your camera's frame rate, or choose "1x" (for interframe space on Reolink cameras). For example, if your stream outputs 20fps, your i-frame interval should be 20 (or 1x on Reolink). Values higher than the frame rate will cause the stream to take longer to begin playback. See [this page](https://gardinal.net/understanding-the-keyframe-interval/) for more on keyframes.
- I-frame interval (sometimes called the keyframe interval, the interframe space, or the GOP length): match your camera's frame rate, or choose "1x" (for interframe space on Reolink cameras). For example, if your stream outputs 20fps, your i-frame interval should be 20 (or 1x on Reolink). Values higher than the frame rate will cause the stream to take longer to begin playback. See [this page](https://gardinal.net/understanding-the-keyframe-interval/) for more on keyframes. For many users this may not be an issue, but it should be noted that a 1x i-frame interval will cause more storage utilization if you are using the stream for the `record` role as well.
The default video and audio codec on your camera may not always be compatible with your browser, which is why setting them to H.264 and AAC is recommended. See the [go2rtc docs](https://github.com/AlexxIT/go2rtc?tab=readme-ov-file#codecs-madness) for codec support information.
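
Where a camera cannot output these codecs directly, go2rtc can re-encode the stream; a minimal sketch assuming go2rtc's `ffmpeg:` source syntax, with a placeholder stream name and URL:

```yaml
go2rtc:
  streams:
    back: # placeholder stream name
      - rtsp://user:pass@192.168.1.10:554/stream # placeholder URL
      # re-encode to the recommended H.264 video and AAC audio
      - "ffmpeg:back#video=h264#audio=aac"
```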

View File

@@ -144,7 +144,9 @@ detectors:
#### SSDLite MobileNet v2
An OpenVINO model is provided in the container at `/openvino-model/ssdlite_mobilenet_v2.xml` and is used by this detector type by default. The model comes from Intel's Open Model Zoo [SSDLite MobileNet V2](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/ssdlite_mobilenet_v2) and is converted to an FP16 precision IR model. Use the model configuration shown below when using the OpenVINO detector with the default model.
An OpenVINO model is provided in the container at `/openvino-model/ssdlite_mobilenet_v2.xml` and is used by this detector type by default. The model comes from Intel's Open Model Zoo [SSDLite MobileNet V2](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/ssdlite_mobilenet_v2) and is converted to an FP16 precision IR model.
Use the model configuration shown below when using the OpenVINO detector with the default OpenVINO model:
```yaml
detectors:
@@ -254,6 +256,7 @@ yolov4x-mish-640
yolov7-tiny-288
yolov7-tiny-416
yolov7-640
yolov7-416
yolov7-320
yolov7x-640
yolov7x-320
@@ -282,6 +285,8 @@ The TensorRT detector can be selected by specifying `tensorrt` as the model type
The TensorRT detector uses `.trt` model files that are located in `/config/model_cache/tensorrt` by default. These model path and dimensions used will depend on which model you have generated.
Use the config below to work with generated TRT models:
```yaml
detectors:
tensorrt:

View File

@@ -117,25 +117,27 @@ auth:
hash_iterations: 600000
# Optional: model modifications
# NOTE: The default values are for the EdgeTPU detector.
# Other detectors will require the model config to be set.
model:
# Optional: path to the model (default: automatic based on detector)
# Required: path to the model (default: automatic based on detector)
path: /edgetpu_model.tflite
# Optional: path to the labelmap (default: shown below)
# Required: path to the labelmap (default: shown below)
labelmap_path: /labelmap.txt
# Required: Object detection model input width (default: shown below)
width: 320
# Required: Object detection model input height (default: shown below)
height: 320
# Optional: Object detection model input colorspace
# Required: Object detection model input colorspace
# Valid values are rgb, bgr, or yuv. (default: shown below)
input_pixel_format: rgb
# Optional: Object detection model input tensor format
# Required: Object detection model input tensor format
# Valid values are nhwc or nchw (default: shown below)
input_tensor: nhwc
# Optional: Object detection model type, currently only used with the OpenVINO detector
# Required: Object detection model type, currently only used with the OpenVINO detector
# Valid values are ssd, yolox, yolonas (default: shown below)
model_type: ssd
# Optional: Label name modifications. These are merged into the standard labelmap.
# Required: Label name modifications. These are merged into the standard labelmap.
labelmap:
2: vehicle
# Optional: Map of object labels to their attribute labels (default: depends on model)
@@ -686,6 +688,7 @@ cameras:
# to enable PTZ controls.
onvif:
# Required: host of the camera being connected to.
# NOTE: HTTP is assumed by default; HTTPS is supported if you specify the scheme, ex: "https://0.0.0.0".
host: 0.0.0.0
# Optional: ONVIF port for device (default: shown below).
port: 8000
@@ -694,6 +697,8 @@ cameras:
user: admin
# Optional: password for login.
password: admin
# Optional: Skip TLS verification from the ONVIF server (default: shown below)
tls_insecure: False
# Optional: Ignores time synchronization mismatches between the camera and the server during authentication.
# Using NTP on both ends is recommended and this should only be set to True in a "safe" environment due to the security risk it represents.
ignore_time_mismatch: False
@@ -757,6 +762,8 @@ cameras:
- cat
# Optional: Restrict generation to objects that entered any of the listed zones (default: none, all zones qualify)
required_zones: []
# Optional: Save thumbnails sent to generative AI for review/debugging purposes (default: shown below)
debug_save_thumbnails: False
# Optional
ui:

View File

@@ -5,7 +5,7 @@ title: Using Semantic Search
Semantic Search in Frigate allows you to find tracked objects within your review items using either the image itself, a user-defined text description, or an automatically generated one. This feature works by creating _embeddings_ — numerical vector representations — for both the images and text descriptions of your tracked objects. By comparing these embeddings, Frigate assesses their similarities to deliver relevant search results.
Frigate has support for [Jina AI's CLIP model](https://huggingface.co/jinaai/jina-clip-v1) to create embeddings, which runs locally. Embeddings are then saved to Frigate's database.
Frigate uses [Jina AI's CLIP model](https://huggingface.co/jinaai/jina-clip-v1) to create and save embeddings to Frigate's database. All of this runs locally.
Semantic Search is accessed via the _Explore_ view in the Frigate UI.
@@ -19,7 +19,7 @@ For best performance, 16GB or more of RAM and a dedicated GPU are recommended.
## Configuration
Semantic Search is disabled by default, and must be enabled in your config file before it can be used. Semantic Search is a global configuration setting.
Semantic Search is disabled by default, and must be enabled in your config file or in the UI's Settings page before it can be used. Semantic Search is a global configuration setting.
```yaml
semantic_search:
@@ -29,9 +29,9 @@ semantic_search:
:::tip
The embeddings database can be re-indexed from the existing tracked objects in your database by adding `reindex: True` to your `semantic_search` configuration. Depending on the number of tracked objects you have, it can take a long while to complete and may max out your CPU while indexing. Make sure to set the config back to `False` before restarting Frigate again.
The embeddings database can be re-indexed from the existing tracked objects in your database by adding `reindex: True` to your `semantic_search` configuration or by toggling the switch on the Search Settings page in the UI and restarting Frigate. Depending on the number of tracked objects you have, it can take a long while to complete and may max out your CPU while indexing. Make sure to turn the UI's switch off or set the config back to `False` before restarting Frigate again.
If you are enabling the Search feature for the first time, be advised that Frigate does not automatically index older tracked objects. You will need to enable the `reindex` feature in order to do that.
If you are enabling Semantic Search for the first time, be advised that Frigate does not automatically index older tracked objects. You will need to enable the `reindex` feature in order to do that.
:::
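
A minimal sketch of the one-shot reindex described in the tip above:

```yaml
semantic_search:
  enabled: True
  # set back to False (or toggle the UI switch off) once reindexing completes
  reindex: True
```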
@@ -39,9 +39,9 @@ If you are enabling the Search feature for the first time, be advised that Friga
The vision model is able to embed both images and text into the same vector space, which allows `image -> image` and `text -> image` similarity searches. Frigate uses this model on tracked objects to encode the thumbnail image and store it in the database. When searching for tracked objects via text in the search box, Frigate will perform a `text -> image` similarity search against this embedding. When clicking "Find Similar" in the tracked object detail pane, Frigate will perform an `image -> image` similarity search to retrieve the closest matching thumbnails.
The text model is used to embed tracked object descriptions and perform searches against them. Descriptions can be created, viewed, and modified on the Search page when clicking on the gray tracked object chip at the top left of each review item. See [the Generative AI docs](/configuration/genai.md) for more information on how to automatically generate tracked object descriptions.
The text model is used to embed tracked object descriptions and perform searches against them. Descriptions can be created, viewed, and modified on the Explore page when clicking on the thumbnail of a tracked object. See [the Generative AI docs](/configuration/genai.md) for more information on how to automatically generate tracked object descriptions.
Differently weighted CLIP models are available and can be selected by setting the `model_size` config option as `small` or `large`:
Differently weighted versions of the Jina model are available and can be selected by setting the `model_size` config option as `small` or `large`:
```yaml
semantic_search:
@@ -50,7 +50,7 @@ semantic_search:
```
- Configuring the `large` model employs the full Jina model and will automatically run on the GPU if applicable.
- Configuring the `small` model employs a quantized version of the model that uses less RAM and runs on CPU with a very negligible difference in embedding quality.
- Configuring the `small` model employs a quantized version of the Jina model that uses less RAM and runs on CPU with a very negligible difference in embedding quality.
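
Assembled from the context above, a sketch of selecting the quantized model:

```yaml
semantic_search:
  enabled: True
  model_size: small # or "large" to use the full Jina model (GPU when available)
```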
### GPU Acceleration
@@ -84,7 +84,7 @@ If the correct build is used for your GPU and the `large` model is configured, t
## Usage and Best Practices
1. Semantic Search is used in conjunction with the other filters available on the Search page. Use a combination of traditional filtering and Semantic Search for the best results.
1. Semantic Search is used in conjunction with the other filters available on the Explore page. Use a combination of traditional filtering and Semantic Search for the best results.
2. Use the thumbnail search type when searching for particular objects in the scene. Use the description search type when attempting to discern the intent of your object.
3. Because of how the AI models Frigate uses have been trained, the comparison between text and image embedding distances generally means that with multi-modal (`thumbnail` and `description`) searches, results matching `description` will appear first, even if a `thumbnail` embedding may be a better match. Play with the "Search Type" setting to help find what you are looking for. Note that if you are generating descriptions for specific objects or zones only, this may cause search results to prioritize the objects with descriptions even if the ones without them are more relevant.
4. Make your search language and tone closely match exactly what you're looking for. If you are using thumbnail search, **phrase your query as an image caption**. Searching for "red car" may not work as well as "red sedan driving down a residential street on a sunny day".

View File

@@ -28,7 +28,7 @@ For the Dahua/Loryta 5442 camera, I use the following settings:
- Encode Mode: H.264
- Resolution: 2688\*1520
- Frame Rate(FPS): 15
- I Frame Interval: 30
- I Frame Interval: 30 (15 can also be used to prioritize streaming performance - see the [camera settings recommendations](../configuration/live) for more info)
**Sub Stream (Detection)**

View File

@@ -98,3 +98,11 @@ docker run -d \
-p 8555:8555/udp \
ghcr.io/blakeblackshear/frigate:stable
```
### My RTSP stream works fine in VLC, but it does not work when I put the same URL in my Frigate config. Is this a bug?
No. Frigate uses the TCP protocol to connect to your camera's RTSP URL. VLC automatically switches between UDP and TCP depending on network conditions and stream availability. So a stream that works in VLC but not in Frigate is likely due to VLC selecting UDP as the transport protocol.
TCP ensures that all data packets arrive in the correct order. This is crucial for video recording, decoding, and stream processing, which is why Frigate enforces a TCP connection. UDP is faster but less reliable, as it does not guarantee packet delivery or order, and VLC does not have the same requirements as Frigate.
You can still configure Frigate to use UDP by using ffmpeg input args or the preset `preset-rtsp-udp`. See the [ffmpeg presets](/configuration/ffmpeg_presets) documentation.
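
A sketch of forcing UDP via the preset named above; the camera name and URL are placeholders:

```yaml
cameras:
  back: # placeholder camera name
    ffmpeg:
      inputs:
        - path: rtsp://user:pass@192.168.1.10:554/stream # placeholder URL
          # apply UDP input args instead of Frigate's default TCP
          input_args: preset-rtsp-udp
          roles:
            - detect
```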

docs/package-lock.json generated

File diff suppressed because it is too large

View File

@@ -17,15 +17,15 @@
"write-heading-ids": "docusaurus write-heading-ids"
},
"dependencies": {
"@docusaurus/core": "^3.5.2",
"@docusaurus/preset-classic": "^3.5.2",
"@docusaurus/theme-mermaid": "^3.5.2",
"@docusaurus/plugin-content-docs": "^3.5.2",
"@mdx-js/react": "^3.0.1",
"@docusaurus/core": "^3.6.3",
"@docusaurus/preset-classic": "^3.6.3",
"@docusaurus/theme-mermaid": "^3.6.3",
"@docusaurus/plugin-content-docs": "^3.6.3",
"@mdx-js/react": "^3.1.0",
"clsx": "^2.1.1",
"docusaurus-plugin-openapi-docs": "^4.1.0",
"docusaurus-theme-openapi-docs": "^4.1.0",
"prism-react-renderer": "^2.4.0",
"docusaurus-plugin-openapi-docs": "^4.3.1",
"docusaurus-theme-openapi-docs": "^4.3.1",
"prism-react-renderer": "^2.4.1",
"raw-loader": "^4.0.2",
"react": "^18.3.1",
"react-dom": "^18.3.1"

View File

@@ -21,13 +21,13 @@ from frigate.api.defs.query.app_query_parameters import AppTimelineHourlyQueryPa
from frigate.api.defs.request.app_body import AppConfigSetBody
from frigate.api.defs.tags import Tags
from frigate.config import FrigateConfig
from frigate.const import CONFIG_DIR
from frigate.models import Event, Timeline
from frigate.util.builtin import (
clean_camera_user_pass,
get_tz_modifiers,
update_yaml_from_url,
)
from frigate.util.config import find_config_file
from frigate.util.services import (
ffprobe_stream,
get_nvidia_driver_info,
@@ -134,9 +134,27 @@ def config(request: Request):
for zone_name, zone in config_obj.cameras[camera_name].zones.items():
camera_dict["zones"][zone_name]["color"] = zone.color
# remove go2rtc stream passwords
go2rtc: dict[str, any] = config_obj.go2rtc.model_dump(
mode="json", warnings="none", exclude_none=True
)
for stream_name, stream in go2rtc.get("streams", {}).items():
if stream is None:
continue
if isinstance(stream, str):
cleaned = clean_camera_user_pass(stream)
else:
cleaned = []
for item in stream:
cleaned.append(clean_camera_user_pass(item))
config["go2rtc"]["streams"][stream_name] = cleaned
config["plus"] = {"enabled": request.app.frigate_config.plus_api.is_active()}
config["model"]["colormap"] = config_obj.model.colormap
# use merged labelmap
for detector_config in config["detectors"].values():
detector_config["model"]["labelmap"] = (
request.app.frigate_config.model.merged_labelmap
@@ -147,13 +165,7 @@ def config(request: Request):
@router.get("/config/raw")
def config_raw():
config_file = os.environ.get("CONFIG_FILE", "/config/config.yml")
# Check if we can use .yaml instead of .yml
config_file_yaml = config_file.replace(".yml", ".yaml")
if os.path.isfile(config_file_yaml):
config_file = config_file_yaml
config_file = find_config_file()
if not os.path.isfile(config_file):
return JSONResponse(
@@ -198,13 +210,7 @@ def config_save(save_option: str, body: Any = Body(media_type="text/plain")):
# Save the config to file
try:
config_file = os.environ.get("CONFIG_FILE", "/config/config.yml")
# Check if we can use .yaml instead of .yml
config_file_yaml = config_file.replace(".yml", ".yaml")
if os.path.isfile(config_file_yaml):
config_file = config_file_yaml
config_file = find_config_file()
with open(config_file, "w") as f:
f.write(new_config)
@@ -253,13 +259,7 @@ def config_save(save_option: str, body: Any = Body(media_type="text/plain")):
@router.put("/config/set")
def config_set(request: Request, body: AppConfigSetBody):
config_file = os.environ.get("CONFIG_FILE", f"{CONFIG_DIR}/config.yml")
# Check if we can use .yaml instead of .yml
config_file_yaml = config_file.replace(".yml", ".yaml")
if os.path.isfile(config_file_yaml):
config_file = config_file_yaml
config_file = find_config_file()
with open(config_file, "r") as f:
old_raw_config = f.read()

View File

@@ -437,7 +437,7 @@ class FrigateApp:
# pre-create shms
for i in range(shm_frame_count):
frame_size = config.frame_shape_yuv[0] * config.frame_shape_yuv[1]
self.frame_manager.create(f"{config.name}_{i}", frame_size)
self.frame_manager.create(f"{config.name}_frame{i}", frame_size)
capture_process = util.Process(
target=capture_camera,

View File

@@ -38,6 +38,10 @@ class GenAICameraConfig(BaseModel):
default_factory=list,
title="List of required zones to be entered in order to run generative AI.",
)
debug_save_thumbnails: bool = Field(
default=False,
title="Save thumbnails sent to generative AI for debugging purposes.",
)
@field_validator("required_zones", mode="before")
@classmethod

View File

@@ -74,6 +74,7 @@ class OnvifConfig(FrigateBaseModel):
port: int = Field(default=8000, title="Onvif Port")
user: Optional[EnvString] = Field(default=None, title="Onvif Username")
password: Optional[EnvString] = Field(default=None, title="Onvif Password")
tls_insecure: bool = Field(default=False, title="Onvif Disable TLS verification")
autotracking: PtzAutotrackConfig = Field(
default_factory=PtzAutotrackConfig,
title="PTZ auto tracking config.",

View File

@@ -4,6 +4,7 @@ from typing import Optional
from pydantic import Field
from frigate.const import MAX_PRE_CAPTURE
from frigate.review.types import SeverityEnum
from ..base import FrigateBaseModel
@@ -101,3 +102,15 @@ class RecordConfig(FrigateBaseModel):
self.alerts.pre_capture,
self.detections.pre_capture,
)
def get_review_pre_capture(self, severity: SeverityEnum) -> int:
if severity == SeverityEnum.alert:
return self.alerts.pre_capture
else:
return self.detections.pre_capture
def get_review_post_capture(self, severity: SeverityEnum) -> int:
if severity == SeverityEnum.alert:
return self.alerts.post_capture
else:
return self.detections.post_capture

View File

@@ -29,6 +29,7 @@ from frigate.util.builtin import (
)
from frigate.util.config import (
StreamInfoRetriever,
find_config_file,
get_relative_coordinates,
migrate_frigate_config,
)
@@ -67,7 +68,6 @@ logger = logging.getLogger(__name__)
yaml = YAML()
DEFAULT_CONFIG_FILE = "/config/config.yml"
DEFAULT_CONFIG = """
mqtt:
enabled: False
@@ -638,16 +638,13 @@ class FrigateConfig(FrigateBaseModel):
@classmethod
def load(cls, **kwargs):
config_path = os.environ.get("CONFIG_FILE", DEFAULT_CONFIG_FILE)
if not os.path.isfile(config_path):
config_path = config_path.replace("yml", "yaml")
config_path = find_config_file()
# No configuration file found, create one.
new_config = False
if not os.path.isfile(config_path):
logger.info("No config file found, saving default config")
config_path = DEFAULT_CONFIG_FILE
config_path = config_path
new_config = True
else:
# Check if the config file needs to be migrated.

View File

@@ -32,6 +32,7 @@ class DeepStack(DetectionApi):
self.api_timeout = detector_config.api_timeout
self.api_key = detector_config.api_key
self.labels = detector_config.model.merged_labelmap
self.session = requests.Session()
def get_label_index(self, label_value):
if label_value.lower() == "truck":
@@ -51,7 +52,7 @@ class DeepStack(DetectionApi):
data = {"api_key": self.api_key}
try:
response = requests.post(
response = self.session.post(
self.api_url,
data=data,
files={"image": image_bytes},

View File

@@ -136,17 +136,17 @@ class Rknn(DetectionApi):
def check_config(self, config):
if (config.model.width != 320) or (config.model.height != 320):
raise Exception(
"Make sure to set the model width and height to 320 in your config.yml."
"Make sure to set the model width and height to 320 in your config."
)
if config.model.input_pixel_format != "bgr":
raise Exception(
'Make sure to set the model input_pixel_format to "bgr" in your config.yml.'
'Make sure to set the model input_pixel_format to "bgr" in your config.'
)
if config.model.input_tensor != "nhwc":
raise Exception(
'Make sure to set the model input_tensor to "nhwc" in your config.yml.'
'Make sure to set the model input_tensor to "nhwc" in your config.'
)
def detect_raw(self, tensor_input):

View File

@@ -5,6 +5,7 @@ import logging
import os
import threading
from multiprocessing.synchronize import Event as MpEvent
from pathlib import Path
from typing import Optional
import cv2
@@ -217,16 +218,47 @@ class EmbeddingMaintainer(threading.Thread):
_, buffer = cv2.imencode(".jpg", cropped_image)
snapshot_image = buffer.tobytes()
num_thumbnails = len(self.tracked_events.get(event_id, []))
embed_image = (
[snapshot_image]
if event.has_snapshot and camera_config.genai.use_snapshot
else (
[thumbnail for data in self.tracked_events[event_id]]
if len(self.tracked_events.get(event_id, [])) > 0
[
data["thumbnail"]
for data in self.tracked_events[event_id]
]
if num_thumbnails > 0
else [thumbnail]
)
)
if camera_config.genai.debug_save_thumbnails and num_thumbnails > 0:
logger.debug(
f"Saving {num_thumbnails} thumbnails for event {event.id}"
)
Path(
os.path.join(CLIPS_DIR, f"genai-requests/{event.id}")
).mkdir(parents=True, exist_ok=True)
for idx, data in enumerate(self.tracked_events[event_id], 1):
jpg_bytes: bytes = data["thumbnail"]
if jpg_bytes is None:
logger.warning(
f"Unable to save thumbnail {idx} for {event.id}."
)
else:
with open(
os.path.join(
CLIPS_DIR,
f"genai-requests/{event.id}/{idx}.jpg",
),
"wb",
) as j:
j.write(jpg_bytes)
# Generate the description. Call happens in a thread since it is network bound.
threading.Thread(
target=self._embed_description,
@@ -325,18 +357,25 @@ class EmbeddingMaintainer(threading.Thread):
)
if event.has_snapshot and source == "snapshot":
with open(
os.path.join(CLIPS_DIR, f"{event.camera}-{event.id}.jpg"),
"rb",
) as image_file:
snapshot_file = os.path.join(CLIPS_DIR, f"{event.camera}-{event.id}.jpg")
if not os.path.isfile(snapshot_file):
logger.error(
f"Cannot regenerate description for {event.id}, snapshot file not found: {snapshot_file}"
)
return
with open(snapshot_file, "rb") as image_file:
snapshot_image = image_file.read()
img = cv2.imdecode(
np.frombuffer(snapshot_image, dtype=np.int8), cv2.IMREAD_COLOR
)
# crop snapshot based on region before sending off to genai
# provide full image if region doesn't exist (manual events)
region = event.data.get("region", [0, 0, 1, 1])
height, width = img.shape[:2]
x1_rel, y1_rel, width_rel, height_rel = event.data["region"]
x1_rel, y1_rel, width_rel, height_rel = region
x1, y1 = int(x1_rel * width), int(y1_rel * height)
cropped_image = img[
@@ -350,7 +389,7 @@ class EmbeddingMaintainer(threading.Thread):
[snapshot_image]
if event.has_snapshot and source == "snapshot"
else (
[thumbnail for data in self.tracked_events[event_id]]
[data["thumbnail"] for data in self.tracked_events[event_id]]
if len(self.tracked_events.get(event_id, [])) > 0
else [thumbnail]
)

View File

@@ -121,8 +121,8 @@ class EventCleanup(threading.Thread):
events_to_update = []
for batch in query.iterator():
events_to_update.extend([event.id for event in batch])
for event in query.iterator():
events_to_update.append(event.id)
if len(events_to_update) >= CHUNK_SIZE:
logger.debug(
f"Updating {update_params} for {len(events_to_update)} events"
@@ -256,8 +256,9 @@ class EventCleanup(threading.Thread):
events_to_update = []
for batch in query.iterator():
events_to_update.extend([event.id for event in batch])
for event in query.iterator():
events_to_update.append(event.id)
if len(events_to_update) >= CHUNK_SIZE:
logger.debug(
f"Updating {update_params} for {len(events_to_update)} events"
@@ -330,9 +331,8 @@ class EventCleanup(threading.Thread):
def run(self) -> None:
# only expire events every 5 minutes
while not self.stop_event.wait(1):
while not self.stop_event.wait(300):
events_with_expired_clips = self.expire_clips()
return
# delete timeline entries for events that have expired recordings
# delete up to 100,000 at a time

View File

@@ -82,18 +82,23 @@ class EventProcessor(threading.Thread):
)
if source_type == EventTypeEnum.tracked_object:
id = event_data["id"]
self.timeline_queue.put(
(
camera,
source_type,
event_type,
self.events_in_process.get(event_data["id"]),
self.events_in_process.get(id),
event_data,
)
)
if event_type == EventStateEnum.start:
self.events_in_process[event_data["id"]] = event_data
# if this is the first message, just store it and continue, its not time to insert it in the db
if (
event_type == EventStateEnum.start
or id not in self.events_in_process
):
self.events_in_process[id] = event_data
continue
self.handle_object_detection(event_type, camera, event_data)
@@ -123,10 +128,6 @@ class EventProcessor(threading.Thread):
"""handle tracked object event updates."""
updated_db = False
# if this is the first message, just store it and continue, its not time to insert it in the db
if event_type == EventStateEnum.start:
self.events_in_process[event_data["id"]] = event_data
if should_update_db(self.events_in_process[event_data["id"]], event_data):
updated_db = True
camera_config = self.config.cameras[camera]

View File

@@ -38,6 +38,11 @@ class OllamaClient(GenAIClient):
def _send(self, prompt: str, images: list[bytes]) -> Optional[str]:
"""Submit a request to Ollama"""
if self.provider is None:
logger.warning(
"Ollama provider has not been initialized, a description will not be generated. Check your Ollama configuration."
)
return None
try:
result = self.provider.generate(
self.genai_config.model,

View File

@@ -2,7 +2,6 @@
import copy
import logging
import os
import queue
import threading
import time
@@ -29,11 +28,11 @@ from frigate.const import (
AUTOTRACKING_ZOOM_EDGE_THRESHOLD,
AUTOTRACKING_ZOOM_IN_HYSTERESIS,
AUTOTRACKING_ZOOM_OUT_HYSTERESIS,
CONFIG_DIR,
)
from frigate.ptz.onvif import OnvifController
from frigate.track.tracked_object import TrackedObject
from frigate.util.builtin import update_yaml_file
from frigate.util.config import find_config_file
from frigate.util.image import SharedMemoryFrameManager, intersection_over_union
logger = logging.getLogger(__name__)
@@ -328,13 +327,7 @@ class PtzAutoTracker:
self.autotracker_init[camera] = True
def _write_config(self, camera):
config_file = os.environ.get("CONFIG_FILE", f"{CONFIG_DIR}/config.yml")
# Check if we can use .yaml instead of .yml
config_file_yaml = config_file.replace(".yml", ".yaml")
if os.path.isfile(config_file_yaml):
config_file = config_file_yaml
config_file = find_config_file()
logger.debug(
f"{camera}: Writing new config with autotracker motion coefficients: {self.config.cameras[camera].onvif.autotracking.movement_weights}"

View File

@@ -6,6 +6,7 @@ from importlib.util import find_spec
from pathlib import Path
import numpy
import requests
from onvif import ONVIFCamera, ONVIFError
from zeep.exceptions import Fault, TransportError
from zeep.transports import Transport
@@ -48,7 +49,11 @@ class OnvifController:
if cam.onvif.host:
try:
transport = Transport(timeout=10, operation_timeout=10)
session = requests.Session()
session.verify = not cam.onvif.tls_insecure
transport = Transport(
timeout=10, operation_timeout=10, session=session
)
self.cams[cam_name] = {
"onvif": ONVIFCamera(
cam.onvif.host,
@@ -558,22 +563,26 @@ class OnvifController:
if not self._init_onvif(camera_name):
return
if command == OnvifCommandEnum.init:
# already init
return
elif command == OnvifCommandEnum.stop:
self._stop(camera_name)
elif command == OnvifCommandEnum.preset:
self._move_to_preset(camera_name, param)
elif command == OnvifCommandEnum.move_relative:
_, pan, tilt = param.split("_")
self._move_relative(camera_name, float(pan), float(tilt), 0, 1)
elif (
command == OnvifCommandEnum.zoom_in or command == OnvifCommandEnum.zoom_out
):
self._zoom(camera_name, command)
else:
self._move(camera_name, command)
try:
if command == OnvifCommandEnum.init:
# already init
return
elif command == OnvifCommandEnum.stop:
self._stop(camera_name)
elif command == OnvifCommandEnum.preset:
self._move_to_preset(camera_name, param)
elif command == OnvifCommandEnum.move_relative:
_, pan, tilt = param.split("_")
self._move_relative(camera_name, float(pan), float(tilt), 0, 1)
elif (
command == OnvifCommandEnum.zoom_in
or command == OnvifCommandEnum.zoom_out
):
self._zoom(camera_name, command)
else:
self._move(camera_name, command)
except ONVIFError as e:
logger.error(f"Unable to handle onvif command: {e}")
def get_camera_info(self, camera_name: str) -> dict[str, any]:
if camera_name not in self.cams.keys():

View File

@@ -29,6 +29,7 @@ from frigate.const import (
RECORD_DIR,
)
from frigate.models import Recordings, ReviewSegment
from frigate.review.types import SeverityEnum
from frigate.util.services import get_video_properties
logger = logging.getLogger(__name__)
@@ -194,6 +195,7 @@ class RecordingMaintainer(threading.Thread):
ReviewSegment.select(
ReviewSegment.start_time,
ReviewSegment.end_time,
ReviewSegment.severity,
ReviewSegment.data,
)
.where(
@@ -219,11 +221,15 @@ class RecordingMaintainer(threading.Thread):
[r for r in recordings_to_insert if r is not None],
)
def drop_segment(self, cache_path: str) -> None:
Path(cache_path).unlink(missing_ok=True)
self.end_time_cache.pop(cache_path, None)
async def validate_and_move_segment(
self, camera: str, reviews: list[ReviewSegment], recording: dict[str, any]
) -> None:
cache_path = recording["cache_path"]
start_time = recording["start_time"]
cache_path: str = recording["cache_path"]
start_time: datetime.datetime = recording["start_time"]
record_config = self.config.cameras[camera].record
# Just delete files if recordings are turned off
@@ -231,8 +237,7 @@ class RecordingMaintainer(threading.Thread):
camera not in self.config.cameras
or not self.config.cameras[camera].record.enabled
):
Path(cache_path).unlink(missing_ok=True)
self.end_time_cache.pop(cache_path, None)
self.drop_segment(cache_path)
return
if cache_path in self.end_time_cache:
@@ -260,24 +265,34 @@ class RecordingMaintainer(threading.Thread):
return
# if cached file's start_time is earlier than the retain days for the camera
# meaning continuous recording is not enabled
if start_time <= (
datetime.datetime.now().astimezone(datetime.timezone.utc)
- datetime.timedelta(days=self.config.cameras[camera].record.retain.days)
):
# if the cached segment overlaps with the events:
# if the cached segment overlaps with the review items:
overlaps = False
for review in reviews:
# if the event starts in the future, stop checking events
severity = SeverityEnum[review.severity]
# if the review item starts in the future, stop checking review items
# and remove this segment
if review.start_time > end_time.timestamp():
if (
review.start_time - record_config.get_review_pre_capture(severity)
) > end_time.timestamp():
overlaps = False
Path(cache_path).unlink(missing_ok=True)
self.end_time_cache.pop(cache_path, None)
break
# if the event is in progress or ends after the recording starts, keep it
# and stop looking at events
if review.end_time is None or review.end_time >= start_time.timestamp():
# if the review item is in progress or ends after the recording starts, keep it
# and stop looking at review items
if (
review.end_time is None
or (
review.end_time
+ record_config.get_review_post_capture(severity)
)
>= start_time.timestamp()
):
overlaps = True
break
@@ -296,7 +311,7 @@ class RecordingMaintainer(threading.Thread):
cache_path,
record_mode,
)
# if it doesn't overlap with an event, go ahead and drop the segment
# if it doesn't overlap with a review item, go ahead and drop the segment
# if it ends more than the configured pre_capture for the camera
else:
camera_info = self.object_recordings_info[camera]
@@ -307,9 +322,9 @@ class RecordingMaintainer(threading.Thread):
most_recently_processed_frame_time - record_config.event_pre_capture
).astimezone(datetime.timezone.utc)
if end_time < retain_cutoff:
Path(cache_path).unlink(missing_ok=True)
self.end_time_cache.pop(cache_path, None)
self.drop_segment(cache_path)
# else retain days includes this segment
# meaning continuous recording is enabled
else:
# assume that empty means the relevant recording info has not been received yet
camera_info = self.object_recordings_info[camera]
@@ -390,8 +405,7 @@ class RecordingMaintainer(threading.Thread):
# check if the segment shouldn't be stored
if segment_info.should_discard_segment(store_mode):
Path(cache_path).unlink(missing_ok=True)
self.end_time_cache.pop(cache_path, None)
self.drop_segment(cache_path)
return
# directory will be in utc due to start_time being in utc

View File

@@ -293,7 +293,7 @@ def stats_snapshot(
for path in [RECORD_DIR, CLIPS_DIR, CACHE_DIR, "/dev/shm"]:
try:
storage_stats = shutil.disk_usage(path)
except FileNotFoundError:
except (FileNotFoundError, OSError):
stats["service"]["storage"][path] = {}
continue

View File

@@ -17,6 +17,8 @@ bandwidth_equation = Recordings.segment_size / (
Recordings.end_time - Recordings.start_time
)
MAX_CALCULATED_BANDWIDTH = 10000 # 10Gb/hr
class StorageMaintainer(threading.Thread):
"""Maintain frigates recording storage."""
@@ -52,6 +54,12 @@ class StorageMaintainer(threading.Thread):
* 3600,
2,
)
if bandwidth > MAX_CALCULATED_BANDWIDTH:
logger.warning(
f"{camera} has a bandwidth of {bandwidth} MB/hr which exceeds the expected maximum. This typically indicates an issue with the cameras recordings."
)
bandwidth = MAX_CALCULATED_BANDWIDTH
except TypeError:
bandwidth = 0

View File

@@ -14,6 +14,16 @@ from frigate.util.services import get_video_properties
logger = logging.getLogger(__name__)
CURRENT_CONFIG_VERSION = "0.15-0"
DEFAULT_CONFIG_FILE = "/config/config.yml"
def find_config_file() -> str:
config_path = os.environ.get("CONFIG_FILE", DEFAULT_CONFIG_FILE)
if not os.path.isfile(config_path):
config_path = config_path.replace("yml", "yaml")
return config_path
def migrate_frigate_config(config_file: str):

View File

@@ -113,7 +113,7 @@ def capture_frames(
fps.value = frame_rate.eps()
skipped_fps.value = skipped_eps.eps()
current_frame.value = datetime.datetime.now().timestamp()
frame_name = f"{config.name}_{frame_index}"
frame_name = f"{config.name}_frame{frame_index}"
frame_buffer = frame_manager.write(frame_name)
try:
frame_buffer[:] = ffmpeg_process.stdout.read(frame_size)

View File

@@ -755,7 +755,11 @@ export function CameraGroupEdit({
<FormMessage />
{[
...(birdseyeConfig?.enabled ? ["birdseye"] : []),
...Object.keys(config?.cameras ?? {}),
...Object.keys(config?.cameras ?? {}).sort(
(a, b) =>
(config?.cameras[a]?.ui?.order ?? 0) -
(config?.cameras[b]?.ui?.order ?? 0),
),
].map((camera) => (
<FormControl key={camera}>
<FilterSwitch

View File

@@ -477,7 +477,10 @@ export default function ObjectLifecycle({
</p>
{Array.isArray(item.data.box) &&
item.data.box.length >= 4
? (item.data.box[2] / item.data.box[3]).toFixed(2)
? (
aspectRatio *
(item.data.box[2] / item.data.box[3])
).toFixed(2)
: "N/A"}
</div>
</div>

View File

@@ -74,6 +74,23 @@ export default function ReviewDetailDialog({
return events.length != review?.data.detections.length;
}, [review, events]);
const missingObjects = useMemo(() => {
if (!review || !events) {
return [];
}
const detectedIds = review.data.detections;
const missing = Array.from(
new Set(
events
.filter((event) => !detectedIds.includes(event.id))
.map((event) => event.label),
),
);
return missing;
}, [review, events]);
const formattedDate = useFormattedTimestamp(
review?.start_time ?? 0,
config?.ui.time_format == "24hour"
@@ -263,8 +280,25 @@ export default function ReviewDetailDialog({
</div>
{hasMismatch && (
<div className="p-4 text-center text-sm">
Some objects that were detected are not included in this list
because the object does not have a snapshot
{(() => {
const detectedCount = Math.abs(
(events?.length ?? 0) -
(review?.data.detections.length ?? 0),
);
const objectLabel =
detectedCount === 1 ? "object was" : "objects were";
return `${detectedCount} unavailable ${objectLabel} detected and included in this review item.`;
})()}{" "}
Those objects either did not qualify as an alert or detection
or have already been cleaned up/deleted.
{missingObjects.length > 0 && (
<div className="mt-2">
Adjust your configuration if you want Frigate to save
tracked objects for the following labels:{" "}
{missingObjects.join(", ")}
</div>
)}
</div>
)}
<div className="relative flex size-full flex-col gap-2">

View File

@@ -469,16 +469,43 @@ function ObjectDetailsTab({
</div>
</div>
<div className="flex flex-col gap-1.5">
<div className="text-sm text-primary/40">Description</div>
<Textarea
className="h-64"
placeholder="Description of the tracked object"
value={desc}
onChange={(e) => setDesc(e.target.value)}
/>
{config?.cameras[search.camera].genai.enabled &&
!search.end_time &&
(config.cameras[search.camera].genai.required_zones.length === 0 ||
search.zones.some((zone) =>
config.cameras[search.camera].genai.required_zones.includes(zone),
)) &&
(config.cameras[search.camera].genai.objects.length === 0 ||
config.cameras[search.camera].genai.objects.includes(
search.label,
)) ? (
<>
<div className="text-sm text-primary/40">Description</div>
<div className="flex h-64 flex-col items-center justify-center gap-3 border p-4 text-sm text-primary/40">
<div className="flex">
<ActivityIndicator />
</div>
<div className="flex">
Frigate will not request a description from your Generative AI
provider until the tracked object's lifecycle has ended.
</div>
</div>
</>
) : (
<>
<div className="text-sm text-primary/40">Description</div>
<Textarea
className="h-64"
placeholder="Description of the tracked object"
value={desc}
onChange={(e) => setDesc(e.target.value)}
/>
</>
)}
<div className="flex w-full flex-row justify-end gap-2">
{config?.cameras[search.camera].genai.enabled && (
<div className="flex items-center">
{config?.cameras[search.camera].genai.enabled && search.end_time && (
<div className="flex items-start">
<Button
className="rounded-r-none border-r-0"
aria-label="Regenerate tracked object description"
@@ -516,13 +543,16 @@ function ObjectDetailsTab({
)}
</div>
)}
<Button
variant="select"
aria-label="Save"
onClick={updateDescription}
>
Save
</Button>
{(config?.cameras[search.camera].genai.enabled && search.end_time) ||
(!config?.cameras[search.camera].genai.enabled && (
<Button
variant="select"
aria-label="Save"
onClick={updateDescription}
>
Save
</Button>
))}
</div>
</div>
</div>

View File

@@ -46,7 +46,7 @@ export default function SearchSettings({
const trigger = (
<Button
className="flex items-center gap-2"
aria-label="Search Settings"
aria-label="Explore Settings"
size="sm"
>
<FaCog className="text-secondary-foreground" />

View File

@@ -5,6 +5,7 @@ import { usePersistence } from "./use-persistence";
export function useOverlayState<S>(
key: string,
defaultValue: S | undefined = undefined,
preserveSearch: boolean = true,
): [S | undefined, (value: S, replace?: boolean) => void] {
const location = useLocation();
const navigate = useNavigate();
@@ -15,7 +16,7 @@ export function useOverlayState<S>(
(value: S, replace: boolean = false) => {
const newLocationState = { ...currentLocationState };
newLocationState[key] = value;
navigate(location.pathname + location.search, {
navigate(location.pathname + (preserveSearch ? location.search : ""), {
state: newLocationState,
replace,
});

View File

@@ -39,8 +39,11 @@ export default function Events() {
const [showReviewed, setShowReviewed] = usePersistence("showReviewed", false);
const [recording, setRecording] =
useOverlayState<RecordingStartingPoint>("recording");
const [recording, setRecording] = useOverlayState<RecordingStartingPoint>(
"recording",
undefined,
false,
);
useSearchEffect("id", (reviewId: string) => {
axios

View File

@@ -328,12 +328,12 @@ export default function Explore() {
<div className="flex max-w-96 flex-col items-center justify-center space-y-3 rounded-lg bg-background/50 p-5">
<div className="my-5 flex flex-col items-center gap-2 text-xl">
<TbExclamationCircle className="mb-3 size-10" />
<div>Search Unavailable</div>
<div>Explore is Unavailable</div>
</div>
{embeddingsReindexing && allModelsLoaded && (
<>
<div className="text-center text-primary-variant">
Search can be used after tracked object embeddings have
Explore can be used after tracked object embeddings have
finished reindexing.
</div>
<div className="pt-5 text-center">
@@ -384,8 +384,8 @@ export default function Explore() {
<>
<div className="text-center text-primary-variant">
Frigate is downloading the necessary embeddings models to
-                support semantic searching. This may take several minutes
-                depending on the speed of your network connection.
+                support the Semantic Search feature. This may take several
+                minutes depending on the speed of your network connection.
</div>
<div className="flex w-96 flex-col gap-2 py-5">
<div className="flex flex-row items-center justify-center gap-2">

View File

@@ -40,7 +40,7 @@ import UiSettingsView from "@/views/settings/UiSettingsView";
const allSettingsViews = [
"UI settings",
"search settings",
"explore settings",
"camera settings",
"masks / zones",
"motion tuner",
@@ -175,7 +175,7 @@ export default function Settings() {
</div>
<div className="mt-2 flex h-full w-full flex-col items-start md:h-dvh md:pb-24">
{page == "UI settings" && <UiSettingsView />}
{page == "search settings" && (
{page == "explore settings" && (
<SearchSettingsView setUnsavedChanges={setUnsavedChanges} />
)}
{page == "debug" && (

View File

@@ -142,6 +142,7 @@ export interface CameraConfig {
password: string | null;
port: number;
user: string | null;
tls_insecure: boolean;
};
record: {
enabled: boolean;
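The new tls_insecure field surfaces the ONVIF option added on the backend (see the onvif tls_insecure commit). Purely as an illustration of the flag's semantics, not Frigate's implementation, a Node client could map it onto certificate verification using the built-in https.Agent:

import https from "node:https";

// Hypothetical: relax TLS verification only when the camera opts in.
function onvifAgent(onvif: { tls_insecure: boolean }): https.Agent {
  return new https.Agent({
    // Skipping certificate validation is exactly what "insecure" means
    // here; keep verification on unless the config explicitly disables it.
    rejectUnauthorized: !onvif.tls_insecure,
  });
}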

View File

@@ -17,7 +17,12 @@ import {
DropdownMenuItem,
DropdownMenuTrigger,
} from "@/components/ui/dropdown-menu";
-import { TooltipProvider } from "@/components/ui/tooltip";
+import {
+  Tooltip,
+  TooltipContent,
+  TooltipProvider,
+  TooltipTrigger,
+} from "@/components/ui/tooltip";
import { useResizeObserver } from "@/hooks/resize-observer";
import useKeyboardListener from "@/hooks/use-keyboard-listener";
import { CameraConfig, FrigateConfig } from "@/types/frigateConfig";
@@ -29,6 +34,7 @@ import {
import { CameraPtzInfo } from "@/types/ptz";
import { RecordingStartingPoint } from "@/types/record";
import React, {
ReactNode,
useCallback,
useEffect,
useMemo,
@@ -518,6 +524,53 @@ export default function LiveCameraView({
);
}
type TooltipButtonProps = {
label: string;
onClick?: () => void;
onMouseDown?: (e: React.MouseEvent) => void;
onMouseUp?: (e: React.MouseEvent) => void;
onTouchStart?: (e: React.TouchEvent) => void;
onTouchEnd?: (e: React.TouchEvent) => void;
children: ReactNode;
className?: string;
};
function TooltipButton({
label,
onClick,
onMouseDown,
onMouseUp,
onTouchStart,
onTouchEnd,
children,
className,
...props
}: TooltipButtonProps) {
return (
<TooltipProvider>
<Tooltip>
<TooltipTrigger asChild>
<Button
aria-label={label}
onClick={onClick}
onMouseDown={onMouseDown}
onMouseUp={onMouseUp}
onTouchStart={onTouchStart}
onTouchEnd={onTouchEnd}
className={className}
{...props}
>
{children}
</Button>
</TooltipTrigger>
<TooltipContent>
<p>{label}</p>
</TooltipContent>
</Tooltip>
</TooltipProvider>
);
}
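The hunks below swap each raw PTZ Button for this wrapper. The wiring is a press-and-hold pattern: pointer-down starts a movement, pointer-up (or touch-end) stops it. A condensed sketch; the holdProps helper is hypothetical, while sendPtz and onStop come from the surrounding PtzControlPanel:

// Hypothetical helper bundling the hold-to-move handlers used below.
const holdProps = (command: string) => ({
  onMouseDown: (e: React.MouseEvent) => {
    e.preventDefault();
    sendPtz(command); // start moving while the button is held
  },
  onMouseUp: onStop, // stop as soon as the button is released
  onTouchStart: (e: React.TouchEvent) => {
    e.preventDefault();
    sendPtz(command);
  },
  onTouchEnd: onStop, // same for touch devices
});

// Usage: <TooltipButton label="Move camera left" {...holdProps("MOVE_LEFT")}><FaAngleLeft /></TooltipButton>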
function PtzControlPanel({
camera,
clickOverlay,
@@ -611,8 +664,8 @@ function PtzControlPanel({
>
{ptz?.features?.includes("pt") && (
<>
-          <Button
-            aria-label="Move PTZ camera to the left"
+          <TooltipButton
+            label="Move camera left"
onMouseDown={(e) => {
e.preventDefault();
sendPtz("MOVE_LEFT");
@@ -625,9 +678,9 @@ function PtzControlPanel({
onTouchEnd={onStop}
>
<FaAngleLeft />
-          </Button>
-          <Button
-            aria-label="Move PTZ camera up"
+          </TooltipButton>
+          <TooltipButton
+            label="Move camera up"
onMouseDown={(e) => {
e.preventDefault();
sendPtz("MOVE_UP");
@@ -640,9 +693,9 @@ function PtzControlPanel({
onTouchEnd={onStop}
>
<FaAngleUp />
-          </Button>
-          <Button
-            aria-label="Move PTZ camera down"
+          </TooltipButton>
+          <TooltipButton
+            label="Move camera down"
onMouseDown={(e) => {
e.preventDefault();
sendPtz("MOVE_DOWN");
@@ -655,9 +708,9 @@ function PtzControlPanel({
onTouchEnd={onStop}
>
<FaAngleDown />
-          </Button>
-          <Button
-            aria-label="Move PTZ camera to the right"
+          </TooltipButton>
+          <TooltipButton
+            label="Move camera right"
onMouseDown={(e) => {
e.preventDefault();
sendPtz("MOVE_RIGHT");
@@ -670,13 +723,13 @@ function PtzControlPanel({
onTouchEnd={onStop}
>
<FaAngleRight />
-          </Button>
+          </TooltipButton>
</>
)}
{ptz?.features?.includes("zoom") && (
<>
-          <Button
-            aria-label="Zoom PTZ camera in"
+          <TooltipButton
+            label="Zoom in"
onMouseDown={(e) => {
e.preventDefault();
sendPtz("ZOOM_IN");
@@ -689,9 +742,9 @@ function PtzControlPanel({
onTouchEnd={onStop}
>
<MdZoomIn />
-          </Button>
-          <Button
-            aria-label="Zoom PTZ camera out"
+          </TooltipButton>
+          <TooltipButton
+            label="Zoom out"
onMouseDown={(e) => {
e.preventDefault();
sendPtz("ZOOM_OUT");
@@ -704,45 +757,60 @@ function PtzControlPanel({
onTouchEnd={onStop}
>
<MdZoomOut />
-          </Button>
+          </TooltipButton>
</>
)}
{ptz?.features?.includes("pt-r-fov") && (
-        <>
-          <Button
-            className={`${clickOverlay ? "text-selected" : "text-primary"}`}
-            aria-label="Click in the frame to center the PTZ camera"
-            onClick={() => setClickOverlay(!clickOverlay)}
-          >
-            <TbViewfinder />
-          </Button>
-        </>
+        <TooltipProvider>
+          <Tooltip>
+            <TooltipTrigger asChild>
+              <Button
+                className={`${clickOverlay ? "text-selected" : "text-primary"}`}
+                aria-label="Click in the frame to center the camera"
+                onClick={() => setClickOverlay(!clickOverlay)}
+              >
+                <TbViewfinder />
+              </Button>
+            </TooltipTrigger>
+            <TooltipContent>
+              <p>{clickOverlay ? "Disable" : "Enable"} click to move</p>
+            </TooltipContent>
+          </Tooltip>
+        </TooltipProvider>
)}
{(ptz?.presets?.length ?? 0) > 0 && (
-        <DropdownMenu modal={!isDesktop}>
-          <DropdownMenuTrigger asChild>
-            <Button aria-label="PTZ camera presets">
-              <BsThreeDotsVertical />
-            </Button>
-          </DropdownMenuTrigger>
-          <DropdownMenuContent
-            className="scrollbar-container max-h-[40dvh] overflow-y-auto"
-            onCloseAutoFocus={(e) => e.preventDefault()}
-          >
-            {ptz?.presets.map((preset) => {
-              return (
-                <DropdownMenuItem
-                  key={preset}
-                  aria-label={preset}
-                  className="cursor-pointer"
-                  onSelect={() => sendPtz(`preset_${preset}`)}
-                >
-                  {preset}
-                </DropdownMenuItem>
-              );
-            })}
-          </DropdownMenuContent>
-        </DropdownMenu>
+        <TooltipProvider>
+          <Tooltip>
+            <TooltipTrigger asChild>
+              <DropdownMenu modal={!isDesktop}>
+                <DropdownMenuTrigger asChild>
+                  <Button aria-label="PTZ camera presets">
+                    <BsThreeDotsVertical />
+                  </Button>
+                </DropdownMenuTrigger>
+                <DropdownMenuContent
+                  className="scrollbar-container max-h-[40dvh] overflow-y-auto"
+                  onCloseAutoFocus={(e) => e.preventDefault()}
+                >
+                  {ptz?.presets.map((preset) => (
+                    <DropdownMenuItem
+                      key={preset}
+                      aria-label={preset}
+                      className="cursor-pointer"
+                      onSelect={() => sendPtz(`preset_${preset}`)}
+                    >
+                      {preset}
+                    </DropdownMenuItem>
+                  ))}
+                </DropdownMenuContent>
+              </DropdownMenu>
+            </TooltipTrigger>
+            <TooltipContent>
+              <p>PTZ camera presets</p>
+            </TooltipContent>
+          </Tooltip>
+        </TooltipProvider>
)}
</div>
);

View File

@@ -91,7 +91,7 @@ export default function SearchSettingsView({
)
.then((res) => {
if (res.status === 200) {
toast.success("Search settings have been saved.", {
toast.success("Explore settings have been saved.", {
position: "top-center",
});
setChangedValue(false);
@@ -128,7 +128,7 @@ export default function SearchSettingsView({
if (changedValue) {
addMessage(
"search_settings",
-        `Unsaved search settings changes`,
+        `Unsaved Explore settings changes`,
undefined,
"search_settings",
);
@@ -140,7 +140,7 @@ export default function SearchSettingsView({
}, [changedValue]);
useEffect(() => {
document.title = "Search Settings - Frigate";
document.title = "Explore Settings - Frigate";
}, []);
if (!config) {
@@ -152,7 +152,7 @@ export default function SearchSettingsView({
<Toaster position="top-center" closeButton={true} />
<div className="scrollbar-container order-last mb-10 mt-2 flex h-full w-full flex-col overflow-y-auto rounded-lg border-[1px] border-secondary-foreground bg-background_alt p-2 md:order-none md:mb-0 md:mr-2 md:mt-0">
<Heading as="h3" className="my-2">
-          Search Settings
+          Explore Settings
</Heading>
<Separator className="my-2 flex bg-secondary" />
<Heading as="h4" className="my-2">
@@ -221,7 +221,7 @@ export default function SearchSettingsView({
<div className="text-md">Model Size</div>
<div className="space-y-1 text-sm text-muted-foreground">
<p>
-              The size of the model used for semantic search embeddings.
+              The size of the model used for Semantic Search embeddings.
</p>
<ul className="list-disc pl-5 text-sm">
<li>