Compare commits


22 Commits
trt-10 ... dev

Author SHA1 Message Date
Marc Altmann
3947e79086 update FFmpeg to ensure compatibility with newer kernels (#16027) 2025-01-18 05:48:28 -07:00
Nicolas Mowen
91ab1071d2 Update docs to make note of go2rtc port requirement (#16013) 2025-01-16 16:14:40 -07:00
Nicolas Mowen
409e911752 Update integration docs (#15967) 2025-01-13 08:50:44 -06:00
tpjanssen
9983bd8d92 Fix API latest image quality and API MIME types (#15964)
* Fix API latest image quality

* Fix mime types

* Code formatting + media_type fix
2025-01-13 07:46:46 -06:00
Nicolas Mowen
32c71c4108 Clean up handling of ffmpeg specific params (#15956) 2025-01-12 17:47:24 -06:00
Josh Hawkins
ef6952e3ea Fix display of save button in tracked object details pane (#15946) 2025-01-11 15:23:52 -06:00
Nicolas Mowen
173b7aa308 Handle case where user has multiple manual events on same camera (#15943) 2025-01-11 07:47:45 -07:00
Blake Blackshear
c4727f19e1 Simplify plus submit (#15941)
* remove unused annotate file

* improve plus error messages

* formatting
2025-01-11 07:04:11 -07:00
Josh Hawkins
b8a74793ca Clarify motion recording (#15917)
* Clarify motion recording

* move to troubleshooting
2025-01-09 09:55:08 -07:00
Josh Hawkins
c1dede9369 Clarify reolink doorbell two way talk requirements (#15915)
* Clarify reolink doorbell two way talk requirements

* relative paths

* move to live section

* fix link
2025-01-09 09:31:16 -07:00
Nicolas Mowen
0c4ea504d8 Update proxmox docs to align with proxmox recommendation of running in VM. (#15904) 2025-01-08 17:19:04 -06:00
Nicolas Mowen
b265b6b190 Catch case where user has multiple of the same kind of GPU (#15903) 2025-01-08 17:17:57 -06:00
Nicolas Mowen
d57a61b50f Simplify model config (#15881)
* Add migration to migrate to model_path

* Simplify model config

* Cleanup docs

* Set config version

* Formatting

* Fix tests
2025-01-07 20:59:37 -07:00
Nicolas Mowen
4fc9106c17 Update for correct audio requirements (#15882) 2025-01-07 17:02:32 -06:00
Nicolas Mowen
38e098ca31 Remove extra data except from keypackets when using qsv (#15865) 2025-01-06 17:38:46 -06:00
Nicolas Mowen
e7ad38d827 Update model docs (#15779) 2025-01-02 10:04:16 -06:00
Josh Hawkins
a1ce9aacf2 Tracked object details pane bugfix (#15736)
* restore save button in tracked object details pane

* conditionally show save button
2024-12-30 08:23:25 -06:00
Nicolas Mowen
322b847356 Fix event cleanup (#15724) 2024-12-29 14:47:40 -06:00
Josh Hawkins
98338e4c7f Ensure object lifecycle ratio is re-normalized to camera aspect (#15717) 2024-12-28 13:37:39 -07:00
Josh Hawkins
171a89f37b Language consistency - use Explore instead of Search (#15709) 2024-12-27 17:38:43 -07:00
Josh Hawkins
8114b541a8 Sort camera group edit screen by ui config values (#15705) 2024-12-27 14:30:27 -06:00
Josh Hawkins
c48396c5c6 Fix crash when streams are undefined in go2rtc config password cleaning (#15695) 2024-12-27 08:36:21 -06:00
36 changed files with 265 additions and 180 deletions

View File

@@ -61,7 +61,7 @@ def start(id, num_detections, detection_queue, event):
object_detector.cleanup()
print(f"{id} - Processed for {duration:.2f} seconds.")
print(f"{id} - FPS: {object_detector.fps.eps():.2f}")
- print(f"{id} - Average frame processing time: {mean(frame_times)*1000:.2f}ms")
+ print(f"{id} - Average frame processing time: {mean(frame_times) * 1000:.2f}ms")
######

View File

@@ -22,6 +22,6 @@ ADD https://github.com/MarcA711/rknn-toolkit2/releases/download/v2.0.0/librknnrt
RUN rm -rf /usr/lib/btbn-ffmpeg/bin/ffmpeg
RUN rm -rf /usr/lib/btbn-ffmpeg/bin/ffprobe
- ADD --chmod=111 https://github.com/MarcA711/Rockchip-FFmpeg-Builds/releases/download/6.1-5/ffmpeg /usr/lib/ffmpeg/6.0/bin/
+ ADD --chmod=111 https://github.com/MarcA711/Rockchip-FFmpeg-Builds/releases/download/6.1-7/ffmpeg /usr/lib/ffmpeg/6.0/bin/
- ADD --chmod=111 https://github.com/MarcA711/Rockchip-FFmpeg-Builds/releases/download/6.1-5/ffprobe /usr/lib/ffmpeg/6.0/bin/
+ ADD --chmod=111 https://github.com/MarcA711/Rockchip-FFmpeg-Builds/releases/download/6.1-7/ffprobe /usr/lib/ffmpeg/6.0/bin/
ENV PATH="/usr/lib/ffmpeg/6.0/bin/:${PATH}"

View File

@@ -156,7 +156,9 @@ cameras:
#### Reolink Doorbell
- The reolink doorbell supports 2-way audio via go2rtc and other applications. It is important that the http-flv stream is still used for stability, a secondary rtsp stream can be added that will be using for the two way audio only.
+ The reolink doorbell supports two way audio via go2rtc and other applications. It is important that the http-flv stream is still used for stability, a secondary rtsp stream can be added that will be using for the two way audio only.
+ Ensure HTTP is enabled in the camera's advanced network settings. To use two way talk with Frigate, see the [Live view documentation](/configuration/live#two-way-talk).
```yaml
go2rtc:
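A sketch of the go2rtc layout this section describes, with the http-flv stream as the primary source and an rtsp sub stream supplying the return audio; the IP address, credentials, and stream name are placeholders rather than values from this change:

```yaml
go2rtc:
  streams:
    doorbell:
      # http-flv main stream, kept for stability
      - "ffmpeg:http://192.168.1.30/flv?port=1935&app=bcs&stream=channel0_main.bcs&user=admin&password=password#video=copy#audio=copy#audio=opus"
      # secondary rtsp stream, used only for the two way audio backchannel
      - "rtsp://admin:password@192.168.1.30/Preview_01_sub"
```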

View File

@@ -203,14 +203,13 @@ detectors:
  ov:
    type: openvino
    device: AUTO
-     model:
-       path: /openvino-model/ssdlite_mobilenet_v2.xml
model:
  width: 300
  height: 300
  input_tensor: nhwc
  input_pixel_format: bgr
+   path: /openvino-model/ssdlite_mobilenet_v2.xml
  labelmap_path: /openvino-model/coco_91cl_bkgr.txt
record:

View File

@@ -29,7 +29,7 @@ The default video and audio codec on your camera may not always be compatible wi
### Audio Support
- MSE Requires AAC audio, WebRTC requires PCMU/PCMA, or opus audio. If you want to support both MSE and WebRTC then your restream config needs to make sure both are enabled.
+ MSE Requires PCMA/PCMU or AAC audio, WebRTC requires PCMA/PCMU or opus audio. If you want to support both MSE and WebRTC then your restream config needs to make sure both are enabled.
```yaml
go2rtc:
@@ -138,3 +138,13 @@ services:
:::
See [go2rtc WebRTC docs](https://github.com/AlexxIT/go2rtc/tree/v1.8.3#module-webrtc) for more information about this.
+ ### Two way talk
+ For devices that support two way talk, Frigate can be configured to use the feature from the camera's Live view in the Web UI. You should:
+ - Set up go2rtc with [WebRTC](#webrtc-extra-configuration).
+ - Ensure you access Frigate via https (may require [opening port 8971](/frigate/installation/#ports)).
+ - For the Home Assistant Frigate card, [follow the docs](https://github.com/dermotduffy/frigate-hass-card?tab=readme-ov-file#using-2-way-audio) for the correct source.
+ To use the Reolink Doorbell with two way talk, you should use the [recommended Reolink configuration](/configuration/camera_specific#reolink-doorbell)
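Taken together, the audio and two way talk requirements amount to a small go2rtc block; a minimal sketch, assuming a camera at 192.168.1.20 and a Frigate host at 192.168.1.10 (both placeholders), with port 8555 published for WebRTC and Frigate reached over https on port 8971:

```yaml
go2rtc:
  streams:
    front_door:
      - rtsp://user:password@192.168.1.20:554/stream1
      # provide both AAC (for MSE) and Opus (for WebRTC) audio
      - "ffmpeg:front_door#audio=aac#audio=opus"
  webrtc:
    candidates:
      - 192.168.1.10:8555
      - stun:8555
```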

View File

@@ -144,7 +144,9 @@ detectors:
#### SSDLite MobileNet v2
- An OpenVINO model is provided in the container at `/openvino-model/ssdlite_mobilenet_v2.xml` and is used by this detector type by default. The model comes from Intel's Open Model Zoo [SSDLite MobileNet V2](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/ssdlite_mobilenet_v2) and is converted to an FP16 precision IR model. Use the model configuration shown below when using the OpenVINO detector with the default model.
+ An OpenVINO model is provided in the container at `/openvino-model/ssdlite_mobilenet_v2.xml` and is used by this detector type by default. The model comes from Intel's Open Model Zoo [SSDLite MobileNet V2](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/ssdlite_mobilenet_v2) and is converted to an FP16 precision IR model.
+ Use the model configuration shown below when using the OpenVINO detector with the default OpenVINO model:
```yaml
detectors:
@@ -254,6 +256,7 @@ yolov4x-mish-640
yolov7-tiny-288
yolov7-tiny-416
yolov7-640
+ yolov7-416
yolov7-320
yolov7x-640
yolov7x-320
@@ -282,6 +285,8 @@ The TensorRT detector can be selected by specifying `tensorrt` as the model type
The TensorRT detector uses `.trt` model files that are located in `/config/model_cache/tensorrt` by default. These model path and dimensions used will depend on which model you have generated.
+ Use the config below to work with generated TRT models:
```yaml
detectors:
  tensorrt:
@@ -501,11 +506,12 @@ detectors:
  cpu1:
    type: cpu
    num_threads: 3
-     model:
-       path: "/custom_model.tflite"
  cpu2:
    type: cpu
    num_threads: 3
-     model:
-       path: "/custom_model.tflite"
```
When using CPU detectors, you can add one CPU detector per camera. Adding more detectors than the number of cameras should not improve performance.
@@ -632,8 +638,6 @@ detectors:
  hailo8l:
    type: hailo8l
    device: PCIe
-     model:
-       path: /config/model_cache/h8l_cache/ssd_mobilenet_v1.hef
model:
  width: 300
@@ -641,4 +645,5 @@ model:
  input_tensor: nhwc
  input_pixel_format: bgr
  model_type: ssd
+   path: /config/model_cache/h8l_cache/ssd_mobilenet_v1.hef
```
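For reference, a complete TensorRT detector block in the new layout — a sketch assuming a generated yolov7-320 model (the path and dimensions depend on which model you actually generated):

```yaml
detectors:
  tensorrt:
    type: tensorrt
    device: 0 # selects the first GPU

model:
  path: /config/model_cache/tensorrt/yolov7-320.trt
  labelmap_path: /labelmap/coco-80.txt
  input_tensor: nchw
  input_pixel_format: rgb
  width: 320
  height: 320
```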

View File

@@ -52,7 +52,7 @@ detectors:
  # Required: name of the detector
  detector_name:
    # Required: type of the detector
-     # Frigate provided types include 'cpu', 'edgetpu', 'openvino' and 'tensorrt' (default: shown below)
+     # Frigate provides many types, see https://docs.frigate.video/configuration/object_detectors for more details (default: shown below)
    # Additional detector types can also be plugged in.
    # Detectors may require additional configuration.
    # Refer to the Detectors configuration page for more information.
@@ -117,25 +117,27 @@ auth:
  hash_iterations: 600000
# Optional: model modifications
+ # NOTE: The default values are for the EdgeTPU detector.
+ # Other detectors will require the model config to be set.
model:
-   # Optional: path to the model (default: automatic based on detector)
+   # Required: path to the model (default: automatic based on detector)
  path: /edgetpu_model.tflite
-   # Optional: path to the labelmap (default: shown below)
+   # Required: path to the labelmap (default: shown below)
  labelmap_path: /labelmap.txt
  # Required: Object detection model input width (default: shown below)
  width: 320
  # Required: Object detection model input height (default: shown below)
  height: 320
-   # Optional: Object detection model input colorspace
+   # Required: Object detection model input colorspace
  # Valid values are rgb, bgr, or yuv. (default: shown below)
  input_pixel_format: rgb
-   # Optional: Object detection model input tensor format
+   # Required: Object detection model input tensor format
  # Valid values are nhwc or nchw (default: shown below)
  input_tensor: nhwc
-   # Optional: Object detection model type, currently only used with the OpenVINO detector
+   # Required: Object detection model type, currently only used with the OpenVINO detector
  # Valid values are ssd, yolox, yolonas (default: shown below)
  model_type: ssd
-   # Optional: Label name modifications. These are merged into the standard labelmap.
+   # Required: Label name modifications. These are merged into the standard labelmap.
  labelmap:
    2: vehicle
  # Optional: Map of object labels to their attribute labels (default: depends on model)
@@ -546,6 +548,8 @@ genai:
# Optional: Restream configuration
# Uses https://github.com/AlexxIT/go2rtc (v1.9.2)
+ # NOTE: The default go2rtc API port (1984) must be used,
+ # changing this port for the integrated go2rtc instance is not supported.
go2rtc:
# Optional: Live stream configuration for WebUI.

View File

@@ -305,8 +305,15 @@ To install make sure you have the [community app plugin here](https://forums.unr
## Proxmox
- It is recommended to run Frigate in LXC, rather than in a VM, for maximum performance. The setup can be complex so be prepared to read the Proxmox and LXC documentation. Suggestions include:
+ [According to Proxmox documentation](https://pve.proxmox.com/pve-docs/pve-admin-guide.html#chapter_pct) it is recommended that you run application containers like Frigate inside a Proxmox QEMU VM. This will give you all the advantages of application containerization, while also providing the benefits that VMs offer, such as strong isolation from the host and the ability to live-migrate, which otherwise isn't possible with containers.
+ :::warning
+ If you choose to run Frigate via LXC in Proxmox the setup can be complex so be prepared to read the Proxmox and LXC documentation, Frigate does not officially support running inside of an LXC.
+ :::
+ Suggestions include:
- For Intel-based hardware acceleration, to allow access to the `/dev/dri/renderD128` device with major number 226 and minor number 128, add the following lines to the `/etc/pve/lxc/<id>.conf` LXC configuration:
  - `lxc.cgroup2.devices.allow: c 226:128 rwm`
  - `lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file`
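If you follow the Proxmox recommendation and run Frigate inside a QEMU VM, the container setup inside that VM is the standard Docker install; a minimal compose sketch, with placeholder paths, assuming the iGPU has been passed through to the VM so `/dev/dri/renderD128` exists there:

```yaml
services:
  frigate:
    container_name: frigate
    image: ghcr.io/blakeblackshear/frigate:stable
    restart: unless-stopped
    shm_size: "512mb"
    devices:
      - /dev/dri/renderD128:/dev/dri/renderD128
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ./config:/config
      - ./storage:/media/frigate
    ports:
      - "8971:8971"
      - "8554:8554"
```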

View File

@@ -47,7 +47,7 @@ that card.
## Configuration
- When configuring the integration, you will be asked for the `URL` of your Frigate instance which needs to be pointed at the internal unauthenticated port (`5000`) for your instance. This may look like `http://<host>:5000/`.
+ When configuring the integration, you will be asked for the `URL` of your Frigate instance which can be pointed at the internal unauthenticated port (`5000`) or the authenticated port (`8971`) for your instance. This may look like `http://<host>:5000/`.
### Docker Compose Examples
@@ -55,7 +55,7 @@ If you are running Home Assistant Core and Frigate with Docker Compose on the sa
#### Home Assistant running with host networking
- It is not recommended to run Frigate in host networking mode. In this example, you would use `http://172.17.0.1:5000` when configuring the integration.
+ It is not recommended to run Frigate in host networking mode. In this example, you would use `http://172.17.0.1:5000` or `http://172.17.0.1:8971` when configuring the integration.
```yaml
services:
@@ -75,7 +75,7 @@ services:
#### Home Assistant _not_ running with host networking or in a separate compose file
- In this example, you would use `http://frigate:5000` when configuring the integration. There is no need to map the port for the Frigate container.
+ In this example, it is recommended to connect to the authenticated port, for example, `http://frigate:8971` when configuring the integration. There is no need to map the port for the Frigate container.
```yaml
services:
@@ -103,14 +103,15 @@ If you are using HassOS with the addon, the URL should be one of the following d
| Frigate NVR (Full Access) | `http://ccab4aaf-frigate-fa:5000` |
| Frigate NVR Beta | `http://ccab4aaf-frigate-beta:5000` |
| Frigate NVR Beta (Full Access) | `http://ccab4aaf-frigate-fa-beta:5000` |
+ | Frigate NVR HailoRT Beta | `http://ccab4aaf-frigate-hailo-beta:5000` |
### Frigate running on a separate machine
- If you run Frigate on a separate device within your local network, Home Assistant will need access to port 5000.
+ If you run Frigate on a separate device within your local network, Home Assistant will need access to port 8971.
#### Local network
- Use `http://<frigate_device_ip>:5000` as the URL for the integration. If you want to protect access to port 5000, you can use firewall rules to limit access to the device running Home Assistant.
+ Use `http://<frigate_device_ip>:8971` as the URL for the integration so that authentication is required.
```yaml
services:
@@ -118,7 +119,7 @@ services:
    image: ghcr.io/blakeblackshear/frigate:stable
    ...
    ports:
-       - "5000:5000"
+       - "8971:8971"
    ...
```
@@ -195,12 +196,30 @@ To load a snapshot for a tracked object:
https://HA_URL/api/frigate/notifications/<event-id>/snapshot.jpg
```
- To load a video clip of a tracked object:
+ To load a video clip of a tracked object using an Android device:
```
https://HA_URL/api/frigate/notifications/<event-id>/clip.mp4
```
+ To load a video clip of a tracked object using an iOS device:
+ ```
+ https://HA_URL/api/frigate/notifications/<event-id>/master.m3u8
+ ```
+ To load a preview gif of a tracked object:
+ ```
+ https://HA_URL/api/frigate/notifications/<event-id>/event_preview.gif
+ ```
+ To load a preview gif of a review item:
+ ```
+ https://HA_URL/api/frigate/notifications/<review-id>/review_preview.gif
+ ```
<a name="streams"></a>
## RTSP stream
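These notification proxy URLs are typically consumed from a Home Assistant automation; a minimal sketch, assuming a `notify.mobile_app_phone` service and the companion app (both placeholders), that attaches the snapshot and clip of the tracked object:

```yaml
- alias: Notify when a person is detected
  trigger:
    - platform: mqtt
      topic: frigate/events
  condition:
    - condition: template
      value_template: "{{ trigger.payload_json['after']['label'] == 'person' }}"
  action:
    - service: notify.mobile_app_phone
      data:
        message: "Person detected on {{ trigger.payload_json['after']['camera'] }}"
        data:
          image: "https://HA_URL/api/frigate/notifications/{{ trigger.payload_json['after']['id'] }}/snapshot.jpg"
          video: "https://HA_URL/api/frigate/notifications/{{ trigger.payload_json['after']['id'] }}/clip.mp4"
```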

View File

@@ -3,7 +3,15 @@ id: recordings
title: Troubleshooting Recordings
---
- ### WARNING : Unable to keep up with recording segments in cache for camera. Keeping the 5 most recent segments out of 6 and discarding the rest...
+ ## I have Frigate configured for motion recording only, but it still seems to be recording even with no motion. Why?
+ You'll want to:
+ - Make sure your camera's timestamp is masked out with a motion mask. Even if there is no motion occurring in your scene, your motion settings may be sensitive enough to count your timestamp as motion.
+ - If you have audio detection enabled, keep in mind that audio that is heard above `min_volume` is considered motion.
+ - [Tune your motion detection settings](/configuration/motion_detection) either by editing your config file or by using the UI's Motion Tuner.
+ ## I see the message: WARNING : Unable to keep up with recording segments in cache for camera. Keeping the 5 most recent segments out of 6 and discarding the rest...
This error can be caused by a number of different issues. The first step in troubleshooting is to enable debug logging for recording. This will enable logging showing how long it takes for recordings to be moved from RAM cache to the disk.
@@ -40,6 +48,7 @@ On linux, some helpful tools/commands in diagnosing would be:
On modern linux kernels, the system will utilize some swap if enabled. Setting vm.swappiness=1 no longer means that the kernel will only swap in order to avoid OOM. To prevent any swapping inside a container, set allocations memory and memory+swap to be the same and disable swapping by setting the following docker/podman run parameters:
**Compose example**
```yaml
version: "3.9"
services:
@@ -54,6 +63,7 @@ services:
```
**Run command example**
```
--memory=<MAXRAM> --memory-swap=<MAXSWAP> --memory-swappiness=0
```
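For reference, the docker run flags above have compose-level equivalents; a minimal sketch assuming a 4 GB cap (pick values appropriate for your hardware; these keys apply to non-swarm docker compose):

```yaml
services:
  frigate:
    ...
    mem_limit: "4g"       # --memory
    memswap_limit: "4g"   # --memory-swap, equal to mem_limit so no swap is used
    mem_swappiness: 0     # --memory-swappiness
```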

View File

@@ -139,6 +139,8 @@ def config(request: Request):
mode="json", warnings="none", exclude_none=True mode="json", warnings="none", exclude_none=True
) )
for stream_name, stream in go2rtc.get("streams", {}).items(): for stream_name, stream in go2rtc.get("streams", {}).items():
if stream is None:
continue
if isinstance(stream, str): if isinstance(stream, str):
cleaned = clean_camera_user_pass(stream) cleaned = clean_camera_user_pass(stream)
else: else:

View File

@@ -133,6 +133,15 @@ def latest_frame(
"regions": params.regions, "regions": params.regions,
} }
quality = params.quality quality = params.quality
mime_type = extension
if extension == "png":
quality_params = None
elif extension == "webp":
quality_params = [int(cv2.IMWRITE_WEBP_QUALITY), quality]
else:
quality_params = [int(cv2.IMWRITE_JPEG_QUALITY), quality]
mime_type = "jpeg"
if camera_name in request.app.frigate_config.cameras: if camera_name in request.app.frigate_config.cameras:
frame = frame_processor.get_current_frame(camera_name, draw_options) frame = frame_processor.get_current_frame(camera_name, draw_options)
@@ -173,13 +182,11 @@ def latest_frame(
frame = cv2.resize(frame, dsize=(width, height), interpolation=cv2.INTER_AREA)
- ret, img = cv2.imencode(
-     f".{extension}", frame, [int(cv2.IMWRITE_WEBP_QUALITY), quality]
- )
+ ret, img = cv2.imencode(f".{extension}", frame, quality_params)
return Response(
    content=img.tobytes(),
-     media_type=f"image/{extension}",
-     headers={"Content-Type": f"image/{extension}", "Cache-Control": "no-store"},
+     media_type=f"image/{mime_type}",
+     headers={"Content-Type": f"image/{mime_type}", "Cache-Control": "no-store"},
)
elif camera_name == "birdseye" and request.app.frigate_config.birdseye.restream:
    frame = cv2.cvtColor(
@@ -192,13 +199,11 @@ def latest_frame(
frame = cv2.resize(frame, dsize=(width, height), interpolation=cv2.INTER_AREA)
- ret, img = cv2.imencode(
-     f".{extension}", frame, [int(cv2.IMWRITE_WEBP_QUALITY), quality]
- )
+ ret, img = cv2.imencode(f".{extension}", frame, quality_params)
return Response(
    content=img.tobytes(),
-     media_type=f"image/{extension}",
-     headers={"Content-Type": f"image/{extension}", "Cache-Control": "no-store"},
+     media_type=f"image/{mime_type}",
+     headers={"Content-Type": f"image/{mime_type}", "Cache-Control": "no-store"},
)
else:
    return JSONResponse(
@@ -241,6 +246,7 @@ def get_snapshot_from_recording(
recording: Recordings = recording_query.get()
time_in_segment = frame_time - recording.start_time
codec = "png" if format == "png" else "mjpeg"
+ mime_type = "png" if format == "png" else "jpeg"
config: FrigateConfig = request.app.frigate_config
image_data = get_image_from_recording(
@@ -257,7 +263,7 @@ def get_snapshot_from_recording(
    ),
    status_code=404,
)
- return Response(image_data, headers={"Content-Type": f"image/{format}"})
+ return Response(image_data, headers={"Content-Type": f"image/{mime_type}"})
except DoesNotExist:
    return JSONResponse(
        content={

View File

@@ -151,7 +151,7 @@ class WebPushClient(Communicator): # type: ignore[misc]
camera: str = payload["after"]["camera"]
title = f"{', '.join(sorted_objects).replace('_', ' ').title()}{' was' if state == 'end' else ''} detected in {', '.join(payload['after']['data']['zones']).replace('_', ' ').title()}"
message = f"Detected on {camera.replace('_', ' ').title()}"
- image = f'{payload["after"]["thumb_path"].replace("/media/frigate", "")}'
+ image = f"{payload['after']['thumb_path'].replace('/media/frigate', '')}"
# if event is ongoing open to live view otherwise open to recordings view
direct_url = f"/review?id={reviewId}" if state == "end" else f"/#{camera}"

View File

@@ -85,7 +85,7 @@ class ZoneConfig(BaseModel):
if explicit:
    self.coordinates = ",".join(
        [
-             f'{round(int(p.split(",")[0]) / frame_shape[1], 3)},{round(int(p.split(",")[1]) / frame_shape[0], 3)}'
+             f"{round(int(p.split(',')[0]) / frame_shape[1], 3)},{round(int(p.split(',')[1]) / frame_shape[0], 3)}"
            for p in coordinates
        ]
    )

View File

@@ -594,35 +594,27 @@ class FrigateConfig(FrigateBaseModel):
if isinstance(detector, dict)
else detector.model_dump(warnings="none")
)
- detector_config: DetectorConfig = adapter.validate_python(model_dict)
+ detector_config: BaseDetectorConfig = adapter.validate_python(model_dict)
- if detector_config.model is None:
-     detector_config.model = self.model.model_copy()
- else:
-     path = detector_config.model.path
-     detector_config.model = self.model.model_copy()
-     detector_config.model.path = path
-     if "path" not in model_dict or len(model_dict.keys()) > 1:
-         logger.warning(
-             "Customizing more than a detector model path is unsupported."
-         )
+ # users should not set model themselves
+ if detector_config.model:
+     detector_config.model = None
- merged_model = deep_merge(
-     detector_config.model.model_dump(exclude_unset=True, warnings="none"),
-     self.model.model_dump(exclude_unset=True, warnings="none"),
- )
+ model_config = self.model.model_dump(exclude_unset=True, warnings="none")
+ if detector_config.model_path:
+     model_config["path"] = detector_config.model_path
- if "path" not in merged_model:
+ if "path" not in model_config:
    if detector_config.type == "cpu":
-         merged_model["path"] = "/cpu_model.tflite"
+         model_config["path"] = "/cpu_model.tflite"
    elif detector_config.type == "edgetpu":
-         merged_model["path"] = "/edgetpu_model.tflite"
+         model_config["path"] = "/edgetpu_model.tflite"
- detector_config.model = ModelConfig.model_validate(merged_model)
- detector_config.model.check_and_load_plus_model(
-     self.plus_api, detector_config.type
- )
- detector_config.model.compute_model_hash()
+ model = ModelConfig.model_validate(model_config)
+ model.check_and_load_plus_model(self.plus_api, detector_config.type)
+ model.compute_model_hash()
+ detector_config.model = model
self.detectors[key] = detector_config
return self

View File

@@ -194,6 +194,9 @@ class BaseDetectorConfig(BaseModel):
model: Optional[ModelConfig] = Field(
    default=None, title="Detector specific model configuration."
)
+ model_path: Optional[str] = Field(
+     default=None, title="Detector specific model path."
+ )
model_config = ConfigDict(
    extra="allow", arbitrary_types_allowed=True, protected_namespaces=()
)

View File

@@ -219,19 +219,19 @@ class TensorRtDetector(DetectionApi):
]
def __init__(self, detector_config: TensorRTDetectorConfig):
-     assert (
-         TRT_SUPPORT
-     ), f"TensorRT libraries not found, {DETECTOR_KEY} detector not present"
+     assert TRT_SUPPORT, (
+         f"TensorRT libraries not found, {DETECTOR_KEY} detector not present"
+     )
    (cuda_err,) = cuda.cuInit(0)
-     assert (
-         cuda_err == cuda.CUresult.CUDA_SUCCESS
-     ), f"Failed to initialize cuda {cuda_err}"
+     assert cuda_err == cuda.CUresult.CUDA_SUCCESS, (
+         f"Failed to initialize cuda {cuda_err}"
+     )
    err, dev_count = cuda.cuDeviceGetCount()
    logger.debug(f"Num Available Devices: {dev_count}")
-     assert (
-         detector_config.device < dev_count
-     ), f"Invalid TensorRT Device Config. Device {detector_config.device} Invalid."
+     assert detector_config.device < dev_count, (
+         f"Invalid TensorRT Device Config. Device {detector_config.device} Invalid."
+     )
    err, self.cu_ctx = cuda.cuCtxCreate(
        cuda.CUctx_flags.CU_CTX_MAP_HOST, detector_config.device
    )

View File

@@ -121,8 +121,8 @@ class EventCleanup(threading.Thread):
events_to_update = []
- for batch in query.iterator():
-     events_to_update.extend([event.id for event in batch])
+ for event in query.iterator():
+     events_to_update.append(event.id)
    if len(events_to_update) >= CHUNK_SIZE:
        logger.debug(
            f"Updating {update_params} for {len(events_to_update)} events"
@@ -257,7 +257,7 @@ class EventCleanup(threading.Thread):
events_to_update = []
for event in query.iterator():
-     events_to_update.append(event)
+     events_to_update.append(event.id)
    if len(events_to_update) >= CHUNK_SIZE:
        logger.debug(

View File

@@ -50,16 +50,9 @@ class LibvaGpuSelector:
return "" return ""
FPS_VFR_PARAM = ( LIBAV_VERSION = int(os.getenv("LIBAVFORMAT_VERSION_MAJOR", "59") or "59")
"-fps_mode vfr" FPS_VFR_PARAM = "-fps_mode vfr" if LIBAV_VERSION >= 59 else "-vsync 2"
if int(os.getenv("LIBAVFORMAT_VERSION_MAJOR", "59") or "59") >= 59 TIMEOUT_PARAM = "-timeout" if LIBAV_VERSION >= 59 else "-stimeout"
else "-vsync 2"
)
TIMEOUT_PARAM = (
"-timeout"
if int(os.getenv("LIBAVFORMAT_VERSION_MAJOR", "59") or "59") >= 59
else "-stimeout"
)
_gpu_selector = LibvaGpuSelector() _gpu_selector = LibvaGpuSelector()
_user_agent_args = [ _user_agent_args = [
@@ -71,8 +64,8 @@ PRESETS_HW_ACCEL_DECODE = {
"preset-rpi-64-h264": "-c:v:1 h264_v4l2m2m", "preset-rpi-64-h264": "-c:v:1 h264_v4l2m2m",
"preset-rpi-64-h265": "-c:v:1 hevc_v4l2m2m", "preset-rpi-64-h265": "-c:v:1 hevc_v4l2m2m",
FFMPEG_HWACCEL_VAAPI: f"-hwaccel_flags allow_profile_mismatch -hwaccel vaapi -hwaccel_device {_gpu_selector.get_selected_gpu()} -hwaccel_output_format vaapi", FFMPEG_HWACCEL_VAAPI: f"-hwaccel_flags allow_profile_mismatch -hwaccel vaapi -hwaccel_device {_gpu_selector.get_selected_gpu()} -hwaccel_output_format vaapi",
"preset-intel-qsv-h264": f"-hwaccel qsv -qsv_device {_gpu_selector.get_selected_gpu()} -hwaccel_output_format qsv -c:v h264_qsv", "preset-intel-qsv-h264": f"-hwaccel qsv -qsv_device {_gpu_selector.get_selected_gpu()} -hwaccel_output_format qsv -c:v h264_qsv{' -bsf:v dump_extra' if LIBAV_VERSION >= 61 else ''}", # https://trac.ffmpeg.org/ticket/9766#comment:17
"preset-intel-qsv-h265": f"-load_plugin hevc_hw -hwaccel qsv -qsv_device {_gpu_selector.get_selected_gpu()} -hwaccel_output_format qsv -c:v hevc_qsv", "preset-intel-qsv-h265": f"-load_plugin hevc_hw -hwaccel qsv -qsv_device {_gpu_selector.get_selected_gpu()} -hwaccel_output_format qsv{' -bsf:v dump_extra' if LIBAV_VERSION >= 61 else ''}", # https://trac.ffmpeg.org/ticket/9766#comment:17
FFMPEG_HWACCEL_NVIDIA: "-hwaccel cuda -hwaccel_output_format cuda", FFMPEG_HWACCEL_NVIDIA: "-hwaccel cuda -hwaccel_output_format cuda",
"preset-jetson-h264": "-c:v h264_nvmpi -resize {1}x{2}", "preset-jetson-h264": "-c:v h264_nvmpi -resize {1}x{2}",
"preset-jetson-h265": "-c:v hevc_nvmpi -resize {1}x{2}", "preset-jetson-h265": "-c:v hevc_nvmpi -resize {1}x{2}",

View File

@@ -68,11 +68,13 @@ class PlusApi:
or self._token_data["expires"] - datetime.datetime.now().timestamp() < 60
):
    if self.key is None:
-         raise Exception("Plus API not activated")
+         raise Exception(
+             "Plus API key not set. See https://docs.frigate.video/integrations/plus#set-your-api-key"
+         )
    parts = self.key.split(":")
    r = requests.get(f"{self.host}/v1/auth/token", auth=(parts[0], parts[1]))
    if not r.ok:
-         raise Exception("Unable to refresh API token")
+         raise Exception(f"Unable to refresh API token: {r.text}")
    self._token_data = r.json()
def _get_authorization_header(self) -> dict:
@@ -116,15 +118,6 @@ class PlusApi:
logger.error(f"Failed to upload original: {r.status_code} {r.text}") logger.error(f"Failed to upload original: {r.status_code} {r.text}")
raise Exception(r.text) raise Exception(r.text)
# resize and submit annotate
files = {"file": get_jpg_bytes(image, 640, 70)}
data = presigned_urls["annotate"]["fields"]
data["content-type"] = "image/jpeg"
r = requests.post(presigned_urls["annotate"]["url"], files=files, data=data)
if not r.ok:
logger.error(f"Failed to upload annotate: {r.status_code} {r.text}")
raise Exception(r.text)
# resize and submit thumbnail # resize and submit thumbnail
files = {"file": get_jpg_bytes(image, 200, 70)} files = {"file": get_jpg_bytes(image, 200, 70)}
data = presigned_urls["thumbnail"]["fields"] data = presigned_urls["thumbnail"]["fields"]

View File

@@ -135,7 +135,7 @@ class PtzMotionEstimator:
try:
    logger.debug(
-         f"{camera}: Motion estimator transformation: {self.coord_transformations.rel_to_abs([[0,0]])}"
+         f"{camera}: Motion estimator transformation: {self.coord_transformations.rel_to_abs([[0, 0]])}"
    )
except Exception:
    pass
@@ -471,7 +471,7 @@ class PtzAutoTracker:
self.onvif.get_camera_status(camera)
logger.info(
-     f"Calibration for {camera} in progress: {round((step/num_steps)*100)}% complete"
+     f"Calibration for {camera} in progress: {round((step / num_steps) * 100)}% complete"
)
self.calibrating[camera] = False
@@ -690,7 +690,7 @@ class PtzAutoTracker:
f"{camera}: Predicted movement time: {self._predict_movement_time(camera, pan, tilt)}" f"{camera}: Predicted movement time: {self._predict_movement_time(camera, pan, tilt)}"
) )
logger.debug( logger.debug(
f"{camera}: Actual movement time: {self.ptz_metrics[camera].stop_time.value-self.ptz_metrics[camera].start_time.value}" f"{camera}: Actual movement time: {self.ptz_metrics[camera].stop_time.value - self.ptz_metrics[camera].start_time.value}"
) )
# save metrics for better estimate calculations # save metrics for better estimate calculations
@@ -983,10 +983,10 @@ class PtzAutoTracker:
logger.debug(f"{camera}: Zoom test: at max zoom: {at_max_zoom}") logger.debug(f"{camera}: Zoom test: at max zoom: {at_max_zoom}")
logger.debug(f"{camera}: Zoom test: at min zoom: {at_min_zoom}") logger.debug(f"{camera}: Zoom test: at min zoom: {at_min_zoom}")
logger.debug( logger.debug(
f'{camera}: Zoom test: zoom in hysteresis limit: {zoom_in_hysteresis} value: {AUTOTRACKING_ZOOM_IN_HYSTERESIS} original: {self.tracked_object_metrics[camera]["original_target_box"]} max: {self.tracked_object_metrics[camera]["max_target_box"]} target: {calculated_target_box if calculated_target_box else self.tracked_object_metrics[camera]["target_box"]}' f"{camera}: Zoom test: zoom in hysteresis limit: {zoom_in_hysteresis} value: {AUTOTRACKING_ZOOM_IN_HYSTERESIS} original: {self.tracked_object_metrics[camera]['original_target_box']} max: {self.tracked_object_metrics[camera]['max_target_box']} target: {calculated_target_box if calculated_target_box else self.tracked_object_metrics[camera]['target_box']}"
) )
logger.debug( logger.debug(
f'{camera}: Zoom test: zoom out hysteresis limit: {zoom_out_hysteresis} value: {AUTOTRACKING_ZOOM_OUT_HYSTERESIS} original: {self.tracked_object_metrics[camera]["original_target_box"]} max: {self.tracked_object_metrics[camera]["max_target_box"]} target: {calculated_target_box if calculated_target_box else self.tracked_object_metrics[camera]["target_box"]}' f"{camera}: Zoom test: zoom out hysteresis limit: {zoom_out_hysteresis} value: {AUTOTRACKING_ZOOM_OUT_HYSTERESIS} original: {self.tracked_object_metrics[camera]['original_target_box']} max: {self.tracked_object_metrics[camera]['max_target_box']} target: {calculated_target_box if calculated_target_box else self.tracked_object_metrics[camera]['target_box']}"
) )
# Zoom in conditions (and) # Zoom in conditions (and)
@@ -1069,7 +1069,7 @@ class PtzAutoTracker:
pan = ((centroid_x / camera_width) - 0.5) * 2
tilt = (0.5 - (centroid_y / camera_height)) * 2
- logger.debug(f'{camera}: Original box: {obj.obj_data["box"]}')
+ logger.debug(f"{camera}: Original box: {obj.obj_data['box']}")
logger.debug(f"{camera}: Predicted box: {tuple(predicted_box)}")
logger.debug(
    f"{camera}: Velocity: {tuple(np.round(average_velocity).flatten().astype(int))}"
@@ -1179,7 +1179,7 @@ class PtzAutoTracker:
)
zoom = (ratio - 1) / (ratio + 1)
logger.debug(
-     f'{camera}: limit: {self.tracked_object_metrics[camera]["max_target_box"]}, ratio: {ratio} zoom calculation: {zoom}'
+     f"{camera}: limit: {self.tracked_object_metrics[camera]['max_target_box']}, ratio: {ratio} zoom calculation: {zoom}"
)
if not result:
    # zoom out with special condition if zooming out because of velocity, edges, etc.

View File

@@ -449,7 +449,7 @@ class RecordingMaintainer(threading.Thread):
    return None
else:
    logger.debug(
-         f"Copied {file_path} in {datetime.datetime.now().timestamp()-start_frame} seconds."
+         f"Copied {file_path} in {datetime.datetime.now().timestamp() - start_frame} seconds."
    )
try:

View File

@@ -256,7 +256,7 @@ class ReviewSegmentMaintainer(threading.Thread):
elif object["sub_label"][0] in self.config.model.all_attributes: elif object["sub_label"][0] in self.config.model.all_attributes:
segment.detections[object["id"]] = object["sub_label"][0] segment.detections[object["id"]] = object["sub_label"][0]
else: else:
segment.detections[object["id"]] = f'{object["label"]}-verified' segment.detections[object["id"]] = f"{object['label']}-verified"
segment.sub_labels[object["id"]] = object["sub_label"][0] segment.sub_labels[object["id"]] = object["sub_label"][0]
# if object is alert label # if object is alert label
@@ -352,7 +352,7 @@ class ReviewSegmentMaintainer(threading.Thread):
elif object["sub_label"][0] in self.config.model.all_attributes: elif object["sub_label"][0] in self.config.model.all_attributes:
detections[object["id"]] = object["sub_label"][0] detections[object["id"]] = object["sub_label"][0]
else: else:
detections[object["id"]] = f'{object["label"]}-verified' detections[object["id"]] = f"{object['label']}-verified"
sub_labels[object["id"]] = object["sub_label"][0] sub_labels[object["id"]] = object["sub_label"][0]
# if object is alert label # if object is alert label
@@ -527,7 +527,9 @@ class ReviewSegmentMaintainer(threading.Thread):
if event_id in self.indefinite_events[camera]:
    self.indefinite_events[camera].pop(event_id)
-     current_segment.last_update = manual_info["end_time"]
+     if len(self.indefinite_events[camera]) == 0:
+         current_segment.last_update = manual_info["end_time"]
else:
    logger.error(
        f"Event with ID {event_id} has a set duration and can not be ended manually."

View File

@@ -72,8 +72,7 @@ class BaseServiceProcess(Service, ABC):
running = False
except TimeoutError:
    self.manager.logger.warning(
-         f"{self.name} is still running after "
-         f"{timeout} seconds. Killing."
+         f"{self.name} is still running after {timeout} seconds. Killing."
    )
if running:

View File

@@ -75,11 +75,11 @@ class TestConfig(unittest.TestCase):
"detectors": { "detectors": {
"cpu": { "cpu": {
"type": "cpu", "type": "cpu",
"model": {"path": "/cpu_model.tflite"}, "model_path": "/cpu_model.tflite",
}, },
"edgetpu": { "edgetpu": {
"type": "edgetpu", "type": "edgetpu",
"model": {"path": "/edgetpu_model.tflite"}, "model_path": "/edgetpu_model.tflite",
}, },
"openvino": { "openvino": {
"type": "openvino", "type": "openvino",

View File

@@ -339,7 +339,7 @@ class TrackedObject:
box[2],
box[3],
self.obj_data["label"],
- f"{int(self.thumbnail_data['score']*100)}% {int(self.thumbnail_data['area'])}",
+ f"{int(self.thumbnail_data['score'] * 100)}% {int(self.thumbnail_data['area'])}",
thickness=thickness,
color=color,
)

View File

@@ -13,7 +13,7 @@ from frigate.util.services import get_video_properties
logger = logging.getLogger(__name__)
- CURRENT_CONFIG_VERSION = "0.15-0"
+ CURRENT_CONFIG_VERSION = "0.15-1"
DEFAULT_CONFIG_FILE = "/config/config.yml"
@@ -77,6 +77,13 @@ def migrate_frigate_config(config_file: str):
        yaml.dump(new_config, f)
    previous_version = "0.15-0"
+ if previous_version < "0.15-1":
+     logger.info(f"Migrating frigate config from {previous_version} to 0.15-1...")
+     new_config = migrate_015_1(config)
+     with open(config_file, "w") as f:
+         yaml.dump(new_config, f)
+     previous_version = "0.15-1"
logger.info("Finished frigate config migration...")
@@ -267,6 +274,21 @@ def migrate_015_0(config: dict[str, dict[str, any]]) -> dict[str, dict[str, any]
return new_config
+ def migrate_015_1(config: dict[str, dict[str, any]]) -> dict[str, dict[str, any]]:
+     """Handle migrating frigate config to 0.15-1"""
+     new_config = config.copy()
+     for detector, detector_config in config.get("detectors", {}).items():
+         path = detector_config.get("model", {}).get("path")
+         if path:
+             new_config["detectors"][detector]["model_path"] = path
+             del new_config["detectors"][detector]["model"]
+     new_config["version"] = "0.15-1"
+     return new_config
def get_relative_coordinates(
    mask: Optional[Union[str, list]], frame_shape: tuple[int, int]
) -> Union[str, list]:
@@ -292,7 +314,7 @@ def get_relative_coordinates(
continue
rel_points.append(
    f"{round(x / frame_shape[1], 3)},{round(y / frame_shape[0], 3)}"
)
relative_masks.append(",".join(rel_points))
@@ -315,7 +337,7 @@ def get_relative_coordinates(
return []
rel_points.append(
    f"{round(x / frame_shape[1], 3)},{round(y / frame_shape[0], 3)}"
)
mask = ",".join(rel_points)

View File

@@ -390,12 +390,22 @@ def try_get_info(f, h, default="N/A"):
def get_nvidia_gpu_stats() -> dict[int, dict]:
+     names: dict[str, int] = {}
    results = {}
    try:
        nvml.nvmlInit()
        deviceCount = nvml.nvmlDeviceGetCount()
        for i in range(deviceCount):
            handle = nvml.nvmlDeviceGetHandleByIndex(i)
+             gpu_name = nvml.nvmlDeviceGetName(handle)
+             # handle case where user has multiple of same GPU
+             if gpu_name in names:
+                 names[gpu_name] += 1
+                 gpu_name += f" ({names.get(gpu_name)})"
+             else:
+                 names[gpu_name] = 1
            meminfo = try_get_info(nvml.nvmlDeviceGetMemoryInfo, handle)
            util = try_get_info(nvml.nvmlDeviceGetUtilizationRates, handle)
            enc = try_get_info(nvml.nvmlDeviceGetEncoderUtilization, handle)
@@ -423,7 +433,7 @@ def get_nvidia_gpu_stats() -> dict[int, dict]:
dec_util = -1
results[i] = {
-     "name": nvml.nvmlDeviceGetName(handle),
+     "name": gpu_name,
    "gpu": gpu_util,
    "mem": gpu_mem_util,
    "enc": enc_util,

View File

@@ -208,7 +208,7 @@ class ProcessClip:
box[2],
box[3],
obj["id"],
- f"{int(obj['score']*100)}% {int(obj['area'])}",
+ f"{int(obj['score'] * 100)}% {int(obj['area'])}",
thickness=thickness,
color=color,
)
@@ -227,7 +227,7 @@ class ProcessClip:
)
cv2.imwrite(
-     f"{os.path.join(debug_path, os.path.basename(self.clip_path))}.{int(frame_time*1000000)}.jpg",
+     f"{os.path.join(debug_path, os.path.basename(self.clip_path))}.{int(frame_time * 1000000)}.jpg",
    current_frame,
)
@@ -290,7 +290,7 @@ def process(path, label, output, debug_path):
    1 for result in results if result[1]["true_positive_objects"] > 0
)
print(
-     f"Objects were detected in {positive_count}/{len(results)}({positive_count/len(results)*100:.2f}%) clip(s)."
+     f"Objects were detected in {positive_count}/{len(results)}({positive_count / len(results) * 100:.2f}%) clip(s)."
)
if output:

View File

@@ -755,7 +755,11 @@ export function CameraGroupEdit({
<FormMessage />
{[
    ...(birdseyeConfig?.enabled ? ["birdseye"] : []),
-     ...Object.keys(config?.cameras ?? {}),
+     ...Object.keys(config?.cameras ?? {}).sort(
+         (a, b) =>
+             (config?.cameras[a]?.ui?.order ?? 0) -
+             (config?.cameras[b]?.ui?.order ?? 0),
+     ),
].map((camera) => (
    <FormControl key={camera}>
        <FilterSwitch

View File

@@ -477,7 +477,10 @@ export default function ObjectLifecycle({
</p>
{Array.isArray(item.data.box) &&
item.data.box.length >= 4
-     ? (item.data.box[2] / item.data.box[3]).toFixed(2)
+     ? (
+         aspectRatio *
+         (item.data.box[2] / item.data.box[3])
+     ).toFixed(2)
    : "N/A"}
</div>
</div>

View File

@@ -505,53 +505,53 @@ function ObjectDetailsTab({
<div className="flex w-full flex-row justify-end gap-2"> <div className="flex w-full flex-row justify-end gap-2">
{config?.cameras[search.camera].genai.enabled && search.end_time && ( {config?.cameras[search.camera].genai.enabled && search.end_time && (
<> <div className="flex items-start">
<div className="flex items-start">
<Button
className="rounded-r-none border-r-0"
aria-label="Regenerate tracked object description"
onClick={() => regenerateDescription("thumbnails")}
>
Regenerate
</Button>
{search.has_snapshot && (
<DropdownMenu>
<DropdownMenuTrigger asChild>
<Button
className="rounded-l-none border-l-0 px-2"
aria-label="Expand regeneration menu"
>
<FaChevronDown className="size-3" />
</Button>
</DropdownMenuTrigger>
<DropdownMenuContent>
<DropdownMenuItem
className="cursor-pointer"
aria-label="Regenerate from snapshot"
onClick={() => regenerateDescription("snapshot")}
>
Regenerate from Snapshot
</DropdownMenuItem>
<DropdownMenuItem
className="cursor-pointer"
aria-label="Regenerate from thumbnails"
onClick={() => regenerateDescription("thumbnails")}
>
Regenerate from Thumbnails
</DropdownMenuItem>
</DropdownMenuContent>
</DropdownMenu>
)}
</div>
<Button <Button
variant="select" className="rounded-r-none border-r-0"
aria-label="Save" aria-label="Regenerate tracked object description"
onClick={updateDescription} onClick={() => regenerateDescription("thumbnails")}
> >
Save Regenerate
</Button> </Button>
</> {search.has_snapshot && (
<DropdownMenu>
<DropdownMenuTrigger asChild>
<Button
className="rounded-l-none border-l-0 px-2"
aria-label="Expand regeneration menu"
>
<FaChevronDown className="size-3" />
</Button>
</DropdownMenuTrigger>
<DropdownMenuContent>
<DropdownMenuItem
className="cursor-pointer"
aria-label="Regenerate from snapshot"
onClick={() => regenerateDescription("snapshot")}
>
Regenerate from Snapshot
</DropdownMenuItem>
<DropdownMenuItem
className="cursor-pointer"
aria-label="Regenerate from thumbnails"
onClick={() => regenerateDescription("thumbnails")}
>
Regenerate from Thumbnails
</DropdownMenuItem>
</DropdownMenuContent>
</DropdownMenu>
)}
</div>
)}
{((config?.cameras[search.camera].genai.enabled && search.end_time) ||
!config?.cameras[search.camera].genai.enabled) && (
<Button
variant="select"
aria-label="Save"
onClick={updateDescription}
>
Save
</Button>
)} )}
</div> </div>
</div> </div>

View File

@@ -46,7 +46,7 @@ export default function SearchSettings({
const trigger = (
    <Button
        className="flex items-center gap-2"
-         aria-label="Search Settings"
+         aria-label="Explore Settings"
        size="sm"
    >
        <FaCog className="text-secondary-foreground" />

View File

@@ -328,12 +328,12 @@ export default function Explore() {
<div className="flex max-w-96 flex-col items-center justify-center space-y-3 rounded-lg bg-background/50 p-5"> <div className="flex max-w-96 flex-col items-center justify-center space-y-3 rounded-lg bg-background/50 p-5">
<div className="my-5 flex flex-col items-center gap-2 text-xl"> <div className="my-5 flex flex-col items-center gap-2 text-xl">
<TbExclamationCircle className="mb-3 size-10" /> <TbExclamationCircle className="mb-3 size-10" />
<div>Search Unavailable</div> <div>Explore is Unavailable</div>
</div> </div>
{embeddingsReindexing && allModelsLoaded && ( {embeddingsReindexing && allModelsLoaded && (
<> <>
<div className="text-center text-primary-variant"> <div className="text-center text-primary-variant">
Search can be used after tracked object embeddings have Explore can be used after tracked object embeddings have
finished reindexing. finished reindexing.
</div> </div>
<div className="pt-5 text-center"> <div className="pt-5 text-center">
@@ -384,8 +384,8 @@ export default function Explore() {
<>
    <div className="text-center text-primary-variant">
        Frigate is downloading the necessary embeddings models to
-         support semantic searching. This may take several minutes
-         depending on the speed of your network connection.
+         support the Semantic Search feature. This may take several
+         minutes depending on the speed of your network connection.
    </div>
    <div className="flex w-96 flex-col gap-2 py-5">
        <div className="flex flex-row items-center justify-center gap-2">

View File

@@ -40,7 +40,7 @@ import UiSettingsView from "@/views/settings/UiSettingsView";
const allSettingsViews = [
    "UI settings",
-     "search settings",
+     "explore settings",
    "camera settings",
    "masks / zones",
    "motion tuner",
@@ -175,7 +175,7 @@ export default function Settings() {
</div>
<div className="mt-2 flex h-full w-full flex-col items-start md:h-dvh md:pb-24">
    {page == "UI settings" && <UiSettingsView />}
-     {page == "search settings" && (
+     {page == "explore settings" && (
        <SearchSettingsView setUnsavedChanges={setUnsavedChanges} />
    )}
    {page == "debug" && (

View File

@@ -91,7 +91,7 @@ export default function SearchSettingsView({
)
.then((res) => {
    if (res.status === 200) {
-         toast.success("Search settings have been saved.", {
+         toast.success("Explore settings have been saved.", {
            position: "top-center",
        });
        setChangedValue(false);
@@ -128,7 +128,7 @@ export default function SearchSettingsView({
if (changedValue) {
    addMessage(
        "search_settings",
-         `Unsaved search settings changes`,
+         `Unsaved Explore settings changes`,
        undefined,
        "search_settings",
    );
@@ -140,7 +140,7 @@ export default function SearchSettingsView({
}, [changedValue]);
useEffect(() => {
-     document.title = "Search Settings - Frigate";
+     document.title = "Explore Settings - Frigate";
}, []);
if (!config) {
@@ -152,7 +152,7 @@ export default function SearchSettingsView({
<Toaster position="top-center" closeButton={true} /> <Toaster position="top-center" closeButton={true} />
<div className="scrollbar-container order-last mb-10 mt-2 flex h-full w-full flex-col overflow-y-auto rounded-lg border-[1px] border-secondary-foreground bg-background_alt p-2 md:order-none md:mb-0 md:mr-2 md:mt-0"> <div className="scrollbar-container order-last mb-10 mt-2 flex h-full w-full flex-col overflow-y-auto rounded-lg border-[1px] border-secondary-foreground bg-background_alt p-2 md:order-none md:mb-0 md:mr-2 md:mt-0">
<Heading as="h3" className="my-2"> <Heading as="h3" className="my-2">
Search Settings Explore Settings
</Heading> </Heading>
<Separator className="my-2 flex bg-secondary" /> <Separator className="my-2 flex bg-secondary" />
<Heading as="h4" className="my-2"> <Heading as="h4" className="my-2">
@@ -221,7 +221,7 @@ export default function SearchSettingsView({
<div className="text-md">Model Size</div> <div className="text-md">Model Size</div>
<div className="space-y-1 text-sm text-muted-foreground"> <div className="space-y-1 text-sm text-muted-foreground">
<p> <p>
The size of the model used for semantic search embeddings. The size of the model used for Semantic Search embeddings.
</p> </p>
<ul className="list-disc pl-5 text-sm"> <ul className="list-disc pl-5 text-sm">
<li> <li>