forked from Github/frigate
Compare commits
1 Commits
v0.15.0-be...dependabot

| Author | SHA1 | Date |
|---|---|---|
| dependabot | 123b67ab2e | |
@@ -61,7 +61,7 @@ def start(id, num_detections, detection_queue, event):
    object_detector.cleanup()
    print(f"{id} - Processed for {duration:.2f} seconds.")
    print(f"{id} - FPS: {object_detector.fps.eps():.2f}")
    print(f"{id} - Average frame processing time: {mean(frame_times) * 1000:.2f}ms")
    print(f"{id} - Average frame processing time: {mean(frame_times)*1000:.2f}ms")

######
@@ -22,6 +22,6 @@ ADD https://github.com/MarcA711/rknn-toolkit2/releases/download/v2.0.0/librknnrt

RUN rm -rf /usr/lib/btbn-ffmpeg/bin/ffmpeg
RUN rm -rf /usr/lib/btbn-ffmpeg/bin/ffprobe
ADD --chmod=111 https://github.com/MarcA711/Rockchip-FFmpeg-Builds/releases/download/6.1-7/ffmpeg /usr/lib/ffmpeg/6.0/bin/
ADD --chmod=111 https://github.com/MarcA711/Rockchip-FFmpeg-Builds/releases/download/6.1-7/ffprobe /usr/lib/ffmpeg/6.0/bin/
ADD --chmod=111 https://github.com/MarcA711/Rockchip-FFmpeg-Builds/releases/download/6.1-5/ffmpeg /usr/lib/ffmpeg/6.0/bin/
ADD --chmod=111 https://github.com/MarcA711/Rockchip-FFmpeg-Builds/releases/download/6.1-5/ffprobe /usr/lib/ffmpeg/6.0/bin/
ENV PATH="/usr/lib/ffmpeg/6.0/bin/:${PATH}"
@@ -156,9 +156,7 @@ cameras:

#### Reolink Doorbell

The reolink doorbell supports two way audio via go2rtc and other applications. It is important that the http-flv stream is still used for stability; a secondary rtsp stream can be added that will be used for the two way audio only.

Ensure HTTP is enabled in the camera's advanced network settings. To use two way talk with Frigate, see the [Live view documentation](/configuration/live#two-way-talk).
The reolink doorbell supports 2-way audio via go2rtc and other applications. It is important that the http-flv stream is still used for stability; a secondary rtsp stream can be added that will be used for the two way audio only.

```yaml
go2rtc:
```
@@ -203,13 +203,14 @@ detectors:
  ov:
    type: openvino
    device: AUTO
    model:
      path: /openvino-model/ssdlite_mobilenet_v2.xml

model:
  width: 300
  height: 300
  input_tensor: nhwc
  input_pixel_format: bgr
  path: /openvino-model/ssdlite_mobilenet_v2.xml
  labelmap_path: /openvino-model/coco_91cl_bkgr.txt

record:
@@ -29,7 +29,7 @@ The default video and audio codec on your camera may not always be compatible wi

### Audio Support

MSE requires PCMA/PCMU or AAC audio, WebRTC requires PCMA/PCMU or opus audio. If you want to support both MSE and WebRTC, then your restream config needs to make sure both are enabled.
MSE requires AAC audio, WebRTC requires PCMU/PCMA or opus audio. If you want to support both MSE and WebRTC, then your restream config needs to make sure both are enabled.

```yaml
go2rtc:
```
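For reference, a minimal restream sketch that satisfies both codec requirements (the camera URL and stream name are hypothetical; the `#audio=aac#audio=opus` suffixes are go2rtc's ffmpeg source syntax for adding transcoded audio tracks):

```yaml
go2rtc:
  streams:
    back:
      - rtsp://192.168.1.10:554/live          # hypothetical camera stream
      - "ffmpeg:back#audio=aac#audio=opus"    # add AAC (MSE) and opus (WebRTC) tracks
```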
@@ -138,13 +138,3 @@ services:

:::

See [go2rtc WebRTC docs](https://github.com/AlexxIT/go2rtc/tree/v1.8.3#module-webrtc) for more information about this.

### Two way talk

For devices that support two way talk, Frigate can be configured to use the feature from the camera's Live view in the Web UI. You should:

- Set up go2rtc with [WebRTC](#webrtc-extra-configuration).
- Ensure you access Frigate via https (may require [opening port 8971](/frigate/installation/#ports)).
- For the Home Assistant Frigate card, [follow the docs](https://github.com/dermotduffy/frigate-hass-card?tab=readme-ov-file#using-2-way-audio) for the correct source.

To use the Reolink Doorbell with two way talk, you should use the [recommended Reolink configuration](/configuration/camera_specific#reolink-doorbell).
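As a reference for the first bullet, a minimal go2rtc WebRTC sketch (the LAN address is a placeholder; `listen` and `candidates` are standard go2rtc webrtc options):

```yaml
go2rtc:
  webrtc:
    listen: ":8555"          # TCP and UDP port used for WebRTC
    candidates:
      - 192.168.1.20:8555    # hypothetical LAN address of the Frigate host
      - stun:8555            # discover the external address via STUN
```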
@@ -144,9 +144,7 @@ detectors:

#### SSDLite MobileNet v2

An OpenVINO model is provided in the container at `/openvino-model/ssdlite_mobilenet_v2.xml` and is used by this detector type by default. The model comes from Intel's Open Model Zoo [SSDLite MobileNet V2](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/ssdlite_mobilenet_v2) and is converted to an FP16 precision IR model.

Use the model configuration shown below when using the OpenVINO detector with the default OpenVINO model:
An OpenVINO model is provided in the container at `/openvino-model/ssdlite_mobilenet_v2.xml` and is used by this detector type by default. The model comes from Intel's Open Model Zoo [SSDLite MobileNet V2](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/ssdlite_mobilenet_v2) and is converted to an FP16 precision IR model. Use the model configuration shown below when using the OpenVINO detector with the default model.

```yaml
detectors:
```
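One plausible assembly of the values shown in the earlier OpenVINO hunk (whether `path` lives under the detector or under the global `model` key differs between the two sides of this diff, so treat this as a sketch rather than the canonical file):

```yaml
detectors:
  ov:
    type: openvino
    device: AUTO

model:
  width: 300
  height: 300
  input_tensor: nhwc
  input_pixel_format: bgr
  path: /openvino-model/ssdlite_mobilenet_v2.xml
  labelmap_path: /openvino-model/coco_91cl_bkgr.txt
```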
@@ -256,7 +254,6 @@ yolov4x-mish-640
yolov7-tiny-288
yolov7-tiny-416
yolov7-640
yolov7-416
yolov7-320
yolov7x-640
yolov7x-320
@@ -285,8 +282,6 @@ The TensorRT detector can be selected by specifying `tensorrt` as the model type

The TensorRT detector uses `.trt` model files that are located in `/config/model_cache/tensorrt` by default. The model path and dimensions used will depend on which model you have generated.

Use the config below to work with generated TRT models:

```yaml
detectors:
  tensorrt:
```
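A sketch of how that config typically continues, assuming a generated `yolov7-320.trt` file (the exact file name and dimensions depend on which model you built):

```yaml
detectors:
  tensorrt:
    type: tensorrt
    device: 0  # GPU index

model:
  path: /config/model_cache/tensorrt/yolov7-320.trt  # assumes this model was generated
  input_tensor: nchw
  input_pixel_format: rgb
  width: 320
  height: 320
```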
@@ -506,12 +501,11 @@ detectors:
```yaml
  cpu1:
    type: cpu
    num_threads: 3
    model:
      path: "/custom_model.tflite"
  cpu2:
    type: cpu
    num_threads: 3

model:
  path: "/custom_model.tflite"
```

When using CPU detectors, you can add one CPU detector per camera. Adding more detectors than the number of cameras should not improve performance.
@@ -638,6 +632,8 @@ detectors:
  hailo8l:
    type: hailo8l
    device: PCIe
    model:
      path: /config/model_cache/h8l_cache/ssd_mobilenet_v1.hef

model:
  width: 300
@@ -645,5 +641,4 @@ model:
```yaml
  input_tensor: nhwc
  input_pixel_format: bgr
  model_type: ssd
  path: /config/model_cache/h8l_cache/ssd_mobilenet_v1.hef
```
@@ -52,7 +52,7 @@ detectors:
  # Required: name of the detector
  detector_name:
    # Required: type of the detector
    # Frigate provides many types, see https://docs.frigate.video/configuration/object_detectors for more details (default: shown below)
    # Frigate provided types include 'cpu', 'edgetpu', 'openvino' and 'tensorrt' (default: shown below)
    # Additional detector types can also be plugged in.
    # Detectors may require additional configuration.
    # Refer to the Detectors configuration page for more information.
@@ -117,27 +117,25 @@ auth:
  hash_iterations: 600000

# Optional: model modifications
# NOTE: The default values are for the EdgeTPU detector.
# Other detectors will require the model config to be set.
model:
  # Required: path to the model (default: automatic based on detector)
  # Optional: path to the model (default: automatic based on detector)
  path: /edgetpu_model.tflite
  # Required: path to the labelmap (default: shown below)
  # Optional: path to the labelmap (default: shown below)
  labelmap_path: /labelmap.txt
  # Required: Object detection model input width (default: shown below)
  width: 320
  # Required: Object detection model input height (default: shown below)
  height: 320
  # Required: Object detection model input colorspace
  # Optional: Object detection model input colorspace
  # Valid values are rgb, bgr, or yuv. (default: shown below)
  input_pixel_format: rgb
  # Required: Object detection model input tensor format
  # Optional: Object detection model input tensor format
  # Valid values are nhwc or nchw (default: shown below)
  input_tensor: nhwc
  # Required: Object detection model type, currently only used with the OpenVINO detector
  # Optional: Object detection model type, currently only used with the OpenVINO detector
  # Valid values are ssd, yolox, yolonas (default: shown below)
  model_type: ssd
  # Required: Label name modifications. These are merged into the standard labelmap.
  # Optional: Label name modifications. These are merged into the standard labelmap.
  labelmap:
    2: vehicle
  # Optional: Map of object labels to their attribute labels (default: depends on model)
@@ -548,8 +546,6 @@ genai:

# Optional: Restream configuration
# Uses https://github.com/AlexxIT/go2rtc (v1.9.2)
# NOTE: The default go2rtc API port (1984) must be used,
# changing this port for the integrated go2rtc instance is not supported.
go2rtc:

# Optional: Live stream configuration for WebUI.
@@ -764,8 +760,6 @@ cameras:
        - cat
      # Optional: Restrict generation to objects that entered any of the listed zones (default: none, all zones qualify)
      required_zones: []
      # Optional: Save thumbnails sent to generative AI for review/debugging purposes (default: shown below)
      debug_save_thumbnails: False

# Optional
ui:
@@ -305,15 +305,8 @@ To install make sure you have the [community app plugin here](https://forums.unr

## Proxmox

[According to Proxmox documentation](https://pve.proxmox.com/pve-docs/pve-admin-guide.html#chapter_pct) it is recommended that you run application containers like Frigate inside a Proxmox QEMU VM. This will give you all the advantages of application containerization, while also providing the benefits that VMs offer, such as strong isolation from the host and the ability to live-migrate, which otherwise isn't possible with containers.
It is recommended to run Frigate in LXC, rather than in a VM, for maximum performance. The setup can be complex, so be prepared to read the Proxmox and LXC documentation. Suggestions include:

:::warning

If you choose to run Frigate via LXC in Proxmox, the setup can be complex, so be prepared to read the Proxmox and LXC documentation; Frigate does not officially support running inside of an LXC.

:::

Suggestions include:

- For Intel-based hardware acceleration, to allow access to the `/dev/dri/renderD128` device with major number 226 and minor number 128, add the following lines to the `/etc/pve/lxc/<id>.conf` LXC configuration (see the sketch after this list):
  - `lxc.cgroup2.devices.allow: c 226:128 rwm`
  - `lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file`
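Putting those two lines in context, a hypothetical `/etc/pve/lxc/<id>.conf` fragment might look like this (the container ID and the other options are placeholders):

```
# /etc/pve/lxc/<id>.conf (fragment; unrelated options omitted)
arch: amd64
ostype: debian
# pass the Intel render node through to the container
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
```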
@@ -47,7 +47,7 @@ that card.

## Configuration

When configuring the integration, you will be asked for the `URL` of your Frigate instance, which can be pointed at the internal unauthenticated port (`5000`) or the authenticated port (`8971`) for your instance. This may look like `http://<host>:5000/`.
When configuring the integration, you will be asked for the `URL` of your Frigate instance, which needs to be pointed at the internal unauthenticated port (`5000`) for your instance. This may look like `http://<host>:5000/`.

### Docker Compose Examples
@@ -55,7 +55,7 @@ If you are running Home Assistant Core and Frigate with Docker Compose on the sa

#### Home Assistant running with host networking

It is not recommended to run Frigate in host networking mode. In this example, you would use `http://172.17.0.1:5000` or `http://172.17.0.1:8971` when configuring the integration.
It is not recommended to run Frigate in host networking mode. In this example, you would use `http://172.17.0.1:5000` when configuring the integration.

```yaml
services:
```
@@ -75,7 +75,7 @@ services:

#### Home Assistant _not_ running with host networking or in a separate compose file

In this example, it is recommended to connect to the authenticated port, for example, `http://frigate:8971` when configuring the integration. There is no need to map the port for the Frigate container.
In this example, you would use `http://frigate:5000` when configuring the integration. There is no need to map the port for the Frigate container.

```yaml
services:
```
@@ -103,15 +103,14 @@ If you are using HassOS with the addon, the URL should be one of the following d

| Frigate NVR (Full Access) | `http://ccab4aaf-frigate-fa:5000` |
| Frigate NVR Beta | `http://ccab4aaf-frigate-beta:5000` |
| Frigate NVR Beta (Full Access) | `http://ccab4aaf-frigate-fa-beta:5000` |
| Frigate NVR HailoRT Beta | `http://ccab4aaf-frigate-hailo-beta:5000` |

### Frigate running on a separate machine

If you run Frigate on a separate device within your local network, Home Assistant will need access to port 8971.
If you run Frigate on a separate device within your local network, Home Assistant will need access to port 5000.

#### Local network

Use `http://<frigate_device_ip>:8971` as the URL for the integration so that authentication is required.
Use `http://<frigate_device_ip>:5000` as the URL for the integration. If you want to protect access to port 5000, you can use firewall rules to limit access to the device running Home Assistant.

```yaml
services:
```
@@ -119,7 +118,7 @@ services:
```yaml
    image: ghcr.io/blakeblackshear/frigate:stable
    ...
    ports:
      - "8971:8971"
      - "5000:5000"
    ...
```
@@ -196,30 +195,12 @@ To load a snapshot for a tracked object:

```
https://HA_URL/api/frigate/notifications/<event-id>/snapshot.jpg
```

To load a video clip of a tracked object using an Android device:
To load a video clip of a tracked object:

```
https://HA_URL/api/frigate/notifications/<event-id>/clip.mp4
```

To load a video clip of a tracked object using an iOS device:

```
https://HA_URL/api/frigate/notifications/<event-id>/master.m3u8
```

To load a preview gif of a tracked object:

```
https://HA_URL/api/frigate/notifications/<event-id>/event_preview.gif
```

To load a preview gif of a review item:

```
https://HA_URL/api/frigate/notifications/<review-id>/review_preview.gif
```
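To show how these endpoints are typically consumed, a hypothetical Home Assistant notification action using the snapshot URL (the notify target and `HA_URL` placeholder are assumptions, not part of this diff):

```yaml
automation:
  - alias: "Notify when a person is detected"
    trigger:
      - platform: mqtt
        topic: frigate/events
    condition: "{{ trigger.payload_json['after']['label'] == 'person' }}"
    action:
      - service: notify.mobile_app_my_phone  # hypothetical notify target
        data:
          message: "Person detected"
          data:
            image: "https://HA_URL/api/frigate/notifications/{{ trigger.payload_json['after']['id'] }}/snapshot.jpg"
```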
<a name="streams"></a>

## RTSP stream
@@ -3,15 +3,7 @@ id: recordings
title: Troubleshooting Recordings
---

## I have Frigate configured for motion recording only, but it still seems to be recording even with no motion. Why?

You'll want to:

- Make sure your camera's timestamp is masked out with a motion mask (see the sketch after this list). Even if there is no motion occurring in your scene, your motion settings may be sensitive enough to count your timestamp as motion.
- If you have audio detection enabled, keep in mind that audio that is heard above `min_volume` is considered motion.
- [Tune your motion detection settings](/configuration/motion_detection) either by editing your config file or by using the UI's Motion Tuner.
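As an illustration of the first bullet, a hypothetical motion mask covering a timestamp overlay in the top-left corner (the camera name and coordinates are placeholders; coordinates are relative in 0.15):

```yaml
cameras:
  front_door:  # hypothetical camera name
    motion:
      mask:
        - 0.000,0.000,0.270,0.000,0.270,0.060,0.000,0.060  # box over the timestamp
```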
## I see the message: WARNING : Unable to keep up with recording segments in cache for camera. Keeping the 5 most recent segments out of 6 and discarding the rest...
### WARNING : Unable to keep up with recording segments in cache for camera. Keeping the 5 most recent segments out of 6 and discarding the rest...

This error can be caused by a number of different issues. The first step in troubleshooting is to enable debug logging for recording. This will enable logging showing how long it takes for recordings to be moved from the RAM cache to disk.
@@ -48,7 +40,6 @@ On linux, some helpful tools/commands in diagnosing would be:

On modern linux kernels, the system will utilize some swap if enabled. Setting vm.swappiness=1 no longer means that the kernel will only swap in order to avoid OOM. To prevent any swapping inside a container, set the memory and memory+swap allocations to be the same and disable swapping by setting the following docker/podman run parameters:

**Compose example**

```yaml
version: "3.9"
services:
```
@@ -63,7 +54,6 @@ services:

**Run command example**

```
--memory=<MAXRAM> --memory-swap=<MAXSWAP> --memory-swappiness=0
```
@@ -139,8 +139,6 @@ def config(request: Request):
        mode="json", warnings="none", exclude_none=True
    )
    for stream_name, stream in go2rtc.get("streams", {}).items():
        if stream is None:
            continue
        if isinstance(stream, str):
            cleaned = clean_camera_user_pass(stream)
        else:
@@ -133,15 +133,6 @@ def latest_frame(
        "regions": params.regions,
    }
    quality = params.quality
    mime_type = extension

    if extension == "png":
        quality_params = None
    elif extension == "webp":
        quality_params = [int(cv2.IMWRITE_WEBP_QUALITY), quality]
    else:
        quality_params = [int(cv2.IMWRITE_JPEG_QUALITY), quality]
        mime_type = "jpeg"

    if camera_name in request.app.frigate_config.cameras:
        frame = frame_processor.get_current_frame(camera_name, draw_options)
@@ -182,11 +173,13 @@ def latest_frame(

        frame = cv2.resize(frame, dsize=(width, height), interpolation=cv2.INTER_AREA)

        ret, img = cv2.imencode(f".{extension}", frame, quality_params)
        ret, img = cv2.imencode(
            f".{extension}", frame, [int(cv2.IMWRITE_WEBP_QUALITY), quality]
        )
        return Response(
            content=img.tobytes(),
            media_type=f"image/{mime_type}",
            headers={"Content-Type": f"image/{mime_type}", "Cache-Control": "no-store"},
            media_type=f"image/{extension}",
            headers={"Content-Type": f"image/{extension}", "Cache-Control": "no-store"},
        )
    elif camera_name == "birdseye" and request.app.frigate_config.birdseye.restream:
        frame = cv2.cvtColor(
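The fix in these two hunks replaces a hard-coded WebP flag with per-format encode parameters. A standalone sketch of that pattern (frame and quality values are placeholders):

```python
import cv2
import numpy as np

def encode_frame(frame: np.ndarray, extension: str, quality: int) -> bytes:
    """Encode a BGR frame, picking quality flags that match the container."""
    if extension == "png":
        quality_params = []  # PNG is lossless; no quality flag needed
    elif extension == "webp":
        quality_params = [int(cv2.IMWRITE_WEBP_QUALITY), quality]
    else:
        quality_params = [int(cv2.IMWRITE_JPEG_QUALITY), quality]
    ok, img = cv2.imencode(f".{extension}", frame, quality_params)
    if not ok:
        raise ValueError(f"failed to encode frame as {extension}")
    return img.tobytes()

# usage: encode_frame(np.zeros((480, 640, 3), np.uint8), "webp", 70)
```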
@@ -199,11 +192,13 @@ def latest_frame(

        frame = cv2.resize(frame, dsize=(width, height), interpolation=cv2.INTER_AREA)

        ret, img = cv2.imencode(f".{extension}", frame, quality_params)
        ret, img = cv2.imencode(
            f".{extension}", frame, [int(cv2.IMWRITE_WEBP_QUALITY), quality]
        )
        return Response(
            content=img.tobytes(),
            media_type=f"image/{mime_type}",
            headers={"Content-Type": f"image/{mime_type}", "Cache-Control": "no-store"},
            media_type=f"image/{extension}",
            headers={"Content-Type": f"image/{extension}", "Cache-Control": "no-store"},
        )
    else:
        return JSONResponse(
@@ -246,7 +241,6 @@ def get_snapshot_from_recording(
        recording: Recordings = recording_query.get()
        time_in_segment = frame_time - recording.start_time
        codec = "png" if format == "png" else "mjpeg"
        mime_type = "png" if format == "png" else "jpeg"
        config: FrigateConfig = request.app.frigate_config

        image_data = get_image_from_recording(
@@ -263,7 +257,7 @@ def get_snapshot_from_recording(
                ),
                status_code=404,
            )
        return Response(image_data, headers={"Content-Type": f"image/{mime_type}"})
        return Response(image_data, headers={"Content-Type": f"image/{format}"})
    except DoesNotExist:
        return JSONResponse(
            content={
@@ -151,7 +151,7 @@ class WebPushClient(Communicator):  # type: ignore[misc]
        camera: str = payload["after"]["camera"]
        title = f"{', '.join(sorted_objects).replace('_', ' ').title()}{' was' if state == 'end' else ''} detected in {', '.join(payload['after']['data']['zones']).replace('_', ' ').title()}"
        message = f"Detected on {camera.replace('_', ' ').title()}"
        image = f"{payload['after']['thumb_path'].replace('/media/frigate', '')}"
        image = f'{payload["after"]["thumb_path"].replace("/media/frigate", "")}'

        # if event is ongoing open to live view otherwise open to recordings view
        direct_url = f"/review?id={reviewId}" if state == "end" else f"/#{camera}"
@@ -38,10 +38,6 @@ class GenAICameraConfig(BaseModel):
        default_factory=list,
        title="List of required zones to be entered in order to run generative AI.",
    )
    debug_save_thumbnails: bool = Field(
        default=False,
        title="Save thumbnails sent to generative AI for debugging purposes.",
    )

    @field_validator("required_zones", mode="before")
    @classmethod
@@ -85,7 +85,7 @@ class ZoneConfig(BaseModel):
        if explicit:
            self.coordinates = ",".join(
                [
                    f"{round(int(p.split(',')[0]) / frame_shape[1], 3)},{round(int(p.split(',')[1]) / frame_shape[0], 3)}"
                    f'{round(int(p.split(",")[0]) / frame_shape[1], 3)},{round(int(p.split(",")[1]) / frame_shape[0], 3)}'
                    for p in coordinates
                ]
            )
@@ -594,27 +594,35 @@ class FrigateConfig(FrigateBaseModel):
                if isinstance(detector, dict)
                else detector.model_dump(warnings="none")
            )
            detector_config: BaseDetectorConfig = adapter.validate_python(model_dict)
            detector_config: DetectorConfig = adapter.validate_python(model_dict)
            if detector_config.model is None:
                detector_config.model = self.model.model_copy()
            else:
                path = detector_config.model.path
                detector_config.model = self.model.model_copy()
                detector_config.model.path = path

            # users should not set model themselves
            if detector_config.model:
                detector_config.model = None
                if "path" not in model_dict or len(model_dict.keys()) > 1:
                    logger.warning(
                        "Customizing more than a detector model path is unsupported."
                    )

            model_config = self.model.model_dump(exclude_unset=True, warnings="none")
            merged_model = deep_merge(
                detector_config.model.model_dump(exclude_unset=True, warnings="none"),
                self.model.model_dump(exclude_unset=True, warnings="none"),
            )

            if detector_config.model_path:
                model_config["path"] = detector_config.model_path

            if "path" not in model_config:
            if "path" not in merged_model:
                if detector_config.type == "cpu":
                    model_config["path"] = "/cpu_model.tflite"
                    merged_model["path"] = "/cpu_model.tflite"
                elif detector_config.type == "edgetpu":
                    model_config["path"] = "/edgetpu_model.tflite"
                    merged_model["path"] = "/edgetpu_model.tflite"

            model = ModelConfig.model_validate(model_config)
            model.check_and_load_plus_model(self.plus_api, detector_config.type)
            model.compute_model_hash()
            detector_config.model = model
            detector_config.model = ModelConfig.model_validate(merged_model)
            detector_config.model.check_and_load_plus_model(
                self.plus_api, detector_config.type
            )
            detector_config.model.compute_model_hash()
            self.detectors[key] = detector_config

        return self
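One side of this hunk merges the detector-level model dict over the global one via `deep_merge`. A minimal sketch of what such a helper does (an illustration only, not Frigate's actual `deep_merge` implementation; it assumes the first argument's values take precedence):

```python
def deep_merge(dct1: dict, dct2: dict) -> dict:
    """Return a new dict where dct1's values win over dct2's, recursing into nested dicts."""
    merged = dict(dct2)
    for key, value in dct1.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(value, merged[key])
        else:
            merged[key] = value
    return merged

# usage: deep_merge({"path": "/custom.tflite"}, {"path": "/default.tflite", "width": 320})
# -> {"path": "/custom.tflite", "width": 320}
```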
@@ -194,9 +194,6 @@ class BaseDetectorConfig(BaseModel):
    model: Optional[ModelConfig] = Field(
        default=None, title="Detector specific model configuration."
    )
    model_path: Optional[str] = Field(
        default=None, title="Detector specific model path."
    )
    model_config = ConfigDict(
        extra="allow", arbitrary_types_allowed=True, protected_namespaces=()
    )
@@ -219,19 +219,19 @@ class TensorRtDetector(DetectionApi):
    ]

    def __init__(self, detector_config: TensorRTDetectorConfig):
        assert TRT_SUPPORT, (
            f"TensorRT libraries not found, {DETECTOR_KEY} detector not present"
        )
        assert (
            TRT_SUPPORT
        ), f"TensorRT libraries not found, {DETECTOR_KEY} detector not present"

        (cuda_err,) = cuda.cuInit(0)
        assert cuda_err == cuda.CUresult.CUDA_SUCCESS, (
            f"Failed to initialize cuda {cuda_err}"
        )
        assert (
            cuda_err == cuda.CUresult.CUDA_SUCCESS
        ), f"Failed to initialize cuda {cuda_err}"
        err, dev_count = cuda.cuDeviceGetCount()
        logger.debug(f"Num Available Devices: {dev_count}")
        assert detector_config.device < dev_count, (
            f"Invalid TensorRT Device Config. Device {detector_config.device} Invalid."
        )
        assert (
            detector_config.device < dev_count
        ), f"Invalid TensorRT Device Config. Device {detector_config.device} Invalid."
        err, self.cu_ctx = cuda.cuCtxCreate(
            cuda.CUctx_flags.CU_CTX_MAP_HOST, detector_config.device
        )
@@ -5,7 +5,6 @@ import logging
import os
import threading
from multiprocessing.synchronize import Event as MpEvent
from pathlib import Path
from typing import Optional

import cv2
@@ -218,8 +217,6 @@ class EmbeddingMaintainer(threading.Thread):
                _, buffer = cv2.imencode(".jpg", cropped_image)
                snapshot_image = buffer.tobytes()

                num_thumbnails = len(self.tracked_events.get(event_id, []))

                embed_image = (
                    [snapshot_image]
                    if event.has_snapshot and camera_config.genai.use_snapshot
@@ -228,37 +225,11 @@ class EmbeddingMaintainer(threading.Thread):
                        data["thumbnail"]
                        for data in self.tracked_events[event_id]
                    ]
                    if num_thumbnails > 0
                    if len(self.tracked_events.get(event_id, [])) > 0
                    else [thumbnail]
                )
            )

            if camera_config.genai.debug_save_thumbnails and num_thumbnails > 0:
                logger.debug(
                    f"Saving {num_thumbnails} thumbnails for event {event.id}"
                )

                Path(
                    os.path.join(CLIPS_DIR, f"genai-requests/{event.id}")
                ).mkdir(parents=True, exist_ok=True)

                for idx, data in enumerate(self.tracked_events[event_id], 1):
                    jpg_bytes: bytes = data["thumbnail"]

                    if jpg_bytes is None:
                        logger.warning(
                            f"Unable to save thumbnail {idx} for {event.id}."
                        )
                    else:
                        with open(
                            os.path.join(
                                CLIPS_DIR,
                                f"genai-requests/{event.id}/{idx}.jpg",
                            ),
                            "wb",
                        ) as j:
                            j.write(jpg_bytes)

            # Generate the description. Call happens in a thread since it is network bound.
            threading.Thread(
                target=self._embed_description,
@@ -121,8 +121,8 @@ class EventCleanup(threading.Thread):

        events_to_update = []

        for event in query.iterator():
            events_to_update.append(event.id)
        for batch in query.iterator():
            events_to_update.extend([event.id for event in batch])
            if len(events_to_update) >= CHUNK_SIZE:
                logger.debug(
                    f"Updating {update_params} for {len(events_to_update)} events"
@@ -257,7 +257,7 @@ class EventCleanup(threading.Thread):
        events_to_update = []

        for event in query.iterator():
            events_to_update.append(event.id)
            events_to_update.append(event)

            if len(events_to_update) >= CHUNK_SIZE:
                logger.debug(
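Both EventCleanup hunks follow the same chunked-flush idea: accumulate items from a lazy iterator and flush once a threshold is reached, so memory stays bounded. A generic sketch of the pattern (CHUNK_SIZE and the flush callback are placeholders):

```python
from typing import Callable, Iterable, TypeVar

T = TypeVar("T")
CHUNK_SIZE = 50  # hypothetical batch size

def flush_in_chunks(items: Iterable[T], flush: Callable[[list[T]], None]) -> None:
    """Accumulate items and flush them in CHUNK_SIZE batches to bound memory use."""
    batch: list[T] = []
    for item in items:
        batch.append(item)
        if len(batch) >= CHUNK_SIZE:
            flush(batch)
            batch = []
    if batch:  # flush any remainder
        flush(batch)

# usage: flush_in_chunks(range(120), lambda b: print(f"updating {len(b)} events"))
```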
@@ -50,9 +50,16 @@ class LibvaGpuSelector:
        return ""


LIBAV_VERSION = int(os.getenv("LIBAVFORMAT_VERSION_MAJOR", "59") or "59")
FPS_VFR_PARAM = "-fps_mode vfr" if LIBAV_VERSION >= 59 else "-vsync 2"
TIMEOUT_PARAM = "-timeout" if LIBAV_VERSION >= 59 else "-stimeout"
FPS_VFR_PARAM = (
    "-fps_mode vfr"
    if int(os.getenv("LIBAVFORMAT_VERSION_MAJOR", "59") or "59") >= 59
    else "-vsync 2"
)
TIMEOUT_PARAM = (
    "-timeout"
    if int(os.getenv("LIBAVFORMAT_VERSION_MAJOR", "59") or "59") >= 59
    else "-stimeout"
)

_gpu_selector = LibvaGpuSelector()
_user_agent_args = [
@@ -64,8 +71,8 @@ PRESETS_HW_ACCEL_DECODE = {
    "preset-rpi-64-h264": "-c:v:1 h264_v4l2m2m",
    "preset-rpi-64-h265": "-c:v:1 hevc_v4l2m2m",
    FFMPEG_HWACCEL_VAAPI: f"-hwaccel_flags allow_profile_mismatch -hwaccel vaapi -hwaccel_device {_gpu_selector.get_selected_gpu()} -hwaccel_output_format vaapi",
    "preset-intel-qsv-h264": f"-hwaccel qsv -qsv_device {_gpu_selector.get_selected_gpu()} -hwaccel_output_format qsv -c:v h264_qsv{' -bsf:v dump_extra' if LIBAV_VERSION >= 61 else ''}",  # https://trac.ffmpeg.org/ticket/9766#comment:17
    "preset-intel-qsv-h265": f"-load_plugin hevc_hw -hwaccel qsv -qsv_device {_gpu_selector.get_selected_gpu()} -hwaccel_output_format qsv{' -bsf:v dump_extra' if LIBAV_VERSION >= 61 else ''}",  # https://trac.ffmpeg.org/ticket/9766#comment:17
    "preset-intel-qsv-h264": f"-hwaccel qsv -qsv_device {_gpu_selector.get_selected_gpu()} -hwaccel_output_format qsv -c:v h264_qsv",
    "preset-intel-qsv-h265": f"-load_plugin hevc_hw -hwaccel qsv -qsv_device {_gpu_selector.get_selected_gpu()} -hwaccel_output_format qsv -c:v hevc_qsv",
    FFMPEG_HWACCEL_NVIDIA: "-hwaccel cuda -hwaccel_output_format cuda",
    "preset-jetson-h264": "-c:v h264_nvmpi -resize {1}x{2}",
    "preset-jetson-h265": "-c:v hevc_nvmpi -resize {1}x{2}",
@@ -68,13 +68,11 @@ class PlusApi:
            or self._token_data["expires"] - datetime.datetime.now().timestamp() < 60
        ):
            if self.key is None:
                raise Exception(
                    "Plus API key not set. See https://docs.frigate.video/integrations/plus#set-your-api-key"
                )
                raise Exception("Plus API not activated")
            parts = self.key.split(":")
            r = requests.get(f"{self.host}/v1/auth/token", auth=(parts[0], parts[1]))
            if not r.ok:
                raise Exception(f"Unable to refresh API token: {r.text}")
                raise Exception("Unable to refresh API token")
            self._token_data = r.json()

    def _get_authorization_header(self) -> dict:
@@ -118,6 +116,15 @@ class PlusApi:
            logger.error(f"Failed to upload original: {r.status_code} {r.text}")
            raise Exception(r.text)

        # resize and submit annotate
        files = {"file": get_jpg_bytes(image, 640, 70)}
        data = presigned_urls["annotate"]["fields"]
        data["content-type"] = "image/jpeg"
        r = requests.post(presigned_urls["annotate"]["url"], files=files, data=data)
        if not r.ok:
            logger.error(f"Failed to upload annotate: {r.status_code} {r.text}")
            raise Exception(r.text)

        # resize and submit thumbnail
        files = {"file": get_jpg_bytes(image, 200, 70)}
        data = presigned_urls["thumbnail"]["fields"]
@@ -135,7 +135,7 @@ class PtzMotionEstimator:

            try:
                logger.debug(
                    f"{camera}: Motion estimator transformation: {self.coord_transformations.rel_to_abs([[0, 0]])}"
                    f"{camera}: Motion estimator transformation: {self.coord_transformations.rel_to_abs([[0,0]])}"
                )
            except Exception:
                pass
@@ -471,7 +471,7 @@ class PtzAutoTracker:
                self.onvif.get_camera_status(camera)

                logger.info(
                    f"Calibration for {camera} in progress: {round((step / num_steps) * 100)}% complete"
                    f"Calibration for {camera} in progress: {round((step/num_steps)*100)}% complete"
                )

        self.calibrating[camera] = False
@@ -690,7 +690,7 @@ class PtzAutoTracker:
                f"{camera}: Predicted movement time: {self._predict_movement_time(camera, pan, tilt)}"
            )
            logger.debug(
                f"{camera}: Actual movement time: {self.ptz_metrics[camera].stop_time.value - self.ptz_metrics[camera].start_time.value}"
                f"{camera}: Actual movement time: {self.ptz_metrics[camera].stop_time.value-self.ptz_metrics[camera].start_time.value}"
            )

            # save metrics for better estimate calculations
@@ -983,10 +983,10 @@ class PtzAutoTracker:
        logger.debug(f"{camera}: Zoom test: at max zoom: {at_max_zoom}")
        logger.debug(f"{camera}: Zoom test: at min zoom: {at_min_zoom}")
        logger.debug(
            f"{camera}: Zoom test: zoom in hysteresis limit: {zoom_in_hysteresis} value: {AUTOTRACKING_ZOOM_IN_HYSTERESIS} original: {self.tracked_object_metrics[camera]['original_target_box']} max: {self.tracked_object_metrics[camera]['max_target_box']} target: {calculated_target_box if calculated_target_box else self.tracked_object_metrics[camera]['target_box']}"
            f'{camera}: Zoom test: zoom in hysteresis limit: {zoom_in_hysteresis} value: {AUTOTRACKING_ZOOM_IN_HYSTERESIS} original: {self.tracked_object_metrics[camera]["original_target_box"]} max: {self.tracked_object_metrics[camera]["max_target_box"]} target: {calculated_target_box if calculated_target_box else self.tracked_object_metrics[camera]["target_box"]}'
        )
        logger.debug(
            f"{camera}: Zoom test: zoom out hysteresis limit: {zoom_out_hysteresis} value: {AUTOTRACKING_ZOOM_OUT_HYSTERESIS} original: {self.tracked_object_metrics[camera]['original_target_box']} max: {self.tracked_object_metrics[camera]['max_target_box']} target: {calculated_target_box if calculated_target_box else self.tracked_object_metrics[camera]['target_box']}"
            f'{camera}: Zoom test: zoom out hysteresis limit: {zoom_out_hysteresis} value: {AUTOTRACKING_ZOOM_OUT_HYSTERESIS} original: {self.tracked_object_metrics[camera]["original_target_box"]} max: {self.tracked_object_metrics[camera]["max_target_box"]} target: {calculated_target_box if calculated_target_box else self.tracked_object_metrics[camera]["target_box"]}'
        )

        # Zoom in conditions (and)
@@ -1069,7 +1069,7 @@ class PtzAutoTracker:
            pan = ((centroid_x / camera_width) - 0.5) * 2
            tilt = (0.5 - (centroid_y / camera_height)) * 2

            logger.debug(f"{camera}: Original box: {obj.obj_data['box']}")
            logger.debug(f'{camera}: Original box: {obj.obj_data["box"]}')
            logger.debug(f"{camera}: Predicted box: {tuple(predicted_box)}")
            logger.debug(
                f"{camera}: Velocity: {tuple(np.round(average_velocity).flatten().astype(int))}"
@@ -1179,7 +1179,7 @@ class PtzAutoTracker:
            )
            zoom = (ratio - 1) / (ratio + 1)
            logger.debug(
                f"{camera}: limit: {self.tracked_object_metrics[camera]['max_target_box']}, ratio: {ratio} zoom calculation: {zoom}"
                f'{camera}: limit: {self.tracked_object_metrics[camera]["max_target_box"]}, ratio: {ratio} zoom calculation: {zoom}'
            )
            if not result:
                # zoom out with special condition if zooming out because of velocity, edges, etc.
@@ -449,7 +449,7 @@ class RecordingMaintainer(threading.Thread):
                return None
        else:
            logger.debug(
                f"Copied {file_path} in {datetime.datetime.now().timestamp() - start_frame} seconds."
                f"Copied {file_path} in {datetime.datetime.now().timestamp()-start_frame} seconds."
            )

        try:
@@ -256,7 +256,7 @@ class ReviewSegmentMaintainer(threading.Thread):
                elif object["sub_label"][0] in self.config.model.all_attributes:
                    segment.detections[object["id"]] = object["sub_label"][0]
                else:
                    segment.detections[object["id"]] = f"{object['label']}-verified"
                    segment.detections[object["id"]] = f'{object["label"]}-verified'
                    segment.sub_labels[object["id"]] = object["sub_label"][0]

            # if object is alert label
@@ -352,7 +352,7 @@ class ReviewSegmentMaintainer(threading.Thread):
                elif object["sub_label"][0] in self.config.model.all_attributes:
                    detections[object["id"]] = object["sub_label"][0]
                else:
                    detections[object["id"]] = f"{object['label']}-verified"
                    detections[object["id"]] = f'{object["label"]}-verified'
                    sub_labels[object["id"]] = object["sub_label"][0]

            # if object is alert label
@@ -527,9 +527,7 @@ class ReviewSegmentMaintainer(threading.Thread):

                if event_id in self.indefinite_events[camera]:
                    self.indefinite_events[camera].pop(event_id)

                    if len(self.indefinite_events[camera]) == 0:
                        current_segment.last_update = manual_info["end_time"]
                    current_segment.last_update = manual_info["end_time"]
                else:
                    logger.error(
                        f"Event with ID {event_id} has a set duration and can not be ended manually."
@@ -72,7 +72,8 @@ class BaseServiceProcess(Service, ABC):
                    running = False
            except TimeoutError:
                self.manager.logger.warning(
                    f"{self.name} is still running after {timeout} seconds. Killing."
                    f"{self.name} is still running after "
                    f"{timeout} seconds. Killing."
                )

        if running:
@@ -75,11 +75,11 @@ class TestConfig(unittest.TestCase):
            "detectors": {
                "cpu": {
                    "type": "cpu",
                    "model_path": "/cpu_model.tflite",
                    "model": {"path": "/cpu_model.tflite"},
                },
                "edgetpu": {
                    "type": "edgetpu",
                    "model_path": "/edgetpu_model.tflite",
                    "model": {"path": "/edgetpu_model.tflite"},
                },
                "openvino": {
                    "type": "openvino",
@@ -339,7 +339,7 @@ class TrackedObject:
                box[2],
                box[3],
                self.obj_data["label"],
                f"{int(self.thumbnail_data['score'] * 100)}% {int(self.thumbnail_data['area'])}",
                f"{int(self.thumbnail_data['score']*100)}% {int(self.thumbnail_data['area'])}",
                thickness=thickness,
                color=color,
            )
@@ -13,7 +13,7 @@ from frigate.util.services import get_video_properties

logger = logging.getLogger(__name__)

CURRENT_CONFIG_VERSION = "0.15-1"
CURRENT_CONFIG_VERSION = "0.15-0"
DEFAULT_CONFIG_FILE = "/config/config.yml"
@@ -77,13 +77,6 @@ def migrate_frigate_config(config_file: str):
            yaml.dump(new_config, f)
        previous_version = "0.15-0"

    if previous_version < "0.15-1":
        logger.info(f"Migrating frigate config from {previous_version} to 0.15-1...")
        new_config = migrate_015_1(config)
        with open(config_file, "w") as f:
            yaml.dump(new_config, f)
        previous_version = "0.15-1"

    logger.info("Finished frigate config migration...")
@@ -274,21 +267,6 @@ def migrate_015_0(config: dict[str, dict[str, any]]) -> dict[str, dict[str, any]
    return new_config


def migrate_015_1(config: dict[str, dict[str, any]]) -> dict[str, dict[str, any]]:
    """Handle migrating frigate config to 0.15-1"""
    new_config = config.copy()

    for detector, detector_config in config.get("detectors", {}).items():
        path = detector_config.get("model", {}).get("path")

        if path:
            new_config["detectors"][detector]["model_path"] = path
            del new_config["detectors"][detector]["model"]

    new_config["version"] = "0.15-1"
    return new_config


def get_relative_coordinates(
    mask: Optional[Union[str, list]], frame_shape: tuple[int, int]
) -> Union[str, list]:
@@ -314,7 +292,7 @@ def get_relative_coordinates(
                continue

            rel_points.append(
                f"{round(x / frame_shape[1], 3)},{round(y / frame_shape[0], 3)}"
                f"{round(x / frame_shape[1], 3)},{round(y / frame_shape[0], 3)}"
            )

        relative_masks.append(",".join(rel_points))
@@ -337,7 +315,7 @@ def get_relative_coordinates(
                return []

            rel_points.append(
                f"{round(x / frame_shape[1], 3)},{round(y / frame_shape[0], 3)}"
                f"{round(x / frame_shape[1], 3)},{round(y / frame_shape[0], 3)}"
            )

        mask = ",".join(rel_points)
@@ -390,22 +390,12 @@ def try_get_info(f, h, default="N/A"):


def get_nvidia_gpu_stats() -> dict[int, dict]:
    names: dict[str, int] = {}
    results = {}
    try:
        nvml.nvmlInit()
        deviceCount = nvml.nvmlDeviceGetCount()
        for i in range(deviceCount):
            handle = nvml.nvmlDeviceGetHandleByIndex(i)
            gpu_name = nvml.nvmlDeviceGetName(handle)

            # handle case where user has multiple of same GPU
            if gpu_name in names:
                names[gpu_name] += 1
                gpu_name += f" ({names.get(gpu_name)})"
            else:
                names[gpu_name] = 1

            meminfo = try_get_info(nvml.nvmlDeviceGetMemoryInfo, handle)
            util = try_get_info(nvml.nvmlDeviceGetUtilizationRates, handle)
            enc = try_get_info(nvml.nvmlDeviceGetEncoderUtilization, handle)
@@ -433,7 +423,7 @@ def get_nvidia_gpu_stats() -> dict[int, dict]:
                dec_util = -1

            results[i] = {
                "name": gpu_name,
                "name": nvml.nvmlDeviceGetName(handle),
                "gpu": gpu_util,
                "mem": gpu_mem_util,
                "enc": enc_util,
@@ -208,7 +208,7 @@ class ProcessClip:
                box[2],
                box[3],
                obj["id"],
                f"{int(obj['score'] * 100)}% {int(obj['area'])}",
                f"{int(obj['score']*100)}% {int(obj['area'])}",
                thickness=thickness,
                color=color,
            )
@@ -227,7 +227,7 @@ class ProcessClip:
            )

            cv2.imwrite(
                f"{os.path.join(debug_path, os.path.basename(self.clip_path))}.{int(frame_time * 1000000)}.jpg",
                f"{os.path.join(debug_path, os.path.basename(self.clip_path))}.{int(frame_time*1000000)}.jpg",
                current_frame,
            )
@@ -290,7 +290,7 @@ def process(path, label, output, debug_path):
        1 for result in results if result[1]["true_positive_objects"] > 0
    )
    print(
        f"Objects were detected in {positive_count}/{len(results)}({positive_count / len(results) * 100:.2f}%) clip(s)."
        f"Objects were detected in {positive_count}/{len(results)}({positive_count/len(results)*100:.2f}%) clip(s)."
    )

    if output:
web/package-lock.json (generated, 11 changes)
@@ -54,7 +54,7 @@
    "react-device-detect": "^2.2.3",
    "react-dom": "^18.3.1",
    "react-grid-layout": "^1.4.4",
    "react-hook-form": "^7.52.1",
    "react-hook-form": "^7.54.2",
    "react-icons": "^5.2.1",
    "react-konva": "^18.2.10",
    "react-router-dom": "^6.26.0",
@@ -7260,12 +7260,11 @@
      }
    },
    "node_modules/react-hook-form": {
      "version": "7.52.1",
      "resolved": "https://registry.npmjs.org/react-hook-form/-/react-hook-form-7.52.1.tgz",
      "integrity": "sha512-uNKIhaoICJ5KQALYZ4TOaOLElyM+xipord+Ha3crEFhTntdLvWZqVY49Wqd/0GiVCA/f9NjemLeiNPjG7Hpurg==",
      "license": "MIT",
      "version": "7.54.2",
      "resolved": "https://registry.npmjs.org/react-hook-form/-/react-hook-form-7.54.2.tgz",
      "integrity": "sha512-eHpAUgUjWbZocoQYUHposymRb4ZP6d0uwUnooL2uOybA9/3tPUvoAKqEWK1WaSiTxxOfTpffNZP7QwlnM3/gEg==",
      "engines": {
        "node": ">=12.22.0"
        "node": ">=18.0.0"
      },
      "funding": {
        "type": "opencollective",
@@ -60,7 +60,7 @@
    "react-device-detect": "^2.2.3",
    "react-dom": "^18.3.1",
    "react-grid-layout": "^1.4.4",
    "react-hook-form": "^7.52.1",
    "react-hook-form": "^7.54.2",
    "react-icons": "^5.2.1",
    "react-konva": "^18.2.10",
    "react-router-dom": "^6.26.0",
@@ -755,11 +755,7 @@ export function CameraGroupEdit({
                <FormMessage />
                {[
                  ...(birdseyeConfig?.enabled ? ["birdseye"] : []),
                  ...Object.keys(config?.cameras ?? {}).sort(
                    (a, b) =>
                      (config?.cameras[a]?.ui?.order ?? 0) -
                      (config?.cameras[b]?.ui?.order ?? 0),
                  ),
                  ...Object.keys(config?.cameras ?? {}),
                ].map((camera) => (
                  <FormControl key={camera}>
                    <FilterSwitch
@@ -477,10 +477,7 @@ export default function ObjectLifecycle({
                    </p>
                    {Array.isArray(item.data.box) &&
                    item.data.box.length >= 4
                      ? (
                          aspectRatio *
                          (item.data.box[2] / item.data.box[3])
                        ).toFixed(2)
                      ? (item.data.box[2] / item.data.box[3]).toFixed(2)
                      : "N/A"}
                  </div>
                </div>
@@ -505,53 +505,53 @@ function ObjectDetailsTab({

        <div className="flex w-full flex-row justify-end gap-2">
          {config?.cameras[search.camera].genai.enabled && search.end_time && (
            <div className="flex items-start">
            <>
              <div className="flex items-start">
                <Button
                  className="rounded-r-none border-r-0"
                  aria-label="Regenerate tracked object description"
                  onClick={() => regenerateDescription("thumbnails")}
                >
                  Regenerate
                </Button>
                {search.has_snapshot && (
                  <DropdownMenu>
                    <DropdownMenuTrigger asChild>
                      <Button
                        className="rounded-l-none border-l-0 px-2"
                        aria-label="Expand regeneration menu"
                      >
                        <FaChevronDown className="size-3" />
                      </Button>
                    </DropdownMenuTrigger>
                    <DropdownMenuContent>
                      <DropdownMenuItem
                        className="cursor-pointer"
                        aria-label="Regenerate from snapshot"
                        onClick={() => regenerateDescription("snapshot")}
                      >
                        Regenerate from Snapshot
                      </DropdownMenuItem>
                      <DropdownMenuItem
                        className="cursor-pointer"
                        aria-label="Regenerate from thumbnails"
                        onClick={() => regenerateDescription("thumbnails")}
                      >
                        Regenerate from Thumbnails
                      </DropdownMenuItem>
                    </DropdownMenuContent>
                  </DropdownMenu>
                )}
              </div>

              <Button
                className="rounded-r-none border-r-0"
                aria-label="Regenerate tracked object description"
                onClick={() => regenerateDescription("thumbnails")}
                variant="select"
                aria-label="Save"
                onClick={updateDescription}
              >
                Regenerate
                Save
              </Button>
              {search.has_snapshot && (
                <DropdownMenu>
                  <DropdownMenuTrigger asChild>
                    <Button
                      className="rounded-l-none border-l-0 px-2"
                      aria-label="Expand regeneration menu"
                    >
                      <FaChevronDown className="size-3" />
                    </Button>
                  </DropdownMenuTrigger>
                  <DropdownMenuContent>
                    <DropdownMenuItem
                      className="cursor-pointer"
                      aria-label="Regenerate from snapshot"
                      onClick={() => regenerateDescription("snapshot")}
                    >
                      Regenerate from Snapshot
                    </DropdownMenuItem>
                    <DropdownMenuItem
                      className="cursor-pointer"
                      aria-label="Regenerate from thumbnails"
                      onClick={() => regenerateDescription("thumbnails")}
                    >
                      Regenerate from Thumbnails
                    </DropdownMenuItem>
                  </DropdownMenuContent>
                </DropdownMenu>
              )}
            </div>
          )}
          {((config?.cameras[search.camera].genai.enabled && search.end_time) ||
            !config?.cameras[search.camera].genai.enabled) && (
            <Button
              variant="select"
              aria-label="Save"
              onClick={updateDescription}
            >
              Save
            </Button>
            </>
          )}
        </div>
      </div>
@@ -46,7 +46,7 @@ export default function SearchSettings({
  const trigger = (
    <Button
      className="flex items-center gap-2"
      aria-label="Explore Settings"
      aria-label="Search Settings"
      size="sm"
    >
      <FaCog className="text-secondary-foreground" />
@@ -328,12 +328,12 @@ export default function Explore() {
        <div className="flex max-w-96 flex-col items-center justify-center space-y-3 rounded-lg bg-background/50 p-5">
          <div className="my-5 flex flex-col items-center gap-2 text-xl">
            <TbExclamationCircle className="mb-3 size-10" />
            <div>Explore is Unavailable</div>
            <div>Search Unavailable</div>
          </div>
          {embeddingsReindexing && allModelsLoaded && (
            <>
              <div className="text-center text-primary-variant">
                Explore can be used after tracked object embeddings have
                Search can be used after tracked object embeddings have
                finished reindexing.
              </div>
              <div className="pt-5 text-center">
@@ -384,8 +384,8 @@ export default function Explore() {
            <>
              <div className="text-center text-primary-variant">
                Frigate is downloading the necessary embeddings models to
                support the Semantic Search feature. This may take several
                minutes depending on the speed of your network connection.
                support semantic searching. This may take several minutes
                depending on the speed of your network connection.
              </div>
              <div className="flex w-96 flex-col gap-2 py-5">
                <div className="flex flex-row items-center justify-center gap-2">
@@ -40,7 +40,7 @@ import UiSettingsView from "@/views/settings/UiSettingsView";

const allSettingsViews = [
  "UI settings",
  "explore settings",
  "search settings",
  "camera settings",
  "masks / zones",
  "motion tuner",
@@ -175,7 +175,7 @@ export default function Settings() {
      </div>
      <div className="mt-2 flex h-full w-full flex-col items-start md:h-dvh md:pb-24">
        {page == "UI settings" && <UiSettingsView />}
        {page == "explore settings" && (
        {page == "search settings" && (
          <SearchSettingsView setUnsavedChanges={setUnsavedChanges} />
        )}
        {page == "debug" && (
@@ -91,7 +91,7 @@ export default function SearchSettingsView({
      )
      .then((res) => {
        if (res.status === 200) {
          toast.success("Explore settings have been saved.", {
          toast.success("Search settings have been saved.", {
            position: "top-center",
          });
          setChangedValue(false);
@@ -128,7 +128,7 @@ export default function SearchSettingsView({
    if (changedValue) {
      addMessage(
        "search_settings",
        `Unsaved Explore settings changes`,
        `Unsaved search settings changes`,
        undefined,
        "search_settings",
      );
@@ -140,7 +140,7 @@ export default function SearchSettingsView({
  }, [changedValue]);

  useEffect(() => {
    document.title = "Explore Settings - Frigate";
    document.title = "Search Settings - Frigate";
  }, []);

  if (!config) {
@@ -152,7 +152,7 @@ export default function SearchSettingsView({
      <Toaster position="top-center" closeButton={true} />
      <div className="scrollbar-container order-last mb-10 mt-2 flex h-full w-full flex-col overflow-y-auto rounded-lg border-[1px] border-secondary-foreground bg-background_alt p-2 md:order-none md:mb-0 md:mr-2 md:mt-0">
        <Heading as="h3" className="my-2">
          Explore Settings
          Search Settings
        </Heading>
        <Separator className="my-2 flex bg-secondary" />
        <Heading as="h4" className="my-2">
@@ -221,7 +221,7 @@ export default function SearchSettingsView({
            <div className="text-md">Model Size</div>
            <div className="space-y-1 text-sm text-muted-foreground">
              <p>
                The size of the model used for Semantic Search embeddings.
                The size of the model used for semantic search embeddings.
              </p>
              <ul className="list-disc pl-5 text-sm">
                <li>