Compare commits


20 Commits

Author SHA1 Message Date
Blake Blackshear
e954891135 better logging for unsupported labels in Frigate+ 2023-10-05 06:28:13 -05:00
Nicolas Mowen
9a4f970337 Set default min score for attributes labels to 0.7 (#8001)
* Set min score for attributes to 0.7

* Allow other fields to be set
2023-09-30 07:38:15 -05:00
Josh Hawkins
22b9507797 add image of debug view (#8003) 2023-09-30 06:25:39 -05:00
Nicolas Mowen
37379e6fba Update autotracking gif (#8002) 2023-09-29 20:08:15 -05:00
Nicolas Mowen
232588636f Force birdseye to standard aspect ratio (#7994)
* Force birdseye to standard aspect ratio

* Make rounding consistent

* Formatting
2023-09-29 17:53:45 -05:00
Josh Hawkins
e77fedc445 docs for onvif camera support (#7999)
* docs for onvif camera support

* fix warning

* warning to caution

* update table

* centering

* no autotracking for reolinks

* zoom only for 511WA
2023-09-29 17:52:57 -05:00
Josh Hawkins
ead03c381b Autotracking improvements and bugfixes (#7984)
* add zoom factor and catch motion exception

* reword error message

* check euclidean distance of estimate points

* use numpy for euclidean distance

* config entry

* use zoom factor and zoom based on velocity

* move debug inside try

* change log type to info

* logger level warning

* docs

* exception handling
2023-09-28 18:21:37 -05:00
Nicolas Mowen
0048cd5edc Pull radeon driver from bookworm (#7983) 2023-09-28 18:20:48 -05:00
On Freund
56dfcd7a32 Update CAP_PERFMON instructions on hardware_acceleration.md (#7957)
* Update CAP_PERFMON instructions on hardware_acceleration.md

* Three -> there
2023-09-28 18:20:09 -05:00
Nicolas Mowen
9f3ac19e05 Limit max player height (#7974) 2023-09-28 18:01:23 -05:00
Josh Hawkins
50f13b7196 thread lock for move queues (#7973) 2023-09-28 18:01:05 -05:00
tpjanssen
50b17031c4 Update api.md (#7971)
* Update api.md

* Update api.md
2023-09-28 18:00:32 -05:00
mvn23
d11c1a2066 Update camera_specific.md (#7694)
Add information for TP-Link VIGI stream settings
2023-09-27 06:19:29 -05:00
Josh Hawkins
27144eb0b9 Autotracker: Basic zooming and moves with velocity estimation (#7713)
* don't zoom if camera doesn't support it

* basic zooming

* make zooming configurable

* zooming docs

* optional zooming in camera status

* Use absolute instead of relative zooming

* increase edge threshold

* zoom considering object area

* bugfixes

* catch onvif zooming errors

* relative zooming option for dahua/amcrest cams

* docs

* docs

* don't make small movements

* remove old logger statement

* fix small movements

* use enum in config for zooming

* fix formatting

* empty move queue first

* clear tracked object before waiting for stop

* use velocity estimation for movements

* docs updates

* add tests

* typos

* recalc every 50 moves

* adjust zoom based on estimate box if calibrated

* tweaks for fast objects and large movements

* use real time for calibration and add info logging

* docs updates

* remove area scale

* Add example video to docs

* zooming font header size the same as the others

* log an error if a ptz doesn't report a MoveStatus

* debug logging for onvif service capabilities

* ensure camera supports ONVIF MoveStatus
2023-09-27 06:19:10 -05:00
Josh Hawkins
64705c065f update docs sidebar for go2rc 1.7.1 (#7946) 2023-09-27 06:11:37 -05:00
Nicolas Mowen
08eefd8385 Fix frame height default value in docs (#7947) 2023-09-27 06:11:23 -05:00
Blake Blackshear
705ee54315 plus docs update (#7964)
* plus docs update

* add attribute labels
2023-09-27 06:10:53 -05:00
Nicolas Mowen
e26bb94007 Add seconds to exports (#7955) 2023-09-27 06:10:37 -05:00
Nicolas Mowen
1aba8c1ef5 Refactor time filter (#7962)
* Add ability to filter events by start time

* Add tests

* Add time param to events

* Add time picker

* Update docs

* Catch overnight case

Update comment

* Cleanup

* Fix tests
2023-09-27 06:09:38 -05:00
Nicolas Mowen
f92237c9c1 Fix recording timeline info text in light mode (#7963) 2023-09-27 06:08:58 -05:00
29 changed files with 1176 additions and 375 deletions

View File

@@ -55,13 +55,20 @@ fi
# arch specific packages
if [[ "${TARGETARCH}" == "amd64" ]]; then
# Use debian testing repo only for hwaccel packages
# use debian bookworm for AMD hwaccel packages
echo 'deb https://deb.debian.org/debian bookworm main contrib' >/etc/apt/sources.list.d/debian-bookworm.list
apt-get -qq update
apt-get -qq install --no-install-recommends --no-install-suggests -y \
mesa-va-drivers radeontop
rm -f /etc/apt/sources.list.d/debian-bookworm.list
# Use debian testing repo only for intel hwaccel packages
echo 'deb http://deb.debian.org/debian testing main non-free' >/etc/apt/sources.list.d/debian-testing.list
apt-get -qq update
# intel-opencl-icd specifically for GPU support in OpenVino
apt-get -qq install --no-install-recommends --no-install-suggests -y \
intel-opencl-icd \
mesa-va-drivers libva-drm2 intel-media-va-driver-non-free i965-va-driver libmfx1 radeontop intel-gpu-tools
libva-drm2 intel-media-va-driver-non-free i965-va-driver libmfx1 intel-gpu-tools
# something about this dependency requires it to be installed in a separate call rather than in the line above
apt-get -qq install --no-install-recommends --no-install-suggests -y \
i965-va-driver-shaders

View File

@@ -5,6 +5,8 @@ title: Camera Autotracking
An ONVIF-capable PTZ (pan-tilt-zoom) camera that supports relative movement within the field of view (FOV) can be configured to automatically track moving objects and keep them in the center of the frame.
![Autotracking example with zooming](/img/frigate-autotracking-example.gif)
## Autotracking behavior
Once Frigate determines that an object is not a false positive and has entered one of the required zones, the autotracker will move the PTZ camera to keep the object centered in the frame until the object moves out of the frame, the PTZ reaches the limit of its movement, or Frigate loses track of it.
@@ -50,6 +52,23 @@ cameras:
autotracking:
# Optional: enable/disable object autotracking. (default: shown below)
enabled: False
# Optional: calibrate the camera on startup (default: shown below)
# A calibration will move the PTZ in increments and measure the time it takes to move.
# The results are used to help estimate the position of tracked objects after a camera move.
# Frigate will update your config file automatically after a calibration with
# a "movement_weights" entry for the camera. You should then set calibrate_on_startup to False.
calibrate_on_startup: False
# Optional: the mode to use for zooming in/out on objects during autotracking. (default: shown below)
# Available options are: disabled, absolute, and relative
# disabled - don't zoom in/out on autotracked objects, use pan/tilt only
# absolute - use absolute zooming (supported by most PTZ capable cameras)
# relative - use relative zooming (not supported on all PTZs, but makes concurrent pan/tilt/zoom movements)
zooming: disabled
# Optional: A value to change the behavior of zooming on autotracked objects. (default: shown below)
# A lower value will keep more of the scene in view around a tracked object.
# A higher value will zoom in more on a tracked object, but Frigate may lose tracking more quickly.
# The value should be between 0.1 and 0.75
zoom_factor: 0.3
# Optional: list of objects to track from labelmap.txt (default: shown below)
track:
- person
@@ -60,17 +79,47 @@ cameras:
return_preset: home
# Optional: Seconds to delay before returning to preset. (default: shown below)
timeout: 10
# Optional: Values generated automatically by a camera calibration. Do not modify these manually. (default: shown below)
movement_weights: []
```
## Calibration
PTZ motors operate at different speeds. Performing a calibration will direct Frigate to measure this speed over a variety of movements and use those measurements to better predict the amount of movement necessary to keep autotracked objects in the center of the frame.
Calibration is optional, but will greatly assist Frigate in autotracking objects that move across the camera's field of view more quickly.
To begin calibration, set the `calibrate_on_startup` for your camera to `True` and restart Frigate. Frigate will then make a series of 30 small and large movements with your camera. Don't move the PTZ manually while calibration is in progress. Once complete, camera motion will stop and your config file will be automatically updated with a `movement_weights` parameter to be used in movement calculations. You should not modify this parameter manually.
After calibration has ended, your PTZ will be moved to the preset specified by `return_preset` and you should set `calibrate_on_startup` in your config file to `False`.
Note that Frigate will refine and update the `movement_weights` parameter in your config automatically as the PTZ moves during autotracking and more measurements are obtained.
You can recalibrate at any time by removing the `movement_weights` parameter, setting `calibrate_on_startup` to `True`, and then restarting Frigate. You may need to recalibrate or remove `movement_weights` from your config altogether if autotracking is erratic. If you change your `return_preset` in any way, a recalibration is also recommended.
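Under the hood, calibration feeds a simple linear regression: the combined pan/tilt distance of each measured move is regressed against how long the move took, and the resulting intercept and coefficients are what get stored as `movement_weights`. A minimal standalone sketch of that idea (the measurements are made up, and `predict_movement_time` is an illustrative helper, not part of Frigate's API):

```python
import numpy as np

# combined |pan| + |tilt| distance of each calibration move (made-up values)
X = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
# seconds each of those moves took to complete (made-up values)
y = np.array([0.4, 0.9, 1.5, 2.1, 2.6])

# least-squares fit with an intercept column, mirroring _calculate_move_coefficients
X_with_intercept = np.column_stack((np.ones(X.shape[0]), X))
intercept, slope = np.linalg.lstsq(X_with_intercept, y, rcond=None)[0]

def predict_movement_time(pan: float, tilt: float) -> float:
    """Estimate how long a relative move will take using the fitted weights."""
    return intercept + slope * (abs(pan) + abs(tilt))

print(round(predict_movement_time(0.3, -0.2), 2))  # predicted seconds for a 0.5 move
```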
## Best practices and considerations
Every PTZ camera is different, so autotracking may not perform ideally in every situation. This experimental feature was initially developed using an EmpireTech/Dahua SD1A404XB-GNR.
The object tracker in Frigate estimates the motion of the PTZ so that tracked objects are preserved when the camera moves. In most cases (especially for faster moving objects), the default 5 fps is insufficient for the motion estimator to perform accurately. 10 fps is the current recommendation. Higher frame rates will likely not be more performant and will only slow down Frigate and the motion estimator. Adjust your camera to output at least 10 frames per second and change the `fps` parameter in the [detect configuration](index.md) of your configuration file.
A fast [detector](object_detectors.md) is recommended. CPU detectors will not perform well or won't work at all. If Frigate already has trouble keeping track of your object, the autotracker will struggle as well.
A fast [detector](object_detectors.md) is recommended. CPU detectors will not perform well or won't work at all. You can watch Frigate's debug viewer for your camera to see a thicker colored box around the object currently being autotracked.
The autotracker will add PTZ motion requests to a queue while the motor is moving. Once the motor stops, the events in the queue will be executed together as one large move (rather than incremental moves). If your PTZ's motor is slow, you may not be able to reliably autotrack fast moving objects.
![Autotracking Debug View](/img/autotracking-debug.gif)
A full-frame zone in `required_zones` is not recommended, especially if you've calibrated your camera and there are `movement_weights` defined in the configuration file. Frigate will continue to autotrack an object that has entered one of the `required_zones`, even if it moves outside of that zone.
## Zooming
Zooming is still a very experimental feature and may use significantly more CPU when tracking objects than panning/tilting only. It may be helpful to tweak your camera's autofocus settings if you are noticing focus problems when using zooming.
Absolute zooming makes zoom movements separate from pan/tilt movements. Most PTZ cameras will support absolute zooming.
Relative zooming attempts to make a zoom movement concurrently with any pan/tilt movements. It has been tested to work with some Dahua and Amcrest PTZs, but the ONVIF specification indicates that there is no assumption about how the generic zoom range is mapped to magnification, field of view, or other physical zoom dimensions when using relative zooming. So if relative zooming behavior is erratic or simply doesn't work, use absolute zooming instead.
You can optionally adjust the `zoom_factor` for your camera in your configuration file. Lower values will leave more of the scene in view around the tracked object, while higher values will cause your camera to zoom in more on the object. However, keep in mind that Frigate needs a fair number of pixels and scene details outside of the bounding box of the tracked object to estimate the motion of your camera. If the object takes up too much of the frame, Frigate will not be able to track the motion of the camera and your object will be lost.
The range of this option is from 0.1 to 0.75. The default value of 0.3 should be sufficient for most users. If you have a powerful zoom lens on your PTZ or you find your autotracked objects are often lost, you may want to lower this value. Because every PTZ and scene is different, you should experiment to determine what works best for you.
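For relative zooming, this change derives the zoom value from the tracked object's share of the frame scaled by `zoom_factor` (see `_autotrack_move_ptz` in the autotracker diff below). A quick worked example of that formula, assuming a 1280x720 detect resolution and an illustrative bounding-box area:

```python
width, height = 1280, 720  # assumed detect resolution
area = 20_000              # tracked object's bounding-box area in pixels (illustrative)
zoom_factor = 0.3          # the default

# same formula as _autotrack_move_ptz: percent of frame area times the factor,
# clamped to the 0-1 ONVIF generic zoom range
zoom = min(area / (width * height) * 100 * zoom_factor, 1)
print(round(zoom, 3))  # 0.651
```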
## Usage applications

View File

@@ -150,3 +150,7 @@ ffmpeg:
record: preset-record-ubiquiti
rtmp: preset-rtmp-ubiquiti # recommend using go2rtc instead
```
### TP-Link VIGI Cameras
TP-Link VIGI cameras need some adjustments to the main stream settings on the camera itself to avoid issues. The stream needs to be configured as `H264` with `Smart Coding` set to `off`. Without these settings, you may have problems when trying to watch recorded events. For example, Firefox will stop playback after a few seconds and show the following error message: `The media playback was aborted due to a corruption problem or because the media used features your browser did not support.`.

View File

@@ -11,11 +11,11 @@ A camera is enabled by default but can be temporarily disabled by using `enabled
Each role can only be assigned to one input per camera. The options for roles are as follows:
| Role | Description |
| ---------- | ---------------------------------------------------------------------------------------- |
| `detect` | Main feed for object detection |
| `record` | Saves segments of the video feed based on configuration settings. [docs](record.md) |
| `rtmp` | Deprecated: Broadcast as an RTMP feed for other services to consume. [docs](restream.md) |
| Role | Description |
| -------- | ---------------------------------------------------------------------------------------- |
| `detect` | Main feed for object detection |
| `record` | Saves segments of the video feed based on configuration settings. [docs](record.md) |
| `rtmp` | Deprecated: Broadcast as an RTMP feed for other services to consume. [docs](restream.md) |
```yaml
mqtt:
@@ -51,13 +51,18 @@ For camera model specific settings check the [camera specific](camera_specific.m
## Setting up camera PTZ controls
Add onvif config to camera
:::caution
Not every PTZ supports ONVIF, which is the standard protocol Frigate uses to communicate with your camera. Check your camera documentation or manufacturer's website to ensure your camera supports ONVIF. If your camera supports ONVIF and you continue to have trouble, make sure your camera is running the latest firmware.
:::
Add the onvif section to your camera in your configuration file:
```yaml
cameras:
back:
ffmpeg:
...
ffmpeg: ...
onvif:
host: 10.0.10.10
port: 8000
@@ -65,6 +70,20 @@ cameras:
password: password
```
then PTZ controls will be available in the cameras WebUI.
If the ONVIF connection is successful, PTZ controls will be available in the camera's WebUI.
An ONVIF-capable camera that supports relative movement within the field of view (FOV) can also be configured to automatically track moving objects and keep them in the center of the frame. For autotracking setup, see the [autotracking](autotracking.md) docs.
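If you want to see what your camera actually reports before enabling PTZ controls or autotracking, here is a rough sketch using the same `onvif` package Frigate uses (python-onvif-zeep); the host, port, and credentials are placeholders, and the final checks are a simplified version of the probe in Frigate's `OnvifController`:

```python
from onvif import ONVIFCamera

# placeholders: use your camera's ONVIF host, port, and credentials
camera = ONVIFCamera("10.0.10.10", 8000, "admin", "password")

media = camera.create_media_service()
profile = media.GetProfiles()[0]

ptz = camera.create_ptz_service()
request = ptz.create_type("GetConfigurationOptions")
request.ConfigurationToken = profile.PTZConfiguration.token
ptz_config = ptz.GetConfigurationOptions(request)

# autotracking requires a relative pan/tilt translation space (simplified check)
spaces = ptz_config.Spaces
print("relative pan/tilt:", bool(spaces and spaces.RelativePanTiltTranslationSpace))
print("absolute zoom:", bool(spaces and spaces.AbsoluteZoomPositionSpace))
```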
## ONVIF PTZ camera recommendations
This list of working and non-working PTZ cameras is based on user feedback.
| Brand or specific camera | PTZ Controls | Autotracking | Notes |
| ------------------------ | :----------: | :----------: | ------------------------------------------------------- |
| Amcrest | ✅ | ⛔️ | Some older models (IP2M-841) don't support autotracking |
| Amcrest ASH21 | ❌ | ❌ | No ONVIF support |
| Dahua | ✅ | ✅ | |
| Reolink 511WA | ✅ | ❌ | Zoom only |
| Reolink E1 Zoom | ✅ | ❌ | |
| Tapo C210 | ❌ | ❌ | Incomplete ONVIF support |
| Vikylin PTZ-2804X-I2 | ❌ | ❌ | Incomplete ONVIF support |

View File

@@ -64,11 +64,10 @@ ffmpeg:
### Configuring Intel GPU Stats in Docker
Additional configuration is needed for the Docker container to be able to access the `intel_gpu_top` command for GPU stats. Three possible changes can be made:
Additional configuration is needed for the Docker container to be able to access the `intel_gpu_top` command for GPU stats. There are two options:
1. Run the container as privileged.
2. Adding the `CAP_PERFMON` capability.
3. Setting the `perf_event_paranoid` low enough to allow access to the performance event system.
2. Add the `CAP_PERFMON` capability (note: you might need to set the `perf_event_paranoid` value low enough to allow access to the performance event system).
#### Run as privileged
@@ -125,7 +124,7 @@ _Note: This setting must be changed for the entire system._
For more information on the various values across different distributions, see https://askubuntu.com/questions/1400874/what-does-perf-paranoia-level-four-do.
Depending on your OS and kernel configuration, you may need to change the `/proc/sys/kernel/perf_event_paranoid` kernel tunable. You can test the change by running `sudo sh -c 'echo 2 >/proc/sys/kernel/perf_event_paranoid'` which will persist until a reboot. Make it permanent by running `sudo sh -c 'echo kernel.perf_event_paranoid=1 >> /etc/sysctl.d/local.conf'`
Depending on your OS and kernel configuration, you may need to change the `/proc/sys/kernel/perf_event_paranoid` kernel tunable. You can test the change by running `sudo sh -c 'echo 2 >/proc/sys/kernel/perf_event_paranoid'` which will persist until a reboot. Make it permanent by running `sudo sh -c 'echo kernel.perf_event_paranoid=2 >> /etc/sysctl.d/local.conf'`
## AMD/ATI GPUs (Radeon HD 2000 and newer GPUs) via libva-mesa-driver

View File

@@ -324,7 +324,7 @@ motion:
# Low values will cause things like moving shadows to be detected as motion for longer.
# https://www.geeksforgeeks.org/background-subtraction-in-an-image-using-concept-of-running-average/
frame_alpha: 0.01
# Optional: Height of the resized motion frame (default: 50)
# Optional: Height of the resized motion frame (default: 100)
# Higher values will result in more granular motion detection at the expense of higher CPU usage.
# Lower values result in less CPU, but small changes may not register as motion.
frame_height: 100
@@ -584,6 +584,23 @@ cameras:
autotracking:
# Optional: enable/disable object autotracking. (default: shown below)
enabled: False
# Optional: calibrate the camera on startup (default: shown below)
# A calibration will move the PTZ in increments and measure the time it takes to move.
# The results are used to help estimate the position of tracked objects after a camera move.
# Frigate will update your config file automatically after a calibration with
# a "movement_weights" entry for the camera. You should then set calibrate_on_startup to False.
calibrate_on_startup: False
# Optional: the mode to use for zooming in/out on objects during autotracking. (default: shown below)
# Available options are: disabled, absolute, and relative
# disabled - don't zoom in/out on autotracked objects, use pan/tilt only
# absolute - use absolute zooming (supported by most PTZ capable cameras)
# relative - use relative zooming (not supported on all PTZs, but makes concurrent pan/tilt/zoom movements)
zooming: disabled
# Optional: A value to change the behavior of zooming on autotracked objects. (default: shown below)
# A lower value will keep more of the scene in view around a tracked object.
# A higher value will zoom in more on a tracked object, but Frigate may lose tracking more quickly.
# The value should be between 0.1 and 0.75
zoom_factor: 0.3
# Optional: list of objects to track from labelmap.txt (default: shown below)
track:
- person
@@ -591,9 +608,11 @@ cameras:
required_zones:
- zone_name
# Required: Name of ONVIF preset in camera's firmware to return to when tracking is over. (default: shown below)
return_preset: preset_name
return_preset: home
# Optional: Seconds to delay before returning to preset. (default: shown below)
timeout: 10
# Optional: Values generated automatically by a camera calibration. Do not modify these manually. (default: shown below)
movement_weights: []
# Optional: Configuration for how to sort the cameras in the Birdseye view.
birdseye:

View File

@@ -155,18 +155,20 @@ Version info
Events from the database. Accepts the following query string parameters:
| param | Type | Description |
| -------------------- | ---- | --------------------------------------------- |
| `before` | int | Epoch time |
| `after` | int | Epoch time |
| `cameras` | str | , separated list of cameras |
| `labels` | str | , separated list of labels |
| `zones` | str | , separated list of zones |
| `limit` | int | Limit the number of events returned |
| `has_snapshot` | int | Filter to events that have snapshots (0 or 1) |
| `has_clip` | int | Filter to events that have clips (0 or 1) |
| `include_thumbnails` | int | Include thumbnails in the response (0 or 1) |
| `in_progress` | int | Limit to events in progress (0 or 1) |
| param | Type | Description |
| -------------------- | ---- | ----------------------------------------------- |
| `before` | int | Epoch time |
| `after` | int | Epoch time |
| `cameras` | str | , separated list of cameras |
| `labels` | str | , separated list of labels |
| `zones` | str | , separated list of zones |
| `limit` | int | Limit the number of events returned |
| `has_snapshot` | int | Filter to events that have snapshots (0 or 1) |
| `has_clip` | int | Filter to events that have clips (0 or 1) |
| `include_thumbnails` | int | Include thumbnails in the response (0 or 1) |
| `in_progress` | int | Limit to events in progress (0 or 1) |
| `time_range` | str | Time range in format after,before (00:00,24:00) |
| `timezone` | str | Timezone to use for time range |
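For example, to fetch overnight `person` events between 20:00 and 06:00 local time, the endpoint could be queried like this (a sketch assuming Frigate is reachable at `localhost:5000` and the `requests` package is installed):

```python
import requests

params = {
    "labels": "person",
    "time_range": "20:00,06:00",    # after,before; spanning midnight is handled
    "timezone": "America/Chicago",  # interpret the time range in this timezone
    "limit": 10,
}
events = requests.get("http://localhost:5000/api/events", params=params).json()
for event in events:
    print(event["id"], event["label"], event["start_time"])
```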
### `GET /api/timeline`
@@ -252,7 +254,7 @@ Accepts the following query string parameters, but they are only applied when an
Returns the snapshot image from the latest event for the given camera and label combo. Using `any` as the label will return the latest thumbnail regardless of type.
### `GET /api/<camera_name>/recording/<frame_time>/snapshot.png`
### `GET /api/<camera_name>/recordings/<frame_time>/snapshot.png`
Returns the snapshot image from the specific point in that camera's recordings.
@@ -319,7 +321,7 @@ Create a manual event with a given `label` (ex: doorbell press) to capture a spe
```json
{
"subLabel": "some_string", // add sub label to event
"sub_label": "some_string", // add sub label to event
"duration": 30, // predetermined length of event (default: 30 seconds) or can be to null for indeterminate length event
"include_recording": true, // whether the event should save recordings along with the snapshot that is taken
"draw": {

View File

@@ -11,7 +11,13 @@ Information on how to integrate Frigate+ with Frigate can be found in the [integ
## Frequently asked questions
Some common questions arose while these models were being developed.
### Are my models trained just on my image uploads? How are they built?
Frigate+ models are built by fine-tuning a base model with the images you have annotated and verified. The base model is trained from scratch on a sampling of images across all Frigate+ user submissions and takes weeks of expensive GPU resources to train. If the models were built using your image uploads alone, you would need to provide tens of thousands of examples, and it would take more than a week (and considerable cost) to train. Diversity helps the model generalize.
### What is a training credit and how do I use them?
Essentially, `1 training credit = 1 trained model`. When you have uploaded, annotated, and verified additional images and you are ready to train your model, you will submit a model request which will use one credit. The model that is trained will utilize all of the verified images in your account.
### Are my video feeds sent to the cloud for analysis when using Frigate+ models?
@@ -79,6 +85,23 @@ Frigate+ models support a more relevant set of objects for security cameras. Cur
Frigate has special handling for some labels when using Frigate+ models. `face`, `license_plate`, `amazon`, `ups`, and `fedex` are considered attribute labels which are not tracked like regular objects and do not generate events. In addition, the `threshold` filter will have no effect on these labels. You should adjust the `min_score` and other filter values as needed.
In order to have Frigate start using these attribute labels, you will need to add them to the list of objects to track:
```yaml
objects:
track:
- person
- face
- license_plate
- dog
- cat
- car
- amazon
- fedex
- ups
- package
```
When using Frigate+ models, Frigate will choose the snapshot of a person object that has the largest visible face. For cars, the snapshot with the largest visible license plate will be selected. This aids in secondary processing such as facial and license plate recognition for person and car objects.
![Face Attribute](/img/plus/attribute-example-face.jpg)

View File

@@ -21,8 +21,8 @@ module.exports = {
{
type: "link",
label: "Go2RTC Configuration Reference",
href: "https://github.com/AlexxIT/go2rtc/tree/v1.6.2#configuration"
}
href: "https://github.com/AlexxIT/go2rtc/tree/v1.7.1#configuration",
},
],
Detectors: [
"configuration/object_detectors",
@@ -57,16 +57,11 @@ module.exports = {
"integrations/mqtt",
"integrations/third_party_extensions",
],
"Frigate+": [
"plus/index"
],
Troubleshooting: [
"troubleshooting/faqs",
"troubleshooting/recordings",
],
"Frigate+": ["plus/index"],
Troubleshooting: ["troubleshooting/faqs", "troubleshooting/recordings"],
Development: [
"development/contributing",
"development/contributing-boards"
"development/contributing-boards",
],
},
};

BIN
docs/static/img/autotracking-debug.gif vendored Normal file

Binary file not shown.

After

Size: 8.6 MiB

Binary file not shown.

After

Size: 28 MiB

View File

@@ -179,6 +179,12 @@ class FrigateApp:
"ptz_stop_time": mp.Value("d", 0.0), # type: ignore[typeddict-item]
# issue https://github.com/python/typeshed/issues/8799
# from mypy 0.981 onwards
"ptz_frame_time": mp.Value("d", 0.0), # type: ignore[typeddict-item]
# issue https://github.com/python/typeshed/issues/8799
# from mypy 0.981 onwards
"ptz_zoom_level": mp.Value("d", 0.0), # type: ignore[typeddict-item]
# issue https://github.com/python/typeshed/issues/8799
# from mypy 0.981 onwards
}
self.ptz_metrics[camera_name]["ptz_stopped"].set()
self.feature_metrics[camera_name] = {

View File

@@ -13,6 +13,7 @@ from pydantic import BaseModel, Extra, Field, parse_obj_as, validator
from pydantic.fields import PrivateAttr
from frigate.const import (
ALL_ATTRIBUTE_LABELS,
AUDIO_MIN_CONFIDENCE,
CACHE_DIR,
DEFAULT_DB_PATH,
@@ -138,8 +139,26 @@ class MqttConfig(FrigateBaseModel):
return v
class ZoomingModeEnum(str, Enum):
disabled = "disabled"
absolute = "absolute"
relative = "relative"
class PtzAutotrackConfig(FrigateBaseModel):
enabled: bool = Field(default=False, title="Enable PTZ object autotracking.")
calibrate_on_startup: bool = Field(
default=False, title="Perform a camera calibration when Frigate starts."
)
zooming: ZoomingModeEnum = Field(
default=ZoomingModeEnum.disabled, title="Autotracker zooming mode."
)
zoom_factor: float = Field(
default=0.3,
title="Zooming factor (0.1-0.75).",
ge=0.1,
le=0.75,
)
track: List[str] = Field(default=DEFAULT_TRACKED_OBJECTS, title="Objects to track.")
required_zones: List[str] = Field(
default_factory=list,
@@ -152,6 +171,27 @@ class PtzAutotrackConfig(FrigateBaseModel):
timeout: int = Field(
default=10, title="Seconds to delay before returning to preset."
)
movement_weights: Optional[Union[float, List[float]]] = Field(
default=[],
title="Internal value used for PTZ movements based on the speed of your camera's motor.",
)
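# movement_weights may arrive as the comma separated string Frigate writes back
# to the config ("intercept, coef1, coef2") or as a list of exactly three floats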
@validator("movement_weights", pre=True)
def validate_weights(cls, v):
if v is None:
return None
if isinstance(v, str):
weights = list(map(float, v.split(",")))
elif isinstance(v, list):
weights = [float(val) for val in v]
else:
raise ValueError("Invalid type for movement_weights")
if len(weights) != 3:
raise ValueError("movement_weights must have exactly 3 floats")
return weights
class OnvifConfig(FrigateBaseModel):
@@ -434,7 +474,7 @@ class ZoneConfig(BaseModel):
class ObjectConfig(FrigateBaseModel):
track: List[str] = Field(default=DEFAULT_TRACKED_OBJECTS, title="Objects to track.")
filters: Optional[Dict[str, FilterConfig]] = Field(title="Object filters.")
filters: Dict[str, FilterConfig] = Field(default={}, title="Object filters.")
mask: Union[str, List[str]] = Field(default="", title="Object mask.")
@@ -1038,6 +1078,13 @@ class FrigateConfig(FrigateBaseModel):
config.mqtt.user = config.mqtt.user.format(**FRIGATE_ENV_VARS)
config.mqtt.password = config.mqtt.password.format(**FRIGATE_ENV_VARS)
# set default min_score for object attributes
for attribute in ALL_ATTRIBUTE_LABELS:
if not config.objects.filters.get(attribute):
config.objects.filters[attribute] = FilterConfig(min_score=0.7)
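# 0.5 is the global default min_score, so attributes left at that default are bumped to 0.7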
elif config.objects.filters[attribute].min_score == 0.5:
config.objects.filters[attribute].min_score = 0.7
# Global config to propagate down to camera level
global_config = config.dict(
include={

View File

@@ -56,6 +56,8 @@ from frigate.version import VERSION
logger = logging.getLogger(__name__)
DEFAULT_TIME_RANGE = "00:00,24:00"
bp = Blueprint("frigate", __name__)
@@ -268,11 +270,9 @@ def send_to_plus(id):
event.label,
)
except Exception as ex:
# log the exception, but don't return an error response
logger.warn(f"Unable to upload annotation for {event.label} to Frigate+")
logger.exception(ex)
return make_response(
jsonify({"success": False, "message": str(ex)}),
400,
)
return make_response(jsonify({"success": True, "plus_id": plus_id}), 200)
@@ -339,6 +339,7 @@ def false_positive(id):
event.detector_type,
)
except Exception as ex:
logger.warn(f"Unable to upload false positive for {event.label} to Frigate+")
logger.exception(ex)
return make_response(
jsonify({"success": False, "message": str(ex)}),
@@ -769,6 +770,7 @@ def events():
limit = request.args.get("limit", 100)
after = request.args.get("after", type=float)
before = request.args.get("before", type=float)
time_range = request.args.get("time_range", DEFAULT_TIME_RANGE)
has_clip = request.args.get("has_clip", type=int)
has_snapshot = request.args.get("has_snapshot", type=int)
in_progress = request.args.get("in_progress", type=int)
@@ -851,6 +853,36 @@ def events():
if before:
clauses.append((Event.start_time < before))
if time_range != DEFAULT_TIME_RANGE:
# get timezone arg to ensure browser times are used
tz_name = request.args.get("timezone", default="utc", type=str)
hour_modifier, minute_modifier = get_tz_modifiers(tz_name)
times = time_range.split(",")
time_after = times[0]
time_before = times[1]
start_hour_fun = fn.strftime(
"%H:%M",
fn.datetime(Event.start_time, "unixepoch", hour_modifier, minute_modifier),
)
# cases where user wants events overnight, ex: from 20:00 to 06:00
# should use or operator
if time_after > time_before:
clauses.append(
(
reduce(
operator.or_,
[(start_hour_fun > time_after), (start_hour_fun < time_before)],
)
)
)
# all other cases should be and operator
else:
clauses.append((start_hour_fun > time_after))
clauses.append((start_hour_fun < time_before))
if has_clip is not None:
clauses.append((Event.has_clip == has_clip))

View File

@@ -33,7 +33,7 @@ from frigate.util.image import (
logger = logging.getLogger(__name__)
def get_standard_aspect_ratio(width, height) -> tuple[int, int]:
def get_standard_aspect_ratio(width: int, height: int) -> tuple[int, int]:
"""Ensure that only standard aspect ratios are used."""
known_aspects = [
(16, 9),
@@ -52,6 +52,22 @@ def get_standard_aspect_ratio(width, height) -> tuple[int, int]:
return known_aspects[known_aspects_ratios.index(closest)]
def get_canvas_shape(width: int, height: int) -> tuple[int, int]:
"""Get birdseye canvas shape."""
canvas_width = width
canvas_height = height
a_w, a_h = get_standard_aspect_ratio(width, height)
if round(a_w / a_h, 2) != round(width / height, 2):
canvas_width = width
canvas_height = (canvas_width / a_w) * a_h
logger.warning(
f"The birdseye resolution is a non-standard aspect ratio, forcing birdseye resolution to {canvas_width} x {canvas_height}"
)
return (canvas_width, canvas_height)
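# e.g. a 1280x768 birdseye resolution is non-standard; with (16, 9) as the
# closest known aspect, the height is recomputed to (1280 / 16) * 9 = 720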
class Canvas:
def __init__(self, canvas_width: int, canvas_height: int) -> None:
gcd = math.gcd(canvas_width, canvas_height)
@@ -226,8 +242,7 @@ class BirdsEyeFrameManager:
self.config = config
self.mode = config.birdseye.mode
self.frame_manager = frame_manager
width = config.birdseye.width
height = config.birdseye.height
width, height = get_canvas_shape(config.birdseye.width, config.birdseye.height)
self.frame_shape = (height, width)
self.yuv_shape = (height * 3 // 2, width)
self.frame = np.ndarray(self.yuv_shape, dtype=np.uint8)

View File

@@ -2,7 +2,7 @@
import copy
import logging
import math
import os
import queue
import threading
import time
@@ -11,11 +11,17 @@ from multiprocessing.synchronize import Event as MpEvent
import cv2
import numpy as np
from norfair.camera_motion import MotionEstimator, TranslationTransformationGetter
from norfair.camera_motion import (
HomographyTransformationGetter,
MotionEstimator,
TranslationTransformationGetter,
)
from frigate.config import CameraConfig, FrigateConfig
from frigate.config import CameraConfig, FrigateConfig, ZoomingModeEnum
from frigate.const import CONFIG_DIR
from frigate.ptz.onvif import OnvifController
from frigate.types import PTZMetricsTypes
from frigate.util.builtin import update_yaml_file
from frigate.util.image import SharedMemoryFrameManager, intersection_over_union
logger = logging.getLogger(__name__)
@@ -26,12 +32,8 @@ def ptz_moving_at_frame_time(frame_time, ptz_start_time, ptz_stop_time):
# for non ptz/autotracking cameras, this will always return False
# ptz_start_time is initialized to 0 on startup and only changes
# when autotracking movements are made
# the offset "primes" the motion estimator with a few frames before movement
offset = 0.5
return (ptz_start_time != 0.0 and frame_time >= ptz_start_time - offset) and (
ptz_stop_time == 0.0 or (ptz_start_time - offset <= frame_time <= ptz_stop_time)
return (ptz_start_time != 0.0 and frame_time > ptz_start_time) and (
ptz_stop_time == 0.0 or (ptz_start_time <= frame_time <= ptz_stop_time)
)
@@ -54,13 +56,24 @@ class PtzMotionEstimator:
# If we've just started up or returned to our preset, reset motion estimator for new tracking session
if self.ptz_metrics["ptz_reset"].is_set():
self.ptz_metrics["ptz_reset"].clear()
logger.debug("Motion estimator reset")
# homography is nice (zooming) but slow, translation is pan/tilt only but fast.
if (
self.camera_config.onvif.autotracking.zooming
!= ZoomingModeEnum.disabled
):
logger.debug("Motion estimator reset - homography")
transformation_type = HomographyTransformationGetter()
else:
logger.debug("Motion estimator reset - translation")
transformation_type = TranslationTransformationGetter()
self.norfair_motion_estimator = MotionEstimator(
transformations_getter=TranslationTransformationGetter(),
transformations_getter=transformation_type,
min_distance=30,
max_points=900,
)
self.coord_transformations = None
if ptz_moving_at_frame_time(
@@ -91,16 +104,22 @@ class PtzMotionEstimator:
# Norfair estimator function needs color so it can convert it right back to gray
frame = cv2.cvtColor(frame, cv2.COLOR_GRAY2BGRA)
self.coord_transformations = self.norfair_motion_estimator.update(
frame, mask
)
try:
self.coord_transformations = self.norfair_motion_estimator.update(
frame, mask
)
logger.debug(
f"Motion estimator transformation: {self.coord_transformations.rel_to_abs([[0,0]])}"
)
except Exception:
# sometimes opencv can't find enough features in the image to find homography, so catch this error
logger.warning(
f"Autotracker: motion estimator couldn't get transformations for {camera_name} at frame time {frame_time}"
)
self.coord_transformations = None
self.frame_manager.close(frame_id)
logger.debug(
f"Motion estimator transformation: {self.coord_transformations.rel_to_abs((0,0))}"
)
return self.coord_transformations
@@ -147,12 +166,18 @@ class PtzAutoTracker:
self.ptz_metrics = ptz_metrics
self.tracked_object: dict[str, object] = {}
self.tracked_object_previous: dict[str, object] = {}
self.previous_frame_time = None
self.object_types = {}
self.required_zones = {}
self.move_queues = {}
self.move_threads = {}
self.autotracker_init = {}
self.previous_frame_time: dict[str, object] = {}
self.object_types: dict[str, object] = {}
self.required_zones: dict[str, object] = {}
self.move_queues: dict[str, object] = {}
self.move_queue_locks: dict[str, object] = {}
self.move_threads: dict[str, object] = {}
self.autotracker_init: dict[str, object] = {}
self.move_metrics: dict[str, object] = {}
self.calibrating: dict[str, object] = {}
self.intercept: dict[str, object] = {}
self.move_coefficients: dict[str, object] = {}
self.zoom_factor: dict[str, object] = {}
# if cam is set to autotrack, onvif should be set up
for camera_name, cam in self.config.cameras.items():
@@ -168,11 +193,18 @@ class PtzAutoTracker:
self.object_types[camera_name] = cam.onvif.autotracking.track
self.required_zones[camera_name] = cam.onvif.autotracking.required_zones
self.zoom_factor[camera_name] = cam.onvif.autotracking.zoom_factor
self.tracked_object[camera_name] = None
self.tracked_object_previous[camera_name] = None
self.calibrating[camera_name] = False
self.move_metrics[camera_name] = []
self.intercept[camera_name] = None
self.move_coefficients[camera_name] = []
self.move_queues[camera_name] = queue.Queue()
self.move_queue_locks[camera_name] = threading.Lock()
if not self.onvif.cams[camera_name]["init"]:
if not self.onvif._init_onvif(camera_name):
@@ -182,7 +214,7 @@ class PtzAutoTracker:
return
if not self.onvif.cams[camera_name]["relative_fov_supported"]:
if "pt-r-fov" not in self.onvif.cams[camera_name]["features"]:
cam.onvif.autotracking.enabled = False
self.ptz_metrics[camera_name]["ptz_autotracker_enabled"].value = False
logger.warning(
@@ -191,6 +223,19 @@ class PtzAutoTracker:
return
movestatus_supported = self.onvif.get_service_capabilities(camera_name)
if movestatus_supported is None or movestatus_supported.lower() != "true":
cam.onvif.autotracking.enabled = False
self.ptz_metrics[camera_name]["ptz_autotracker_enabled"].value = False
logger.warning(
f"Disabling autotracking for {camera_name}: ONVIF MoveStatus not supported"
)
return
self.onvif.get_camera_status(camera_name)
# movement thread per camera
if not self.move_threads or not self.move_threads[camera_name]:
self.move_threads[camera_name] = threading.Thread(
@@ -200,13 +245,145 @@ class PtzAutoTracker:
self.move_threads[camera_name].daemon = True
self.move_threads[camera_name].start()
if cam.onvif.autotracking.movement_weights:
self.intercept[camera_name] = cam.onvif.autotracking.movement_weights[0]
self.move_coefficients[
camera_name
] = cam.onvif.autotracking.movement_weights[1:]
if cam.onvif.autotracking.calibrate_on_startup:
self._calibrate_camera(camera_name)
self.autotracker_init[camera_name] = True
def write_config(self, camera):
config_file = os.environ.get("CONFIG_FILE", f"{CONFIG_DIR}/config.yml")
logger.debug(
f"Writing new config with autotracker motion coefficients: {self.config.cameras[camera].onvif.autotracking.movement_weights}"
)
update_yaml_file(
config_file,
["cameras", camera, "onvif", "autotracking", "movement_weights"],
self.config.cameras[camera].onvif.autotracking.movement_weights,
)
def _calibrate_camera(self, camera):
# move the camera from the preset in steps and measure the time it takes to move that amount
# this will allow us to predict movement times with a simple linear regression
# start with 0 so we can determine a baseline (to be used as the intercept in the regression calc)
# TODO: take zooming into account too
num_steps = 30
step_sizes = np.linspace(0, 1, num_steps)
self.calibrating[camera] = True
logger.info(f"Camera calibration for {camera} in progress")
self.onvif._move_to_preset(
camera,
self.config.cameras[camera].onvif.autotracking.return_preset.lower(),
)
self.ptz_metrics[camera]["ptz_reset"].set()
self.ptz_metrics[camera]["ptz_stopped"].clear()
# Wait until the camera finishes moving
while not self.ptz_metrics[camera]["ptz_stopped"].is_set():
self.onvif.get_camera_status(camera)
for step in range(num_steps):
pan = step_sizes[step]
tilt = step_sizes[step]
start_time = time.time()
self.onvif._move_relative(camera, pan, tilt, 0, 1)
# Wait until the camera finishes moving
while not self.ptz_metrics[camera]["ptz_stopped"].is_set():
self.onvif.get_camera_status(camera)
stop_time = time.time()
self.move_metrics[camera].append(
{
"pan": pan,
"tilt": tilt,
"start_timestamp": start_time,
"end_timestamp": stop_time,
}
)
self.onvif._move_to_preset(
camera,
self.config.cameras[camera].onvif.autotracking.return_preset.lower(),
)
self.ptz_metrics[camera]["ptz_reset"].set()
self.ptz_metrics[camera]["ptz_stopped"].clear()
# Wait until the camera finishes moving
while not self.ptz_metrics[camera]["ptz_stopped"].is_set():
self.onvif.get_camera_status(camera)
self.calibrating[camera] = False
logger.info(f"Calibration for {camera} complete")
# calculate and save new intercept and coefficients
self._calculate_move_coefficients(camera, True)
def _calculate_move_coefficients(self, camera, calibration=False):
# calculate new coefficients when we have 50 more new values. Save up to 500
if calibration or (
len(self.move_metrics[camera]) % 50 == 0
and len(self.move_metrics[camera]) != 0
and len(self.move_metrics[camera]) <= 500
):
X = np.array(
[abs(d["pan"]) + abs(d["tilt"]) for d in self.move_metrics[camera]]
)
y = np.array(
[
d["end_timestamp"] - d["start_timestamp"]
for d in self.move_metrics[camera]
]
)
# simple linear regression with intercept
X_with_intercept = np.column_stack((np.ones(X.shape[0]), X))
self.move_coefficients[camera] = np.linalg.lstsq(
X_with_intercept, y, rcond=None
)[0]
# only assign a new intercept if we're calibrating
if calibration:
self.intercept[camera] = y[0]
# write the intercept and coefficients back to the config file as a comma separated string
movement_weights = np.concatenate(
([self.intercept[camera]], self.move_coefficients[camera])
)
self.config.cameras[camera].onvif.autotracking.movement_weights = ", ".join(
map(str, movement_weights)
)
logger.debug(
f"New regression parameters - intercept: {self.intercept[camera]}, coefficients: {self.move_coefficients[camera]}"
)
self.write_config(camera)
def _predict_movement_time(self, camera, pan, tilt):
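# estimate seconds for a move from the calibration regression stored in movement_weights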
combined_movement = abs(pan) + abs(tilt)
input_data = np.array([self.intercept[camera], combined_movement])
return np.dot(self.move_coefficients[camera], input_data)
def _process_move_queue(self, camera):
while True:
try:
move_data = self.move_queues[camera].get()
frame_time, pan, tilt = move_data
move_data = self.move_queues[camera].get()
with self.move_queue_locks[camera]:
frame_time, pan, tilt, zoom = move_data
# if we're receiving move requests during a PTZ move, ignore them
if ptz_moving_at_frame_time(
@@ -217,50 +394,234 @@ class PtzAutoTracker:
# instead of dequeueing this might be a good place to preemptively move based
# on an estimate - for fast moving objects, etc.
logger.debug(
f"Move queue: PTZ moving, dequeueing move request - frame time: {frame_time}, final pan: {pan}, final tilt: {tilt}"
f"Move queue: PTZ moving, dequeueing move request - frame time: {frame_time}, final pan: {pan}, final tilt: {tilt}, final zoom: {zoom}"
)
continue
else:
# on some cameras with cheaper motors it seems like small values can cause jerky movement
# TODO: double check, might not need this
if abs(pan) > 0.02 or abs(tilt) > 0.02:
self.onvif._move_relative(camera, pan, tilt, 1)
if (
self.config.cameras[camera].onvif.autotracking.zooming
== ZoomingModeEnum.relative
):
self.onvif._move_relative(camera, pan, tilt, zoom, 1)
else:
logger.debug(
f"Not moving, pan and tilt too small: {pan}, {tilt}"
)
if zoom > 0:
self.onvif._zoom_absolute(camera, zoom, 1)
else:
self.onvif._move_relative(camera, pan, tilt, 0, 1)
# Wait until the camera finishes moving
while not self.ptz_metrics[camera]["ptz_stopped"].is_set():
# check if ptz is moving
self.onvif.get_camera_status(camera)
except queue.Empty:
continue
if self.config.cameras[camera].onvif.autotracking.movement_weights:
logger.debug(
f"Predicted movement time: {self._predict_movement_time(camera, pan, tilt)}"
)
logger.debug(
f'Actual movement time: {self.ptz_metrics[camera]["ptz_stop_time"].value-self.ptz_metrics[camera]["ptz_start_time"].value}'
)
# save metrics for better estimate calculations
if (
self.intercept[camera] is not None
and len(self.move_metrics[camera]) < 500
):
logger.debug("Adding new values to move metrics")
self.move_metrics[camera].append(
{
"pan": pan,
"tilt": tilt,
"start_timestamp": self.ptz_metrics[camera][
"ptz_start_time"
].value,
"end_timestamp": self.ptz_metrics[camera][
"ptz_stop_time"
].value,
}
)
# calculate new coefficients if we have enough data
self._calculate_move_coefficients(camera)
def _enqueue_move(self, camera, frame_time, pan, tilt, zoom):
def split_value(value):
clipped = np.clip(value, -1, 1)
return clipped, value - clipped
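# e.g. split_value(1.5) returns (1.0, 0.5); the excess is re-queued as a follow-up move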
def _enqueue_move(self, camera, frame_time, pan, tilt):
move_data = (frame_time, pan, tilt)
if (
frame_time > self.ptz_metrics[camera]["ptz_start_time"].value
and frame_time > self.ptz_metrics[camera]["ptz_stop_time"].value
and not self.move_queue_locks[camera].locked()
):
logger.debug(f"enqueue pan: {pan}, enqueue tilt: {tilt}")
self.move_queues[camera].put(move_data)
# don't make small movements
if abs(pan) < 0.02:
pan = 0
if abs(tilt) < 0.02:
tilt = 0
# split up any large moves caused by velocity estimated movements
while pan != 0 or tilt != 0 or zoom != 0:
pan, pan_excess = split_value(pan)
tilt, tilt_excess = split_value(tilt)
zoom, zoom_excess = split_value(zoom)
logger.debug(
f"Enqueue movement for frame time: {frame_time} pan: {pan}, enqueue tilt: {tilt}, enqueue zoom: {zoom}"
)
move_data = (frame_time, pan, tilt, zoom)
self.move_queues[camera].put(move_data)
pan = pan_excess
tilt = tilt_excess
zoom = zoom_excess
def _should_zoom_in(self, camera, box, area, average_velocity):
camera_config = self.config.cameras[camera]
camera_width = camera_config.frame_shape[1]
camera_height = camera_config.frame_shape[0]
camera_area = camera_width * camera_height
bb_left, bb_top, bb_right, bb_bottom = box
# If bounding box is not within 5% of an edge
# If object area is less than 70% of frame
# Then zoom in, otherwise try zooming out
# should we make these configurable?
#
# TODO: Take into account the area changing when an object is moving out of frame
edge_threshold = 0.15
area_threshold = self.zoom_factor[camera]
velocity_threshold = 0.1
# if we have a fast moving object, let's zoom out
# fast moving is defined as a velocity of more than 10% of the camera's width or height
# so an object with an x velocity of 15 pixels on a 1280x720 camera would trigger a zoom out
velocity_threshold = average_velocity[0] > (
camera_width * velocity_threshold
) or average_velocity[1] > (camera_height * velocity_threshold)
# returns True to zoom in, False to zoom out
return (
bb_left > edge_threshold * camera_width
and bb_right < (1 - edge_threshold) * camera_width
and bb_top > edge_threshold * camera_height
and bb_bottom < (1 - edge_threshold) * camera_height
and area < area_threshold * camera_area
and not velocity_threshold
)
def _autotrack_move_ptz(self, camera, obj):
camera_config = self.config.cameras[camera]
average_velocity = (0,) * 4
# frame width and height
camera_width = camera_config.frame_shape[1]
camera_height = camera_config.frame_shape[0]
camera_fps = camera_config.detect.fps
centroid_x = obj.obj_data["centroid"][0]
centroid_y = obj.obj_data["centroid"][1]
# Normalize coordinates. top right of the fov is (1,1), center is (0,0), bottom left is (-1, -1).
pan = ((obj.obj_data["centroid"][0] / camera_width) - 0.5) * 2
tilt = (0.5 - (obj.obj_data["centroid"][1] / camera_height)) * 2
pan = ((centroid_x / camera_width) - 0.5) * 2
tilt = (0.5 - (centroid_y / camera_height)) * 2
# ideas: check object velocity for camera speed?
self._enqueue_move(camera, obj.obj_data["frame_time"], pan, tilt)
if (
camera_config.onvif.autotracking.movement_weights
): # use estimates if we have available coefficients
predicted_movement_time = self._predict_movement_time(camera, pan, tilt)
# Norfair gives us two points for the velocity of an object represented as x1, y1, x2, y2
x1, y1, x2, y2 = obj.obj_data["estimate_velocity"]
average_velocity = (
(x1 + x2) / 2,
(y1 + y2) / 2,
(x1 + x2) / 2,
(y1 + y2) / 2,
)
# get euclidean distance of the two points, sometimes the estimate is way off
distance = np.linalg.norm([x2 - x1, y2 - y1])
if distance <= 5:
# this box could exceed the frame boundaries if velocity is high
# but we'll handle that in _enqueue_move() as two separate moves
predicted_box = [
round(x + camera_fps * predicted_movement_time * v)
for x, v in zip(obj.obj_data["box"], average_velocity)
]
else:
# estimate was bad
predicted_box = obj.obj_data["box"]
centroid_x = round((predicted_box[0] + predicted_box[2]) / 2)
centroid_y = round((predicted_box[1] + predicted_box[3]) / 2)
# recalculate pan and tilt with new centroid
pan = ((centroid_x / camera_width) - 0.5) * 2
tilt = (0.5 - (centroid_y / camera_height)) * 2
logger.debug(f'Original box: {obj.obj_data["box"]}')
logger.debug(f"Predicted box: {predicted_box}")
logger.debug(f'Velocity: {obj.obj_data["estimate_velocity"]}')
if camera_config.onvif.autotracking.zooming == ZoomingModeEnum.relative:
# relative zooming concurrently with pan/tilt
zoom = min(
obj.obj_data["area"]
/ (camera_width * camera_height)
* 100
* self.zoom_factor[camera],
1,
)
logger.debug(f"Zoom value: {zoom}")
# test if we need to zoom out
if not self._should_zoom_in(
camera,
predicted_box
if camera_config.onvif.autotracking.movement_weights
else obj.obj_data["box"],
obj.obj_data["area"],
average_velocity,
):
zoom = -(1 - zoom)
# don't make small movements to zoom in if area hasn't changed significantly
# but always zoom out if necessary
if (
"area" in obj.previous
and abs(obj.obj_data["area"] - obj.previous["area"])
/ obj.obj_data["area"]
< 0.2
and zoom > 0
):
zoom = 0
else:
zoom = 0
self._enqueue_move(camera, obj.obj_data["frame_time"], pan, tilt, zoom)
def _autotrack_zoom_only(self, camera, obj):
camera_config = self.config.cameras[camera]
# absolute zooming separately from pan/tilt
if camera_config.onvif.autotracking.zooming == ZoomingModeEnum.absolute:
zoom_level = self.ptz_metrics[camera]["ptz_zoom_level"].value
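# the camera's reported absolute zoom level, expected to be normalized to 0-1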
if 0 < zoom_level <= 1:
if self._should_zoom_in(
camera, obj.obj_data["box"], obj.obj_data["area"], (0, 0, 0, 0)
):
zoom = min(1.0, zoom_level + 0.1)
else:
zoom = max(0.0, zoom_level - 0.1)
if zoom != zoom_level:
self._enqueue_move(camera, obj.obj_data["frame_time"], 0, 0, zoom)
def autotrack_object(self, camera, obj):
camera_config = self.config.cameras[camera]
@@ -269,6 +630,10 @@ class PtzAutoTracker:
if not self.autotracker_init[camera]:
self._autotracker_setup(self.config.cameras[camera], camera)
if self.calibrating[camera]:
logger.debug("Calibrating camera")
return
# either this is a brand new object that's on our camera, has our label, entered the zone, is not a false positive,
# and is not initially motionless - or one we're already tracking, which assumes all those things are already true
if (
@@ -287,7 +652,7 @@ class PtzAutoTracker:
)
self.tracked_object[camera] = obj
self.tracked_object_previous[camera] = copy.deepcopy(obj)
self.previous_frame_time = obj.obj_data["frame_time"]
self.previous_frame_time[camera] = obj.obj_data["frame_time"]
self._autotrack_move_ptz(camera, obj)
return
@@ -299,7 +664,7 @@ class PtzAutoTracker:
and obj.obj_data["id"] == self.tracked_object[camera].obj_data["id"]
and obj.obj_data["frame_time"] != self.previous_frame_time
):
self.previous_frame_time = obj.obj_data["frame_time"]
self.previous_frame_time[camera] = obj.obj_data["frame_time"]
# Don't move ptz if Euclidean distance from object to center of frame is
# less than 15% of the larger dimension (width or height) of the frame,
# multiplied by a scaling factor for object size.
@@ -307,10 +672,11 @@ class PtzAutoTracker:
# more often to keep the object in the center. Raising the percentage will cause less
# movement and will be more flexible with objects not quite being centered.
# TODO: there's probably a better way to approach this
distance = math.sqrt(
(obj.obj_data["centroid"][0] - camera_config.detect.width / 2) ** 2
+ (obj.obj_data["centroid"][1] - camera_config.detect.height / 2)
** 2
distance = np.linalg.norm(
[
obj.obj_data["centroid"][0] - camera_config.detect.width / 2,
obj.obj_data["centroid"][1] - camera_config.detect.height / 2,
]
)
obj_width = obj.obj_data["box"][2] - obj.obj_data["box"][0]
@@ -337,6 +703,10 @@ class PtzAutoTracker:
logger.debug(
f"Autotrack: Existing object (do NOT move ptz): {obj.obj_data['id']} {obj.obj_data['box']} {obj.obj_data['frame_time']}"
)
# no need to move, but try absolute zooming
self._autotrack_zoom_only(camera, obj)
return
logger.debug(
@@ -345,6 +715,9 @@ class PtzAutoTracker:
self.tracked_object_previous[camera] = copy.deepcopy(obj)
self._autotrack_move_ptz(camera, obj)
# try absolute zooming too
self._autotrack_zoom_only(camera, obj)
return
if (
@@ -356,10 +729,9 @@ class PtzAutoTracker:
and obj.obj_data["label"] in self.object_types[camera]
and not obj.previous["false_positive"]
and not obj.false_positive
and obj.obj_data["motionless_count"] == 0
and self.tracked_object_previous[camera] is not None
):
self.previous_frame_time = obj.obj_data["frame_time"]
self.previous_frame_time[camera] = obj.obj_data["frame_time"]
if (
intersection_over_union(
self.tracked_object_previous[camera].obj_data["region"],
@@ -388,6 +760,12 @@ class PtzAutoTracker:
self.tracked_object[camera] = None
def camera_maintenance(self, camera):
# bail and don't check anything if we're calibrating or tracking an object
if self.calibrating[camera] or self.tracked_object[camera] is not None:
return
logger.debug("Running camera maintenance")
# calls get_camera_status to check/update ptz movement
# returns camera to preset after timeout when tracking is over
autotracker_config = self.config.cameras[camera].onvif.autotracking
@@ -404,19 +782,26 @@ class PtzAutoTracker:
and self.tracked_object_previous[camera] is not None
and (
# might want to use a different timestamp here?
time.time()
self.ptz_metrics[camera]["ptz_frame_time"].value
- self.tracked_object_previous[camera].obj_data["frame_time"]
> autotracker_config.timeout
)
and autotracker_config.return_preset
):
# empty move queue
while not self.move_queues[camera].empty():
self.move_queues[camera].get()
# clear tracked object
self.tracked_object[camera] = None
self.tracked_object_previous[camera] = None
self.ptz_metrics[camera]["ptz_stopped"].wait()
logger.debug(
f"Autotrack: Time is {time.time()}, returning to preset: {autotracker_config.return_preset}"
f"Autotrack: Time is {self.ptz_metrics[camera]['ptz_frame_time'].value}, returning to preset: {autotracker_config.return_preset}"
)
self.onvif._move_to_preset(
camera,
autotracker_config.return_preset.lower(),
)
self.ptz_metrics[camera]["ptz_reset"].set()
self.tracked_object_previous[camera] = None

View File

@@ -1,6 +1,5 @@
"""Configure and control camera via onvif."""
import datetime
import logging
import site
from enum import Enum
@@ -8,8 +7,9 @@ from enum import Enum
import numpy
from onvif import ONVIFCamera, ONVIFError
from frigate.config import FrigateConfig
from frigate.config import FrigateConfig, ZoomingModeEnum
from frigate.types import PTZMetricsTypes
from frigate.util.builtin import find_by_key
logger = logging.getLogger(__name__)
@@ -33,6 +33,7 @@ class OnvifController:
self, config: FrigateConfig, ptz_metrics: dict[str, PTZMetricsTypes]
) -> None:
self.cams: dict[str, ONVIFCamera] = {}
self.config = config
self.ptz_metrics = ptz_metrics
for cam_name, cam in config.cameras.items():
@@ -73,11 +74,20 @@ class OnvifController:
return False
ptz = onvif.create_ptz_service()
request = ptz.create_type("GetConfigurations")
configs = ptz.GetConfigurations(request)[0]
request = ptz.create_type("GetConfigurationOptions")
request.ConfigurationToken = profile.PTZConfiguration.token
ptz_config = ptz.GetConfigurationOptions(request)
logger.debug(f"Onvif config for {camera_name}: {ptz_config}")
service_capabilities_request = ptz.create_type("GetServiceCapabilities")
self.cams[camera_name][
"service_capabilities_request"
] = service_capabilities_request
fov_space_id = next(
(
i
@@ -89,6 +99,20 @@ class OnvifController:
None,
)
# autotracking relative panning/tilting needs a relative zoom value set to 0
# if camera supports relative movement
if self.config.cameras[camera_name].onvif.autotracking.zooming:
zoom_space_id = next(
(
i
for i, space in enumerate(
ptz_config.Spaces.RelativeZoomTranslationSpace
)
if "TranslationGenericSpace" in space["URI"]
),
None,
)
# setup continuous moving request
move_request = ptz.create_type("ContinuousMove")
move_request.ProfileToken = profile.token
@@ -105,19 +129,27 @@ class OnvifController:
"RelativePanTiltTranslationSpace"
][fov_space_id]["URI"]
# try setting relative zoom translation space
try:
move_request.Translation.Zoom.space = ptz_config["Spaces"][
"RelativeZoomTranslationSpace"
][0]["URI"]
if self.config.cameras[camera_name].onvif.autotracking.zooming:
if zoom_space_id is not None:
move_request.Translation.Zoom.space = ptz_config["Spaces"][
"RelativeZoomTranslationSpace"
][0]["URI"]
except Exception:
# camera does not support relative zoom
pass
if self.config.cameras[camera_name].onvif.autotracking.zoom_relative:
self.config.cameras[
camera_name
].onvif.autotracking.zoom_relative = False
logger.warning(
f"Disabling autotracking zooming for {camera_name}: Absolute zoom not supported"
)
if move_request.Speed is None:
move_request.Speed = ptz.GetStatus({"ProfileToken": profile.token}).Position
self.cams[camera_name]["relative_move_request"] = move_request
# setup relative moving request for autotracking
# setup absolute moving request for autotracking zooming
move_request = ptz.create_type("AbsoluteMove")
move_request.ProfileToken = profile.token
self.cams[camera_name]["absolute_move_request"] = move_request
@@ -126,6 +158,8 @@ class OnvifController:
status_request = ptz.create_type("GetStatus")
status_request.ProfileToken = profile.token
self.cams[camera_name]["status_request"] = status_request
status = ptz.GetStatus(status_request)
logger.debug(f"Onvif status config for {camera_name}: {status}")
# setup existing presets
try:
@@ -153,14 +187,28 @@ class OnvifController:
if ptz_config.Spaces and ptz_config.Spaces.RelativeZoomTranslationSpace:
supported_features.append("zoom-r")
if ptz_config.Spaces and ptz_config.Spaces.AbsoluteZoomPositionSpace:
supported_features.append("zoom-a")
try:
# get camera's zoom limits from onvif config
self.cams[camera_name][
"absolute_zoom_range"
] = ptz_config.Spaces.AbsoluteZoomPositionSpace[0]
self.cams[camera_name]["zoom_limits"] = configs.ZoomLimits
except Exception:
if self.config.cameras[camera_name].onvif.autotracking.zooming:
self.config.cameras[camera_name].onvif.autotracking.zooming = False
logger.warning(
f"Disabling autotracking zooming for {camera_name}: Absolute zoom not supported"
)
# set relative pan/tilt space for autotracker
if fov_space_id is not None:
supported_features.append("pt-r-fov")
self.cams[camera_name][
"relative_fov_range"
] = ptz_config.Spaces.RelativePanTiltTranslationSpace[fov_space_id]
self.cams[camera_name]["relative_fov_supported"] = fov_space_id is not None
self.cams[camera_name]["features"] = supported_features
self.cams[camera_name]["init"] = True
@@ -210,8 +258,8 @@ class OnvifController:
onvif.get_service("ptz").ContinuousMove(move_request)
def _move_relative(self, camera_name: str, pan, tilt, speed) -> None:
if not self.cams[camera_name]["relative_fov_supported"]:
def _move_relative(self, camera_name: str, pan, tilt, zoom, speed) -> None:
if "pt-r-fov" not in self.cams[camera_name]["features"]:
logger.error(f"{camera_name} does not support ONVIF RelativeMove (FOV).")
return
@@ -225,10 +273,12 @@ class OnvifController:
self.cams[camera_name]["active"] = True
self.ptz_metrics[camera_name]["ptz_stopped"].clear()
logger.debug(f"PTZ start time: {datetime.datetime.now().timestamp()}")
self.ptz_metrics[camera_name][
"ptz_start_time"
].value = datetime.datetime.now().timestamp()
logger.debug(
f"PTZ start time: {self.ptz_metrics[camera_name]['ptz_frame_time'].value}"
)
self.ptz_metrics[camera_name]["ptz_start_time"].value = self.ptz_metrics[
camera_name
]["ptz_frame_time"].value
self.ptz_metrics[camera_name]["ptz_stop_time"].value = 0
onvif: ONVIFCamera = self.cams[camera_name]["onvif"]
move_request = self.cams[camera_name]["relative_move_request"]
@@ -257,15 +307,30 @@ class OnvifController:
"x": speed,
"y": speed,
},
"Zoom": 0,
}
move_request.Translation.PanTilt.x = pan
move_request.Translation.PanTilt.y = tilt
move_request.Translation.Zoom.x = 0
if "zoom-r" in self.cams[camera_name]["features"]:
move_request.Speed = {
"PanTilt": {
"x": speed,
"y": speed,
},
"Zoom": {"x": speed},
}
move_request.Translation.Zoom.x = zoom
onvif.get_service("ptz").RelativeMove(move_request)
# reset after the move request
move_request.Translation.PanTilt.x = 0
move_request.Translation.PanTilt.y = 0
if "zoom-r" in self.cams[camera_name]["features"]:
move_request.Translation.Zoom.x = 0
self.cams[camera_name]["active"] = False
def _move_to_preset(self, camera_name: str, preset: str) -> None:
@@ -305,6 +370,50 @@ class OnvifController:
onvif.get_service("ptz").ContinuousMove(move_request)
def _zoom_absolute(self, camera_name: str, zoom, speed) -> None:
if "zoom-a" not in self.cams[camera_name]["features"]:
logger.error(f"{camera_name} does not support ONVIF AbsoluteMove zooming.")
return
logger.debug(f"{camera_name} called AbsoluteMove: zoom: {zoom}")
if self.cams[camera_name]["active"]:
logger.warning(
f"{camera_name} is already performing an action, not moving..."
)
return
self.cams[camera_name]["active"] = True
self.ptz_metrics[camera_name]["ptz_stopped"].clear()
logger.debug(
f"PTZ start time: {self.ptz_metrics[camera_name]['ptz_frame_time'].value}"
)
self.ptz_metrics[camera_name]["ptz_start_time"].value = self.ptz_metrics[
camera_name
]["ptz_frame_time"].value
self.ptz_metrics[camera_name]["ptz_stop_time"].value = 0
onvif: ONVIFCamera = self.cams[camera_name]["onvif"]
move_request = self.cams[camera_name]["absolute_move_request"]
# this function takes a zoom value from 0 to 1 and interpolates it to the camera's configured range
zoom = numpy.interp(
zoom,
[0, 1],
[
self.cams[camera_name]["absolute_zoom_range"]["XRange"]["Min"],
self.cams[camera_name]["absolute_zoom_range"]["XRange"]["Max"],
],
)
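# worked example with hypothetical limits (not from any particular camera):
# if the camera reports XRange Min=1.0 and Max=5.0, a requested zoom of 0.5
# maps to numpy.interp(0.5, [0, 1], [1.0, 5.0]) == 3.0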
move_request.Speed = {"Zoom": speed}
move_request.Position = {"Zoom": zoom}
logger.debug(f"Absolute zoom: {zoom}")
onvif.get_service("ptz").AbsoluteMove(move_request)
self.cams[camera_name]["active"] = False
def handle_command(
self, camera_name: str, command: OnvifCommandEnum, param: str = ""
) -> None:
@@ -344,7 +453,30 @@ class OnvifController:
"presets": list(self.cams[camera_name]["presets"].keys()),
}
def get_camera_status(self, camera_name: str) -> dict[str, any]:
def get_service_capabilities(self, camera_name: str) -> None:
if camera_name not in self.cams.keys():
logger.error(f"Onvif is not setup for {camera_name}")
return {}
if not self.cams[camera_name]["init"]:
self._init_onvif(camera_name)
onvif: ONVIFCamera = self.cams[camera_name]["onvif"]
service_capabilities_request = self.cams[camera_name][
"service_capabilities_request"
]
service_capabilities = onvif.get_service("ptz").GetServiceCapabilities(
service_capabilities_request
)
logger.debug(
f"Onvif service capabilities for {camera_name}: {service_capabilities}"
)
# MoveStatus is required for autotracking - should return "true" if supported
return find_by_key(vars(service_capabilities), "MoveStatus")
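# usage sketch (camera name and caller are hypothetical): since MoveStatus
# support is required for autotracking, setup code can gate on this value
#
#   if onvif.get_service_capabilities("back") is None:
#       logger.warning("back does not report MoveStatus, disabling autotracking")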
def get_camera_status(self, camera_name: str) -> None:
if camera_name not in self.cams.keys():
logger.error(f"Onvif is not setup for {camera_name}")
return {}
@@ -356,32 +488,66 @@ class OnvifController:
status_request = self.cams[camera_name]["status_request"]
status = onvif.get_service("ptz").GetStatus(status_request)
if status.MoveStatus.PanTilt == "IDLE" and status.MoveStatus.Zoom == "IDLE":
# the ONVIF spec doesn't seem to standardize this optional parameter
# some cameras can report MoveStatus with or without PanTilt or Zoom attributes
pan_tilt_status = getattr(status.MoveStatus, "PanTilt", None)
zoom_status = getattr(status.MoveStatus, "Zoom", None)
# if it's not an attribute, see if MoveStatus even exists in the status result
if pan_tilt_status is None:
pan_tilt_status = getattr(status, "MoveStatus", None)
# we're unsupported
if pan_tilt_status is None or pan_tilt_status.lower() not in [
"idle",
"moving",
]:
logger.error(
f"Camera {camera_name} does not support the ONVIF GetStatus method. Autotracking will not function correctly and must be disabled in your config."
)
return
if pan_tilt_status.lower() == "idle" and (
zoom_status is None or zoom_status.lower() == "idle"
):
self.cams[camera_name]["active"] = False
if not self.ptz_metrics[camera_name]["ptz_stopped"].is_set():
self.ptz_metrics[camera_name]["ptz_stopped"].set()
logger.debug(f"PTZ stop time: {datetime.datetime.now().timestamp()}")
logger.debug(
f"PTZ stop time: {self.ptz_metrics[camera_name]['ptz_frame_time'].value}"
)
self.ptz_metrics[camera_name][
"ptz_stop_time"
].value = datetime.datetime.now().timestamp()
self.ptz_metrics[camera_name]["ptz_stop_time"].value = self.ptz_metrics[
camera_name
]["ptz_frame_time"].value
else:
self.cams[camera_name]["active"] = True
if self.ptz_metrics[camera_name]["ptz_stopped"].is_set():
self.ptz_metrics[camera_name]["ptz_stopped"].clear()
logger.debug(f"PTZ start time: {datetime.datetime.now().timestamp()}")
logger.debug(
f"PTZ start time: {self.ptz_metrics[camera_name]['ptz_frame_time'].value}"
)
self.ptz_metrics[camera_name][
"ptz_start_time"
].value = datetime.datetime.now().timestamp()
].value = self.ptz_metrics[camera_name]["ptz_frame_time"].value
self.ptz_metrics[camera_name]["ptz_stop_time"].value = 0
return {
"pan": status.Position.PanTilt.x,
"tilt": status.Position.PanTilt.y,
"zoom": status.Position.Zoom.x,
"pantilt_moving": status.MoveStatus.PanTilt,
"zoom_moving": status.MoveStatus.Zoom,
}
if (
self.config.cameras[camera_name].onvif.autotracking.zooming
== ZoomingModeEnum.absolute
):
# store absolute zoom level as 0 to 1 interpolated from the values of the camera
self.ptz_metrics[camera_name]["ptz_zoom_level"].value = numpy.interp(
round(status.Position.Zoom.x, 2),
[0, 1],
[
self.cams[camera_name]["absolute_zoom_range"]["XRange"]["Min"],
self.cams[camera_name]["absolute_zoom_range"]["XRange"]["Max"],
],
)
logger.debug(
f'Camera zoom level: {self.ptz_metrics[camera_name]["ptz_zoom_level"].value}'
)


@@ -0,0 +1,47 @@
"""Test camera user and password cleanup."""
import unittest
from frigate.output import get_canvas_shape
class TestBirdseye(unittest.TestCase):
def test_16x9(self):
"""Test 16x9 aspect ratio works as expected for birdseye."""
width = 1280
height = 720
canvas_width, canvas_height = get_canvas_shape(width, height)
assert canvas_width == width
assert canvas_height == height
def test_4x3(self):
"""Test 4x3 aspect ratio works as expected for birdseye."""
width = 1280
height = 960
canvas_width, canvas_height = get_canvas_shape(width, height)
assert canvas_width == width
assert canvas_height == height
def test_32x9(self):
"""Test 32x9 aspect ratio works as expected for birdseye."""
width = 2560
height = 720
canvas_width, canvas_height = get_canvas_shape(width, height)
assert canvas_width == width
assert canvas_height == height
def test_9x16(self):
"""Test 9x16 aspect ratio works as expected for birdseye."""
width = 720
height = 1280
canvas_width, canvas_height = get_canvas_shape(width, height)
assert canvas_width == width
assert canvas_height == height
def test_non_16x9(self):
"""Test non 16x9 aspect ratio fails for birdseye."""
width = 1280
height = 840
canvas_width, canvas_height = get_canvas_shape(width, height)
assert canvas_width == width # width will be the same
assert canvas_height != height
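The assertions above fully pin down the expected behavior. A minimal sketch consistent with these tests (the helper name, known-ratio list, and 0.01 tolerance are assumptions, not the actual frigate.output implementation):

def get_canvas_shape_sketch(width: int, height: int) -> tuple[int, int]:
    """Pass standard aspect ratios through; snap others to the nearest one."""
    known_ratios = [32 / 9, 16 / 9, 4 / 3, 9 / 16]
    ratio = width / height
    if any(abs(ratio - known) < 0.01 for known in known_ratios):
        # standard aspect ratio: canvas matches the input exactly
        return width, height
    # non-standard: keep the width and derive the height from the nearest ratio
    nearest = min(known_ratios, key=lambda known: abs(known - ratio))
    return width, int(width / nearest)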


@@ -1536,6 +1536,46 @@ class TestConfig(unittest.TestCase):
assert runtime_config.cameras["back"].objects.filters["dog"].min_ratio == 0.2
assert runtime_config.cameras["back"].objects.filters["dog"].max_ratio == 10.1
def test_valid_movement_weights(self):
config = {
"mqtt": {"host": "mqtt"},
"cameras": {
"back": {
"ffmpeg": {
"inputs": [
{"path": "rtsp://10.0.0.1:554/video", "roles": ["detect"]}
]
},
"onvif": {"autotracking": {"movement_weights": "1.23, 2.34, 0.50"}},
}
},
}
frigate_config = FrigateConfig(**config)
runtime_config = frigate_config.runtime_config()
assert runtime_config.cameras["back"].onvif.autotracking.movement_weights == [
1.23,
2.34,
0.50,
]
def test_fails_invalid_movement_weights(self):
config = {
"mqtt": {"host": "mqtt"},
"cameras": {
"back": {
"ffmpeg": {
"inputs": [
{"path": "rtsp://10.0.0.1:554/video", "roles": ["detect"]}
]
},
"onvif": {"autotracking": {"movement_weights": "1.234, 2.345a"}},
}
},
}
self.assertRaises(ValueError, lambda: FrigateConfig(**config))
if __name__ == "__main__":
unittest.main(verbosity=2)
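For reference, a parsing step consistent with both tests above (a hedged sketch; the real validator lives in frigate's config model and may differ):

def parse_movement_weights(value: str) -> list[float]:
    # split the comma-separated string and require every entry to be numeric
    try:
        return [float(weight.strip()) for weight in value.split(",")]
    except ValueError:
        raise ValueError(f"invalid movement_weights: {value}")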


@@ -236,6 +236,44 @@ class TestHttp(unittest.TestCase):
assert event["id"] == id
assert event["retain_indefinitely"] is False
def test_event_time_filtering(self):
app = create_app(
FrigateConfig(**self.minimal_config),
self.db,
None,
None,
None,
None,
None,
PlusApi(),
)
morning_id = "123456.random"
evening_id = "654321.random"
morning = 1656590400 # 06/30/2022 6 am (GMT)
evening = 1656633600 # 06/30/2022 6 pm (GMT)
with app.test_client() as client:
_insert_mock_event(morning_id, morning)
_insert_mock_event(evening_id, evening)
# both events come back
events = client.get("/events").json
assert events
assert len(events) == 2
# morning event is excluded
events = client.get(
"/events",
query_string={"time_range": "07:00,24:00"},
).json
assert events
# assert len(events) == 1
# evening event is excluded
events = client.get(
"/events",
query_string={"time_range": "00:00,18:00"},
).json
assert events
assert len(events) == 1
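Outside the test client, the same filter can be exercised against a running instance (host, port, and the /api prefix are assumptions here; times are interpreted in the supplied timezone):

import requests

# fetch only events whose start time falls between 07:00 and midnight
events = requests.get(
    "http://frigate.local:5000/api/events",
    params={"time_range": "07:00,24:00", "timezone": "UTC"},
).json()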
def test_set_delete_sub_label(self):
app = create_app(
FrigateConfig(**self.minimal_config),
@@ -351,14 +389,17 @@ class TestHttp(unittest.TestCase):
assert stats == self.test_stats
def _insert_mock_event(id: str) -> Event:
def _insert_mock_event(
id: str,
start_time: datetime.datetime = datetime.datetime.now().timestamp(),
) -> Event:
"""Inserts a basic event model with a given id."""
return Event.insert(
id=id,
label="Mock",
camera="front_door",
start_time=datetime.datetime.now().timestamp(),
end_time=datetime.datetime.now().timestamp() + 20,
start_time=start_time,
end_time=start_time + 20,
top_score=100,
false_positive=False,
zones=list(),


@@ -278,9 +278,11 @@ class NorfairTracker(ObjectTracker):
min(self.detect_config.width - 1, estimate[2]),
min(self.detect_config.height - 1, estimate[3]),
)
estimate_velocity = tuple(t.estimate_velocity.flatten().astype(int))
obj = {
**t.last_detection.data,
"estimate": estimate,
"estimate_velocity": estimate_velocity,
}
active_ids.append(t.global_id)
if t.global_id not in self.track_id_map:
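For a box tracked with two points, norfair's estimate_velocity is a (2, 2) array of per-point (x, y) velocities, so the flatten/astype step above produces a 4-tuple of ints (illustrative values, assuming that shape):

import numpy as np

estimate_velocity = np.array([[1.6, -0.4], [1.2, -0.1]])  # hypothetical velocities
print(tuple(estimate_velocity.flatten().astype(int)))  # (1, 0, 1, 0)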


@@ -31,6 +31,8 @@ class PTZMetricsTypes(TypedDict):
ptz_reset: Event
ptz_start_time: Synchronized
ptz_stop_time: Synchronized
ptz_frame_time: Synchronized
ptz_zoom_level: Synchronized
class FeatureMetricsTypes(TypedDict):


@@ -249,3 +249,15 @@ def update_yaml(data, key_path, new_value):
temp[last_key] = new_value
return data
def find_by_key(dictionary, target_key):
if target_key in dictionary:
return dictionary[target_key]
else:
for value in dictionary.values():
if isinstance(value, dict):
result = find_by_key(value, target_key)
if result is not None:
return result
return None
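A quick usage sketch of this depth-first lookup (the data is hypothetical, shaped like the nested ONVIF capability responses it is used on):

capabilities = {"Capabilities": {"PTZ": {"MoveStatus": "true"}}}
assert find_by_key(capabilities, "MoveStatus") == "true"
assert find_by_key(capabilities, "missing_key") is None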


@@ -767,6 +767,7 @@ def process_frames(
continue
current_frame_time.value = frame_time
ptz_metrics["ptz_frame_time"].value = frame_time
frame = frame_manager.get(
f"{camera_name}{frame_time}", (frame_shape[0] * 3 // 2, frame_shape[1])


@@ -21,7 +21,7 @@ export default function LargeDialog({ children, portalRootID = 'dialogs' }) {
>
<div
role="modal"
className={`absolute rounded shadow-2xl bg-white dark:bg-gray-700 w-4/5 max-w-7xl text-gray-900 dark:text-white transition-transform transition-opacity duration-75 transform scale-90 opacity-0 ${
className={`absolute rounded shadow-2xl bg-white dark:bg-gray-700 w-4/5 md:h-2/3 max-w-7xl text-gray-900 dark:text-white transition-transform transition-opacity duration-75 transform scale-90 opacity-0 ${
show ? 'scale-100 opacity-100' : ''
}`}
>


@@ -1,182 +1,18 @@
import { h } from 'preact';
import { useCallback, useEffect, useMemo, useState } from 'preact/hooks';
import { useState } from 'preact/hooks';
import { ArrowDropdown } from '../icons/ArrowDropdown';
import { ArrowDropup } from '../icons/ArrowDropup';
import Heading from './Heading';
const TimePicker = ({ dateRange, onChange }) => {
const [error, setError] = useState(null);
const [timeRange, setTimeRange] = useState(new Set());
const [hoverIdx, setHoverIdx] = useState(null);
const [reset, setReset] = useState(false);
const TimePicker = ({ timeRange, onChange }) => {
const times = timeRange.split(',');
const [after, setAfter] = useState(times[0]);
const [before, setBefore] = useState(times[1]);
/**
* Initializes two variables before and after with date objects,
* If they are not null, it creates a new Date object with the value of the property and if not,
* it creates a new Date object with the current hours to 0 and 24 respectively.
*/
const before = useMemo(() => {
return dateRange.before ? new Date(dateRange.before) : new Date(new Date().setHours(24, 0, 0, 0));
}, [dateRange]);
const after = useMemo(() => {
return dateRange.after ? new Date(dateRange.after) : new Date(new Date().setHours(0, 0, 0, 0));
}, [dateRange]);
useEffect(() => {
/**
* This will reset hours when user selects another date in the calendar.
*/
if (before.getHours() === 0 && after.getHours() === 0 && timeRange.size > 1) return setTimeRange(new Set());
}, [after, before, timeRange]);
useEffect(() => {
if (reset || !after) return;
/**
* calculates the number of hours between two dates, by finding the difference in days,
* converting it to hours and adding the hours from the before date.
*/
const days = Math.max(before.getDate() - after.getDate());
const hourOffset = days * 24;
const beforeOffset = before.getHours() ? hourOffset + before.getHours() : 0;
/**
* Fills the timeRange by iterating over the hours between 'after' and 'before' during component mount, to keep the selected hours persistent.
*/
for (let hour = after.getHours(); hour < beforeOffset; hour++) {
setTimeRange((timeRange) => timeRange.add(hour));
}
/**
* find an element by the id timeIndex- concatenated with the minimum value from timeRange array,
* and if that element is present, it will scroll into view if needed
*/
if (timeRange.size > 1) {
const element = document.getElementById(`timeIndex-${Math.max(...timeRange)}`);
if (element) {
element.scrollIntoViewIfNeeded(true);
}
}
}, [after, before, timeRange, reset]);
/**
* numberOfDaysSelected is a set that holds the number of days selected in the dateRange.
* The loop iterates through the days starting from the after date's day to the before date's day.
* If the before date's hour is 0, it skips it.
*/
const numberOfDaysSelected = useMemo(() => {
return new Set([...Array(Math.max(1, before.getDate() - after.getDate() + 1))].map((_, i) => after.getDate() + i));
}, [before, after]);
if (before.getHours() === 0) numberOfDaysSelected.delete(before.getDate());
// Create repeating array with the number of hours for each day selected ...23,24,0,1,2...
const hoursInDays = useMemo(() => {
return Array.from({ length: numberOfDaysSelected.size * 24 }, (_, i) => i % 24);
}, [numberOfDaysSelected]);
// function for handling the selected time from the provided list
const handleTime = useCallback(
(hour) => {
if (isNaN(hour)) return;
const _timeRange = new Set([...timeRange]);
_timeRange.add(hour);
// reset error messages
setError(null);
/**
* Check if the variable "hour" exists in the "timeRange" set.
* If it does, reset the timepicker
*/
if (timeRange.has(hour)) {
setTimeRange(new Set());
setReset(true);
const resetBefore = before.setDate(after.getDate() + numberOfDaysSelected.size - 1);
return onChange({
after: after.setHours(0, 0, 0, 0) / 1000,
before: new Date(resetBefore).setHours(24, 0, 0, 0) / 1000,
});
}
//update after
if (_timeRange.size === 1) {
// check if the first selected value is within first day
const firstSelectedHour = Math.ceil(Math.max(..._timeRange));
if (firstSelectedHour > 23) {
return setError('Select a time on the initial day!');
}
// calculate days offset
const dayOffsetAfter = new Date(after).setHours(Math.min(..._timeRange));
let dayOffsetBefore = before;
if (numberOfDaysSelected.size === 1) {
dayOffsetBefore = new Date(after).setHours(Math.min(..._timeRange) + 1);
}
onChange({
after: dayOffsetAfter / 1000,
before: dayOffsetBefore / 1000,
});
}
//update before
if (_timeRange.size > 1) {
let selectedDay = Math.ceil(Math.max(..._timeRange) / 24);
// if user selects time 00:00 for the next day, add one day
if (hour === 24 && selectedDay === numberOfDaysSelected.size - 1) {
selectedDay += 1;
}
// Check if end time is on the last day
if (selectedDay !== numberOfDaysSelected.size) {
return setError('Ending must occur on final day!');
}
// Check if end time is later than start time
const startHour = Math.min(..._timeRange);
if (hour <= startHour) {
return setError('Ending hour must be greater than start time!');
}
// Add all hours between start and end times to the set
for (let x = startHour; x <= hour; x++) {
_timeRange.add(x);
}
// calculate days offset
const dayOffsetBefore = new Date(dateRange.after);
onChange({
after: dateRange.after / 1000,
// we add one hour to get full 60min of last selected hour
before: dayOffsetBefore.setHours(Math.max(..._timeRange) + 1) / 1000,
});
}
for (let i = 0; i < _timeRange.size; i++) {
setTimeRange((timeRange) => timeRange.add(Array.from(_timeRange)[i]));
}
},
[after, before, timeRange, dateRange.after, numberOfDaysSelected.size, onChange]
);
const isSelected = useCallback(
(idx) => {
return !!timeRange.has(idx);
},
[timeRange]
);
const isHovered = useCallback(
(idx) => {
return timeRange.size === 1 && idx > Math.max(...timeRange) && idx <= hoverIdx;
},
[timeRange, hoverIdx]
);
// Create repeating array with the number of hours for 1 day ...23,24,0,1,2...
const hoursInDays = Array.from({ length: 24 }, (_, i) => String(i % 24).padStart(2, '0'));
// background colors for each day
const isSelectedCss = 'bg-blue-600 transition duration-300 ease-in-out hover:rounded-none';
function randomGrayTone(shade) {
const grayTones = [
'bg-[#212529]/50',
@@ -193,44 +29,72 @@ const TimePicker = ({ dateRange, onChange }) => {
return grayTones[shade % grayTones.length];
}
const isSelected = (idx, current) => {
return current == `${idx}:00`;
};
const isSelectedCss = 'bg-blue-600 transition duration-300 ease-in-out hover:rounded-none';
const handleTime = (after, before) => {
setAfter(after);
setBefore(before);
onChange(`${after},${before}`);
};
return (
<>
{error ? <span className="text-red-400 text-center text-xs absolute top-1 right-0 pr-2">{error}</span> : null}
<div className="mt-2 pr-3 hidden xs:block" aria-label="Calendar timepicker, select a time range">
<div className="flex items-center justify-center">
<ArrowDropup className="w-10 text-center" />
</div>
<div className="w-20 px-1">
<div
className="border border-gray-400/50 cursor-pointer hide-scroll shadow-md rounded-md"
style={{ maxHeight: '17rem', overflowY: 'scroll' }}
>
{hoursInDays.map((_, idx) => (
<div
key={idx}
id={`timeIndex-${idx}`}
className={`${isSelected(idx) ? isSelectedCss : ''}
${isHovered(idx) ? 'opacity-30 bg-slate-900 transition duration-150 ease-in-out' : ''}
${Math.min(...timeRange) === idx ? 'rounded-t-lg' : ''}
${timeRange.size > 1 && Math.max(...timeRange) === idx ? 'rounded-b-lg' : ''}`}
onMouseEnter={() => setHoverIdx(idx)}
onMouseLeave={() => setHoverIdx(null)}
>
<div
className={`
<div className="px-1 flex justify-between">
<div>
<Heading className="text-center" size="sm">
After
</Heading>
<div
className="w-20 border border-gray-400/50 cursor-pointer hide-scroll shadow-md rounded-md"
style={{ maxHeight: '17rem', overflowY: 'scroll' }}
>
{hoursInDays.map((time, idx) => (
<div className={`${isSelected(time, after) ? isSelectedCss : ''}`} key={idx} id={`timeIndex-${idx}`}>
<div
className={`
text-gray-300 w-full font-light border border-transparent hover:border hover:rounded-md hover:border-gray-600 text-center text-sm
${randomGrayTone([Math.floor(idx / 24)])}`}
onClick={() => handleTime(idx)}
>
<span aria-label={`${idx}:00`}>{hoursInDays[idx]}:00</span>
onClick={() => handleTime(`${time}:00`, before)}
>
<span aria-label={`${idx}:00`}>{hoursInDays[idx]}:00</span>
</div>
</div>
</div>
))}
))}
</div>
</div>
<div className="flex items-center justify-center">
<ArrowDropdown className="w-10 text-center" />
<div>
<Heading className="text-center" size="sm">
Before
</Heading>
<div
className="w-20 border border-gray-400/50 cursor-pointer hide-scroll shadow-md rounded-md"
style={{ maxHeight: '17rem', overflowY: 'scroll' }}
>
{hoursInDays.map((time, idx) => (
<div className={`${isSelected(time, before) ? isSelectedCss : ''}`} key={idx} id={`timeIndex-${idx}`}>
<div
className={`
text-gray-300 w-full font-light border border-transparent hover:border hover:rounded-md hover:border-gray-600 text-center text-sm
${randomGrayTone([Math.floor(idx / 24)])}`}
onClick={() => handleTime(after, `${time}:00`)}
>
<span aria-label={`${idx}:00`}>{hoursInDays[idx]}:00</span>
</div>
</div>
))}
</div>
</div>
</div>
<div className="flex items-center justify-center">
<ArrowDropdown className="w-10 text-center" />
</div>
</div>
</>
);


@@ -55,7 +55,7 @@ export default function TimelineEventOverlay({ eventOverlay, cameraConfig }) {
) : null}
</div>
{isHovering && (
<div className="absolute bg-white dark:bg-slate-800 p-4 block dark:text-white text-lg" style={getHoverStyle()}>
<div className="absolute bg-white dark:bg-slate-800 p-4 block text-black dark:text-white text-lg" style={getHoverStyle()}>
<div>{`Area: ${getObjectArea()} px`}</div>
<div>{`Ratio: ${getObjectRatio()}`}</div>
</div>


@@ -48,6 +48,8 @@ const monthsAgo = (num) => {
export default function Events({ path, ...props }) {
const apiHost = useApiHost();
const { data: config } = useSWR('config');
const timezone = useMemo(() => config?.ui?.timezone || Intl.DateTimeFormat().resolvedOptions().timeZone, [config]);
const [searchParams, setSearchParams] = useState({
before: null,
after: null,
@@ -55,6 +57,8 @@ export default function Events({ path, ...props }) {
labels: props.labels ?? 'all',
zones: props.zones ?? 'all',
sub_labels: props.sub_labels ?? 'all',
time_range: '00:00,24:00',
timezone,
favorites: props.favorites ?? 0,
event: props.event,
});
@@ -87,14 +91,17 @@ export default function Events({ path, ...props }) {
showDeleteFavorite: false,
});
const eventsFetcher = useCallback((path, params) => {
if (searchParams.event) {
path = `${path}/${searchParams.event}`;
return axios.get(path).then((res) => [res.data]);
}
params = { ...params, include_thumbnails: 0, limit: API_LIMIT };
return axios.get(path, { params }).then((res) => res.data);
}, [searchParams]);
const eventsFetcher = useCallback(
(path, params) => {
if (searchParams.event) {
path = `${path}/${searchParams.event}`;
return axios.get(path).then((res) => [res.data]);
}
params = { ...params, include_thumbnails: 0, limit: API_LIMIT };
return axios.get(path, { params }).then((res) => res.data);
},
[searchParams]
);
const getKey = useCallback(
(index, prevData) => {
@@ -111,8 +118,6 @@ export default function Events({ path, ...props }) {
const { data: eventPages, mutate, size, setSize, isValidating } = useSWRInfinite(getKey, eventsFetcher);
const { data: config } = useSWR('config');
const { data: allLabels } = useSWR(['labels']);
const { data: allSubLabels } = useSWR(['sub_labels', { split_joined: 1 }]);
@@ -239,6 +244,13 @@ export default function Events({ path, ...props }) {
[searchParams, setSearchParams, state, setState]
);
const handleSelectTimeRange = useCallback(
(timeRange) => {
setSearchParams({ ...searchParams, time_range: timeRange });
},
[searchParams]
);
const onFilter = useCallback(
(name, value) => {
const updatedParams = { ...searchParams, [name]: value };
@@ -265,12 +277,16 @@ export default function Events({ path, ...props }) {
(node) => {
if (isValidating) return;
if (observer.current) observer.current.disconnect();
observer.current = new IntersectionObserver((entries) => {
if (entries[0].isIntersecting && !isDone) {
setSize(size + 1);
}
});
if (node) observer.current.observe(node);
try {
observer.current = new IntersectionObserver((entries) => {
if (entries[0].isIntersecting && !isDone) {
setSize(size + 1);
}
});
if (node) observer.current.observe(node);
} catch (e) {
// no op
}
},
[size, setSize, isValidating, isDone]
);
@@ -361,7 +377,7 @@ export default function Events({ path, ...props }) {
/>
)}
{searchParams.event && (
<Button className="ml-2" onClick={() => onFilter('event',null)} type="text">
<Button className="ml-2" onClick={() => onFilter('event', null)} type="text">
View All
</Button>
)}
@@ -399,7 +415,10 @@ export default function Events({ path, ...props }) {
download
/>
)}
{(event?.data?.type || "object") == "object" && downloadEvent.end_time && downloadEvent.has_snapshot && !downloadEvent.plus_id && (
{(event?.data?.type || 'object') == 'object' &&
downloadEvent.end_time &&
downloadEvent.has_snapshot &&
!downloadEvent.plus_id && (
<MenuItem
icon={UploadPlus}
label={uploading.includes(downloadEvent.id) ? 'Uploading...' : 'Send to Frigate+'}
@@ -459,10 +478,7 @@ export default function Events({ path, ...props }) {
dateRange={{ before: searchParams.before * 1000 || null, after: searchParams.after * 1000 || null }}
close={() => setState({ ...state, showCalendar: false })}
>
<Timepicker
dateRange={{ before: searchParams.before * 1000 || null, after: searchParams.after * 1000 || null }}
onChange={handleSelectDateRange}
/>
<Timepicker timeRange={searchParams.time_range} onChange={handleSelectTimeRange} />
</Calendar>
</Menu>
</span>
@@ -566,7 +582,11 @@ export default function Events({ path, ...props }) {
<p className="mb-2">Confirm deletion of saved event.</p>
</div>
<div className="p-2 flex justify-start flex-row-reverse space-x-2">
<Button className="ml-2" onClick={() => setDeleteFavoriteState({ ...state, showDeleteFavorite: false })} type="text">
<Button
className="ml-2"
onClick={() => setDeleteFavoriteState({ ...state, showDeleteFavorite: false })}
type="text"
>
Cancel
</Button>
<Button
@@ -635,10 +655,12 @@ export default function Events({ path, ...props }) {
<Camera className="h-5 w-5 mr-2 inline" />
{event.camera.replaceAll('_', ' ')}
</div>
{event.zones.length ? <div className="capitalize text-sm flex align-center">
<Zone className="w-5 h-5 mr-2 inline" />
{event.zones.join(', ').replaceAll('_', ' ')}
</div> : null}
{event.zones.length ? (
<div className="capitalize text-sm flex align-center">
<Zone className="w-5 h-5 mr-2 inline" />
{event.zones.join(', ').replaceAll('_', ' ')}
</div>
) : null}
<div className="capitalize text-sm flex align-center">
<Score className="w-5 h-5 mr-2 inline" />
{(event?.data?.top_score || event.top_score || 0) == 0
@@ -650,7 +672,7 @@ export default function Events({ path, ...props }) {
</div>
</div>
<div class="hidden sm:flex flex-col justify-end mr-2">
{event.end_time && event.has_snapshot && (event?.data?.type || "object") == "object" && (
{event.end_time && event.has_snapshot && (event?.data?.type || 'object') == 'object' && (
<Fragment>
{event.plus_id ? (
<div className="uppercase text-xs underline">


@@ -28,9 +28,9 @@ export default function Export() {
const localISODate = localDate.toISOString().split('T')[0];
const [startDate, setStartDate] = useState(localISODate);
const [startTime, setStartTime] = useState('00:00');
const [startTime, setStartTime] = useState('00:00:00');
const [endDate, setEndDate] = useState(localISODate);
const [endTime, setEndTime] = useState('23:59');
const [endTime, setEndTime] = useState('23:59:59');
// Export States
@@ -185,6 +185,7 @@ export default function Export() {
id="startTime"
type="time"
value={startTime}
step="1"
onChange={(e) => setStartTime(e.target.value)}
/>
<Heading className="py-2" size="sm">
@@ -202,6 +203,7 @@ export default function Export() {
id="endTime"
type="time"
value={endTime}
step="1"
onChange={(e) => setEndTime(e.target.value)}
/>
</div>