Forked from GitHub frigate
Compare commits: v0.10.0-be...v0.10.0-rc (42 commits)
| SHA1 |
| --- |
| e5714f5fbc |
| 3be0b915ad |
| 304ffa86e8 |
| 889835a59b |
| ee01396b36 |
| 334e28fe54 |
| 6b2bae040c |
| 95ab22d411 |
| 4e52461aa9 |
| 7934f8699f |
| adbc54bcfe |
| 4deb365758 |
| 1171770447 |
| 54d1a223a5 |
| 62c1a61ed0 |
| 9ecc7920dd |
| 45b56bdce5 |
| 54b88fb4a9 |
| a3fa3cb716 |
| 64f80a4732 |
| 0b02f20b26 |
| 8670a3d808 |
| 3617a625d3 |
| ad4929c621 |
| 9a0d276761 |
| 24f9937009 |
| 4e23967442 |
| acc1022998 |
| 02c91d4c51 |
| 5e156f8151 |
| 47e0e1d221 |
| f57501d033 |
| 1a3f21e5c1 |
| 5a2076fcab |
| 2d5ec25dca |
| 499f75e165 |
| 3600ebca39 |
| 50b5d40c10 |
| 21f1a98da4 |
| 21cc29be6f |
| 794a9ff162 |
| 7b4cb95825 |
````diff
@@ -22,3 +22,5 @@ RUN pip3 install pylint black
 # Install Node 14
 RUN curl -sL https://deb.nodesource.com/setup_14.x | bash - \
     && apt-get install -y nodejs
+
+RUN npm install -g npm@latest
````
````diff
@@ -61,8 +61,8 @@ cameras:
       roles:
         - detect
     detect:
-      width: 640
-      height: 480
+      width: 896
+      height: 672
       fps: 7
 ```
 
````
````diff
@@ -159,9 +159,23 @@ detect:
   enabled: True
   # Optional: Number of frames without a detection before frigate considers an object to be gone. (default: 5x the frame rate)
   max_disappeared: 25
-  # Optional: Frequency for running detection on stationary objects (default: 0)
-  # When set to 0, object detection will never be run on stationary objects. If set to 10, it will be run on every 10th frame.
-  stationary_interval: 0
+  # Optional: Configuration for stationary object tracking
+  stationary:
+    # Optional: Frequency for running detection on stationary objects (default: shown below)
+    # When set to 0, object detection will never be run on stationary objects. If set to 10, it will be run on every 10th frame.
+    interval: 0
+    # Optional: Number of frames without a position change for an object to be considered stationary (default: 10x the frame rate or 10s)
+    threshold: 50
+    # Optional: Define a maximum number of frames for tracking a stationary object (default: not set, track forever)
+    # This can help with false positives for objects that should only be stationary for a limited amount of time.
+    # It can also be used to disable stationary object tracking. For example, you may want to set a value for person, but leave
+    # car at the default.
+    max_frames:
+      # Optional: Default for all object types (default: not set, track forever)
+      default: 3000
+      # Optional: Object specific values
+      objects:
+        person: 1000
 
 # Optional: Object configuration
 # NOTE: Can be overridden at the camera level
````
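The `interval` semantics described in the comments above can be sketched as a simple gate. This is a hypothetical helper illustrating the documented behavior, not Frigate's actual implementation:

```python
def should_run_detection(frame_number: int, interval: int, is_stationary: bool) -> bool:
    """Decide whether object detection should run on this frame."""
    # Moving objects are always candidates for detection.
    if not is_stationary:
        return True
    # interval == 0 means detection is never run on stationary objects.
    if interval == 0:
        return False
    # interval == 10 means detection runs on every 10th frame, and so on.
    return frame_number % interval == 0
```

With `interval: 10`, a stationary object is only re-checked on every tenth frame, which is the main CPU saving this release introduces.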
````diff
@@ -381,7 +395,7 @@ cameras:
       # camera.
       front_steps:
         # Required: List of x,y coordinates to define the polygon of the zone.
-        # NOTE: Coordinates can be generated at https://www.image-map.net/
+        # NOTE: Presence in a zone is evaluated only based on the bottom center of the objects bounding box.
         coordinates: 545,1077,747,939,788,805
         # Optional: List of objects that can trigger this zone (default: all tracked objects)
         objects:
````
````diff
@@ -97,15 +97,3 @@ processes:
 | 0 N/A N/A 12827 C ffmpeg 417MiB |
 +-----------------------------------------------------------------------------+
 ```
-
-To further improve performance, you can set ffmpeg to skip frames in the output,
-using the fps filter:
-
-```yaml
-output_args:
-  - -filter:v
-  - fps=fps=5
-```
-
-This setting, for example, allows Frigate to consume my 10-15fps camera streams on
-my relatively low powered Haswell machine with relatively low cpu usage.
````
````diff
@@ -3,7 +3,9 @@ id: zones
 title: Zones
 ---
 
-Zones allow you to define a specific area of the frame and apply additional filters for object types so you can determine whether or not an object is within a particular area. Zones cannot have the same name as a camera. If desired, a single zone can include multiple cameras if you have multiple cameras covering the same area by configuring zones with the same name for each camera.
+Zones allow you to define a specific area of the frame and apply additional filters for object types so you can determine whether or not an object is within a particular area. Presence in a zone is evaluated based on the bottom center of the bounding box for the object. It does not matter how much of the bounding box overlaps with the zone.
+
+Zones cannot have the same name as a camera. If desired, a single zone can include multiple cameras if you have multiple cameras covering the same area by configuring zones with the same name for each camera.
 
 During testing, enable the Zones option for the debug feed so you can adjust as needed. The zone line will increase in thickness when any object enters the zone.
 
````
````diff
@@ -62,6 +62,8 @@ cameras:
           roles:
             - detect
             - rtmp
+    rtmp:
+      enabled: False # <-- RTMP should be disabled if your stream is not H264
     detect:
       width: 1280 # <---- update for your camera's resolution
       height: 720 # <---- update for your camera's resolution
````
````diff
@@ -71,7 +73,9 @@ cameras:
 
 At this point you should be able to start Frigate and see the the video feed in the UI.
 
-If you get a green image from the camera, this means ffmpeg was not able to get the video feed from your camera. Check the logs for error messages from ffmpeg. The default ffmpeg arguments are designed to work with RTSP cameras that support TCP connections. FFmpeg arguments for other types of cameras can be found [here](/configuration/camera_specific).
+If you get a green image from the camera, this means ffmpeg was not able to get the video feed from your camera. Check the logs for error messages from ffmpeg. The default ffmpeg arguments are designed to work with H264 RTSP cameras that support TCP connections. If you do not have H264 cameras, make sure you have disabled RTMP. It is possible to enable it, but you must tell ffmpeg to re-encode the video with customized output args.
+
+FFmpeg arguments for other types of cameras can be found [here](/configuration/camera_specific).
 
 ### Step 5: Configure hardware acceleration (optional)
 
````
````diff
@@ -163,13 +167,17 @@ cameras:
           roles:
             - detect
             - rtmp
-            - record # <----- Add role
+      - path: rtsp://10.0.10.10:554/high_res_stream # <----- Add high res stream
+        roles:
+          - record
     detect: ...
     record: # <----- Enable recording
       enabled: True
     motion: ...
 ```
 
+If you don't have separate streams for detect and record, you would just add the record role to the list on the first input.
+
 By default, Frigate will retain video of all events for 10 days. The full set of options for recording can be found [here](/configuration/index#full-configuration-reference).
 
 ### Step 8: Enable snapshots (optional)
````
````diff
@@ -25,6 +25,30 @@ automation:
       when: '{{trigger.payload_json["after"]["start_time"]|int}}'
 ```
+
+Note that iOS devices support live previews of cameras by adding a camera entity id to the message data.
+
+```yaml
+automation:
+  - alias: Security_Frigate_Notifications
+    description: ""
+    trigger:
+      - platform: mqtt
+        topic: frigate/events
+        payload: new
+        value_template: "{{ value_json.type }}"
+    action:
+      - service: notify.mobile_app_iphone
+        data:
+          message: 'A {{trigger.payload_json["after"]["label"]}} was detected.'
+          data:
+            image: >-
+              https://your.public.hass.address.com/api/frigate/notifications/{{trigger.payload_json["after"]["id"]}}/thumbnail.jpg
+            tag: '{{trigger.payload_json["after"]["id"]}}'
+            when: '{{trigger.payload_json["after"]["start_time"]|int}}'
+            entity_id: camera.{{trigger.payload_json["after"]["camera"]}}
+    mode: single
+```
 
 ## Conditions
 
 Conditions with the `before` and `after` values allow a high degree of customization for automations.
````
````diff
@@ -177,6 +177,15 @@ HassOS users can install via the addon repository.
 6. Start the addon container
 7. (not for proxy addon) If you are using hardware acceleration for ffmpeg, you may need to disable "Protection mode"
+
+There are several versions of the addon available:
+
+| Addon Version                  | Description                                                |
+| ------------------------------ | ---------------------------------------------------------- |
+| Frigate NVR                    | Current release with protection mode on                    |
+| Frigate NVR (Full Access)      | Current release with the option to disable protection mode |
+| Frigate NVR Beta               | Beta release with protection mode on                       |
+| Frigate NVR Beta (Full Access) | Beta release with the option to disable protection mode    |
 
 ## Home Assistant Supervised
 
 :::tip
````
````diff
@@ -45,11 +45,14 @@ that card.
 
 ## Configuration
 
-When configuring the integration, you will be asked for the following parameters:
+When configuring the integration, you will be asked for the `URL` of your frigate instance which is the URL you use to access Frigate in the browser. This may look like `http://<host>:5000/`. If you are using HassOS with the addon, the URL should be one of the following depending on which addon version you are using. Note that if you are using the Proxy Addon, you do NOT point the integration at the proxy URL. Just enter the URL used to access frigate directly from your network.
 
-| Variable | Description |
-| -------- | ----------- |
-| URL | The `URL` of your frigate instance, the URL you use to access Frigate in the browser. This may look like `http://<host>:5000/`. If you are using HassOS with the addon, the URL should be `http://ccab4aaf-frigate:5000` (or `http://ccab4aaf-frigate-beta:5000` if your are using the beta version of the addon). Live streams required port 1935, see [RTMP streams](#streams) |
+| Addon Version                  | URL                                    |
+| ------------------------------ | -------------------------------------- |
+| Frigate NVR                    | `http://ccab4aaf-frigate:5000`         |
+| Frigate NVR (Full Access)      | `http://ccab4aaf-frigate-fa:5000`      |
+| Frigate NVR Beta               | `http://ccab4aaf-frigate-beta:5000`    |
+| Frigate NVR Beta (Full Access) | `http://ccab4aaf-frigate-fa-beta:5000` |
 
 <a name="options"></a>
 
````
````diff
@@ -55,7 +55,10 @@ Message published for each changed event. The first message is published when th
     "entered_zones": ["yard", "driveway"],
     "thumbnail": null,
     "has_snapshot": false,
-    "has_clip": false
+    "has_clip": false,
+    "stationary": false, // whether or not the object is considered stationary
+    "motionless_count": 0, // number of frames the object has been motionless
+    "position_changes": 2 // number of times the object has moved from a stationary position
   },
   "after": {
     "id": "1607123955.475377-mxklsc",
@@ -75,7 +78,10 @@ Message published for each changed event. The first message is published when th
     "entered_zones": ["yard", "driveway"],
     "thumbnail": null,
     "has_snapshot": false,
-    "has_clip": false
+    "has_clip": false,
+    "stationary": false, // whether or not the object is considered stationary
+    "motionless_count": 0, // number of frames the object has been motionless
+    "position_changes": 2 // number of times the object has changed position
   }
 }
 ```
````
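The new `stationary`, `motionless_count`, and `position_changes` fields let MQTT consumers filter events without re-deriving state. A minimal sketch of reading them from an event payload (hypothetical payload values; field names match the message above, but the filtering helper is illustrative, not part of Frigate):

```python
import json

def is_actionable(after: dict) -> bool:
    """Return True when the tracked object is still moving and worth alerting on."""
    # Objects Frigate has flagged as stationary (e.g. parked cars) are skipped.
    return not after.get("stationary", False)

# Hypothetical "after" section mirroring the payload structure shown above.
message = json.dumps({
    "after": {
        "id": "1607123955.475377-mxklsc",
        "has_clip": False,
        "stationary": False,
        "motionless_count": 0,
        "position_changes": 2,
    }
})

after = json.loads(message)["after"]
```

An automation could use `is_actionable(after)` to suppress repeat notifications once an object settles into a stationary position.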
docs/package-lock.json (generated, 14859 lines): file diff suppressed because it is too large.
````diff
@@ -12,13 +12,13 @@
     "clear": "docusaurus clear"
   },
   "dependencies": {
-    "@docusaurus/core": "^2.0.0-beta.6",
-    "@docusaurus/preset-classic": "^2.0.0-beta.6",
-    "@mdx-js/react": "^1.6.21",
+    "@docusaurus/core": "^2.0.0-beta.15",
+    "@docusaurus/preset-classic": "^2.0.0-beta.15",
+    "@mdx-js/react": "^1.6.22",
     "clsx": "^1.1.1",
     "raw-loader": "^4.0.2",
-    "react": "^16.8.4",
-    "react-dom": "^16.8.4"
+    "react": "^16.14.0",
+    "react-dom": "^16.14.0"
   },
   "browserslist": {
     "production": [
````
````diff
@@ -31,5 +31,8 @@
       "last 1 firefox version",
       "last 1 safari version"
     ]
+  },
+  "devDependencies": {
+    "@types/react": "^16.14.0"
   }
 }
````
````diff
@@ -8,6 +8,7 @@ import threading
 from logging.handlers import QueueHandler
 from typing import Dict, List
 
+import traceback
 import yaml
 from peewee_migrate import Router
 from playhouse.sqlite_ext import SqliteExtDatabase
````
````diff
@@ -320,6 +321,7 @@ class FrigateApp:
             print("*** Config Validation Errors ***")
             print("*************************************************************")
             print(e)
+            print(traceback.format_exc())
             print("*************************************************************")
             print("*** End Config Validation Errors ***")
             print("*************************************************************")
````
````diff
@@ -162,6 +162,29 @@ class RuntimeMotionConfig(MotionConfig):
         extra = Extra.ignore
 
 
+class StationaryMaxFramesConfig(FrigateBaseModel):
+    default: Optional[int] = Field(title="Default max frames.", ge=1)
+    objects: Dict[str, int] = Field(
+        default_factory=dict, title="Object specific max frames."
+    )
+
+
+class StationaryConfig(FrigateBaseModel):
+    interval: Optional[int] = Field(
+        default=0,
+        title="Frame interval for checking stationary objects.",
+        ge=0,
+    )
+    threshold: Optional[int] = Field(
+        title="Number of frames without a position change for an object to be considered stationary",
+        ge=1,
+    )
+    max_frames: StationaryMaxFramesConfig = Field(
+        default_factory=StationaryMaxFramesConfig,
+        title="Max frames for stationary objects.",
+    )
+
+
 class DetectConfig(FrigateBaseModel):
     height: int = Field(default=720, title="Height of the stream for the detect role.")
     width: int = Field(default=1280, title="Width of the stream for the detect role.")
````
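These models imply a lookup order for the effective `max_frames` of a tracked object: the object-specific entry wins, then the shared `default`, and `None` means track forever. A small sketch of that resolution using plain dicts instead of the pydantic models (hypothetical helper, not Frigate code):

```python
from typing import Dict, Optional

def resolve_max_frames(
    label: str, default: Optional[int], objects: Dict[str, int]
) -> Optional[int]:
    """Effective max_frames for a tracked object; None means track forever."""
    # Object-specific values take precedence over the shared default.
    if label in objects:
        return objects[label]
    return default
```

For example, with `default: 3000` and `objects: {person: 1000}`, a person is dropped after 1000 stationary frames while any other label uses 3000.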
````diff
@@ -172,10 +195,9 @@ class DetectConfig(FrigateBaseModel):
     max_disappeared: Optional[int] = Field(
         title="Maximum number of frames the object can dissapear before detection ends."
     )
-    stationary_interval: Optional[int] = Field(
-        default=0,
-        title="Frame interval for checking stationary objects.",
-        ge=0,
+    stationary: StationaryConfig = Field(
+        default_factory=StationaryConfig,
+        title="Stationary objects config.",
     )
 
 
````
````diff
@@ -766,6 +788,11 @@ class FrigateConfig(FrigateBaseModel):
             if camera_config.detect.max_disappeared is None:
                 camera_config.detect.max_disappeared = max_disappeared
 
+            # Default stationary_threshold configuration
+            stationary_threshold = camera_config.detect.fps * 10
+            if camera_config.detect.stationary.threshold is None:
+                camera_config.detect.stationary.threshold = stationary_threshold
+
             # FFMPEG input substitution
             for input in camera_config.ffmpeg.inputs:
                 input.path = input.path.format(**FRIGATE_ENV_VARS)
````
````diff
@@ -836,14 +863,18 @@ class FrigateConfig(FrigateBaseModel):
                 camera_config.record.retain.days = camera_config.record.retain_days
 
             # warning if the higher level record mode is potentially more restrictive than the events
+            rank_map = {
+                RetainModeEnum.all: 0,
+                RetainModeEnum.motion: 1,
+                RetainModeEnum.active_objects: 2,
+            }
             if (
                 camera_config.record.retain.days != 0
-                and camera_config.record.retain.mode != RetainModeEnum.all
-                and camera_config.record.events.retain.mode
-                != camera_config.record.retain.mode
+                and rank_map[camera_config.record.retain.mode]
+                > rank_map[camera_config.record.events.retain.mode]
             ):
                 logger.warning(
-                    f"Recording retention is configured for {camera_config.record.retain.mode} and event retention is configured for {camera_config.record.events.retain.mode}. The more restrictive retention policy will be applied."
+                    f"{name}: Recording retention is configured for {camera_config.record.retain.mode} and event retention is configured for {camera_config.record.events.retain.mode}. The more restrictive retention policy will be applied."
                 )
             # generage the ffmpeg commands
             camera_config.create_ffmpeg_cmds()
````
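The `rank_map` above orders the retain modes from least to most restrictive, so the warning now fires only when continuous recording retention is strictly more restrictive than event retention (instead of whenever the two modes differ). A standalone restatement of that comparison:

```python
from enum import Enum

class RetainModeEnum(str, Enum):
    all = "all"
    motion = "motion"
    active_objects = "active_objects"

# Higher rank == more restrictive retention.
rank_map = {
    RetainModeEnum.all: 0,
    RetainModeEnum.motion: 1,
    RetainModeEnum.active_objects: 2,
}

def recording_more_restrictive(
    record_mode: RetainModeEnum, events_mode: RetainModeEnum
) -> bool:
    """True when recording retention could discard footage event retention expects."""
    return rank_map[record_mode] > rank_map[events_mode]
```

For example, `record.retain.mode: motion` with `events.retain.mode: all` warns, while the reverse combination does not.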
````diff
@@ -15,6 +15,16 @@ from frigate.models import Event
 logger = logging.getLogger(__name__)
 
 
+def should_update_db(prev_event, current_event):
+    return (
+        prev_event["top_score"] != current_event["top_score"]
+        or prev_event["entered_zones"] != current_event["entered_zones"]
+        or prev_event["thumbnail"] != current_event["thumbnail"]
+        or prev_event["has_clip"] != current_event["has_clip"]
+        or prev_event["has_snapshot"] != current_event["has_snapshot"]
+    )
+
+
 class EventProcessor(threading.Thread):
     def __init__(
         self, config, camera_processes, event_queue, event_processed_queue, stop_event
````
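The effect of `should_update_db` is that event updates which change none of the tracked fields (e.g. only a new bounding box position) no longer trigger a database write. A quick check of that behavior, repeating the helper from the diff above with example event dicts:

```python
def should_update_db(prev_event, current_event):
    # Mirrors the helper added above: only these fields warrant a DB write.
    return (
        prev_event["top_score"] != current_event["top_score"]
        or prev_event["entered_zones"] != current_event["entered_zones"]
        or prev_event["thumbnail"] != current_event["thumbnail"]
        or prev_event["has_clip"] != current_event["has_clip"]
        or prev_event["has_snapshot"] != current_event["has_snapshot"]
    )

# Hypothetical event snapshots for illustration.
prev = {
    "top_score": 0.9,
    "entered_zones": ["yard"],
    "thumbnail": "abc",
    "has_clip": False,
    "has_snapshot": False,
}
same = dict(prev)  # identical tracked fields: no DB write needed
changed = dict(prev, has_clip=True)  # a clip became available: write
```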
````diff
@@ -48,7 +58,9 @@ class EventProcessor(threading.Thread):
                 if event_type == "start":
                     self.events_in_process[event_data["id"]] = event_data
 
-                elif event_type == "update":
+                elif event_type == "update" and should_update_db(
+                    self.events_in_process[event_data["id"]], event_data
+                ):
                     self.events_in_process[event_data["id"]] = event_data
                     # TODO: this will generate a lot of db activity possibly
                     if event_data["has_clip"] or event_data["has_snapshot"]:
````
````diff
@@ -249,7 +249,10 @@ def event_clip(id):
     clip_path = os.path.join(CLIPS_DIR, file_name)
 
     if not os.path.isfile(clip_path):
-        return recording_clip(event.camera, event.start_time, event.end_time)
+        end_ts = (
+            datetime.now().timestamp() if event.end_time is None else event.end_time
+        )
+        return recording_clip(event.camera, event.start_time, end_ts)
 
     response = make_response()
     response.headers["Content-Description"] = "File Transfer"
````
````diff
@@ -364,7 +367,13 @@ def best(camera_name, label):
         box_size = 300
         box = best_object.get("box", (0, 0, box_size, box_size))
         region = calculate_region(
-            best_frame.shape, box[0], box[1], box[2], box[3], box_size, multiplier=1.1
+            best_frame.shape,
+            box[0],
+            box[1],
+            box[2],
+            box[3],
+            box_size,
+            multiplier=1.1,
         )
         best_frame = best_frame[region[1] : region[3], region[0] : region[2]]
 
````
````diff
@@ -518,12 +527,17 @@ def recordings(camera_name):
             FROM C2
             WHERE cnt = 0
         )
+        SELECT id, label, camera, top_score, start_time, end_time
+        FROM event
+        WHERE camera = ? AND end_time IS NULL
+        UNION ALL
         SELECT MIN(id) as id, label, camera, MAX(top_score) as top_score, MIN(ts) AS start_time, max(ts) AS end_time
         FROM C3
         GROUP BY label, grpnum
         ORDER BY start_time;""",
         camera_name,
         camera_name,
+        camera_name,
     )
 
     event: Event
````
````diff
@@ -711,7 +725,15 @@ def vod_event(id):
         end_ts = (
             datetime.now().timestamp() if event.end_time is None else event.end_time
         )
-        return vod_ts(event.camera, event.start_time, end_ts)
+        vod_response = vod_ts(event.camera, event.start_time, end_ts)
+        # If the recordings are not found, set has_clip to false
+        if (
+            type(vod_response) == tuple
+            and len(vod_response) == 2
+            and vod_response[1] == 404
+        ):
+            Event.update(has_clip=False).where(Event.id == id).execute()
+        return vod_response
 
     duration = int((event.end_time - event.start_time) * 1000)
     return jsonify(
````
````diff
@@ -101,14 +101,13 @@ class TrackedObject:
         return median(scores)
 
     def update(self, current_frame_time, obj_data):
-        significant_update = False
-        zone_change = False
-        self.obj_data.update(obj_data)
+        thumb_update = False
+        significant_change = False
         # if the object is not in the current frame, add a 0.0 to the score history
-        if self.obj_data["frame_time"] != current_frame_time:
+        if obj_data["frame_time"] != current_frame_time:
             self.score_history.append(0.0)
         else:
-            self.score_history.append(self.obj_data["score"])
+            self.score_history.append(obj_data["score"])
         # only keep the last 10 scores
         if len(self.score_history) > 10:
             self.score_history = self.score_history[-10:]
````
````diff
@@ -122,24 +121,24 @@ class TrackedObject:
         if not self.false_positive:
             # determine if this frame is a better thumbnail
             if self.thumbnail_data is None or is_better_thumbnail(
-                self.thumbnail_data, self.obj_data, self.camera_config.frame_shape
+                self.thumbnail_data, obj_data, self.camera_config.frame_shape
             ):
                 self.thumbnail_data = {
-                    "frame_time": self.obj_data["frame_time"],
-                    "box": self.obj_data["box"],
-                    "area": self.obj_data["area"],
-                    "region": self.obj_data["region"],
-                    "score": self.obj_data["score"],
+                    "frame_time": obj_data["frame_time"],
+                    "box": obj_data["box"],
+                    "area": obj_data["area"],
+                    "region": obj_data["region"],
+                    "score": obj_data["score"],
                 }
-                significant_update = True
+                thumb_update = True
 
         # check zones
         current_zones = []
-        bottom_center = (self.obj_data["centroid"][0], self.obj_data["box"][3])
+        bottom_center = (obj_data["centroid"][0], obj_data["box"][3])
         # check each zone
         for name, zone in self.camera_config.zones.items():
             # if the zone is not for this object type, skip
-            if len(zone.objects) > 0 and not self.obj_data["label"] in zone.objects:
+            if len(zone.objects) > 0 and not obj_data["label"] in zone.objects:
                 continue
             contour = zone.contour
             # check if the object is in the zone
````
````diff
@@ -150,12 +149,29 @@ class TrackedObject:
                 if name not in self.entered_zones:
                     self.entered_zones.append(name)
 
-        # if the zones changed, signal an update
-        if not self.false_positive and set(self.current_zones) != set(current_zones):
-            zone_change = True
+        if not self.false_positive:
+            # if the zones changed, signal an update
+            if set(self.current_zones) != set(current_zones):
+                significant_change = True
+
+            # if the position changed, signal an update
+            if self.obj_data["position_changes"] != obj_data["position_changes"]:
+                significant_change = True
+
+            # if the motionless_count reaches the stationary threshold
+            if (
+                self.obj_data["motionless_count"]
+                == self.camera_config.detect.stationary.threshold
+            ):
+                significant_change = True
+
+            # update at least once per minute
+            if self.obj_data["frame_time"] - self.previous["frame_time"] > 60:
+                significant_change = True
+
+        self.obj_data.update(obj_data)
         self.current_zones = current_zones
-        return (significant_update, zone_change)
+        return (thumb_update, significant_change)
 
     def to_dict(self, include_thumbnail: bool = False):
         snapshot_time = (
````
````diff
@@ -177,6 +193,8 @@ class TrackedObject:
             "box": self.obj_data["box"],
             "area": self.obj_data["area"],
             "region": self.obj_data["region"],
+            "stationary": self.obj_data["motionless_count"]
+            > self.camera_config.detect.stationary.threshold,
             "motionless_count": self.obj_data["motionless_count"],
             "position_changes": self.obj_data["position_changes"],
             "current_zones": self.current_zones.copy(),
````
@@ -466,11 +484,11 @@ class CameraState:

         for id in updated_ids:
             updated_obj = tracked_objects[id]
-            significant_update, zone_change = updated_obj.update(
+            thumb_update, significant_update = updated_obj.update(
                 frame_time, current_detections[id]
             )

-            if significant_update:
+            if thumb_update:
                 # ensure this frame is stored in the cache
                 if (
                     updated_obj.thumbnail_data["frame_time"] == frame_time
@@ -480,13 +498,13 @@ class CameraState:

                 updated_obj.last_updated = frame_time

-            # if it has been more than 5 seconds since the last publish
+            # if it has been more than 5 seconds since the last thumb update
             # and the last update is greater than the last publish or
-            # the object has changed zones
+            # the object has changed significantly
             if (
                 frame_time - updated_obj.last_published > 5
                 and updated_obj.last_updated > updated_obj.last_published
-            ) or zone_change:
+            ) or significant_update:
                 # call event handlers
                 for c in self.callbacks["update"]:
                     c(self.name, updated_obj, frame_time)
@@ -48,7 +48,7 @@ class ObjectTracker:
         del self.tracked_objects[id]
         del self.disappeared[id]

-    # tracks the current position of the object based on the last 10 bounding boxes
+    # tracks the current position of the object based on the last N bounding boxes
     # returns False if the object has moved outside its previous position
     def update_position(self, id, box):
         position = self.positions[id]
@@ -93,19 +93,52 @@ class ObjectTracker:

         return True

+    def is_expired(self, id):
+        obj = self.tracked_objects[id]
+        # get the max frames for this label type or the default
+        max_frames = self.detect_config.stationary.max_frames.objects.get(
+            obj["label"], self.detect_config.stationary.max_frames.default
+        )
+
+        # if there is no max_frames for this label type, continue
+        if max_frames is None:
+            return False
+
+        # if the object has exceeded the max_frames setting, deregister
+        if (
+            obj["motionless_count"] - self.detect_config.stationary.threshold
+            > max_frames
+        ):
+            print(f"expired: {obj['motionless_count']}")
+            return True
+
     def update(self, id, new_obj):
         self.disappeared[id] = 0
         # update the motionless count if the object has not moved to a new position
         if self.update_position(id, new_obj["box"]):
             self.tracked_objects[id]["motionless_count"] += 1
+            if self.is_expired(id):
+                self.deregister(id)
+                return
         else:
+            # register the first position change and then only increment if
+            # the object was previously stationary
+            if (
+                self.tracked_objects[id]["position_changes"] == 0
+                or self.tracked_objects[id]["motionless_count"]
+                >= self.detect_config.stationary.threshold
+            ):
+                self.tracked_objects[id]["position_changes"] += 1
             self.tracked_objects[id]["motionless_count"] = 0
-            self.tracked_objects[id]["position_changes"] += 1

         self.tracked_objects[id].update(new_obj)

     def update_frame_times(self, frame_time):
-        for id in self.tracked_objects.keys():
+        for id in list(self.tracked_objects.keys()):
             self.tracked_objects[id]["frame_time"] = frame_time
+            self.tracked_objects[id]["motionless_count"] += 1
+            if self.is_expired(id):
+                self.deregister(id)

     def match_and_update(self, frame_time, new_objects):
         # group by name
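The expiry rule added above can be sketched in isolation: an object is expired once it has been motionless for `max_frames` frames *beyond* the stationary threshold, with an optional per-label override. Plain arguments stand in for Frigate's config objects here; the function name and signature are illustrative only:

```python
# Sketch of the stationary-object expiry rule: motionless_count counts frames
# without movement, and the threshold marks when an object became "stationary",
# so the excess over the threshold is compared against max_frames.
def is_expired(obj, threshold, max_frames_default, max_frames_per_label):
    max_frames = max_frames_per_label.get(obj["label"], max_frames_default)
    # no limit configured for this label type: never expire
    if max_frames is None:
        return False
    return obj["motionless_count"] - threshold > max_frames
```

Note that the accompanying `update_frame_times` change iterates over `list(self.tracked_objects.keys())` precisely because expiry can now deregister (delete) entries mid-loop.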
@@ -184,10 +184,7 @@ class BirdsEyeFrameManager:
         if self.mode == BirdseyeModeEnum.continuous:
             return True

-        if (
-            self.mode == BirdseyeModeEnum.motion
-            and object_box_count + motion_box_count > 0
-        ):
+        if self.mode == BirdseyeModeEnum.motion and motion_box_count > 0:
             return True

         if self.mode == BirdseyeModeEnum.objects and object_box_count > 0:
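The birdseye hunk above narrows motion mode so it reacts to motion boxes only, no longer to object boxes. A minimal sketch of the resulting decision table (the enum and function name are illustrative, not Frigate's actual API):

```python
from enum import Enum

class Mode(Enum):
    objects = "objects"
    motion = "motion"
    continuous = "continuous"

# Sketch of the birdseye activation rule after the change: continuous always
# updates, motion mode needs motion boxes, objects mode needs object boxes.
def should_update(mode, object_box_count, motion_box_count):
    if mode == Mode.continuous:
        return True
    if mode == Mode.motion and motion_box_count > 0:
        return True
    if mode == Mode.objects and object_box_count > 0:
        return True
    return False
```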
@@ -418,7 +415,7 @@ def output_frames(config: FrigateConfig, video_output_queue):
         ):
             if birdseye_manager.update(
                 camera,
-                len(current_tracked_objects),
+                len([o for o in current_tracked_objects if not o["stationary"]]),
                 len(motion_boxes),
                 frame_time,
                 frame,
@@ -51,7 +51,6 @@ class RecordingMaintainer(threading.Thread):
         self.config = config
         self.recordings_info_queue = recordings_info_queue
         self.stop_event = stop_event
-        self.first_pass = True
         self.recordings_info = defaultdict(list)
         self.end_time_cache = {}

@@ -230,7 +229,7 @@ class RecordingMaintainer(threading.Thread):
                 [
                     o
                     for o in frame[1]
-                    if not o["false_positive"] and o["motionless_count"] > 0
+                    if not o["false_positive"] and o["motionless_count"] == 0
                 ]
             )

@@ -285,6 +284,7 @@ class RecordingMaintainer(threading.Thread):
                 end_time=end_time.timestamp(),
                 duration=duration,
                 motion=motion_count,
+                # TODO: update this to store list of active objects at some point
                 objects=active_count,
             )
         except Exception as e:
@@ -333,12 +333,6 @@ class RecordingMaintainer(threading.Thread):
                 logger.error(e)
             duration = datetime.datetime.now().timestamp() - run_start
             wait_time = max(0, 5 - duration)
-            if wait_time == 0 and not self.first_pass:
-                logger.warning(
-                    "Cache is taking longer than 5 seconds to clear. Your recordings disk may be too slow."
-                )
-            if self.first_pass:
-                self.first_pass = False

         logger.info(f"Exiting recording maintenance...")

@@ -567,6 +567,9 @@ class EventsPerSecond:
         # compute the (approximate) events in the last n seconds
         now = datetime.datetime.now().timestamp()
         seconds = min(now - self._start, last_n_seconds)
+        # avoid divide by zero
+        if seconds == 0:
+            seconds = 1
        return (
            len([t for t in self._timestamps if t > (now - last_n_seconds)]) / seconds
        )
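The `EventsPerSecond` hunk above guards against a zero-length measurement window. As a standalone sketch (free function instead of the class method, illustrative name):

```python
# Sketch of the guarded rate computation: right after startup the elapsed
# window can be 0 seconds, so it is clamped to 1 to avoid ZeroDivisionError.
def eps(timestamps, start, now, last_n_seconds=10):
    seconds = min(now - start, last_n_seconds)
    # avoid divide by zero
    if seconds == 0:
        seconds = 1
    return len([t for t in timestamps if t > (now - last_n_seconds)]) / seconds
```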
@@ -601,6 +604,7 @@ def add_mask(mask, mask_img):
     )
     cv2.fillPoly(mask_img, pts=[contour], color=(0))

+
 def load_labels(path, encoding="utf-8"):
     """Loads labels from file (with or without index numbers).
     Args:
@@ -620,6 +624,7 @@ def load_labels(path, encoding="utf-8"):
     else:
         return {index: line.strip() for index, line in enumerate(lines)}

+
 class FrameManager(ABC):
     @abstractmethod
     def create(self, name, size) -> AnyStr:

frigate/video.py
@@ -153,10 +153,10 @@ def capture_frames(
         try:
             frame_buffer[:] = ffmpeg_process.stdout.read(frame_size)
         except Exception as e:
-            logger.info(f"{camera_name}: ffmpeg sent a broken frame. {e}")
+            logger.error(f"{camera_name}: Unable to read frames from ffmpeg process.")

             if ffmpeg_process.poll() != None:
-                logger.info(
+                logger.error(
                     f"{camera_name}: ffmpeg process is not running. exiting capture thread..."
                 )
                 frame_manager.delete(frame_name)
@@ -221,12 +221,11 @@ class CameraWatchdog(threading.Thread):

             if not self.capture_thread.is_alive():
                 self.logger.error(
-                    f"FFMPEG process crashed unexpectedly for {self.camera_name}."
+                    f"Ffmpeg process crashed unexpectedly for {self.camera_name}."
                 )
                 self.logger.error(
                     "The following ffmpeg logs include the last 100 lines prior to exit."
                 )
-                self.logger.error("You may have invalid args defined for this camera.")
                 self.logpipe.dump()
                 self.start_ffmpeg_detect()
             elif now - self.capture_thread.current_frame.value > 20:
@@ -492,212 +491,219 @@ def process_frames(
             logger.info(f"{camera_name}: frame {frame_time} is not in memory store.")
             continue

-        if not detection_enabled.value:
-            fps.value = fps_tracker.eps()
-            object_tracker.match_and_update(frame_time, [])
-            detected_objects_queue.put(
-                (camera_name, frame_time, object_tracker.tracked_objects, [], [])
-            )
-            detection_fps.value = object_detector.fps.eps()
-            frame_manager.close(f"{camera_name}{frame_time}")
-            continue
-
         # look for motion
         motion_boxes = motion_detector.detect(frame)

+        regions = []
+
-        # get stationary object ids
-        # check every Nth frame for stationary objects
-        # disappeared objects are not stationary
-        # also check for overlapping motion boxes
-        stationary_object_ids = [
-            obj["id"]
-            for obj in object_tracker.tracked_objects.values()
-            # if there hasn't been motion for 10 frames
-            if obj["motionless_count"] >= 10
-            # and it isn't due for a periodic check
-            and (
-                detect_config.stationary_interval == 0
-                or obj["motionless_count"] % detect_config.stationary_interval != 0
-            )
-            # and it hasn't disappeared
-            and object_tracker.disappeared[obj["id"]] == 0
-            # and it doesn't overlap with any current motion boxes
-            and not intersects_any(obj["box"], motion_boxes)
-        ]
-
+        # if detection is disabled
+        if not detection_enabled.value:
+            object_tracker.match_and_update(frame_time, [])
-        # get tracked object boxes that aren't stationary
-        tracked_object_boxes = [
-            obj["box"]
-            for obj in object_tracker.tracked_objects.values()
-            if not obj["id"] in stationary_object_ids
-        ]
-
-        # combine motion boxes with known locations of existing objects
-        combined_boxes = reduce_boxes(motion_boxes + tracked_object_boxes)
-
-        region_min_size = max(model_shape[0], model_shape[1])
-        # compute regions
-        regions = [
-            calculate_region(
-                frame_shape,
-                a[0],
-                a[1],
-                a[2],
-                a[3],
-                region_min_size,
-                multiplier=random.uniform(1.2, 1.5),
-            )
-            for a in combined_boxes
-        ]
-
-        # consolidate regions with heavy overlap
-        regions = [
-            calculate_region(
-                frame_shape, a[0], a[1], a[2], a[3], region_min_size, multiplier=1.0
-            )
-            for a in reduce_boxes(regions, 0.4)
-        ]
-
-        # if starting up, get the next startup scan region
-        if startup_scan_counter < 9:
-            ymin = int(frame_shape[0] / 3 * startup_scan_counter / 3)
-            ymax = int(frame_shape[0] / 3 + ymin)
-            xmin = int(frame_shape[1] / 3 * startup_scan_counter / 3)
-            xmax = int(frame_shape[1] / 3 + xmin)
-            regions.append(
-                calculate_region(
-                    frame_shape, xmin, ymin, xmax, ymax, region_min_size, multiplier=1.2
-                )
-            )
-            startup_scan_counter += 1
-
-        # resize regions and detect
-        # seed with stationary objects
-        detections = [
-            (
-                obj["label"],
-                obj["score"],
-                obj["box"],
-                obj["area"],
-                obj["region"],
-            )
-            for obj in object_tracker.tracked_objects.values()
-            if obj["id"] in stationary_object_ids
-        ]
-
-        for region in regions:
-            detections.extend(
-                detect(
-                    object_detector,
-                    frame,
-                    model_shape,
-                    region,
-                    objects_to_track,
-                    object_filters,
-                )
-            )
-
-        #########
-        # merge objects, check for clipped objects and look again up to 4 times
-        #########
-        refining = len(regions) > 0
-        refine_count = 0
-        while refining and refine_count < 4:
-            refining = False
-
-            # group by name
-            detected_object_groups = defaultdict(lambda: [])
-            for detection in detections:
-                detected_object_groups[detection[0]].append(detection)
-
-            selected_objects = []
-            for group in detected_object_groups.values():
-
-                # apply non-maxima suppression to suppress weak, overlapping bounding boxes
-                boxes = [
-                    (o[2][0], o[2][1], o[2][2] - o[2][0], o[2][3] - o[2][1])
-                    for o in group
-                ]
-                confidences = [o[1] for o in group]
-                idxs = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.4)
-
-                for index in idxs:
-                    obj = group[index[0]]
-                    if clipped(obj, frame_shape):
-                        box = obj[2]
-                        # calculate a new region that will hopefully get the entire object
-                        region = calculate_region(
-                            frame_shape, box[0], box[1], box[2], box[3], region_min_size
-                        )
-
-                        regions.append(region)
-
-                        selected_objects.extend(
-                            detect(
-                                object_detector,
-                                frame,
-                                model_shape,
-                                region,
-                                objects_to_track,
-                                object_filters,
-                            )
-                        )
-
-                        refining = True
-                    else:
-                        selected_objects.append(obj)
-            # set the detections list to only include top, complete objects
-            # and new detections
-            detections = selected_objects
-
-            if refining:
-                refine_count += 1
-
-        ## drop detections that overlap too much
-        consolidated_detections = []
-
-        # if detection was run on this frame, consolidate
-        if len(regions) > 0:
-            # group by name
-            detected_object_groups = defaultdict(lambda: [])
-            for detection in detections:
-                detected_object_groups[detection[0]].append(detection)
-
-            # loop over detections grouped by label
-            for group in detected_object_groups.values():
-                # if the group only has 1 item, skip
-                if len(group) == 1:
-                    consolidated_detections.append(group[0])
-                    continue
-
-                # sort smallest to largest by area
-                sorted_by_area = sorted(group, key=lambda g: g[3])
-
-                for current_detection_idx in range(0, len(sorted_by_area)):
-                    current_detection = sorted_by_area[current_detection_idx][2]
-                    overlap = 0
-                    for to_check_idx in range(
-                        min(current_detection_idx + 1, len(sorted_by_area)),
-                        len(sorted_by_area),
-                    ):
-                        to_check = sorted_by_area[to_check_idx][2]
-                        # if 90% of smaller detection is inside of another detection, consolidate
-                        if (
-                            area(intersection(current_detection, to_check))
-                            / area(current_detection)
-                            > 0.9
-                        ):
-                            overlap = 1
-                            break
-                    if overlap == 0:
-                        consolidated_detections.append(
-                            sorted_by_area[current_detection_idx]
-                        )
-            # now that we have refined our detections, we need to track objects
-            object_tracker.match_and_update(frame_time, consolidated_detections)
-        # else, just update the frame times for the stationary objects
         else:
-            object_tracker.update_frame_times(frame_time)
+            # get stationary object ids
+            # check every Nth frame for stationary objects
+            # disappeared objects are not stationary
+            # also check for overlapping motion boxes
+            stationary_object_ids = [
+                obj["id"]
+                for obj in object_tracker.tracked_objects.values()
+                # if there hasn't been motion for 10 frames
+                if obj["motionless_count"] >= 10
+                # and it isn't due for a periodic check
+                and (
+                    detect_config.stationary.interval == 0
+                    or obj["motionless_count"] % detect_config.stationary.interval != 0
+                )
+                # and it hasn't disappeared
+                and object_tracker.disappeared[obj["id"]] == 0
+                # and it doesn't overlap with any current motion boxes
+                and not intersects_any(obj["box"], motion_boxes)
+            ]
+
+            # get tracked object boxes that aren't stationary
+            tracked_object_boxes = [
+                obj["box"]
+                for obj in object_tracker.tracked_objects.values()
+                if not obj["id"] in stationary_object_ids
+            ]
+
+            # combine motion boxes with known locations of existing objects
+            combined_boxes = reduce_boxes(motion_boxes + tracked_object_boxes)
+
+            region_min_size = max(model_shape[0], model_shape[1])
+            # compute regions
+            regions = [
+                calculate_region(
+                    frame_shape,
+                    a[0],
+                    a[1],
+                    a[2],
+                    a[3],
+                    region_min_size,
+                    multiplier=random.uniform(1.2, 1.5),
+                )
+                for a in combined_boxes
+            ]
+
+            # consolidate regions with heavy overlap
+            regions = [
+                calculate_region(
+                    frame_shape, a[0], a[1], a[2], a[3], region_min_size, multiplier=1.0
+                )
+                for a in reduce_boxes(regions, 0.4)
+            ]
+
+            # if starting up, get the next startup scan region
+            if startup_scan_counter < 9:
+                ymin = int(frame_shape[0] / 3 * startup_scan_counter / 3)
+                ymax = int(frame_shape[0] / 3 + ymin)
+                xmin = int(frame_shape[1] / 3 * startup_scan_counter / 3)
+                xmax = int(frame_shape[1] / 3 + xmin)
+                regions.append(
+                    calculate_region(
+                        frame_shape,
+                        xmin,
+                        ymin,
+                        xmax,
+                        ymax,
+                        region_min_size,
+                        multiplier=1.2,
+                    )
+                )
+                startup_scan_counter += 1
+
+            # resize regions and detect
+            # seed with stationary objects
+            detections = [
+                (
+                    obj["label"],
+                    obj["score"],
+                    obj["box"],
+                    obj["area"],
+                    obj["region"],
+                )
+                for obj in object_tracker.tracked_objects.values()
+                if obj["id"] in stationary_object_ids
+            ]
+
+            for region in regions:
+                detections.extend(
+                    detect(
+                        object_detector,
+                        frame,
+                        model_shape,
+                        region,
+                        objects_to_track,
+                        object_filters,
+                    )
+                )
+
+            #########
+            # merge objects, check for clipped objects and look again up to 4 times
+            #########
+            refining = len(regions) > 0
+            refine_count = 0
+            while refining and refine_count < 4:
+                refining = False
+
+                # group by name
+                detected_object_groups = defaultdict(lambda: [])
+                for detection in detections:
+                    detected_object_groups[detection[0]].append(detection)
+
+                selected_objects = []
+                for group in detected_object_groups.values():
+
+                    # apply non-maxima suppression to suppress weak, overlapping bounding boxes
+                    boxes = [
+                        (o[2][0], o[2][1], o[2][2] - o[2][0], o[2][3] - o[2][1])
+                        for o in group
+                    ]
+                    confidences = [o[1] for o in group]
+                    idxs = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.4)
+
+                    for index in idxs:
+                        obj = group[index[0]]
+                        if clipped(obj, frame_shape):
+                            box = obj[2]
+                            # calculate a new region that will hopefully get the entire object
+                            region = calculate_region(
+                                frame_shape,
+                                box[0],
+                                box[1],
+                                box[2],
+                                box[3],
+                                region_min_size,
+                            )
+
+                            regions.append(region)
+
+                            selected_objects.extend(
+                                detect(
+                                    object_detector,
+                                    frame,
+                                    model_shape,
+                                    region,
+                                    objects_to_track,
+                                    object_filters,
+                                )
+                            )
+
+                            refining = True
+                        else:
+                            selected_objects.append(obj)
+                # set the detections list to only include top, complete objects
+                # and new detections
+                detections = selected_objects
+
+                if refining:
+                    refine_count += 1
+
+            ## drop detections that overlap too much
+            consolidated_detections = []
+
+            # if detection was run on this frame, consolidate
+            if len(regions) > 0:
+                # group by name
+                detected_object_groups = defaultdict(lambda: [])
+                for detection in detections:
+                    detected_object_groups[detection[0]].append(detection)
+
+                # loop over detections grouped by label
+                for group in detected_object_groups.values():
+                    # if the group only has 1 item, skip
+                    if len(group) == 1:
+                        consolidated_detections.append(group[0])
+                        continue
+
+                    # sort smallest to largest by area
+                    sorted_by_area = sorted(group, key=lambda g: g[3])
+
+                    for current_detection_idx in range(0, len(sorted_by_area)):
+                        current_detection = sorted_by_area[current_detection_idx][2]
+                        overlap = 0
+                        for to_check_idx in range(
+                            min(current_detection_idx + 1, len(sorted_by_area)),
+                            len(sorted_by_area),
+                        ):
+                            to_check = sorted_by_area[to_check_idx][2]
+                            # if 90% of smaller detection is inside of another detection, consolidate
+                            if (
+                                area(intersection(current_detection, to_check))
+                                / area(current_detection)
+                                > 0.9
+                            ):
+                                overlap = 1
+                                break
+                        if overlap == 0:
+                            consolidated_detections.append(
+                                sorted_by_area[current_detection_idx]
+                            )
+                # now that we have refined our detections, we need to track objects
+                object_tracker.match_and_update(frame_time, consolidated_detections)
+            # else, just update the frame times for the stationary objects
+            else:
+                object_tracker.update_frame_times(frame_time)

         # add to the queue if not full
         if detected_objects_queue.full():
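The consolidation pass in the hunk above drops a detection when 90% of its box lies inside a larger detection of the same label. A self-contained sketch of that rule, using the same `(label, score, box, area)` tuple shape and `(x1, y1, x2, y2)` boxes (the helper names mirror but do not reproduce Frigate's utilities):

```python
# Sketch of the 90%-containment consolidation: detections are sorted smallest
# to largest by area, and a detection is kept only if no later (larger or
# equal) detection contains more than 90% of it.
def area(box):
    return max(0, box[2] - box[0]) * max(0, box[3] - box[1])

def intersection(a, b):
    return (max(a[0], b[0]), max(a[1], b[1]), min(a[2], b[2]), min(a[3], b[3]))

def consolidate(detections):
    kept = []
    sorted_by_area = sorted(detections, key=lambda d: d[3])
    for i, det in enumerate(sorted_by_area):
        box = det[2]
        contained = any(
            area(intersection(box, other[2])) / area(box) > 0.9
            for other in sorted_by_area[i + 1 :]
        )
        if not contained:
            kept.append(det)
    return kept
```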
web/package-lock.json (generated): file diff suppressed because it is too large
@@ -1,6 +1,14 @@
 import { h } from 'preact';
 import { useState } from 'preact/hooks';
-import { addSeconds, differenceInSeconds, fromUnixTime, format, parseISO, startOfHour } from 'date-fns';
+import {
+  differenceInSeconds,
+  fromUnixTime,
+  format,
+  parseISO,
+  startOfHour,
+  differenceInMinutes,
+  differenceInHours,
+} from 'date-fns';
 import ArrowDropdown from '../icons/ArrowDropdown';
 import ArrowDropup from '../icons/ArrowDropup';
 import Link from '../components/Link';
@@ -21,25 +29,31 @@ export default function RecordingPlaylist({ camera, recordings, selectedDate, se
       events={recording.events}
       selected={recording.date === selectedDate}
     >
-      {recording.recordings.slice().reverse().map((item, i) => (
+      {recording.recordings
+        .slice()
+        .reverse()
+        .map((item, i) => (
           <div className="mb-2 w-full">
             <div
               className={`flex w-full text-md text-white px-8 py-2 mb-2 ${
                 i === 0 ? 'border-t border-white border-opacity-50' : ''
               }`}
             >
               <div className="flex-1">
                 <Link href={`/recording/${camera}/${recording.date}/${item.hour}`} type="text">
                   {item.hour}:00
                 </Link>
               </div>
               <div className="flex-1 text-right">{item.events.length} Events</div>
             </div>
-            {item.events.slice().reverse().map((event) => (
-              <EventCard camera={camera} event={event} delay={item.delay} />
-            ))}
+            {item.events
+              .slice()
+              .reverse()
+              .map((event) => (
+                <EventCard camera={camera} event={event} delay={item.delay} />
+              ))}
           </div>
         ))}
     </ExpandableList>
   );
 }
@@ -83,8 +97,17 @@ export function ExpandableList({ title, events = 0, children, selected = false }
 export function EventCard({ camera, event, delay }) {
   const apiHost = useApiHost();
   const start = fromUnixTime(event.start_time);
-  const end = fromUnixTime(event.end_time);
-  const duration = addSeconds(new Date(0), differenceInSeconds(end, start));
+  let duration = 'In Progress';
+  if (event.end_time) {
+    const end = fromUnixTime(event.end_time);
+    const hours = differenceInHours(end, start);
+    const minutes = differenceInMinutes(end, start) - hours * 60;
+    const seconds = differenceInSeconds(end, start) - hours * 60 - minutes * 60;
+    duration = '';
+    if (hours) duration += `${hours}h `;
+    if (minutes) duration += `${minutes}m `;
+    duration += `${seconds}s`;
+  }
   const position = differenceInSeconds(start, startOfHour(start));
   const offset = Object.entries(delay)
     .map(([p, d]) => (position > p ? d : 0))
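The `EventCard` hunk above replaces the date-fns `format` call with a manual "Xh Ym Zs" label and an "In Progress" placeholder for events without an end time. A sketch of the same labeling rule, computed from total elapsed seconds rather than per-unit differences (function name and signature are illustrative):

```python
# Sketch of the duration label logic: in-progress events (no end time) get a
# placeholder; finished ones get "Xh Ym Zs" with zero components omitted
# (seconds are always shown).
def duration_label(start_time, end_time):
    if not end_time:
        return "In Progress"
    total = int(end_time - start_time)
    hours, rem = divmod(total, 3600)
    minutes, seconds = divmod(rem, 60)
    label = ""
    if hours:
        label += f"{hours}h "
    if minutes:
        label += f"{minutes}m "
    return label + f"{seconds}s"
```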
@@ -102,7 +125,7 @@ export function EventCard({ camera, event, delay }) {
       <div className="flex-1">
         <div className="text-2xl text-white leading-tight capitalize">{event.label}</div>
         <div className="text-xs md:text-normal text-gray-300">Start: {format(start, 'HH:mm:ss')}</div>
-        <div className="text-xs md:text-normal text-gray-300">Duration: {format(duration, 'mm:ss')}</div>
+        <div className="text-xs md:text-normal text-gray-300">Duration: {duration}</div>
       </div>
       <div className="text-lg text-white text-right leading-tight">{(event.top_score * 100).toFixed(1)}%</div>
     </div>
@@ -29,12 +29,8 @@ function Camera({ name, conf }) {
   const { payload: snapshotValue, send: sendSnapshots } = useSnapshotsState(name);
   const href = `/cameras/${name}`;
   const buttons = useMemo(() => {
-    const result = [{ name: 'Events', href: `/events?camera=${name}` }];
-    if (conf.record.enabled) {
-      result.push({ name: 'Recordings', href: `/recording/${name}` });
-    }
-    return result;
-  }, [name, conf.record.enabled]);
+    return [{ name: 'Events', href: `/events?camera=${name}` }, { name: 'Recordings', href: `/recording/${name}` }];
+  }, [name]);
   const icons = useMemo(
     () => [
       {
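The Camera hunk above drops the `conf.record.enabled` check, so the Recordings button is now always present and the memo depends only on `name`. A minimal sketch of the resulting button list, pulled out of the component as a pure function (the `buildButtons` name is hypothetical, for illustration only):

```javascript
// Illustrative extraction of the button list from the diff above:
// the Recordings entry is unconditional after the change.
function buildButtons(name) {
  return [
    { name: 'Events', href: `/events?camera=${name}` },
    { name: 'Recordings', href: `/recording/${name}` },
  ];
}

console.log(buildButtons('front'));
// [ { name: 'Events', href: '/events?camera=front' },
//   { name: 'Recordings', href: '/recording/front' } ]
```

Making the list unconditional is what lets the dependency array shrink from `[name, conf.record.enabled]` to `[name]`.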
@@ -66,6 +66,9 @@ export default function Recording({ camera, date, hour, seconds }) {
         this.player.currentTime(seconds);
       }
     }
+    // Force playback rate to be correct
+    const playbackRate = this.player.playbackRate();
+    this.player.defaultPlaybackRate(playbackRate);
   }

   return (
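The Recording hunk copies the player's current playback rate into its default rate, so the chosen speed survives when the player loads a new source. A rough sketch of that getter/setter interaction against a stand-in player object (video.js itself is not needed for the illustration; the `makeFakePlayer` stub is an assumption, not the library's API):

```javascript
// Stand-in for the player: playbackRate()/defaultPlaybackRate() act as
// combined getter/setters, which is how the diff above uses them.
function makeFakePlayer() {
  let rate = 1;
  let defaultRate = 1;
  return {
    playbackRate: (v) => (v === undefined ? rate : (rate = v)),
    defaultPlaybackRate: (v) => (v === undefined ? defaultRate : (defaultRate = v)),
  };
}

const player = makeFakePlayer();
player.playbackRate(2); // user speeds up playback

// Mirror the diff: read the current rate, store it as the default
const playbackRate = player.playbackRate();
player.defaultPlaybackRate(playbackRate);

console.log(player.defaultPlaybackRate()); // 2
```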
@@ -46,7 +46,7 @@ describe('Cameras Route', () => {

     expect(screen.queryByLabelText('Loading…')).not.toBeInTheDocument();

-    expect(screen.queryAllByText('Recordings')).toHaveLength(1);
+    expect(screen.queryAllByText('Recordings')).toHaveLength(2);
   });

   test('buttons toggle detect, clips, and snapshots', async () => {