forked from Github/frigate
Compare commits
13 Commits
| Author | SHA1 | Date |
|---|---|---|
| | 2eada219cd | |
| | 8dd367efa9 | |
| | 66dc8c772b | |
| | 68cdd9b94c | |
| | 65c211bb6d | |
| | 60ad38261b | |
| | c02100ee6f | |
| | 8669c29e3d | |
| | 10783fec49 | |
| | 3bed4611f1 | |
| | f0e836e5b6 | |
| | a1ae5b67d8 | |
| | 53f7190d42 | |
Makefile
@@ -1,7 +1,7 @@
default_target: local

COMMIT_HASH := $(shell git log -1 --pretty=format:"%h"|tail -1)
-VERSION = 0.11.0
+VERSION = 0.11.1
CURRENT_UID := $(shell id -u)
CURRENT_GID := $(shell id -g)

@@ -4,8 +4,6 @@ title: Advanced Options
sidebar_label: Advanced Options
---

## Advanced configuration

### `logger`

Change the default log level for troubleshooting purposes.
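For reference, a minimal `logger` section might look like the sketch below; the module name under `logs` is only an example.

```yaml
logger:
  # default log level applied to all modules
  default: info
  # optional per-module overrides
  logs:
    frigate.mqtt: error
```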
@@ -12,3 +12,24 @@ Birdseye offers different modes to customize which cameras show under which circ
### Custom Birdseye Icon

A custom icon can be added to the birdseye background by providing a file `custom.png` inside of the Frigate `media` folder. The file must be a png with the icon as transparent; any non-transparent pixels will be rendered white when displayed in the birdseye view.
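For reference, the `media` folder referred to above is typically the one mounted into the container; a sketch of such a compose volume mapping (the host path is illustrative):

```yaml
services:
  frigate:
    volumes:
      # place custom.png in this host folder; Frigate sees it as its media directory
      - /path/to/your/media:/media/frigate
```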
### Birdseye view override at camera level

If you want to include a camera in Birdseye view only under specific circumstances, or not include it at all, the Birdseye setting can be set at the camera level.

```yaml
# Include all cameras by default in Birdseye view
birdseye:
  enabled: True
  mode: continuous

cameras:
  front:
    # Only include the "front" camera in Birdseye view when objects are detected
    birdseye:
      mode: objects
  back:
    # Exclude the "back" camera from Birdseye view
    birdseye:
      enabled: False
```

@@ -3,7 +3,7 @@ id: camera_specific
title: Camera Specific Configurations
---

-### MJPEG Cameras
+## MJPEG Cameras

The input and output parameters need to be adjusted for MJPEG cameras
@@ -19,7 +19,7 @@ output_args:
  rtmp: -c:v libx264 -an -f flv
```

-### JPEG Stream Cameras
+## JPEG Stream Cameras

Cameras using a live changing jpeg image will need input parameters as below
@@ -47,7 +47,7 @@ input_args:

Outputting the stream will have the same args and caveats as per [MJPEG Cameras](#mjpeg-cameras)

-### RTMP Cameras
+## RTMP Cameras

The input parameters need to be adjusted for RTMP cameras
@@ -56,8 +56,56 @@ ffmpeg:
  input_args: -avoid_negative_ts make_zero -fflags nobuffer -flags low_delay -strict experimental -fflags +genpts+discardcorrupt -rw_timeout 5000000 -use_wallclock_as_timestamps 1 -f live_flv
```

## UDP Only Cameras

If your cameras do not support TCP connections for RTSP, you can use UDP.

```yaml
ffmpeg:
  input_args: -avoid_negative_ts make_zero -fflags +genpts+discardcorrupt -rtsp_transport udp -timeout 5000000 -use_wallclock_as_timestamps 1
```

## Model/vendor specific setup

### Annke C800

This camera is H.265 only. To be able to play clips on some devices (like macOS or iPhone), the H.265 stream has to be repackaged and the audio stream has to be converted to AAC. Unfortunately, direct playback in the browser is not working (yet), but the downloaded clip can be played locally.

```yaml
cameras:
  annkec800: # <------ Name the camera
    ffmpeg:
      output_args:
        record: -f segment -segment_time 10 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c:v copy -tag:v hvc1 -bsf:v hevc_mp4toannexb -c:a aac
        rtmp: -c:v copy -c:a aac -f flv

      inputs:
        - path: rtsp://user:password@camera-ip:554/H264/ch1/main/av_stream # <----- Update for your camera
          roles:
            - detect
            - record
            - rtmp
    rtmp:
      enabled: False # <-- RTMP should be disabled if your stream is not H264
    detect:
      width: # <---- update for your camera's resolution
      height: # <---- update for your camera's resolution
```

### Blue Iris RTSP Cameras

You will need to remove `nobuffer` flag for Blue Iris RTSP cameras

```yaml
ffmpeg:
  input_args: -avoid_negative_ts make_zero -flags low_delay -strict experimental -fflags +genpts+discardcorrupt -rtsp_transport tcp -timeout 5000000 -use_wallclock_as_timestamps 1
```

### Reolink 410/520 (possibly others)

According to [this discussion](https://github.com/blakeblackshear/frigate/issues/3235#issuecomment-1135876973), the http video streams seem to be the most reliable for Reolink.

```yaml
@@ -93,26 +141,6 @@ cameras:
      fps: 7
```

-### Blue Iris RTSP Cameras
-
-You will need to remove `nobuffer` flag for Blue Iris RTSP cameras
-
-```yaml
-ffmpeg:
-  input_args: -avoid_negative_ts make_zero -flags low_delay -strict experimental -fflags +genpts+discardcorrupt -rtsp_transport tcp -timeout 5000000 -use_wallclock_as_timestamps 1
-```
-
-### UDP Only Cameras
-
-If your cameras do not support TCP connections for RTSP, you can use UDP.
-
-```yaml
-ffmpeg:
-  input_args: -avoid_negative_ts make_zero -fflags +genpts+discardcorrupt -rtsp_transport udp -timeout 5000000 -use_wallclock_as_timestamps 1
-```
-
### Unifi Protect Cameras

In the Unifi 2.0 update Unifi Protect Cameras had a change in audio sample rate which causes issues for ffmpeg. The input rate needs to be set for record and rtmp.
@@ -122,4 +150,4 @@ ffmpeg:
  output_args:
    record: -f segment -segment_time 10 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c:v copy -ar 44100 -c:a aac
    rtmp: -c:v copy -f flv -ar 44100 -c:a aac
-```
+```
@@ -43,3 +43,5 @@ cameras:
  front: ...
  side: ...
```
+
+For camera model specific settings check the [camera specific](/configuration/camera_specific) info.
@@ -21,7 +21,7 @@ ffmpeg:
ffmpeg:
  hwaccel_args: -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -hwaccel_output_format yuv420p
```
-**NOTICE**: With some of the processors, like the J4125, the default driver `iHD` doesn't seem to work correctly for hardware acceleration. You may need to change the driver to `i965` by adding the following environment variable `LIBVA_DRIVER_NAME=i965` to your docker-compose file.
+**NOTICE**: With some of the processors, like the J4125, the default driver `iHD` doesn't seem to work correctly for hardware acceleration. You may need to change the driver to `i965` by adding the following environment variable `LIBVA_DRIVER_NAME=i965` to your docker-compose file or [in the frigate.yml for HA OS users](advanced.md#environment_vars).
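For reference, a minimal sketch of setting that environment variable in a compose file (the service definition is abbreviated):

```yaml
services:
  frigate:
    environment:
      # use the i965 VA-API driver instead of the default iHD
      - LIBVA_DRIVER_NAME=i965
```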

### Intel-based CPUs (>=10th Generation) via Quicksync

@@ -46,19 +46,19 @@ ffmpeg:
These instructions are based on the [jellyfin documentation](https://jellyfin.org/docs/general/administration/hardware-acceleration.html#nvidia-hardware-acceleration-on-docker-linux)

Add `--gpus all` to your docker run command or update your compose file.

If you have multiple Nvidia graphics cards, you can add them with their ids obtained via the `nvidia-smi` command
```yaml
services:
  frigate:
    ...
    image: blakeblackshear/frigate:stable
-    deploy: # <------------- Add this section
-      resources:
-        reservations:
-          devices:
-            - driver: nvidia
-              count: 1
-              capabilities: [gpu]
+    deploy: # <------------- Add this section
+      resources:
+        reservations:
+          devices:
+            - driver: nvidia
+              device_ids: ['0'] # this is only needed when using multiple GPUs
+              capabilities: [gpu]
```

The decoder you need to pass in the `hwaccel_args` will depend on the input video.
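For example, a sketch for an H.264 input (substitute the decoder that matches your stream's codec, e.g. `hevc_cuvid` for H.265):

```yaml
ffmpeg:
  hwaccel_args: -c:v h264_cuvid
```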
@@ -86,7 +86,7 @@ ffmpeg:
```

If everything is working correctly, you should see a significant improvement in performance.
-Verify that hardware decoding is working by running `nvidia-smi`, which should show the ffmpeg
+Verify that hardware decoding is working by running `docker exec -it frigate nvidia-smi`, which should show the ffmpeg
processes:

```
@@ -47,7 +47,7 @@ Now let's add the first camera:

:::caution

-Note that passwords that contain special characters often cause issues with ffmpeg connecting to the cameara. If recieving `end-of-file` or `unauthorized` errors with a verified correct password, try changing the password to something simple to rule out the possibility that the password is the issue.
+Note that passwords that contain special characters often cause issues with ffmpeg connecting to the camera. If receiving `end-of-file` or `unauthorized` errors with a verified correct password, try changing the password to something simple to rule out the possibility that the password is the issue.

:::

@@ -178,7 +178,7 @@ for how to set these.

When multiple Frigate instances are configured, [API](#api) URLs should include an
identifier to tell Home Assistant which Frigate instance to refer to. The
-identifier used is the MQTT `client_id` paremeter included in the configuration,
+identifier used is the MQTT `client_id` parameter included in the configuration,
and is used like so:

```
@@ -45,6 +45,7 @@ Message published for each changed event. The first message is published when th
    "frame_time": 1607123961.837752,
    "snapshot_time": 1607123961.837752,
    "label": "person",
+    "sub_label": null,
    "top_score": 0.958984375,
    "false_positive": false,
    "start_time": 1607123955.475377,

@@ -69,6 +70,7 @@ Message published for each changed event. The first message is published when th
    "frame_time": 1607123962.082975,
    "snapshot_time": 1607123961.837752,
    "label": "person",
+    "sub_label": null,
    "top_score": 0.958984375,
    "false_positive": false,
    "start_time": 1607123955.475377,
@@ -28,6 +28,7 @@ from playhouse.shortcuts import model_to_dict

from frigate.const import CLIPS_DIR
from frigate.models import Event, Recordings
+from frigate.object_processing import TrackedObject, TrackedObjectProcessor
from frigate.stats import stats_snapshot
from frigate.version import VERSION

@@ -211,7 +212,7 @@ def delete_retain(id):
@bp.route("/events/<id>/sub_label", methods=("POST",))
def set_sub_label(id):
    try:
-        event = Event.get(Event.id == id)
+        event: Event = Event.get(Event.id == id)
    except DoesNotExist:
        return make_response(
            jsonify({"success": False, "message": "Event " + id + " not found"}), 404
@@ -234,6 +235,16 @@ def set_sub_label(id):
            400,
        )

+    if not event.end_time:
+        tracked_obj: TrackedObject = (
+            current_app.detected_frames_processor.camera_states[
+                event.camera
+            ].tracked_objects.get(event.id)
+        )
+
+        if tracked_obj:
+            tracked_obj.obj_data["sub_label"] = new_sub_label
+
    event.sub_label = new_sub_label
    event.save()
    return make_response(
@@ -347,7 +358,6 @@ def label_thumbnail(camera_name, label):
        event_query = (
            Event.select()
            .where(Event.camera == camera_name)
-            .where(Event.has_snapshot == True)
            .order_by(Event.start_time.desc())
        )
    else:

@@ -355,7 +365,6 @@ def label_thumbnail(camera_name, label):
            Event.select()
            .where(Event.camera == camera_name)
            .where(Event.label == label)
-            .where(Event.has_snapshot == True)
            .order_by(Event.start_time.desc())
        )

@@ -84,6 +84,8 @@ def create_mqtt_client(config: FrigateConfig, camera_metrics):
                    f"Turning on motion for {camera_name} due to detection being enabled."
                )
                camera_metrics[camera_name]["motion_enabled"].value = True
+                state_topic = f"{message.topic[:-11]}/motion/state"
+                client.publish(state_topic, payload, retain=True)
        elif payload == "OFF":
            if camera_metrics[camera_name]["detection_enabled"].value:
                logger.info(f"Turning off detection for {camera_name} via mqtt")
@@ -180,6 +180,7 @@ class TrackedObject:
            "frame_time": self.obj_data["frame_time"],
            "snapshot_time": snapshot_time,
            "label": self.obj_data["label"],
+            "sub_label": self.obj_data.get("sub_label"),
            "top_score": self.top_score,
            "false_positive": self.false_positive,
            "start_time": self.obj_data["start_time"],
@@ -167,7 +167,7 @@ class RecordingMaintainer(threading.Thread):
            f"{cache_path}",
        ]
        p = sp.run(ffprobe_cmd, capture_output=True)
-        if p.returncode == 0:
+        if p.returncode == 0 and p.stdout.decode():
            duration = float(p.stdout.decode().strip())
            end_time = start_time + datetime.timedelta(seconds=duration)
            self.end_time_cache[cache_path] = (end_time, duration)
@@ -276,28 +276,31 @@ class RecordingMaintainer(threading.Thread):
        file_path = os.path.join(directory, file_name)

        try:
-            start_frame = datetime.datetime.now().timestamp()
-            # copy then delete is required when recordings are stored on some network drives
-            shutil.copyfile(cache_path, file_path)
-            logger.debug(
-                f"Copied {file_path} in {datetime.datetime.now().timestamp()-start_frame} seconds."
-            )
-            os.remove(cache_path)
+            if not os.path.exists(file_path):
+                start_frame = datetime.datetime.now().timestamp()
+                # copy then delete is required when recordings are stored on some network drives
+                shutil.copyfile(cache_path, file_path)
+                logger.debug(
+                    f"Copied {file_path} in {datetime.datetime.now().timestamp()-start_frame} seconds."
+                )

-            rand_id = "".join(
-                random.choices(string.ascii_lowercase + string.digits, k=6)
-            )
-            Recordings.create(
-                id=f"{start_time.timestamp()}-{rand_id}",
-                camera=camera,
-                path=file_path,
-                start_time=start_time.timestamp(),
-                end_time=end_time.timestamp(),
-                duration=duration,
-                motion=motion_count,
-                # TODO: update this to store list of active objects at some point
-                objects=active_count,
-            )
+                rand_id = "".join(
+                    random.choices(string.ascii_lowercase + string.digits, k=6)
+                )
+                Recordings.create(
+                    id=f"{start_time.timestamp()}-{rand_id}",
+                    camera=camera,
+                    path=file_path,
+                    start_time=start_time.timestamp(),
+                    end_time=end_time.timestamp(),
+                    duration=duration,
+                    motion=motion_count,
+                    # TODO: update this to store list of active objects at some point
+                    objects=active_count,
+                )
+            else:
+                logger.warning(f"Ignoring segment because {file_path} already exists.")
+
+            os.remove(cache_path)
        except Exception as e:
            logger.error(f"Unable to store recording segment {cache_path}")
            Path(cache_path).unlink(missing_ok=True)

@@ -22,7 +22,8 @@ logger = logging.getLogger(__name__)
def get_latest_version() -> str:
    try:
        request = requests.get(
-            "https://api.github.com/repos/blakeblackshear/frigate/releases/latest"
+            "https://api.github.com/repos/blakeblackshear/frigate/releases/latest",
+            timeout=10,
        )
    except:
        return "unknown"