forked from Github/frigate
Compare commits
5 Commits
v0.13.1 ... remove_cre
| Author | SHA1 | Date |
|---|---|---|
|  | 6965d6e931 |  |
|  | 00804a0f81 |  |
|  | a33f2f117e |  |
|  | 50563eef8d |  |
|  | 97a619eaf0 |  |
Makefile
@@ -1,7 +1,7 @@
 default_target: local

 COMMIT_HASH := $(shell git log -1 --pretty=format:"%h"|tail -1)
-VERSION = 0.13.1
+VERSION = 0.13.2
 IMAGE_REPO ?= ghcr.io/blakeblackshear/frigate
 GITHUB_REF_NAME ?= $(shell git rev-parse --abbrev-ref HEAD)
 CURRENT_UID := $(shell id -u)
@@ -7,10 +7,6 @@ title: FAQ
 Frigate+ models are built by fine tuning a base model with the images you have annotated and verified. The base model is trained from scratch from a sampling of images across all Frigate+ user submissions and takes weeks of expensive GPU resources to train. If the models were built using your image uploads alone, you would need to provide tens of thousands of examples and it would take more than a week (and considerable cost) to train. Diversity helps the model generalize.

-### What is a training credit and how do I use them?
-
-Essentially, `1 training credit = 1 trained model`. When you have uploaded, annotated, and verified additional images and you are ready to train your model, you will submit a model request which will use one credit. The model that is trained will utilize all of the verified images in your account. When new base models are available, it will require the use of a training credit to generate a new user model on the new base model.
-
 ### Are my video feeds sent to the cloud for analysis when using Frigate+ models?

 No. Frigate+ models are a drop in replacement for the default model. All processing is performed locally as always. The only images sent to Frigate+ are the ones you specifically submit via the `Send to Frigate+` button or upload directly.
@@ -25,4 +21,4 @@ Yes. Models and metadata are stored in the `model_cache` directory within the co
 ### Can I keep using my Frigate+ models even if I do not renew my subscription?

-Yes. Subscriptions to Frigate+ provide access to the infrastructure used to train the models. Models trained using the training credits that you purchased are yours to keep and use forever. However, do note that the terms and conditions prohibit you from sharing, reselling, or creating derivative products from the models.
+Yes. Subscriptions to Frigate+ provide access to the infrastructure used to train the models. Models trained with your subscription are yours to keep and use forever. However, do note that the terms and conditions prohibit you from sharing, reselling, or creating derivative products from the models.
@@ -13,7 +13,7 @@ For more detailed recommendations, you can refer to the docs on [improving your
 ## Step 2: Submit a model request

-Once you have an initial set of verified images, you can request a model on the Models page. Each model request requires 1 of the training credits that you receive with your annual subscription. This model will support all [label types available](./index.md#available-label-types) even if you do not submit any examples for those labels. Model creation can take up to 36 hours.
+Once you have an initial set of verified images, you can request a model on the Models page. Each model request requires 1 of the 12 trainings that you receive with your annual subscription. This model will support all [label types available](./index.md#available-label-types) even if you do not submit any examples for those labels. Model creation can take up to 36 hours.

 

 ## Step 3: Set your model id in the config
@@ -11,7 +11,7 @@ The baseline model isn't directly available after subscribing. This may change i
 :::

-With a subscription, and at each annual renewal, you will receive 12 model training credits. If you cancel your subscription, you will retain access to any trained models. An active subscription is required to submit model requests or purchase additional training credits.
+With a subscription, 12 model trainings per year are included. If you cancel your subscription, you will retain access to any trained models. An active subscription is required to submit model requests or purchase additional trainings.

 Information on how to integrate Frigate+ with Frigate can be found in the [integration docs](../integrations/plus.md).
@@ -38,6 +38,7 @@ class WebSocketClient(Communicator): # type: ignore[misc]
     def __init__(self, config: FrigateConfig) -> None:
         self.config = config
+        self.websocket_server = None

     def subscribe(self, receiver: Callable) -> None:
         self._dispatcher = receiver
@@ -98,6 +99,10 @@ class WebSocketClient(Communicator): # type: ignore[misc]
             logger.debug(f"payload for {topic} wasn't text. Skipping...")
             return

+        if self.websocket_server is None:
+            logger.debug("Skipping message, websocket not connected yet")
+            return
+
         try:
             self.websocket_server.manager.broadcast(ws_message)
         except ConnectionResetError:
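The guard added above closes a small startup race: `publish()` can run before the thread that starts the WebSocket server has assigned `self.websocket_server`, in which case broadcasting would fail on `None`. A minimal standalone sketch of the pattern (not Frigate code; the `Client` class and message are made up for illustration):

```python
# Minimal sketch of the guard added above; Client is a stand-in, not Frigate code.
class Client:
    def __init__(self) -> None:
        # assigned later, once the server thread has actually started
        self.websocket_server = None

    def publish(self, message: str) -> None:
        # mirrors the added check: drop the message instead of crashing on None
        if self.websocket_server is None:
            print("Skipping message, websocket not connected yet")
            return
        self.websocket_server.manager.broadcast(message)


Client().publish("hello")  # skips cleanly instead of raising AttributeError
```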
@@ -26,6 +26,10 @@ LABEL_CONSOLIDATION_MAP = {
     "face": 0.5,
 }
 LABEL_CONSOLIDATION_DEFAULT = 0.9
+LABEL_NMS_MAP = {
+    "car": 0.6,
+}
+LABEL_NMS_DEFAULT = 0.4

 # Audio Consts
@@ -6,6 +6,7 @@ from enum import Enum
 import numpy
 from onvif import ONVIFCamera, ONVIFError
+from zeep.exceptions import Fault, TransportError

 from frigate.config import FrigateConfig, ZoomingModeEnum
 from frigate.types import PTZMetricsTypes
@@ -68,16 +69,19 @@ class OnvifController:
         media = onvif.create_media_service()

         try:
             # this will fire an exception if camera is not a ptz
             capabilities = onvif.get_definition("ptz")
             logger.debug(f"Onvif capabilities for {camera_name}: {capabilities}")
+            profile = media.GetProfiles()[0]
-        except ONVIFError as e:
+        except (ONVIFError, Fault, TransportError) as e:
             logger.error(f"Unable to connect to camera: {camera_name}: {e}")
             return False

         ptz = onvif.create_ptz_service()

-        request = ptz.create_type("GetConfigurations")
-        configs = ptz.GetConfigurations(request)[0]
-        logger.debug(f"Onvif configs for {camera_name}: {configs}")
+        # get the PTZ config for the first onvif profile
+        configs = profile.PTZConfiguration
+        logger.debug(f"Onvif ptz config for media profile in {camera_name}: {configs}")

         request = ptz.create_type("GetConfigurationOptions")
         request.ConfigurationToken = profile.PTZConfiguration.token
@@ -187,19 +191,18 @@ class OnvifController:
             ] = preset["token"]

         # get list of supported features
-        ptz_config = ptz.GetConfigurationOptions(request)
         supported_features = []

-        if ptz_config.Spaces and ptz_config.Spaces.ContinuousPanTiltVelocitySpace:
+        if configs.DefaultContinuousPanTiltVelocitySpace:
             supported_features.append("pt")

-        if ptz_config.Spaces and ptz_config.Spaces.ContinuousZoomVelocitySpace:
+        if configs.DefaultContinuousZoomVelocitySpace:
             supported_features.append("zoom")

-        if ptz_config.Spaces and ptz_config.Spaces.RelativePanTiltTranslationSpace:
+        if configs.DefaultRelativePanTiltTranslationSpace:
             supported_features.append("pt-r")

-        if ptz_config.Spaces and ptz_config.Spaces.RelativeZoomTranslationSpace:
+        if configs.DefaultRelativeZoomTranslationSpace:
             supported_features.append("zoom-r")
             try:
                 # get camera's zoom limits from onvif config
@@ -218,7 +221,7 @@ class OnvifController:
                     f"Disabling autotracking zooming for {camera_name}: Relative zoom not supported"
                 )

-        if ptz_config.Spaces and ptz_config.Spaces.AbsoluteZoomPositionSpace:
+        if configs.DefaultAbsoluteZoomPositionSpace:
             supported_features.append("zoom-a")
             try:
                 # get camera's zoom limits from onvif config
@@ -236,7 +239,10 @@ class OnvifController:
                 )

         # set relative pan/tilt space for autotracker
-        if fov_space_id is not None:
+        if (
+            fov_space_id is not None
+            and configs.DefaultRelativePanTiltTranslationSpace is not None
+        ):
             supported_features.append("pt-r-fov")
             self.cams[camera_name][
                 "relative_fov_range"
@@ -287,6 +287,15 @@ class TestObjectBoundingBoxes(unittest.TestCase):
         consolidated_detections = reduce_detections(frame_shape, detections)
         assert len(consolidated_detections) == len(detections)

+    def test_vert_stacked_cars_not_reduced(self):
+        detections = [
+            ("car", 0.8, (954, 312, 1247, 475), 498512, 1.48, (800, 200, 1400, 600)),
+            ("car", 0.85, (970, 380, 1273, 610), 698752, 1.56, (800, 200, 1400, 700)),
+        ]
+        frame_shape = (720, 1280)
+        consolidated_detections = reduce_detections(frame_shape, detections)
+        assert len(consolidated_detections) == len(detections)


 class TestRegionGrid(unittest.TestCase):
     def setUp(self) -> None:
@@ -10,7 +10,12 @@ import numpy as np
 from peewee import DoesNotExist

 from frigate.config import DetectConfig, ModelConfig
-from frigate.const import LABEL_CONSOLIDATION_DEFAULT, LABEL_CONSOLIDATION_MAP
+from frigate.const import (
+    LABEL_CONSOLIDATION_DEFAULT,
+    LABEL_CONSOLIDATION_MAP,
+    LABEL_NMS_DEFAULT,
+    LABEL_NMS_MAP,
+)
 from frigate.detectors.detector_config import PixelFormatEnum
 from frigate.models import Event, Regions, Timeline
 from frigate.util.image import (
@@ -466,6 +471,7 @@ def reduce_detections(
         selected_objects = []
         for group in detected_object_groups.values():
+            label = group[0][0]
             # o[2] is the box of the object: xmin, ymin, xmax, ymax
             # apply max/min to ensure values do not exceed the known frame size
             boxes = [
@@ -483,7 +489,9 @@ def reduce_detections(
             # due to min score requirement of NMSBoxes
             confidences = [0.6 if clipped(o, frame_shape) else o[1] for o in group]

-            idxs = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.4)
+            idxs = cv2.dnn.NMSBoxes(
+                boxes, confidences, 0.5, LABEL_NMS_MAP.get(label, LABEL_NMS_DEFAULT)
+            )

             # add objects
             for index in idxs:
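Taken together with the new `LABEL_NMS_MAP` constant, the hunk above raises the NMS IoU threshold for cars from 0.4 to 0.6, so moderately overlapping boxes (for example vertically stacked cars in a driveway) are kept as separate objects instead of being merged. A rough illustration of the effect, with made-up boxes and scores (not the values from the test above); the thresholds mirror `LABEL_NMS_DEFAULT` and `LABEL_NMS_MAP["car"]`:

```python
# Rough illustration of per-label NMS thresholds; boxes and scores are invented.
import cv2

# Two stacked boxes in (x, y, w, h) form with an IoU of roughly 0.54.
boxes = [(100, 100, 100, 200), (100, 160, 100, 200)]
confidences = [0.8, 0.85]

# Default threshold (LABEL_NMS_DEFAULT = 0.4): IoU 0.54 > 0.4, so the
# lower-scoring box is suppressed and only one detection survives.
kept_default = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.4)

# Car threshold (LABEL_NMS_MAP["car"] = 0.6): IoU 0.54 < 0.6, so both survive.
kept_car = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.6)

print(len(kept_default), len(kept_car))  # expected: 1 2
```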