* Start working on bird processor
* Initial setup for bird processing
* Improvements to handling
* Get classification working
* Cleanup classification
* Add classification config
* Update sort
* Actually send result to face registration
* Define postprocessing api and move face processing to fit
* Standardize request handling
* Standardize handling of processors
* Rename processing metrics
* Cleanup
* Standardize object end
* Update to newer formatting
* One more
* One more
* Get stats for embeddings inferences
* cleanup embeddings inferences
* Enable UI for feature metrics
* Change threshold
* Fix check
* Update python for actions
* Set python version
* Ignore type for now
* Support downloading face models
* Handle download and loading correctly
* Add face dir creation
* Fix error
* Fix
* Formatting
* Move upload to button
* Show number of faces in library for each name
* Add text color for score
* Cleanup
* rockchip: update dependencies and add script for model conversion
* rockchip: update docs
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* use ruamel to parse and preserve line numbers for config validation
* maintain exception for non validation errors
* fix types
* include input in log messages
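The ruamel-based validation above relies on ruamel.yaml's round-trip loader keeping line/column metadata. A minimal sketch of that mechanism (the config snippet and names are illustrative, not Frigate's actual validator):

```python
# ruamel.yaml's round-trip loader attaches .lc line/column info to parsed
# nodes, so a validation error can point at the offending config line.
from ruamel.yaml import YAML

raw = """\
cameras:
  front_door:
    detect:
      fps: 5
"""

config = YAML().load(raw)

# CommentedMap exposes per-key 0-based line/column positions via .lc
line, col = config.lc.key("cameras")
print(f"'cameras' is defined on line {line + 1}, column {col + 1}")
```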
* Validate faces using cosine distance and SVC
* Formatting
* Use opencv instead of face embedding
* Update docs for training data
* Adjust to score system
* Set bounds
* remove face embeddings
* Update writing images
* Add face library page
* Add ability to select file
* Install opencv deps
* Cleanup
* Use different deps
* Move deps
* Cleanup
* Only show face library for desktop
* Implement deleting
* Add ability to upload image
* Add support for uploading images
* Add margin to detected faces for embeddings
* Standardize pixel values for face input
* Use SVC to classify faces
* Clear classifier when new face is added
* Formatting
* Add dependency
* Update version
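The cosine-distance/SVC commits above describe classifying face embeddings with an SVC that is cleared and refit whenever a new face is enrolled. A hedged sketch of that shape (class name, kernel, and threshold are assumptions, not Frigate's exact implementation):

```python
import numpy as np
from sklearn.svm import SVC

class FaceClassifier:
    """Fit an SVC over stored face embeddings; accept matches above a threshold."""

    def __init__(self, threshold: float = 0.5) -> None:
        self.threshold = threshold
        self.model: SVC | None = None

    def fit(self, embeddings: np.ndarray, names: list[str]) -> None:
        # probability=True enables predict_proba for score thresholding
        self.model = SVC(kernel="linear", probability=True)
        self.model.fit(embeddings, names)

    def clear(self) -> None:
        # called when a new face is added, forcing a refit on next use
        self.model = None

    def classify(self, embedding: np.ndarray) -> str | None:
        if self.model is None:
            return None
        probs = self.model.predict_proba(embedding.reshape(1, -1))[0]
        best = int(np.argmax(probs))
        if probs[best] < self.threshold:
            return None
        return str(self.model.classes_[best])
```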
* Face recognition backend (#14495)
* Add basic config and face recognition table
* Reconfigure updates processing to handle face
* Crop frame to face box
* Implement face embedding calculation
* Get matching face embeddings
* Add support for face recognition based on existing faces
* Use arcface face embeddings instead of generic embeddings model
* Add apis for managing faces
* Implement face uploading API
* Build out more APIs
* Add min area config
* Handle larger images
* Add more debug logs
* fix calculation
* Reduce timeout
* Small tweaks
* Use webp images
* Use facenet model
* Improve face recognition (#14537)
* Increase requirements for face to be set
* Manage faces properly
* Add basic docs
* Simplify
* Separate out face recognition from semantic search
* Update docs
* Formatting
* Fix access (#14540)
* Face detection (#14544)
* Add support for face detection
* Add support for detecting faces during registration
* Set body size to be larger
* Undo
* Update version
* initial foundation for alpr with paddleocr
* initial foundation for alpr with paddleocr
* initial foundation for alpr with paddleocr
* config
* config
* lpr maintainer
* clean up
* clean up
* fix processing
* don't process for stationary cars
* fix order
* fixes
* check for known plates
* improved length and character by character confidence
* model fixes and small tweaks
* docs
* placeholder for non frigate+ model lp detection
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
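The "improved length and character by character confidence" commit suggests scoring candidate plate reads by per-character confidence and length. A hedged sketch of one way that can work (the weighting is an assumption, not the actual LPR scoring):

```python
def plate_score(chars: list[tuple[str, float]]) -> float:
    """Score a plate read by average per-character confidence, favoring length."""
    if not chars:
        return 0.0
    avg_conf = sum(conf for _, conf in chars) / len(chars)
    # weight by length so a confident partial read doesn't beat a full plate
    return avg_conf * min(len(chars) / 7, 1.0)

candidates = [
    [("A", 0.98), ("B", 0.95), ("C", 0.91), ("1", 0.99)],
    [("A", 0.99), ("B", 0.97), ("C", 0.95), ("1", 0.99),
     ("2", 0.92), ("3", 0.90), ("4", 0.88)],
]
best = max(candidates, key=plate_score)
print("".join(ch for ch, _ in best))  # keeps the most confident full read
```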
* GenAI: add ability to save JPGs sent to provider
* Remove mention from GenAI docs
* Change config name to debug_save_thumbnails
* Change folder structure to clips/genai-requests/{event_id}/{1.jpg}
* Organize api files
* Add more API definitions for events
* Add export select by ID
* Typing fixes
* Update openapi spec
* Change type
* Fix test
* Fix message
* Fix tests
* use id instead of index for object details and scrolling
* long press package and hook
* fix long press in review
* search action group
* multi select in explore
* add bulk deletion to backend api
* clean up
* mimic behavior of review
* don't open dialog on left click when multi selecting
* context menu on container ref
* revert long press code
* clean up
The archive already has everything contained in a rootfs folder; extract
it as-is to the root folder. This also reverts changes from
33957e5360, which addressed the same issue
in a less optimal way.
* Fix audio events in explore section
Make sure that audio events are listed in the explore section
* Update audio.py
* Hide other submit options
Only allow submits for objects only
* Started unit tests for the review controller
* Revert "Started unit tests for the review controller"
This reverts commit 7746eb146f.
* Started unit tests for the review controller
* First test
* Added test for review endpoint (time filter - after + before)
* Assert expected event
* Added more tests for review endpoint
* Added test for review endpoint with all filters
* Added test for review endpoint with limit
* Comment
* Renamed tests to increase readability
* fix regex for cookie_name to be general snake case
* Update frigate/config/auth.py
Co-authored-by: Blake Blackshear <blake.blackshear@gmail.com>
---------
Co-authored-by: Blake Blackshear <blake.blackshear@gmail.com>
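The cookie_name fix above generalizes the pattern to any snake_case value. A sketch of such a validator (the exact pattern Frigate settled on may differ):

```python
import re

# lowercase words separated by single underscores, digits allowed
SNAKE_CASE = re.compile(r"^[a-z0-9]+(_[a-z0-9]+)*$")

assert SNAKE_CASE.fullmatch("frigate_token")
assert not SNAKE_CASE.fullmatch("Frigate-Token")
```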
* Don't track shared memory in frame tracker
* Don't track any instance
* Don't assign sub label to objects when multiple cars are overlapping
* Formatting
* Fix assignment
* Use custom body for the export recordings endpoint
* Fixed usage of ExportRecordingsBody
* Updated docs to reflect changes to export endpoint
* Fix friendly name and source
* Updated openAPI spec
* Remove extra spacing for next/prev carousel buttons
* Clarify ollama genai docs
* Clean up copied gpu info output
* Clean up copied gpu info output
* Better display when manually copying/pasting log data
* Home/End buttons for search input and max 8 search columns
* Fix lifecycle label
* remove video tab if tracked object has no clip
* hide object lifecycle if there is no clip
* add test for filter value to ensure only fully numeric values are set as numbers
* Ensure review and search item mobile pages reopen correctly
* disable pan/pinch/zoom when native browser video controls are displayed
* report 0 for storage usage when api returns null
* Updated documentation for the review endpoint
* Updated documentation for the review/summary endpoint
* Updated documentation for the review/summary endpoint
* Documentation for the review activity audio and motion endpoints
* Added responses for more review.py endpoints
* Added responses for more review.py endpoints
* Fixed review.py responses and proper path parameter names
* Added body model for /reviews/viewed and /reviews/delete
* Updated OpenAPI specification for the review controller endpoints
* Run ruff format frigate
* Drop significant_motion
* Updated frigate-api.yaml
* Deleted total_motion
* Combine 2 models into generic
* Add reindex progress to mobile bottom bar status alert
* move menu to new component
* actions component in search footer thumbnail
* context menu for explore summary thumbnail images
* readd top_score to search query for old events
* Add service manager infrastructure
The changes are (This will be a bit long):
- A ServiceManager class that spawns a background thread and deals with
service lifecycle management. The idea is that service lifecycle code
will run in async functions, so a single thread is enough to manage
any (reasonable) amount of services.
- A Service class, that offers start(), stop() and restart() methods
that simply notify the service manager to... well. Start, stop or
restart a service.
(!) Warning: Note that this differs from mp.Process.start/stop in that
the service commands are sent asynchronously and will complete
"eventually". This is good because it means that business logic is
fast when booting up and shutting down, but we need to make sure
that code does not rely on start() and stop() being instant
(Mainly pid assignments).
Subclasses of the Service class should use the on_start and on_stop
methods to monitor for service events. These will be run by the
service manager thread, so we need to be careful not to block
execution here. Standard async stuff.
(!) Note on service names: Service names should be unique within a
ServiceManager. Make sure that you pass the name you want to
super().__init__(name="...") if you plan to spawn multiple instances
of a service.
- A ServiceProcess class: A Service that wraps a multiprocessing.Process
into a Service. It offers a run() method subclasses can override and
can support in-place restarting using the service manager.
And finally, I lied a bit about this whole thing using a single thread.
I can't find any way to run python multiprocessing in async, so there is
a MultiprocessingWaiter thread that waits for multiprocessing events and
notifies any pending futures. This was uhhh... fun? No, not really.
But it works. Using this part of the code just involves calling the
provided wait method. See the implementation of ServiceProcess for more
details.
Mirror util.Process hooks onto service process
Remove Service.__name attribute
Do not serialize process object on ServiceProcess start.
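A hedged sketch of the Service/ServiceManager shape described above: one manager thread runs an event loop, and start()/stop() merely enqueue lifecycle commands that complete eventually. All names and details are illustrative:

```python
import asyncio
import threading

class ServiceManager:
    """Single background thread whose event loop runs all service lifecycles."""

    def __init__(self) -> None:
        self.loop = asyncio.new_event_loop()
        self.thread = threading.Thread(target=self.loop.run_forever, daemon=True)
        self.thread.start()

    def send(self, coro) -> None:
        # fire-and-forget: callers never wait for the lifecycle change
        asyncio.run_coroutine_threadsafe(coro, self.loop)

class Service:
    def __init__(self, manager: ServiceManager, name: str) -> None:
        self.manager = manager
        self.name = name  # must be unique within a ServiceManager

    async def on_start(self) -> None:
        ...  # runs on the manager thread; must not block

    async def on_stop(self) -> None:
        ...

    def start(self) -> None:
        self.manager.send(self.on_start())

    def stop(self) -> None:
        self.manager.send(self.on_stop())
```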
* Update frigate dictionary
* Convert AudioProcessor to service process
* only save a fixed number of thumbnails if genai is enabled
* disable cpu_mem_arena to save on memory until it's actually needed
* fix search settings pane so it actually saves to the config
* Fix access
* Reorganize tracked object for imports
* Separate out rockchip build
* Formatting
* Use original ffmpeg build
* Fix build
* Update default search type value
* backend score filtering and sorting
* score filter frontend
* use input for score filtering
* use correct score on search thumbnail
* add popover to explain top_score
* revert sublabel score calc
* update filters logic
* fix rounding on score
* wait until default view is loaded
* don't turn button to selected style for similarity searches
* clarify language
* fix alert dialog buttons to use correct destructive variant
* use root level top_score for very old events
* better arrangement of thumbnail footer items on smaller screens
* Add time ago to explore summary view on desktop
* add search settings for columns and default view selection
* add descriptions
* clarify wording
* padding tweak
* padding tweaks for mobile
* fix size of activity indicator
* smaller
* fix search type switches
* select/unselect style for more filters button
* fix reset button
* fix labels scrollbar
* set min width and remove modal to allow scrolling with filters open
* hover colors
* better match of font size
* stop sheet from displaying console errors
* fix detail dialog behavior
* Handle Frigate+ submitted case
* Add search settings and rename general to ui settings
* Add platform aware sheet component
* use two columns on mobile view
* Add cameras page to more filters
* clean up search settings view
* Add time range to side filter
* better match with ui settings
* fix icon size
* use two columns on mobile view
* clean up search settings view
* Add zones and saving logic
* Add all filters to side panel
* better match with ui settings
* fix icon size
* Fix mobile filter page
* Fix embeddings access
* Cleanup
* Fix scroll
* fix double scrollbars and add separators on mobile too
* two columns on mobile
* italics for emphasis
---------
Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
* Publish model state and embeddings reindex in dispatcher onConnect
* remove unneeded from explore
* add embeddings reindex progress to statusbar
* don't allow right click or show similar button if semantic search is disabled
* fix status bar
* Convert peewee model to dict before formatting for genai description
* custom hook and generic video player component
* add export preview dialog
* export preview dialog when using timeline export
* refactor search detail dialog to use new generic video player component
* clean up
* Remove device config and use model size to configure device used
* Don't show Frigate+ submission when in progress
* Add docs link for bounding box colors
* Use cosine distance metric for vec tables
* Only apply normalization to multi modal searches
* Catch possible edge case in stddev calc
* Use sigmoid function for normalization for multi modal searches only
* Ensure we get model state on initial page load
* Only save stats for multi modal searches and only use cosine similarity for image -> image search
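The normalization commits above combine plain cosine similarity for image-to-image search with sigmoid-squashed z-scores for multi-modal (text-to-image) search only. A hedged numeric sketch (the scale/bias defaults are assumptions):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def normalize_multimodal(scores: np.ndarray, scale: float = 1.0, bias: float = 0.0) -> np.ndarray:
    """Z-score then sigmoid, used only for text -> image searches."""
    std = scores.std()
    if std == 0:  # edge case: identical scores would divide by zero
        return np.full(scores.shape, 0.5)
    z = (scores - scores.mean()) / std
    return 1.0 / (1.0 + np.exp(-(scale * z + bias)))  # squash into (0, 1)
```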
* Add config option to select fp16 or quantized jina vision model
* requires_fp16 for text and large models only
* fix model type check
* fix cpu
* pass model size
* refactor dispatcher
* add reindex to dictionary
* add circular progress bar component
* Add progress to UI when embeddings are reindexing
* readd comments to dispatcher for clarity
* Only report progress every 10 events so we don't spam the logs and websocket
* clean up
* add generic onnx model class and use jina ai clip models for all embeddings
* fix merge conflict
* preferred providers
* fix paths
* disable download progress bar
* remove logging of path
* drop and recreate tables on reindex
* use cache paths
* fix model name
* use trust remote code per transformers docs
* ensure tokenizer and feature extractor are correctly loaded
* revert
* manually download and cache feature extractor config
* remove unneeded
* remove old clip and minilm code
* docs update
* Add support for nvidia driver info
* Don't show temperature if detector isn't called coral
* Add encoder and decoder info for Nvidia GPUs
* Fix device info
* Implement GPU info for nvidia GPU
* Update web/src/views/system/GeneralMetrics.tsx
Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
* Update web/src/views/system/GeneralMetrics.tsx
Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
---------
Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
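A hedged sketch of collecting the Nvidia encoder/decoder stats mentioned above via NVML (pynvml); the output formatting is illustrative:

```python
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

util = pynvml.nvmlDeviceGetUtilizationRates(handle).gpu
enc, _ = pynvml.nvmlDeviceGetEncoderUtilization(handle)  # (%, sample period)
dec, _ = pynvml.nvmlDeviceGetDecoderUtilization(handle)

print(f"gpu={util}% enc={enc}% dec={dec}%")
pynvml.nvmlShutdown()
```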
* swap sqlite_vec for chroma in requirements
* load sqlite_vec in embeddings manager
* remove chroma and revamp Embeddings class for sqlite_vec
* manual minilm onnx inference
* remove chroma in clip model
* migrate api from chroma to sqlite_vec
* migrate event cleanup from chroma to sqlite_vec
* migrate embedding maintainer from chroma to sqlite_vec
* genai description for sqlite_vec
* load sqlite_vec in main thread db
* extend the SqliteQueueDatabase class and use peewee db.execute_sql
* search with Event type for similarity
* fix similarity search
* install and add comment about transformers
* fix normalization
* add id filter
* clean up
* clean up
* fully remove chroma and add transformers env var
* readd uvicorn for fastapi
* readd tokenizer parallelism env var
* remove chroma from docs
* remove chroma from UI
* try removing custom pysqlite3 build
* hard code limit
* optimize queries
* revert explore query
* fix query
* keep building pysqlite3
* single pass fetch and process
* remove unnecessary re-embed
* update deps
* move SqliteVecQueueDatabase to db directory
* make search thumbnail take up full size of results box
* improve typing
* improve model downloading and add status screen
* daemon downloading thread
* catch case when semantic search is disabled
* fix typing
* build sqlite_vec from source
* resolve conflict
* file permissions
* try build deps
* remove sources
* sources
* fix thread start
* include git in build
* reorder embeddings after detectors are started
* build with sqlite amalgamation
* non-platform specific
* use wget instead of curl
* remove unzip -d
* remove sqlite_vec from requirements and load the compiled version
* fix build
* avoid race in db connection
* add scale_factor and bias to description zscore normalization
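A hedged sketch of the sqlite_vec usage the migration above implies: load the extension into the existing SQLite connection and run a KNN query against a vec0 virtual table. The table name, dimensions, and schema are illustrative, not Frigate's actual layout:

```python
import sqlite3
import struct

import sqlite_vec  # the PyPI package; Frigate builds the extension from source

db = sqlite3.connect("frigate.db")
db.enable_load_extension(True)
sqlite_vec.load(db)
db.enable_load_extension(False)

db.execute(
    "CREATE VIRTUAL TABLE IF NOT EXISTS vec_thumbnails "
    "USING vec0(embedding float[768] distance_metric=cosine)"
)

def serialize(vector: list[float]) -> bytes:
    # sqlite_vec expects raw little-endian float32 bytes
    return struct.pack(f"{len(vector)}f", *vector)

query = [0.0] * 768  # stand-in for a real embedding
rows = db.execute(
    "SELECT rowid, distance FROM vec_thumbnails "
    "WHERE embedding MATCH ? ORDER BY distance LIMIT 10",
    (serialize(query),),
).fetchall()
```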
* Updated documentation
* docusaurus.config and sidebars converted to Typescript to allow for typings
* Added type for sidebars.ts
* Replaced integrations/api.md with automatically generated openAPI specification. Make sidebar collapsible to increase readability
* Fix HTTP API links in the documentation
* Added rust as language in the openapi sidebar
* Make sure configuration/pwa is present
* Fix API slug
* Fix links
* Revert sidebarCollapsible configuration
* Make HTTP API sidebar collapsed by default. Added CSS for OpenAPI methods
* Proper localhost server path
* Proper localhost server path
* No introduction page
* Lint
* Added stop_event to util.Process
util.Process will take care of receiving signals when the stop_event is
accessed in the subclass. If it never is, SystemExit is raised instead.
This has the effect of still behaving like multiprocessing.Process when
stop_event is not accessed, while still allowing subclasses to not deal
with the hassle of setting it up.
* Give each util.Process their own logger
This will help to reduce boilerplate in subclasses.
* Give explicit types to util.Process.__init__
This gives better type hinting in the editor.
* Use util.Process facilities in AudioProcessor
Boilerplate begone!
* Removed pointless check in util.Process
The log_listener.queue should never be None, unless something has gone
extremely wrong in the log setup code. If we're that far gone, crashing
is better.
* Make sure faulthandler is enabled in all processes
This has no effect currently since we're using the fork start_method.
However, when we inevitably switch to forkserver (either by choice, or
by upgrading to python 3.14+) not having this makes for some really fun
failure modes :D
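A simplified sketch of the stop_event behaviour described above: the first access installs signal handlers that set an Event, while subclasses that never touch it keep the default SystemExit behaviour. Details are illustrative:

```python
import multiprocessing as mp
import signal
from functools import cached_property

class Process(mp.Process):
    @cached_property
    def stop_event(self):
        # created lazily: only subclasses that access stop_event opt in
        event = mp.Event()

        def handler(signum, frame):
            event.set()

        signal.signal(signal.SIGTERM, handler)
        signal.signal(signal.SIGINT, handler)
        return event
```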
I just saw this, and I would be very surprised by that behaviour as a
user. Changing the db path would randomly move the database, and
changing it back (or to anything, really) would not. These kinds of
advanced settings are generally expected to do one thing: Change the
path frigate opens the database from. The end.
* fix squashed alert thumbnails in filmstrip
* add genai debug logs
* consistent themed image loading indicator background color
* improve image loading skeleton in object lifecycle pane
* less rounding when screen is smaller
* use browser back button to dismiss review pane
* initial state
* Allow embedding of snapshot for description via config option
* docs
* frontend button
* Backend
* crop snapshot to region
* only show dropdown when event has snapshot
* fix cursor on dropdown
* crop on initial generation as well
* use enum for type
* fix type
* Add loading indicator when explore view is revalidating
* Portal tooltip in object lifecycle pane
* Better config file handling
* Only manually set aspect ratio when using alert videos
* Update general support template
* Update camera support
* Update config-support.yml
* Update detector support
* Update general-support.yml
* Update hardware-acceleration-support.yml
* Create pull_request_template.md
* Subclass Process for audio_process
* Introduce custom mp.Process subclass
In preparation to switch the multiprocessing startup method away from
"fork", we cannot rely on os.fork cloning the log state at fork time.
Instead, we have to set up logging before we run the business logic of
each process.
* Make camera_metrics into a class
* Make ptz_metrics into a class
* Fixed PtzMotionEstimator.ptz_metrics type annotation
* Removed pointless variables
* Do not start audio processor when no audio cameras are configured
* Portal tooltips
* Add ability to time_range filter chroma searches
* centering and padding consistency
* add event id back to chroma metadata
* query sqlite first and pass those ids to chroma for embeddings search
* ensure we pass timezone to the api call
* remove object lifecycle from search details for non-object events
* simplify hour calculation
* fix query without filters
* bump chroma version
* chroma 0.5.7
* fix selecting camera group in cameras filter button
* Prevent keyboard shortcuts from running when input is focused
* fix reset button and update time pickers when using input
* simplify css
* consistent button order and spacing
* Add ability to filter by time range
* Cleanup
* Handle input with tags
* fix input for time_range filter
* fix before and after filters
* clean up
* Ensure the default value works as expected
* Handle time range in am/pm based on browser
* Fix arrow
* Fix text
* Handle midnight case
* fix width
* Fix bg
* Fix bg
* Fix mobile spacing
* y spacing
* remove left padding
---------
Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
* Add ability to restrict genai to labels and zones at the camera level
* fix comment
* clarify docs
* use objects instead of labels
* docs
* object list
* POC: Added FastAPI with one endpoint (get /logs/service)
* POC: Revert error_log
* POC: Converted preview related endpoints to FastAPI
* POC: Converted two more endpoints to FastAPI
* POC: lint
* Convert all media endpoints to FastAPI. Added /media prefix (/media/camera && media/events && /media/preview)
* Convert all notifications API endpoints to FastAPI
* Convert first review API endpoints to FastAPI
* Convert remaining review API endpoints to FastAPI
* Convert export endpoints to FastAPI
* Fix path parameters
* Convert events endpoints to FastAPI
* Use body for multiple events endpoints
* Use body for multiple events endpoints (create and end event)
* Convert app endpoints to FastAPI
* Convert app endpoints to FastAPI
* Convert auth endpoints to FastAPI
* Removed flask app in favour of FastAPI app. Implemented FastAPI middleware to check CSRF, connect and disconnect from DB. Added middleware x-forwarded-for headers
* Added starlette plugin to expose custom headers
* Use slowapi as the limiter
* Use query parameters for the frame latest endpoint
* Use query parameters for the media snapshot.jpg endpoint
* Use query parameters for the media MJPEG feed endpoint
* Revert initial nginx.conf change
* Added missing event_id for /events/search endpoint
* Removed left over comment
* Use FastAPI TestClient
* severity query parameter should be a string
* Use the same pattern for all tests
* Fix endpoint
* Revert media routers to old names. Order routes to make sure the dynamic ones from media.py are only used whenever there's no match on auth/etc
* Reverted paths for media on tsx files
* Deleted file
* Fix test_http to use TestClient
* Formatting
* Bind timeline to DB
* Fix http tests
* Replace filename with pathvalidate
* Fix latest.ext handling and disable uvicorn access logs
* Add constraints to api provided values
* Formatting
* Remove unused
* Remove unused
* Get rate limiter working
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
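A hedged sketch of the Flask-to-FastAPI pattern the conversion above follows: pydantic body models for bulk endpoints and typed query parameters elsewhere. Routes and fields are illustrative, not Frigate's exact API surface:

```python
from fastapi import APIRouter, Query
from pydantic import BaseModel

router = APIRouter(prefix="/reviews")

class ReviewsDeleteBody(BaseModel):
    ids: list[str]

@router.post("/delete")
def delete_reviews(body: ReviewsDeleteBody) -> dict:
    # bulk deletion takes ids in the request body instead of the URL
    return {"success": True, "deleted": len(body.ids)}

@router.get("/")
def get_reviews(
    cameras: str = Query(default="all"),
    limit: int = Query(default=100, ge=1),
    after: float | None = None,
    before: float | None = None,
) -> list[dict]:
    # filters arrive as typed, validated query parameters
    return []
```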
* Moved FrigateApp.init_config() into FrigateConfig.load()
* Move frigate config loading into main
* Store PlusApi in FrigateConfig
* Register SIGTERM handler in main
* Ensure logging is setup during config parsing
* Removed pointless try
* Moved config initialization out of FrigateApp
* Made FrigateApp.shm_frame_count into a function
* Removed log calls from signal handlers
python's logging calls are not re-entrant, which caused at least one of
these to deadlock randomly.
* Reopen stdout/err on process fork
This helps avoid deadlocks (https://github.com/python/cpython/issues/91776).
* Make mypy happy
* Whoops. I might have forgotten to save.
Truly an amateur mistake.
* Always call FrigateApp.stop()
* Ignore entire __pycache__ folder instead of individual *.pyc files
* Ignore .mypy_cache in git
* Rework config YAML parsing to use only ruamel.yaml
PyYAML silently overrides keys when encountering duplicates, but ruamel
raises an exception by default. Since we're already using it elsewhere,
dropping PyYAML is an easy choice to make.
* Added EnvString in config to slim down runtime_config()
* Added gitlens to devcontainer
* Automatically call FrigateConfig.runtime_config()
runtime_config needed to be called manually before. Now, it's been
removed, but the same code is run by a pydantic validator.
* Fix handling of missing -segment_time
* Removed type annotation on FrigateConfig's parse
I'd like to keep them, but then mypy complains about some fundamental
errors with how the pydantic model is structured. I'd like to fix it,
but I'd rather work towards moving some of this config to the database.
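A hedged sketch of replacing the manual runtime_config() call with a pydantic validator that runs automatically after parsing; field names are illustrative:

```python
from pydantic import BaseModel, model_validator

class FrigateConfig(BaseModel):
    mqtt_host: str = "mqtt"
    runtime_ready: bool = False

    @model_validator(mode="after")
    def runtime_config(self) -> "FrigateConfig":
        # previously a separate method every caller had to remember to invoke
        self.runtime_ready = True
        return self

config = FrigateConfig.model_validate({"mqtt_host": "core-mosquitto"})
assert config.runtime_ready
```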
* add event_id param to api
* exclude query from filtertype
* update review pane link for similarity search
* update filter group for similarity param and fix switch bug
* unneeded prop
* update query and input for similarity search param
* use undefined instead of empty string for query with similarity search
* Implement ROCm detectors
* Cleanup tensor input
* Fixup image creation
* Add support for yolonas in onnx
* Get build working with onnx
* Update docs and simplify config
* Remove unused imports
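A hedged sketch of execution-provider selection for an ONNX detector like the yolonas support above: prefer ROCm, then CUDA, then CPU. The model path is illustrative:

```python
import onnxruntime as ort

available = ort.get_available_providers()
preferred = [
    p
    for p in ("ROCMExecutionProvider", "CUDAExecutionProvider", "CPUExecutionProvider")
    if p in available
]

session = ort.InferenceSession("yolo_nas_s.onnx", providers=preferred)
print(f"running on {session.get_providers()[0]}")
```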
* Revamp support discussion templates
* move text to description
* remove duplicate logs box
* ffprobe on camera support
* longer description on config support
* Jump to live when exceeding buffer time threshold in MSE player
* clean up
* Try adjusting playback rate instead of jumping to live
* clean up
* fallback to webrtc if enabled before jsmpeg
* baseline
* clean up
* remove comments
* adaptive playback rate and intelligent switching improvements
* increase logging and reset live mode after camera is no longer active on dashboard only
* jump to live on safari/iOS
* clean up
* clean up
* refactor camera live mode hook
* remove key listener
* resolve conflicts
* If recordings don't exist mark as no recordings
* Fix reloading recordings failing
* Fix mark items not clearing selected
* Cleanup
* Default to last full hour when error occurs
* Remove check
* Cleanup
* Handle empty recordings list case
* Ensure that the start time is within the time range
* Catch other reset cases
Ensure axios.defaults.baseURL is set when accessing login form.
Drop `/api` prefix in login form's `axios.post` call, since `/api` is
part of the baseURL.
Redirect to subpath on successful authentication.
Prepend subpath to default logout url.
Fixes #12814
* Update live view docs with camera firmware settings recommendations
* video/audio
* capitalization
* Video only cams
* clarify higher iframes
* update wording
* fix wording
* Add note on camera specific page
* change note
sed -i -e '/AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31\/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi\/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ==/d' ~/.ssh/known_hosts
description: Please copy and paste any relevant go2rtc log output. Include logs before and after your exact error when possible. Logs can be viewed via the Frigate UI, Docker, or the go2rtc dashboard. This will be automatically formatted into code, so no need for backticks.
@@ -174,7 +174,7 @@ NOTE: The folder that is set for the config needs to be the folder that contains
### Custom go2rtc version
Frigate currently includes go2rtc v1.9.4, there may be certain cases where you want to run a different version of go2rtc.
Frigate currently includes go2rtc v1.9.2, there may be certain cases where you want to run a different version of go2rtc.
To do this:
@@ -183,7 +183,7 @@ To do this:
3. Give `go2rtc` execute permission.
4. Restart Frigate and the custom version will be used, you can verify by checking go2rtc logs.
## Validating your config.yaml file updates
## Validating your config.yml file updates
When frigate starts up, it checks whether your config file is valid, and if it is not, the process exits. To minimize interruptions when updating your config, you have three options -- you can edit the config via the WebUI which has built in validation, use the config API, or you can validate on the command line using the frigate docker container.
@@ -26,7 +26,7 @@ In the event that you are locked out of your instance, you can tell Frigate to r
## Login failure rate limiting
In order to limit the risk of brute force attacks, rate limiting is available for login failures. This is implemented with Flask-Limiter, and the string notation for valid values is available in [the documentation](https://flask-limiter.readthedocs.io/en/stable/configuration.html#rate-limit-string-notation).
In order to limit the risk of brute force attacks, rate limiting is available for login failures. This is implemented with SlowApi, and the string notation for valid values is available in [the documentation](https://limits.readthedocs.io/en/stable/quickstart.html#examples).
For example, `1/second;5/minute;20/hour` will rate limit the login endpoint when failures occur more than:
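A hedged sketch of wiring SlowAPI into FastAPI with the string notation above; the route and key function are illustrative:

```python
from fastapi import FastAPI, Request
from slowapi import Limiter, _rate_limit_exceeded_handler
from slowapi.errors import RateLimitExceeded
from slowapi.util import get_remote_address

limiter = Limiter(key_func=get_remote_address)
app = FastAPI()
app.state.limiter = limiter
app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler)

@app.post("/api/login")
@limiter.limit("1/second;5/minute;20/hour")  # string notation from the docs above
def login(request: Request) -> dict:
    return {"ok": True}
```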
@@ -9,6 +9,12 @@ This page makes use of presets of FFmpeg args. For more information on presets,
:::
:::note
Many cameras support encoding options which greatly affect the live view experience, see the [Live view](/configuration/live) page for more info.
:::
## MJPEG Cameras
Note that mjpeg cameras require encoding the video into h264 for the record and restream roles. This will use significantly more CPU than if the cameras supported h264 feeds directly. It is recommended to use the restream role to create an h264 restream and then use that as the source for ffmpeg.
@@ -61,14 +67,15 @@ ffmpeg:
### Annke C800
This camera is H.265 only. To be able to play clips on some devices (like MacOs or iPhone) the H.265 stream has to be repackaged and the audio stream has to be converted to aac. Unfortunately direct playback in the browser is not working (yet), but the downloaded clip can be played locally.
This camera is H.265 only. To be able to play clips on some devices (like MacOs or iPhone) the H.265 stream has to be adjusted using the `apple_compatibility` config.
```yaml
cameras:
annkec800: # <------ Name the camera
ffmpeg:
apple_compatibility: true # <- Adds compatibility with MacOS and iPhone
- path: rtsp://user:password@camera-ip:554/H264/ch1/main/av_stream # <----- Update for your camera
@@ -150,7 +157,9 @@ cameras:
#### Reolink Doorbell
The reolink doorbell supports 2-way audio via go2rtc and other applications. It is important that the http-flv stream is still used for stability; a secondary rtsp stream can be added that will be used for the two-way audio only.
The reolink doorbell supports two way audio via go2rtc and other applications. It is important that the http-flv stream is still used for stability; a secondary rtsp stream can be added that will be used for the two way audio only.
Ensure HTTP is enabled in the camera's advanced network settings. To use two way talk with Frigate, see the [Live view documentation](/configuration/live#two-way-talk).
```yaml
go2rtc:
@@ -175,7 +184,7 @@ go2rtc:
- rtspx://192.168.1.1:7441/abcdefghijk
```
[See the go2rtc docs for more information](https://github.com/AlexxIT/go2rtc/tree/v1.9.4#source-rtsp)
[See the go2rtc docs for more information](https://github.com/AlexxIT/go2rtc/tree/v1.9.2#source-rtsp)
In the Unifi 2.0 update Unifi Protect Cameras had a change in audio sample rate which causes issues for ffmpeg. The input rate needs to be set for record if used directly with unifi protect.
If the ONVIF connection is successful, PTZ controls will be available in the camera's WebUI.
:::tip
If your ONVIF camera does not require authentication credentials, you may still need to specify an empty string for `user` and `password`, eg: `user: ""` and `password: ""`.
:::
An ONVIF-capable camera that supports relative movement within the field of view (FOV) can also be configured to automatically track moving objects and keep them in the center of the frame. For autotracking setup, see the [autotracking](autotracking.md) docs.
## ONVIF PTZ camera recommendations
This list of working and non-working PTZ cameras is based on user feedback.
| Brand or specific camera | PTZ Controls | Autotracking | Notes |
| Amcrest | ✅ | ✅ | ⛔️ Generally, Amcrest should work, but some older models (like the common IP2M-841) don't support autotracking |
| Amcrest ASH21 | ❌ | ❌ | No ONVIF support |
| Ctronics PTZ | ✅ | ❌ | |
| Dahua | ✅ | ✅ | |
| Foscam R5 | ✅ | ❌ | |
| Hanwha XNP-6550RH | ✅ | ❌ | |
| Hikvision | ✅ | ❌ | Incomplete ONVIF support (MoveStatus won't update even on latest firmware) - reported with HWP-N4215IH-DE and DS-2DE3304W-DE, but likely others |
| Reolink 511WA | ✅ | ❌ | Zoom only |
| Reolink E1 Pro | ✅ | ❌ | |
| Reolink E1 Zoom | ✅ | ❌ | |
| Reolink RLC-823A 16x | ✅ | ❌ | |
| Sunba 405-D20X | ✅ | ❌ | |
| Tapo | ✅ | ❌ | Many models supported, ONVIF Service Port: 2020 |
| Uniview IPC672LR-AX4DUPK | ✅ | ❌ | Firmware says FOV relative movement is supported, but camera doesn't actually move when sending ONVIF commands |
| Amcrest IP4M-S2112EW-AI | ✅ | ❌ | FOV relative movement not supported. |
| Amcrest IP5M-1190EW | ✅ | ❌ | ONVIF Port: 80. FOV relative movement not supported. |
| Ctronics PTZ | ✅ | ❌ | |
| Dahua | ✅ | ✅ | |
| Dahua DH-SD2A500HB | ✅ | ❌ | |
| Foscam R5 | ✅ | ❌ | |
| Hanwha XNP-6550RH | ✅ | ❌ | |
| Hikvision | ✅ | ❌ | Incomplete ONVIF support (MoveStatus won't update even on latest firmware) - reported with HWP-N4215IH-DE and DS-2DE3304W-DE, but likely others |
| Hikvision DS-2DE3A404IWG-E/W | ✅ | ✅ | |
| Reolink 511WA | ✅ | ❌ | Zoom only |
| Reolink E1 Pro | ✅ | ❌ | |
| Reolink E1 Zoom | ✅ | ❌ | |
| Reolink RLC-823A 16x | ✅ | ❌ | |
| Speco O8P32X | ✅ | ❌ | |
| Sunba 405-D20X | ✅ | ❌ | Incomplete ONVIF support reported on the original and 4K models. All models are suspected incompatible. |
| Tapo | ✅ | ❌ | Many models supported, ONVIF Service Port: 2020 |
| Uniview IPC672LR-AX4DUPK | ✅ | ❌ | Firmware says FOV relative movement is supported, but camera doesn't actually move when sending ONVIF commands |
| Uniview IPC6612SR-X33-VG | ✅ | ✅ | Leave `calibrate_on_startup` as `False`. A user has reported that zooming with `absolute` is working. |
Face recognition allows people to be assigned names; when their face is recognized, Frigate will assign the person's name as a sub label. This information is included in the UI, filters, as well as in notifications.
Frigate has support for FaceNet, which runs locally, to create face embeddings. Embeddings are then saved to Frigate's database.
## Minimum System Requirements
Face recognition works by running a large AI model locally on your system. Systems without a GPU will not run Face Recognition reliably or at all.
## Configuration
Face recognition is disabled by default and requires Semantic Search to be enabled; face recognition must be enabled in your config file before it can be used. Semantic Search and face recognition are global configuration settings.
```yaml
face_recognition:
enabled: true
```
## Dataset
The number of images needed for a sufficient training set for face recognition varies depending on several factors:
- Complexity of the task: A simple task like recognizing faces of known individuals may require fewer images than a complex task like identifying unknown individuals in a large crowd.
- Diversity of the dataset: A dataset with diverse images, including variations in lighting, pose, and facial expressions, will require fewer images per person than a less diverse dataset.
- Desired accuracy: The higher the desired accuracy, the more images are typically needed.
However, here are some general guidelines:
- Minimum: For basic face recognition tasks, a minimum of 10-20 images per person is often recommended.
- Recommended: For more robust and accurate systems, 30-50 images per person is a good starting point.
- Ideal: For optimal performance, especially in challenging conditions, 100 or more images per person can be beneficial.
Generative AI can be used to automatically generate descriptions based on the thumbnails of your tracked objects. This helps with [Semantic Search](/configuration/semantic_search) in Frigate by providing detailed text descriptions as a basis of the search query.
Generative AI can be used to automatically generate descriptive text based on the thumbnails of your tracked objects. This helps with [Semantic Search](/configuration/semantic_search) in Frigate to provide more context about your tracked objects. Descriptions are accessed via the _Explore_ view in the Frigate UI by clicking on a tracked object's thumbnail.
Semantic Search must be enabled to use Generative AI. Descriptions are accessed via the _Explore_ view in the Frigate UI by clicking on a tracked object's thumbnail.
Requests for a description are sent off automatically to your AI provider at the end of the tracked object's lifecycle. Descriptions can also be regenerated manually via the Frigate UI.
:::info
Semantic Search must be enabled to use Generative AI.
:::
## Configuration
@@ -29,11 +35,21 @@ cameras:
## Ollama
[Ollama](https://ollama.com/) allows you to self-host large language models and keep everything running locally. It provides a nice API over [llama.cpp](https://github.com/ggerganov/llama.cpp). It is highly recommended to host this server on a machine with an Nvidia graphics card, or on a Apple silicon Mac for best performance. Most of the 7b parameter 4-bit vision models will fit inside 8GB of VRAM. There is also a [docker container](https://hub.docker.com/r/ollama/ollama) available.
:::warning
Using Ollama on CPU is not recommended; high inference times make using Generative AI impractical.
:::
[Ollama](https://ollama.com/) allows you to self-host large language models and keep everything running locally. It provides a nice API over [llama.cpp](https://github.com/ggerganov/llama.cpp). It is highly recommended to host this server on a machine with an Nvidia graphics card, or on a Apple silicon Mac for best performance.
Most of the 7b parameter 4-bit vision models will fit inside 8GB of VRAM. There is also a [Docker container](https://hub.docker.com/r/ollama/ollama) available.
Parallel requests also come with some caveats. You will need to set `OLLAMA_NUM_PARALLEL=1` and choose `OLLAMA_MAX_QUEUE` and `OLLAMA_MAX_LOADED_MODELS` values that are appropriate for your hardware and preferences. See the [Ollama documentation](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-does-ollama-handle-concurrent-requests).
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their model library](https://ollama.com/library). At the time of writing, this includes `llava`, `llava-llama3`, `llava-phi3`, and `moondream`.
You must use a vision capable model with Frigate. Current model variants can be found [in their model library](https://ollama.com/library). At the time of writing, this includes `llava`, `llava-llama3`, `llava-phi3`, and `moondream`. Note that Frigate will not automatically download the model you specify in your config; you must download the model to your local instance of Ollama first, e.g. by running `ollama pull llava:7b` on your Ollama server/Docker container. Note that the model specified in Frigate's config must match the downloaded model tag.
:::note
@@ -48,7 +64,7 @@ genai:
enabled: True
provider: ollama
base_url: http://localhost:11434
model: llava
model: llava:7b
```
## Google Gemini
@@ -100,12 +116,44 @@ genai:
model: gpt-4o
```
## Azure OpenAI
Microsoft offers several vision models through Azure OpenAI. A subscription is required.
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models). At the time of writing, this includes `gpt-4o` and `gpt-4-turbo`.
### Create Resource and Get API Key
To start using Azure OpenAI, you must first [create a resource](https://learn.microsoft.com/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal#create-a-resource). You'll need your API key and resource URL, which must include the `api-version` parameter (see the example below). The model field is not required in your configuration as the model is part of the deployment name you chose when deploying the resource.
Frigate's thumbnail search excels at identifying specific details about tracked objects – for example, using an "image caption" approach to find a "person wearing a yellow vest," "a white dog running across the lawn," or "a red car on a residential street." To enhance this further, Frigate’s default prompts are designed to ask your AI provider about the intent behind the object's actions, rather than just describing its appearance.
While generating simple descriptions of detected objects is useful, understanding intent provides a deeper layer of insight. Instead of just recognizing "what" is in a scene, Frigate’s default prompts aim to infer "why" it might be there or "what" it could do next. Descriptions tell you what’s happening, but intent gives context. For instance, a person walking toward a door might seem like a visitor, but if they’re moving quickly after hours, you can infer a potential break-in attempt. Detecting a person loitering near a door at night can trigger an alert sooner than simply noting "a person standing by the door," helping you respond based on the situation’s context.
### Using GenAI for notifications
Frigate provides an [MQTT topic](/integrations/mqtt), `frigate/tracked_object_update`, that is updated with a JSON payload containing `event_id` and `description` when your AI provider returns a description for a tracked object. This description could be used directly in notifications, such as sending alerts to your phone or making audio announcements. If additional details from the tracked object are needed, you can query the [HTTP API](/integrations/api/event-events-event-id-get) using the `event_id`, eg: `http://frigate_ip:5000/api/events/<event_id>`.
## Custom Prompts
Frigate sends multiple frames from the tracked object along with a prompt to your Generative AI provider asking it to generate a description. The default prompt is as follows:
```
Describe the {label} in the sequence of images with as much detail as possible. Do not describe the background.
Analyze the sequence of images containing the {label}. Focus on the likely intent or behavior of the {label} based on its actions and movement, rather than describing its appearance or the surroundings. Consider what the {label} is doing, why, and what it might do next.
```
:::tip
@@ -122,22 +170,30 @@ genai:
provider: ollama
base_url: http://localhost:11434
model: llava
prompt: "Describe the {label} in these images from the {camera} security camera."
prompt: "Analyze the {label} in these images from the {camera} security camera. Focus on the actions, behavior, and potential intent of the {label}, rather than just describing its appearance."
object_prompts:
person: "Describe the main person in these images (gender, age, clothing, activity, etc). Do not include where the activity is occurring (sidewalk, concrete, driveway, etc)."
car: "Label the primary vehicle in these images with just the name of the company if it is a delivery vehicle, or the color make and model."
person: "Examine the main person in these images. What are they doing and what might their actions suggest about their intent (e.g., approaching a door, leaving an area, standing still)? Do not describe the surroundings or static details."
car: "Observe the primary vehicle in these images. Focus on its movement, direction, or purpose (e.g., parking, approaching, circling). If it's a delivery vehicle, mention the company."
```
Prompts can also be overridden at the camera level to provide a more detailed prompt to the model about your specific camera, if you desire.
Prompts can also be overridden at the camera level to provide a more detailed prompt to the model about your specific camera, if you desire. By default, descriptions will be generated for all tracked objects and all zones. But you can also optionally specify `objects` and `required_zones` to only generate descriptions for certain tracked objects or zones.
Optionally, you can generate the description using a snapshot (if enabled) by setting `use_snapshot` to `True`. By default, this is set to `False`, which sends the uncompressed images from the `detect` stream collected over the object's lifetime to the model. Once the object lifecycle ends, only a single compressed and cropped thumbnail is saved with the tracked object. Using a snapshot might be useful when you want to _regenerate_ a tracked object's description as it will provide the AI with a higher-quality image (typically downscaled by the AI itself) than the cropped/compressed thumbnail. Using a snapshot otherwise has a trade-off in that only a single image is sent to your provider, which will limit the model's ability to determine object movement or direction.
```yaml
cameras:
front_door:
genai:
prompt:"Describe the {label} in these images from the {camera} security camera at the front door of a house, aimed outward toward the street."
use_snapshot:True
prompt:"Analyze the {label} in these images from the {camera} security camera at the front door. Focus on the actions and potential intent of the {label}."
object_prompts:
person:"Describe the main person in these images (gender, age, clothing, activity, etc). Do not include where the activity is occurring (sidewalk, concrete, driveway, etc). If delivering a package, include the company the package is from."
cat:"Describe the cat in these images (color, size, tail). Indicate whether or not the cat is by the flower pots. If the cat is chasing a mouse, make up a name for the mouse."
person:"Examine the person in these images. What are they doing, and how might their actions suggest their purpose (e.g., delivering something, approaching, leaving)? If they are carrying or interacting with a package, include details about its source or destination."
cat:"Observe the cat in these images. Focus on its movement and intent (e.g., wandering, hunting, interacting with objects). If the cat is near the flower pots or engaging in any specific actions, mention it."
The default driver is `iHD`. You may need to change the driver to `i965` by adding the following environment variable `LIBVA_DRIVER_NAME=i965` to your docker-compose file or [in the `frigate.yaml` for HA OS users](advanced.md#environment_vars).
See [The Intel Docs](https://www.intel.com/content/www/us/en/support/articles/000005505/processors.html) to figure out what generation your CPU is.
:::
### Via VAAPI
VAAPI supports automatic profile selection so it will work automatically with both H.264 and H.265 streams. VAAPI is recommended for all generations of Intel-based CPUs.
VAAPI supports automatic profile selection so it will work automatically with both H.264 and H.265 streams.
```yaml
ffmpeg:
hwaccel_args: preset-vaapi
```
:::note
With some of the processors, like the J4125, the default driver `iHD` doesn't seem to work correctly for hardware acceleration. You may need to change the driver to `i965` by adding the following environment variable `LIBVA_DRIVER_NAME=i965` to your docker-compose file or [in the `frigate.yaml` for HA OS users](advanced.md#environment_vars).
:::
### Via Quicksync (>=10th Generation only)
If VAAPI does not work for you, you can try QSV if your processor supports it. QSV must be set specifically based on the video encoding of the stream.
### Via Quicksync
#### H.264 streams
@@ -162,6 +175,16 @@ For more information on the various values across different distributions, see h
Depending on your OS and kernel configuration, you may need to change the `/proc/sys/kernel/perf_event_paranoid` kernel tunable. You can test the change by running `sudo sh -c 'echo 2 >/proc/sys/kernel/perf_event_paranoid'` which will persist until a reboot. Make it permanent by running `sudo sh -c 'echo kernel.perf_event_paranoid=2 >> /etc/sysctl.d/local.conf'`
#### Stats for SR-IOV devices
When using virtualized GPUs via SR-IOV, additional args are needed for GPU stats to function. This can be enabled with the following config:
```yaml
telemetry:
stats:
sriov: True
```
## AMD/ATI GPUs (Radeon HD 2000 and newer GPUs) via libva-mesa-driver
VAAPI supports automatic profile selection so it will work automatically with both H.264 and H.265 streams.
@@ -218,28 +241,11 @@ docker run -d \
### Setup Decoder
The decoder you need to pass in the `hwaccel_args` will depend on the input video.
A list of supported codecs (you can use `ffmpeg -decoders | grep cuvid` in the container to get the ones your card supports)
For example, for H264 video, you'll select `preset-nvidia-h264`.
Using `preset-nvidia` ffmpeg will automatically select the necessary profile for the incoming video, and will log an error if the profile is not supported by your GPU.
```yaml
ffmpeg:
hwaccel_args: preset-nvidia-h264
hwaccel_args: preset-nvidia
```
If everything is working correctly, you should see a significant improvement in performance.
@@ -370,7 +376,7 @@ Make sure to follow the [Rockchip specific installation instructions](/frigate/i
### Configuration
Add one of the following FFmpeg presets to your `config.yaml` to enable hardware video processing:
Add one of the following FFmpeg presets to your `config.yml` to enable hardware video processing:
Frigate can recognize license plates on vehicles and automatically add the detected characters as a `sub_label` to objects that are of type `car`. A common use case may be to read the license plates of cars pulling into a driveway or cars passing by on a street with a dedicated LPR camera.
Users running a Frigate+ model should ensure that `license_plate` is added to the [list of objects to track](https://docs.frigate.video/plus/#available-label-types) either globally or for a specific camera. This will improve the accuracy and performance of the LPR model.
LPR is most effective when the vehicle’s license plate is fully visible to the camera. For moving vehicles, Frigate will attempt to read the plate continuously, refining its detection and keeping the most confident result. LPR will not run on stationary vehicles.
## Minimum System Requirements
License plate recognition works by running AI models locally on your system. The models are relatively lightweight and run on your CPU. At least 4GB of RAM is required.
## Configuration
License plate recognition is disabled by default. Enable it in your config file:
```yaml
lpr:
enabled: true
```
## Advanced Configuration
Several options are available to fine-tune the LPR feature. For example, you can adjust the `min_area` setting, which defines the minimum size in pixels a license plate must be before LPR runs. The default is 500 pixels.
Additionally, you can define `known_plates` as strings or regular expressions, allowing Frigate to label tracked vehicles with custom sub_labels when a recognized plate is detected. This information is then accessible in the UI, filters, and notifications.
```yaml
lpr:
enabled: true
min_area: 500
known_plates:
Wife's Car:
- "ABC-1234"
- "ABC-I234"
Johnny:
- "J*N-*234"# Using wildcards for H/M and 1/I
Sally:
- "[S5]LL-1234"# Matches SLL-1234 and 5LL-1234
```
In this example, "Wife's Car" will appear as the label for any vehicle matching the plate "ABC-1234." The model might occasionally interpret the digit 1 as a capital I (e.g., "ABC-I234"), so both variations are listed. Similarly, multiple possible variations are specified for Johnny and Sally.
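A hedged sketch of how known_plates matching can behave, treating each entry as a regular expression (Frigate's exact matching semantics may differ; "." is used below where the example above uses "*" wildcards):

```python
import re

known_plates = {
    "Wife's Car": ["ABC-1234", "ABC-I234"],
    "Johnny": ["J.N-.234"],      # "." as a single-character wildcard
    "Sally": ["[S5]LL-1234"],    # matches SLL-1234 and 5LL-1234
}

def match_plate(detected: str) -> str | None:
    for name, patterns in known_plates.items():
        if any(re.fullmatch(p, detected) for p in patterns):
            return name
    return None

print(match_plate("5LL-1234"))  # -> Sally
```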
| jsmpeg | low | same as `detect -> fps`, capped at 10 | 720p | no | no | resolution is configurable, but go2rtc is recommended if you want higher resolutions |
| mse | low | native | native | yes (depends on audio codec) | yes | iPhone requires iOS 17.1+, Firefox is h.264 only |
| webrtc | lowest | native | native | yes (depends on audio codec) | yes | requires extra config, doesn't support h.265 |
| jsmpeg | same as `detect -> fps`, capped at 10 | 720p | no | no | Resolution is configurable, but go2rtc is recommended if you want higher resolutions and better frame rates. jsmpeg is Frigate's default without go2rtc configured. |
| mse | native | native | yes (depends on audio codec) | yes | iPhone requires iOS 17.1+, Firefox is h.264 only. This is Frigate's default when go2rtc is configured. |
| webrtc | native | native | yes (depends on audio codec) | yes | Requires extra configuration, doesn't support h.265. Frigate attempts to use WebRTC when MSE fails or when using a camera's two-way talk feature. |
### Camera Settings Recommendations
If you are using go2rtc, you should adjust the following settings in your camera's firmware for the best experience with Live view:
- Video codec: **H.264** - provides the most compatible video codec with all Live view technologies and browsers. Avoid any kind of "smart codec" or "+" codec like _H.264+_ or _H.265+_, as these non-standard codecs remove keyframes (see below).
- Audio codec: **AAC** - provides the most compatible audio codec with all Live view technologies and browsers that support audio.
- I-frame interval (sometimes called the keyframe interval, the interframe space, or the GOP length): match your camera's frame rate, or choose "1x" (for interframe space on Reolink cameras). For example, if your stream outputs 20fps, your i-frame interval should be 20 (or 1x on Reolink). Values higher than the frame rate will cause the stream to take longer to begin playback. See [this page](https://gardinal.net/understanding-the-keyframe-interval/) for more on keyframes. For many users this may not be an issue, but it should be noted that a 1x i-frame interval will cause more storage utilization if you are using the stream for the `record` role as well.
The default video and audio codec on your camera may not always be compatible with your browser, which is why setting them to H.264 and AAC is recommended. See the [go2rtc docs](https://github.com/AlexxIT/go2rtc?tab=readme-ov-file#codecs-madness) for codec support information.
### Audio Support
MSE Requires AAC audio, WebRTC requires PCMU/PCMA, or opus audio. If you want to support both MSE and WebRTC then your restream config needs to make sure both are enabled.
MSE Requires PCMA/PCMU or AAC audio, WebRTC requires PCMA/PCMU or opus audio. If you want to support both MSE and WebRTC then your restream config needs to make sure both are enabled.
```yaml
go2rtc:
@@ -32,6 +42,15 @@ go2rtc:
- "ffmpeg:http_cam#audio=opus"# <- copy of the stream which transcodes audio to the missing codec (usually will be opus)
```
If your camera does not have audio and you are having problems with Live view, you should have go2rtc send video only:
```yaml
go2rtc:
streams:
no_audio_camera:
- ffmpeg:rtsp://192.168.1.5:554/live0#video=copy
```
### Setting Stream For Live UI
There may be some cameras that you would prefer to use the sub stream for live view, but the main stream for recording. This can be done via `live -> stream_name`.
@@ -119,3 +138,13 @@ services:
:::
See [go2rtc WebRTC docs](https://github.com/AlexxIT/go2rtc/tree/v1.8.3#module-webrtc) for more information about this.
### Two way talk
For devices that support two way talk, Frigate can be configured to use the feature from the camera's Live view in the Web UI. You should:
- Set up go2rtc with [WebRTC](#webrtc-extra-configuration).
- Ensure you access Frigate via https (may require [opening port 8971](/frigate/installation/#ports)).
- For the Home Assistant Frigate card, [follow the docs](https://github.com/dermotduffy/frigate-hass-card?tab=readme-ov-file#using-2-way-audio) for the correct source.
To use the Reolink Doorbell with two way talk, you should use the [recommended Reolink configuration](/configuration/camera_specific#reolink-doorbell)
Some cameras, like doorbell cameras, may have missed detections when someone walks directly in front of the camera and the lightning_threshold causes motion detection to be re-calibrated. In this case, it may be desirable to increase the `lightning_threshold` to ensure these objects are not missed.
:::
:::note
Lightning threshold does not stop motion based recordings from being saved.
:::
Large changes in motion like PTZ moves and camera switches between Color and IR mode should result in no motion detection. This is done via the `lightning_threshold` configuration. It is defined as the percentage of the image used to detect lightning or other substantial changes where motion detection needs to recalibrate. Increasing this value will make motion detection more likely to consider lightning or IR mode changes as valid motion. Decreasing this value will make motion detection more likely to ignore large amounts of motion such as a person approaching a doorbell camera.