* Organize api files
* Add more API definitions for events
* Add export select by ID
* Typing fixes
* Update openapi spec
* Change type
* Fix test
* Fix message
* Fix tests
* use id instead of index for object details and scrolling
* long press package and hook
* fix long press in review
* search action group
* multi select in explore
* add bulk deletion to backend api
* clean up
* mimic behavior of review
* don't open dialog on left click when multi selecting
* context menu on container ref
* revert long press code
* clean up
The archive already has everything contained in a rootfs folder, so extract
it as-is to the root folder. This also reverts changes from
33957e5360 which addressed the same issue
in a less optimal way.
* Fix audio events in explore section
Make sure that audio events are listed in the explore section
* Update audio.py
* Hide other submit options
Only allow submits for objects
* Started unit tests for the review controller
* Revert "Started unit tests for the review controller"
This reverts commit 7746eb146f.
* Started unit tests for the review controller
* First test
* Added test for review endpoint (time filter - after + before)
* Assert expected event
* Added more tests for review endpoint
* Added test for review endpoint with all filters
* Added test for review endpoint with limit
* Comment
* Renamed tests to increase readability
* fix regex for cookie_name to be general snake case
* Update frigate/config/auth.py
Co-authored-by: Blake Blackshear <blake.blackshear@gmail.com>
---------
Co-authored-by: Blake Blackshear <blake.blackshear@gmail.com>
* Don't track shared memory in frame tracker
* Don't track any instance
* Don't assign sub label to objects when multiple cars are overlapping
* Formatting
* Fix assignment
* Use custom body for the export recordings endpoint
* Fixed usage of ExportRecordingsBody
* Updated docs to reflect changes to export endpoint
* Fix friendly name and source
* Updated openAPI spec
* Remove extra spacing for next/prev carousel buttons
* Clarify ollama genai docs
* Clean up copied gpu info output
* Clean up copied gpu info output
* Better display when manually copying/pasting log data
* Home/End buttons for search input and max 8 search columns
* Fix lifecycle label
* remove video tab if tracked object has no clip
* hide object lifecycle if there is no clip
* add test for filter value to ensure only fully numeric values are set as numbers
* Ensure review and search item mobile pages reopen correctly
* disable pan/pinch/zoom when native browser video controls are displayed
* report 0 for storage usage when api returns null
* Updated documentation for the review endpoint
* Updated documentation for the review/summary endpoint
* Updated documentation for the review/summary endpoint
* Documentation for the review activity audio and motion endpoints
* Added responses for more review.py endpoints
* Added responses for more review.py endpoints
* Fixed review.py responses and proper path parameter names
* Added body model for /reviews/viewed and /reviews/delete
* Updated OpenAPI specification for the review controller endpoints
* Run ruff format frigate
* Drop significant_motion
* Updated frigate-api.yaml
* Deleted total_motion
* Combine 2 models into generic
* Add reindex progress to mobile bottom bar status alert
* move menu to new component
* actions component in search footer thumbnail
* context menu for explore summary thumbnail images
* readd top_score to search query for old events
* Add service manager infrastructure
The changes are (This will be a bit long):
- A ServiceManager class that spawns a background thread and deals with
service lifecycle management. The idea is that service lifecycle code
will run in async functions, so a single thread is enough to manage
any (reasonable) amount of services.
- A Service class, that offers start(), stop() and restart() methods
that simply notify the service manager to... well. Start, stop or
restart a service.
(!) Warning: Note that this differs from mp.Process.start/stop in that
the service commands are sent asynchronously and will complete
"eventually". This is good because it means that business logic is
fast when booting up and shutting down, but we need to make sure
that code does not rely on start() and stop() being instant
(Mainly pid assignments).
Subclasses of the Service class should use the on_start and on_stop
methods to monitor for service events. These will be run by the
service manager thread, so we need to be careful not to block
execution here. Standard async stuff.
(!) Note on service names: Service names should be unique within a
ServiceManager. Make sure that you pass the name you want to
super().__init__(name="...") if you plan to spawn multiple instances
of a service.
- A ServiceProcess class: A Service that wraps a multiprocessing.Process
into a Service. It offers a run() method subclasses can override and
can support in-place restarting using the service manager.
And finally, I lied a bit about this whole thing using a single thread.
I can't find any way to run python multiprocessing in async, so there is
a MultiprocessingWaiter thread that waits for multiprocessing events and
notifies any pending futures. This was uhhh... fun? No, not really.
But it works. Using this part of the code just involves calling the
provided wait method. See the implementation of ServiceProcess for more
details.
Mirror util.Process hooks onto service process
Remove Service.__name attribute
Do not serialize process object on ServiceProcess start.
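A minimal sketch of how a subclass might use this API (the `Heartbeat` class, its constructor wiring, and the exact Service signatures here are illustrative assumptions, not the actual frigate code):

```python
import asyncio

# Hypothetical subclass; on_start/on_stop run on the service manager's
# event loop, so they must only schedule work, never block.
class Heartbeat(Service):
    def __init__(self) -> None:
        super().__init__(name="heartbeat")  # unique name per instance
        self._task: asyncio.Task | None = None

    async def on_start(self) -> None:
        self._task = asyncio.create_task(self._beat())

    async def on_stop(self) -> None:
        if self._task is not None:
            self._task.cancel()

    async def _beat(self) -> None:
        while True:
            await asyncio.sleep(1.0)

svc = Heartbeat()
svc.start()  # returns immediately; the service starts "eventually"
```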
* Update frigate dictionary
* Convert AudioProcessor to service process
* only save a fixed number of thumbnails if genai is enabled
* disable cpu_mem_arena to save on memory until it's actually needed
* fix search settings pane so it actually saves to the config
* Fix access
* Reorganize tracked object for imports
* Separate out rockchip build
* Formatting
* Use original ffmpeg build
* Fix build
* Update default search type value
* backend score filtering and sorting
* score filter frontend
* use input for score filtering
* use correct score on search thumbnail
* add popover to explain top_score
* revert sublabel score calc
* update filters logic
* fix rounding on score
* wait until default view is loaded
* don't turn button to selected style for similarity searches
* clarify language
* fix alert dialog buttons to use correct destructive variant
* use root level top_score for very old events
* better arrangement of thumbnail footer items on smaller screens
* Add time ago to explore summary view on desktop
* add search settings for columns and default view selection
* add descriptions
* clarify wording
* padding tweak
* padding tweaks for mobile
* fix size of activity indicator
* smaller
* fix search type switches
* select/unselect style for more filters button
* fix reset button
* fix labels scrollbar
* set min width and remove modal to allow scrolling with filters open
* hover colors
* better match of font size
* stop sheet from displaying console errors
* fix detail dialog behavior
* Handle Frigate+ submitted case
* Add search settings and rename general to ui settings
* Add platform aware sheet component
* use two columns on mobile view
* Add cameras page to more filters
* clean up search settings view
* Add time range to side filter
* better match with ui settings
* fix icon size
* Add zones and saving logic
* Add all filters to side panel
* Fix mobile filter page
* Fix embeddings access
* Cleanup
* Fix scroll
* fix double scrollbars and add separators on mobile too
* two columns on mobile
* italics for emphasis
---------
Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
* Publish model state and embeddings reindex in dispatcher onConnect
* remove unneeded from explore
* add embeddings reindex progress to statusbar
* don't allow right click or show similar button if semantic search is disabled
* fix status bar
* Convert peewee model to dict before formatting for genai description
* custom hook and generic video player component
* add export preview dialog
* export preview dialog when using timeline export
* refactor search detail dialog to use new generic video player component
* clean up
* Remove device config and use model size to configure device used
* Don't show Frigate+ submission when in progress
* Add docs link for bounding box colors
* Use cosine distance metric for vec tables
* Only apply normalization to multi modal searches
* Catch possible edge case in stddev calc
* Use sigmoid function for normalization for multi modal searches only
* Ensure we get model state on initial page load
* Only save stats for multi modal searches and only use cosine similarity for image -> image search
* Add config option to select fp16 or quantized jina vision model
* requires_fp16 for text and large models only
* fix model type check
* fix cpu
* pass model size
* refactor dispatcher
* add reindex to dictionary
* add circular progress bar component
* Add progress to UI when embeddings are reindexing
* readd comments to dispatcher for clarity
* Only report progress every 10 events so we don't spam the logs and websocket
* clean up
* add generic onnx model class and use jina ai clip models for all embeddings
* fix merge conflict
* preferred providers
* fix paths
* disable download progress bar
* remove logging of path
* drop and recreate tables on reindex
* use cache paths
* fix model name
* use trust remote code per transformers docs
* ensure tokenizer and feature extractor are correctly loaded
* revert
* manually download and cache feature extractor config
* remove unneeded
* remove old clip and minilm code
* docs update
* Add support for Nvidia driver info
* Don't show temperature if detector isn't called coral
* Add encoder and decoder info for Nvidia GPUs
* Fix device info
* Implement GPU info for nvidia GPU
* Update web/src/views/system/GeneralMetrics.tsx
Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
* Update web/src/views/system/GeneralMetrics.tsx
Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
---------
Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
* swap sqlite_vec for chroma in requirements
* load sqlite_vec in embeddings manager
* remove chroma and revamp Embeddings class for sqlite_vec
* manual minilm onnx inference
* remove chroma in clip model
* migrate api from chroma to sqlite_vec
* migrate event cleanup from chroma to sqlite_vec
* migrate embedding maintainer from chroma to sqlite_vec
* genai description for sqlite_vec
* load sqlite_vec in main thread db
* extend the SqliteQueueDatabase class and use peewee db.execute_sql
* search with Event type for similarity
* fix similarity search
* install and add comment about transformers
* fix normalization
* add id filter
* clean up
* clean up
* fully remove chroma and add transformers env var
* readd uvicorn for fastapi
* readd tokenizer parallelism env var
* remove chroma from docs
* remove chroma from UI
* try removing custom pysqlite3 build
* hard code limit
* optimize queries
* revert explore query
* fix query
* keep building pysqlite3
* single pass fetch and process
* remove unnecessary re-embed
* update deps
* move SqliteVecQueueDatabase to db directory
* make search thumbnail take up full size of results box
* improve typing
* improve model downloading and add status screen
* daemon downloading thread
* catch case when semantic search is disabled
* fix typing
* build sqlite_vec from source
* resolve conflict
* file permissions
* try build deps
* remove sources
* sources
* fix thread start
* include git in build
* reorder embeddings after detectors are started
* build with sqlite amalgamation
* non-platform specific
* use wget instead of curl
* remove unzip -d
* remove sqlite_vec from requirements and load the compiled version
* fix build
* avoid race in db connection
* add scale_factor and bias to description zscore normalization
* Updated documentation
* docusaurus.config and sidebars converted to TypeScript to allow for typings
* Added type for sidebars.ts
* Replaced integrations/api.md with automatically generated openAPI specification. Make sidebar collapsible to increase readability
* Fix HTTP API links in the documentation
* Added rust as language in the openapi sidebar
* Make sure configuration/pwa is present
* Fix API slug
* Fix links
* Revert sidebarCollapsible configuration
* Make HTTP API sidebar collapsed by default. Added CSS for OpenAPI methods
* Proper localhost server path
* Proper localhost server path
* No introduction page
* Lint
* Added stop_event to util.Process
util.Process will take care of receiving signals when the stop_event is
accessed in the subclass. If it never is, SystemExit is raised instead.
This has the effect of still behaving like multiprocessing.Process when
stop_event is not accessed, while still allowing subclasses to not deal
with the hassle of setting it up.
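A minimal sketch of the opt-in behavior (the `Worker` subclass is hypothetical; only `stop_event` itself comes from this commit):

```python
from frigate import util  # the util.Process described above

# Hypothetical worker: touching self.stop_event opts into cooperative
# shutdown; a subclass that never accesses it gets SystemExit on signal,
# just like a plain multiprocessing.Process.
class Worker(util.Process):
    def run(self) -> None:
        while not self.stop_event.wait(timeout=1.0):
            pass  # one unit of work per iteration
```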
* Give each util.Process their own logger
This will help to reduce boilerplate in subclasses.
* Give explicit types to util.Process.__init__
This gives better type hinting in the editor.
* Use util.Process facilities in AudioProcessor
Boilerplate begone!
* Removed pointless check in util.Process
The log_listener.queue should never be None, unless something has gone
extremely wrong in the log setup code. If we're that far gone, crashing
is better.
* Make sure faulthandler is enabled in all processes
This has no effect currently since we're using the fork start_method.
However, when we inevitably switch to forkserver (either by choice, or
by upgrading to python 3.14+) not having this makes for some really fun
failure modes :D
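The intent, sketched (the bootstrap hook name is illustrative):

```python
import faulthandler

def process_bootstrap() -> None:
    # Under fork this is inherited from the parent, but under spawn or
    # forkserver each child must enable it again to get tracebacks on
    # SIGSEGV/SIGFPE/SIGABRT and friends.
    faulthandler.enable()
```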
I just saw this, and I would be very surprised by that behaviour as a
user. Changing the db path would randomly move the database, and
changing it back (or to anything, really) would not. These kinds of
advanced settings are generally expected to do one thing: Change the
path frigate opens the database from. The end.
* fix squashed alert thumbnails in filmstrip
* add genai debug logs
* consistent themed image loading indicator background color
* improve image loading skeleton in object lifecycle pane
* less rounding when screen is smaller
* use browser back button to dismiss review pane
* initial state
* Allow embedding of snapshot for description via config option
* docs
* frontend button
* Backend
* crop snapshot to region
* only show dropdown when event has snapshot
* fix cursor on dropdown
* crop on initial generation as well
* use enum for type
* fix type
* Add loading indicator when explore view is revalidating
* Portal tooltip in object lifecycle pane
* Better config file handling
* Only manually set aspect ratio when using alert videos
* Update general support template
* Update camera support
* Update config-support.yml
* Update detector support
* Update general-support.yml
* Update hardware-acceleration-support.yml
* Create pull_request_template.md
* Subclass Process for audio_process
* Introduce custom mp.Process subclass
In preparation for switching the multiprocessing start method away from
"fork", we cannot rely on os.fork cloning the log state at fork time.
Instead, we have to set up logging before we run the business logic of
each process.
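A rough sketch of the pattern, with illustrative names:

```python
import logging
import multiprocessing as mp

class LoggedProcess(mp.Process):
    def run(self) -> None:
        # Under spawn/forkserver the child does not inherit the parent's
        # logging configuration, so set it up before any business logic.
        logging.basicConfig(level=logging.INFO)  # stand-in for real setup
        self.logger = logging.getLogger(self.name)
        self.business_logic()

    def business_logic(self) -> None:
        self.logger.info("%s is running", self.name)
```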
* Make camera_metrics into a class
* Make ptz_metrics into a class
* Fixed PtzMotionEstimator.ptz_metrics type annotation
* Removed pointless variables
* Do not start audio processor when no audio cameras are configured
* Portal tooltips
* Add ability to time_range filter chroma searches
* centering and padding consistency
* add event id back to chroma metadata
* query sqlite first and pass those ids to chroma for embeddings search
* ensure we pass timezone to the api call
* remove object lifecycle from search details for non-object events
* simplify hour calculation
* fix query without filters
* bump chroma version
* chroma 0.5.7
* fix selecting camera group in cameras filter button
* Prevent keyboard shortcuts from running when input is focused
* fix reset button and update time pickers when using input
* simplify css
* consistent button order and spacing
* Add ability to filter by time range
* Cleanup
* Handle input with tags
* fix input for time_range filter
* fix before and after filters
* clean up
* Ensure the default value works as expected
* Handle time range in am/pm based on browser
* Fix arrow
* Fix text
* Handle midnight case
* fix width
* Fix bg
* Fix bg
* Fix mobile spacing
* y spacing
* remove left padding
---------
Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
* Add ability to restrict genai to labels and zones at the camera level
* fix comment
* clarify docs
* use objects instead of labels
* docs
* object list
* POC: Added FastAPI with one endpoint (get /logs/service)
* POC: Revert error_log
* POC: Converted preview related endpoints to FastAPI
* POC: Converted two more endpoints to FastAPI
* POC: lint
* Convert all media endpoints to FastAPI. Added /media prefix (/media/camera && media/events && /media/preview)
* Convert all notifications API endpoints to FastAPI
* Convert first review API endpoints to FastAPI
* Convert remaining review API endpoints to FastAPI
* Convert export endpoints to FastAPI
* Fix path parameters
* Convert events endpoints to FastAPI
* Use body for multiple events endpoints
* Use body for multiple events endpoints (create and end event)
* Convert app endpoints to FastAPI
* Convert app endpoints to FastAPI
* Convert auth endpoints to FastAPI
* Removed flask app in favour of FastAPI app. Implemented FastAPI middleware to check CSRF, connect and disconnect from DB. Added middleware for x-forwarded-for headers
* Added starlette plugin to expose custom headers
* Use slowapi as the limiter
* Use query parameters for the frame latest endpoint
* Use query parameters for the media snapshot.jpg endpoint
* Use query parameters for the media MJPEG feed endpoint
* Revert initial nginx.conf change
* Added missing event_id for /events/search endpoint
* Removed left over comment
* Use FastAPI TestClient
* severity query parameter should be a string
* Use the same pattern for all tests
* Fix endpoint
* Revert media routers to old names. Order routes to make sure the dynamic ones from media.py are only used whenever there's no match on auth/etc
* Reverted paths for media on tsx files
* Deleted file
* Fix test_http to use TestClient
* Formatting
* Bind timeline to DB
* Fix http tests
* Replace filename with pathvalidate
* Fix latest.ext handling and disable uvicorn access logs
* Add constraints to api provided values
* Formatting
* Remove unused
* Remove unused
* Get rate limiter working
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Moved FrigateApp.init_config() into FrigateConfig.load()
* Move frigate config loading into main
* Store PlusApi in FrigateConfig
* Register SIGTERM handler in main
* Ensure logging is setup during config parsing
* Removed pointless try
* Moved config initialization out of FrigateApp
* Made FrigateApp.shm_frame_count into a function
* Removed log calls from signal handlers
python's logging calls are not re-entrant, which caused at least one of
these to deadlock randomly.
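The safe pattern, sketched (illustrative, not the exact frigate handler):

```python
import signal
import threading

stop_event = threading.Event()

def on_sigterm(signum, frame) -> None:
    # No logging here: logging's locks are not re-entrant, so calling a
    # logger from a signal handler can deadlock. Just set a flag.
    stop_event.set()

signal.signal(signal.SIGTERM, on_sigterm)
```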
* Reopen stdout/err on process fork
This helps avoid deadlocks (https://github.com/python/cpython/issues/91776).
* Make mypy happy
* Whoops. I might have forgotten to save.
Truly an amateur mistake.
* Always call FrigateApp.stop()
* Ignore entire __pycache__ folder instead of individual *.pyc files
* Ignore .mypy_cache in git
* Rework config YAML parsing to use only ruamel.yaml
PyYAML silently overrides keys when encountering duplicates, but ruamel
raises an exception by default. Since we're already using it elsewhere,
dropping PyYAML is an easy choice to make.
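The difference in behavior, assuming ruamel.yaml's default duplicate-key handling:

```python
from ruamel.yaml import YAML

doc = """
cameras:
  front: {}
cameras:
  back: {}
"""

# PyYAML's safe_load() would silently keep only the second `cameras` block;
# ruamel surfaces the mistake instead.
YAML(typ="safe").load(doc)  # raises DuplicateKeyError
```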
* Added EnvString in config to slim down runtime_config()
* Added gitlens to devcontainer
* Automatically call FrigateConfig.runtime_config()
runtime_config needed to be called manually before. Now, it's been
removed, but the same code is run by a pydantic validator.
* Fix handling of missing -segment_time
* Removed type annotation on FrigateConfig's parse
I'd like to keep them, but then mypy complains about some fundamental
errors with how the pydantic model is structured. I'd like to fix it,
but I'd rather work towards moving some of this config to the database.
* add event_id param to api
* exclude query from filtertype
* update review pane link for similarity search
* update filter group for similarity param and fix switch bug
* unneeded prop
* update query and input for similarity search param
* use undefined instead of empty string for query with similarity search
* Implement ROCm detectors
* Cleanup tensor input
* Fixup image creation
* Add support for yolonas in onnx
* Get build working with onnx
* Update docs and simplify config
* Remove unused imports
* create input with tags component
* tweaks
* only show filters pane when there are actual filters
* special case for similarity searches
* similarity search tweaks
* populate suggestions values
* scrollbar on outer div
* clean up
* separate custom hook
* use command component
* tooltips
* regex tweaks
* saved searches with confirmation dialogs
* better date handling
* fix filters
* filter capitalization
* filter instructions
* replace underscore in filter type
* alert dialog button color
* toaster on success
* Ignore entire __pycache__ folder instead of individual *.pyc files
* Rewrite the yaml loader to match PyYAML
The old implementation would fail in weird ways with configs that were
incorrect in just the right way. The new implementation just does what
PyYAML would do, only diverging in case of duplicate keys.
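A sketch of that divergence point (illustrative; the loader in this commit may be structured differently):

```python
from ruamel.yaml import YAML
from ruamel.yaml.constructor import SafeConstructor

class NoDuplicatesConstructor(SafeConstructor):
    def construct_mapping(self, node, deep=False):
        # Build mappings the way PyYAML would, but refuse duplicates
        # instead of letting a later key override an earlier one.
        mapping = {}
        for key_node, value_node in node.value:
            key = self.construct_object(key_node, deep=True)
            if key in mapping:
                raise ValueError(f"duplicate yaml key: {key}")
            mapping[key] = self.construct_object(value_node, deep=deep)
        return mapping

yaml = YAML(typ="safe")
yaml.Constructor = NoDuplicatesConstructor
```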
* Clarify duplicate yaml key ValueError message
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Set caching options for hardware providers
* Always use CPU for searching
* Use new install strategy to remove onnxruntime and then install post wheels
* Make logging code self-contained.
Rewrite logging code to use python's built-in QueueListener, effectively
moving the logging process into a thread of the Frigate app.
Also, wrap this behaviour in an easy-to-use context manager to encourage
some consistency.
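The shape of that context manager, sketched with illustrative names:

```python
import contextlib
import logging
import logging.handlers
import multiprocessing as mp

@contextlib.contextmanager
def log_listener():
    # Handle all records on a QueueListener thread inside the app process;
    # children send records through a QueueHandler pointed at this queue.
    queue = mp.Queue()
    listener = logging.handlers.QueueListener(
        queue, logging.StreamHandler(), respect_handler_level=True
    )
    listener.start()
    try:
        yield queue
    finally:
        listener.stop()
```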
* Fixed typing errors
* Remove todo note from log filter
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Do not access log record's msg directly
* Clear all root handlers before starting app
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Format makefiles
* Handle all errors in rocm makefile
* Remove CURRENT_UID and GID from makefile as they are unused
* Removed unused vite.svg asset
* Sort frigate-dictionary
* Add support for yolonas in onnx
* Add correct deps
* Set ld library path
* Refactor cudnn to only be used in amd64
* Add onnx to docs and add explainer at the top
* Undo change
* Update comment
* Remove unnecessary
* Remove line change
* Install multiple ffmpeg versions and add config to make it configurable
* Update docs
* Run ffprobe too
* Cleanup
* Apply config to go2rtc as well
* Fix ffmpeg bin
* Docs
* Restore path
* Cleanup env var
* Fix ffmpeg path for encoding
* Fix export
* Formatting
* mobile page component
* object lifecycle pane tweaks
* use mobile page component for review and search detail
* fix frigate+ dialog when using mobile page component
* small tweaks
* Improve image loading by not loading when off screen
* Add share menu to export
* Add share button and tidy up review detail lists
* Fix missing key
* Use query args for review filter
* Add object lifecycle to explore dialog
* Adjust sizing
* Simplify share button
* Always show snapshot but hide buttons for frigate+ if not applicable
* Handle case when user switches to element missing the previously selected tab
* Handle cases where share is not available
* Fix logic
* Always enable search page
* Always show events when searching
* No default search background
* Center and show all filters when semantic search is not enabled
* Limit number of default items shown
* Adjust search options
* Add support for sub label filtering
* Separate out filters and clean up detail pane
* Tablet cleanup
* Fix current hour search preview
* Handle single lists
* Cleanup api search
* Object lifecycle pane
* fix thumbnails and annotation offset math
* snapshot endpoint height and format, yaml types, bugfixes
* clean up for new type
* use get_image_from_recording in recordings snapshot api
* make height optional
* Initial implementation of active object counters. Need to clean up a bit more and examine reuse of stationary/active logic in neighboring modules.
* A bit more cleanup for references to active, referencing the tracked object method rather than duplicating logic.
* Minor formatting and readability cleanup
* Update docs with the new active mqtt metric definition.
* Move the check for a change in active status into the code block protected by a false positive check.
* - Add 'active' to the tracked object dictionary, use the previous object for active comparison.
- I also missed emitting updates when a tracked object is no longer tracked, and added handling for emitting zeros on object types.
* Refactor recordings config to be based off of review items
* Update object processing logic for when an event is created
* Migrate to deciding recording retention based on review items
* Refactor recording expiration to be based off of review items
* Remove remainder of recording events access
* Handle migration automatically
* Update version and cleanup
* Update docs
* Clarify docs
* Cleanup
* Target camera config
* Safely access all fields
* Handle case where camera is offline when generating previews
* Don't rely on slow system
* Simplify checks to rely on other cameras
* Formatting
* Cleanup
* Initial support for Hailo-8L
Added file for Hailo-8L detector including dockerfile, h8l.mk, h8l.hcl, hailo8l.py, ci.yml and ssd_mobilenet_v1.hef as the inference network.
Added files to help with the installation of Hailo-8L dependencies like generate_wheel_conf.py, requirements-wheel-h8l.txt and modified setup.py to try and work with any hardware.
Updated docs to reflect initial Hailo-8L support including object_detectors.md, hardware.md and installation.md.
* Update .github/workflows/ci.yml
typo h8l not arm64
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Update docs/docs/configuration/object_detectors.md
Clarity for the end user and correct uses of words
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* Update docs/docs/frigate/installation.md
typo
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* update Installation.md to clarify Hailo-8L installation process.
* Update docs/docs/frigate/hardware.md
Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
* Update hardware.md add Inference time.
* Oops no new line at the end of the file.
* Update docs/docs/frigate/hardware.md typo
Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
* Update dockerfile to download the ssd_mobilenet_v1 model instead of having it in the repo.
* Updated dockerfile so it does not download the model file.
add function to download it at runtime.
update model path.
* fix formatting according to ruff and removed unnecessary functions.
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
* Add basic search page
* Abstract filters to separate components
* Make searching functional
* Add loading and no results indicators
* Implement searching
* Combine account and settings menus on mobile
* Support using thumbnail for in progress detections
* Fetch previews
* Move recordings view and open recordings when search is selected
* Implement detail pane
* Implement saving of description
* Implement similarity search
* Fix clicking
* Add date range picker
* Fix
* Fix iOS zoom bug
* Mobile fixes
* Use text area
* Fix spacing for drawer
* Fix fetching previews incorrectly
* Initial re-implementation of semantic search
* put docker-compose back and make reindex match docs
* remove debug code and fix import
* fix docs
* manually build pysqlite3 as binaries are only available for x86-64
* update comment in build_pysqlite3.sh
* only embed objects
* better error handling when genai fails
* ask ollama to pull requested model at startup
* update ollama docs
* address some PR review comments
* fix lint
* use IPC to write description, update docs for reindex
* remove gemini-pro-vision from docs as it will be unavailable soon
* fix OpenAI doc available models
* fix api error in gemini and metadata for embeddings
* Revamp support discussion templates
* move text to description
* remove duplicate logs box
* ffprobe on camera support
* longer description on config support
* Jump to live when exceeding buffer time threshold in MSE player
* clean up
* Try adjusting playback rate instead of jumping to live
* clean up
* fallback to webrtc if enabled before jsmpeg
* baseline
* clean up
* remove comments
* adaptive playback rate and intelligent switching improvements
* increase logging and reset live mode after camera is no longer active on dashboard only
* jump to live on safari/iOS
* clean up
* clean up
* refactor camera live mode hook
* remove key listener
* resolve conflicts
* If recordings don't exist mark as no recordings
* Fix reloading recordings failing
* Fix mark items not clearing selected
* Cleanup
* Default to last full hour when error occurs
* Remove check
* Cleanup
* Handle empty recordings list case
* Ensure that the start time is within the time range
* Catch other reset cases
Ensure axios.defaults.baseURL is set when accessing login form.
Drop `/api` prefix in login form's `axios.post` call, since `/api` is
part of the baseURL.
Redirect to subpath on successful authentication.
Prepend subpath to default logout url.
Fixes #12814
* Update live view docs with camera firmware settings recommendations
* video/audio
* capitalization
* Video only cams
* clarify higher iframes
* update wording
* fix wording
* Add note on camera specific page
* change note
* Display message in desktop events list when no events exist
* Add message for when no events are found on plus view
* validating check
* activity indicator check
* clarify error message
* Improve export handling when errors occur
* Fix mobile zooming
* Handle recordings buffering
* Cleanup
* Url encode export name
* Start with actual name in input
* Fix buffering
* Camera settings view for alerts/detections
* fixes, beautifying, zone renaming, clean up
* replace underscores with spaces in zone names
* replace underscores with spaces in labels
* adds adjust_time which allows users to fix an issue with onvif authentication where time is not synchronized
* updated adjust_time to ignore_time_mismatch to make it clearer what this option does
* adds ignore_time_mismatch to the reference.md and adds a line about the security risk this can introduce as well as the recommendation to set up NTP for both ends.
* fix format error
* happy now linter?
* white space
* Update docs/docs/configuration/reference.md
Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
---------
Co-authored-by: Stephen Butler <stephen.butler@ni.com>
Co-authored-by: Josh Hawkins <32435876+hawkeye217@users.noreply.github.com>
* Only keep 2x detect fps frames in SHM
* Don't delete previous shm frames in output
* Catch case where images do not exist
* Ensure files are closed
* Clear out all frames when shutting down
* Correct the number of frames saved
* Simplify empty shm error handling
* Improve frame safety
* Use full resolution aspect for main camera style in history view
* Only check for offline cameras after 60s of uptime
* only call onPlaying when loadeddata is fired or after timeout
* revert to inline funcs
* Portal frigate plus alert dialog
* remove duplicated logic
* increase onplaying timeout
* Use a ref instead of a state and clear timeout in AutoUpdatingCameraImage
* default to the selected month for selectedDay
* Use buffered time instead of timeout
* Use default cursor when not editing polygons
* Revert "Use latest 5.1 ffmpeg update (#12243)"
This reverts commit 93e08688be.
* Revert "Change qsv device arg to standard hwaccel arg (#12249)"
This reverts commit 56b4a551dc.
* Use different repo for build
* Manually set current time when selecting event
* Make it clear which camera has no preview
* Make it clear which camera has no preview
* Format camera name
* Show number of items instead of dot
* Don't call error when connection has been closed on purpose
* Use motion icon for motion
* Show text on tablets as well
* Update segment even when number of active objects is the same
* add score to frigate+ chip
* Add support for selecting zones
* Add api support for filtering on zones
* Adjust UI
* Update filtering logic
* Clean up
* Only set stalled error when player is visible
* Show activity indicator before live player starts playing
* remove comment
* keep gradients when still image is showing
* fix chips
* red dot and outline
* intentionally handle queues during shutdown and carefully manage shutdown order
* more carefully manage shutdown to avoid deadlocks
* use debug for signal logging
* ensure disabled cameras dont break shutdown
* typo
* Allow deleting failed in progress exports
* Fix comparison and preview retrieval
* Fix stretching of event cards
* Reset edit state when group changes
* Allow specifying group
* Update authentication.md to note port 8080 vs 5000
* Update docs/docs/configuration/authentication.md
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
---------
Co-authored-by: Nicolas Mowen <nickmowen213@gmail.com>
* implement self signed cert and monitor/reload
* move go2rtc upstream to separate file
* add directory for ACME challenges
* make certsync more resilient
* add TLS docs
* add jwt secret info to docs
When serving Frigate at a subpath, the paths that show in the URL bar
and that wind up in your browser history are anchored at the web root.
I.e. you go to `https://example.com/frigate/`, it changes to
`https://example.com/`, and clicking around works as expected, but the
`frigate/` prefix is gone.
It's confusing if you don't know that the URLs are entirely virtual.
Also, your browser history is useless, since the URLs point to e.g.
`https://example.com/#kitchen`, but visiting that URL will not hit
`/frigate/` at all.
Most of the work is already done. Nginx injects javascript to set
`window.baseURL` based on the X-Ingress-Path header. This change passes
that to BrowserRouter, so that it'll be part of the URLs it shows.
Fixes #4526
* Restrict nginx to 4 processes if more are available
* Fix bash
* Different sed structure
* Limit ffmpeg thread counts for secondary ffmpeg processes
* Add up / down keyboard shortcut
* Update config version to be stored inside of the config
* Don't remove items from list when navigating back
* Use video api instead of webps for live current hour filmstrip
* Check that the config file is writable
* Show camera name when camera is offline
* Show camera name when offline
* Cleanup
Revamped GitHub discussion and issue forms (camera support, config support, detector support, general support, hardware acceleration support, question, and bug report). Across the forms:
* Version fields now request the full version including the build identifier (eg. 0.14.0-ea36ds1).
* Log collection is split into separate "Relevant Frigate log output" and "Relevant go2rtc log output" fields, with a request to include logs before and after the exact error.
* The "Coral version" dropdown (USB, PCIe, M.2, Dev Board) is replaced with an "Object Detector" dropdown (Coral, OpenVino, TensorRT, RKNN, Other, CPU (no coral)), and the standalone "Operating system" dropdown is removed from several forms.
* Install method options gain Proxmox via Docker, Proxmox via TTeck Script, and Windows WSL2.
* `ffprobe <camera_url>` should be run from within the Frigate container if possible.
* Screenshots of the Frigate UI's System metrics pages are requested.
* The bug report form adds a verification checklist (updated to the latest Frigate version, cleared browser cache, tried a different browser, reproduced in incognito mode).
Event and recording information is managed in a sqlite database at `/config/frigate.db`. If that database is deleted, recordings will be orphaned and will need to be cleaned up manually. They also won't show up in the Media Browser within Home Assistant.
Tracked object and recording information is managed in a sqlite database at `/config/frigate.db`. If that database is deleted, recordings will be orphaned and will need to be cleaned up manually. They also won't show up in the Media Browser within Home Assistant.
If you are storing your database on a network share (SMB, NFS, etc), you may get a `database is locked` error message on startup. You can customize the location of the database in the config if necessary.
@@ -80,6 +80,14 @@ model:
input_pixel_format:"bgr"
```
#### `labelmap`
:::warning
If the labelmap is customized then the labels used for alerts will need to be adjusted as well. See [alert labels](../configuration/review.md#restricting-alerts-to-specific-labels) for more info.
:::
The labelmap can be customized to your needs. A common reason to do this is to combine multiple object types that are easily confused when you don't need to be as granular such as car/truck. By default, truck is renamed to car because they are often confused. You cannot add new object types, but you can change the names of existing objects in the model.
```yaml
@@ -106,21 +114,67 @@ Some labels have special handling and modifications can disable functionality.
:::
## Custom ffmpeg build
## Network Configuration
Included with Frigate is a build of ffmpeg that works for the vast majority of users. However, there exists some hardware setups which have incompatibilities with the included build. In this case, a docker volume mapping can be used to overwrite the included ffmpeg build with an ffmpeg build that works for your specific hardware setup.
Changes to Frigate's internal network configuration can be made by bind mounting nginx.conf into the container. For example:
IPv6 is disabled by default, to enable IPv6 listen.gotmpl needs to be bind mounted with IPv6 enabled. For example:
```
{{ if not .enabled }}
# intended for external traffic, protected by auth
listen 8971;
{{ else }}
# intended for external traffic, protected by auth
listen 8971 ssl;
# intended for internal traffic, not protected by auth
listen 5000;
```
becomes
```
{{ if not .enabled }}
# intended for external traffic, protected by auth
listen [::]:8971 ipv6only=off;
{{ else }}
# intended for external traffic, protected by auth
listen [::]:8971 ipv6only=off ssl;
# intended for internal traffic, not protected by auth
listen [::]:5000 ipv6only=off;
```
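As with `nginx.conf`, the modified template can be bind mounted over the bundled one; the container path here is an assumption, so verify it against your image:
```yaml
services:
  frigate:
    ...
    volumes:
      # container path is an assumption; adjust if your image differs
      - /path/to/your/listen.gotmpl:/usr/local/nginx/conf/listen.gotmpl
```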
## Custom Dependencies
### Custom ffmpeg build
Included with Frigate is a build of ffmpeg that works for the vast majority of users. However, there exists some hardware setups which have incompatibilities with the included build. In this case, statically built ffmpeg binary can be downloaded to /config and used.
To do this:
1. Download your ffmpeg build and uncompress it to the Frigate config folder.
2. Set the `ffmpeg -> path` option in your config to the folder containing the build's `/bin` directory.
3. Restart Frigate and the custom version will be used.
NOTE: The folder that is set for the config needs to be the folder that contains `/bin`. So if the full structure is `/config/custom-ffmpeg/bin/ffmpeg` then the `ffmpeg -> path` field should be `/config/custom-ffmpeg`.
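A minimal sketch of the corresponding config entry for the example structure above:
```yaml
ffmpeg:
  path: /config/custom-ffmpeg
```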
### Custom go2rtc version
Frigate currently includes go2rtc v1.9.4; there may be certain cases where you want to run a different version of go2rtc.
To do this:
@@ -129,7 +183,7 @@ To do this:
3. Give `go2rtc` execute permission.
4. Restart Frigate and the custom version will be used, you can verify by checking go2rtc logs.
## Validating your config.yml file updates
When Frigate starts up, it checks whether your config file is valid, and if it is not, the process exits. To minimize interruptions when updating your config, you have three options: edit the config via the WebUI, which has built-in validation; use the config API; or validate on the command line using the Frigate docker container.
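As a sketch of the command line option, assuming the stable image and a config at the default location (the exact flag and entrypoint may differ between releases; check the container's `--help`):
```bash
docker run --rm \
  -v /path/to/your/config.yml:/config/config.yml \
  --entrypoint python3 \
  ghcr.io/blakeblackshear/frigate:stable \
  -u -m frigate --validate-config  # flag name is an assumption; verify for your release
```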
| Mode | Description |
| -------- | ----------- |
| `native` | (default) Use this mode if you don't implement authentication with a proxy in front of Frigate. |
| `proxy` | Use this mode if you have an existing proxy for authentication. Supports passing authenticated user downstream to Frigate for role-based authorization (future implementation). |
### Native mode
Frigate stores user information in its database. Password hashes are generated using industry standard PBKDF2-SHA256 with 600,000 iterations. Upon successful login, a JWT token is issued with an expiration date and set as a cookie. The cookie is refreshed as needed automatically. This JWT token can also be passed in the Authorization header as a bearer token.
Users are managed in the UI under Settings > Users.
The following ports are available to access the Frigate web UI.
| Port | Description |
| ------ | ----------- |
| `8971` | Authenticated UI and API. Reverse proxies should use this port. |
| `5000` | Internal unauthenticated UI and API access. Access to this port should be limited. Intended to be used within the docker network for services that integrate with Frigate and do not support authentication. |
## Onboarding
On startup, an admin user and password are generated and printed in the logs. It is recommended to set a new password for the admin account after logging in for the first time under Settings > Users.
## Resetting admin password
In the event that you are locked out of your instance, you can tell Frigate to reset the admin password and print it in the logs on next startup using the `reset_admin_password` setting in your config file.
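A minimal sketch of that setting:
```yaml
auth:
  reset_admin_password: True
```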
## Login failure rate limiting
In order to limit the risk of brute force attacks, rate limiting is available for login failures. This is implemented with SlowApi, and the string notation for valid values is available in [the documentation](https://limits.readthedocs.io/en/stable/quickstart.html#examples).
For example, `1/second;5/minute;20/hour` will rate limit the login endpoint when failures occur more than:
- 1 time per second
- 5 times per minute
- 20 times per hour
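A sketch of where this value lives, assuming the `failed_login_rate_limit` option under `auth`:
```yaml
auth:
  failed_login_rate_limit: "1/second;5/minute;20/hour"
```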
@@ -46,22 +42,60 @@ If you are running a reverse proxy in the same docker compose file as Frigate, h
- 172.18.0.0/16 # <---- this is the subnet for the internal docker compose network
```
## JWT Token Secret
The JWT token secret needs to be kept secure. Anyone with this secret can generate valid JWT tokens to authenticate with Frigate. This should be a cryptographically random string of at least 64 characters.
You can generate a token using the Python secrets library with the following command:
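```bash
python3 -c 'import secrets; print(secrets.token_hex(64))'
```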
Frigate looks for a JWT token secret in the following order:
1. An environment variable named `FRIGATE_JWT_SECRET`
2. A docker secret named `FRIGATE_JWT_SECRET` in `/run/secrets/`
3. A `jwt_secret` option from the Home Assistant Addon options
4. A `.jwt_secret` file in the config directory
If no secret is found on startup, Frigate generates one and stores it in a `.jwt_secret` file in the config directory.
Changing the secret will invalidate current tokens.
## Proxy configuration
Frigate can be configured to leverage features of common upstream authentication proxies such as Authelia, Authentik, oauth2_proxy, or traefik-forward-auth.
If you are leveraging the authentication of an upstream proxy, you likely want to disable Frigate's authentication. Optionally, if communication between the reverse proxy and Frigate is over an untrusted network, you should set an `auth_secret` in the `proxy` config and configure the proxy to send the secret value as a header named `X-Proxy-Secret`. Assuming this is an untrusted network, you will also want to [configure a real TLS certificate](tls.md) to ensure the traffic can't simply be sniffed to steal the secret.
Here is an example of how to disable Frigate's authentication and also ensure the requests come only from your known proxy.
```yaml
auth:
  enabled: False
proxy:
  auth_secret: <some random long string>
```
You can use the following code to generate a random secret.
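```bash
python3 -c 'import secrets; print(secrets.token_hex(64))'
```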
If you have disabled Frigate's authentication and your proxy supports passing a header with the authenticated username, you can use the `header_map` config to specify the header name so it is passed to Frigate. For example, the following will map the `X-Forwarded-User` value. Header names are not case sensitive.
```yaml
proxy:
  ...
  header_map:
    user: x-forwarded-user
```
@@ -89,10 +123,10 @@ If you would like to add more options, you can overwrite the default file with a
Future versions of Frigate may leverage group and role headers for authorization in Frigate as well.
### Login page redirection
Frigate gracefully performs login page redirection that should work with most authentication proxies. If your reverse proxy returns a `Location` header on `401`, `302`, or `307` unauthorized responses, Frigate's frontend will automatically detect it and redirect to that URL.
### Custom logout url
If your reverse proxy has a dedicated logout url, you can specify it using the `logout_url` config option. This will update the `Logout` link in the UI.
@@ -9,6 +9,12 @@ This page makes use of presets of FFmpeg args. For more information on presets,
:::
:::note
Many cameras support encoding options which greatly affect the live view experience, see the [Live view](/configuration/live) page for more info.
:::
## MJPEG Cameras
Note that MJPEG cameras require encoding the video into H.264 for the record and restream roles. This will use significantly more CPU than if the cameras supported H.264 feeds directly. It is recommended to use the restream role to create an H.264 restream and then use that as the source for ffmpeg.
@@ -187,4 +193,4 @@ ffmpeg:
### TP-Link VIGI Cameras
TP-Link VIGI cameras need some adjustments to the main stream settings on the camera itself to avoid issues. The stream needs to be configured as `H264` with `Smart Coding` set to `off`. Without these settings you may have problems when trying to watch recorded footage. For example Firefox will stop playback after a few seconds and show the following error message: `The media playback was aborted due to a corruption problem or because the media used features your browser did not support.`.
Several inputs can be configured for each camera and the role of each input can be mixed and matched based on your needs. This allows you to use a lower resolution stream for object detection, but create recordings from a higher resolution stream, or vice versa.
A camera is enabled by default but can be temporarily disabled by using `enabled: False`. Existing tracked objects and recordings can still be accessed. Live streams, recording, and detecting will not work. Camera specific configurations will be used.
Each role can only be assigned to one input per camera. The options for roles are as follows:
@@ -46,6 +46,14 @@ cameras:
side: ...
```
:::note
If you only define one stream in your `inputs` and do not assign a `detect` role to it, Frigate will automatically assign it the `detect` role. Frigate will always decode a stream to support motion detection, Birdseye, the API image endpoints, and other features, even if you have disabled object detection with `enabled: False` in your config's `detect` section.
If you plan to use Frigate for recording only, it is still recommended to define a `detect` role for a low resolution stream to minimize resource usage from the required stream decoding.
:::
For camera model specific settings, check the [camera specific](camera_specific.md) docs.
## Setting up camera PTZ controls
@@ -71,33 +79,41 @@ cameras:
If the ONVIF connection is successful, PTZ controls will be available in the camera's WebUI.
:::tip
If your ONVIF camera does not require authentication credentials, you may still need to specify an empty string for `user` and `password`, eg: `user: ""` and `password: ""`.
:::
An ONVIF-capable camera that supports relative movement within the field of view (FOV) can also be configured to automatically track moving objects and keep them in the center of the frame. For autotracking setup, see the [autotracking](autotracking.md) docs.
## ONVIF PTZ camera recommendations
This list of working and non-working PTZ cameras is based on user feedback.
| Brand or specific camera | PTZ Controls | Autotracking | Notes |
| ------------------------ | :----------: | :----------: | ----- |
| Amcrest | ✅ | ✅ | ⛔️ Generally, Amcrest should work, but some older models (like the common IP2M-841) don't support autotracking |
| Amcrest ASH21 | ❌ | ❌ | No ONVIF support |
| Amcrest IP4M-S2112EW-AI | ✅ | ❌ | FOV relative movement not supported. |
| Amcrest IP5M-1190EW | ✅ | ❌ | ONVIF Port: 80. FOV relative movement not supported. |
| Ctronics PTZ | ✅ | ❌ | |
| Dahua | ✅ | ✅ | |
| Dahua DH-SD2A500HB | ✅ | ❌ | |
| Foscam R5 | ✅ | ❌ | |
| Hanwha XNP-6550RH | ✅ | ❌ | |
| Hikvision | ✅ | ❌ | Incomplete ONVIF support (MoveStatus won't update even on latest firmware) - reported with HWP-N4215IH-DE and DS-2DE3304W-DE, but likely others |
| Hikvision DS-2DE3A404IWG-E/W | ✅ | ✅ | |
| Reolink 511WA | ✅ | ❌ | Zoom only |
| Reolink E1 Pro | ✅ | ❌ | |
| Reolink E1 Zoom | ✅ | ❌ | |
| Reolink RLC-823A 16x | ✅ | ❌ | |
| Speco O8P32X | ✅ | ❌ | |
| Sunba 405-D20X | ✅ | ❌ | Incomplete ONVIF support reported on the original and 4K models. All models are suspected incompatible. |
| Tapo | ✅ | ❌ | Many models supported, ONVIF Service Port: 2020 |
| Uniview IPC672LR-AX4DUPK | ✅ | ❌ | Firmware says FOV relative movement is supported, but camera doesn't actually move when sending ONVIF commands |
| Uniview IPC6612SR-X33-VG | ✅ | ✅ | Leave `calibrate_on_startup` as `False`. A user has reported that zooming with `absolute` is working. |
Generative AI can be used to automatically generate descriptive text based on the thumbnails of your tracked objects. This helps with [Semantic Search](/configuration/semantic_search) in Frigate to provide more context about your tracked objects. Descriptions are accessed via the _Explore_ view in the Frigate UI by clicking on a tracked object's thumbnail.
:::info
Semantic Search must be enabled to use Generative AI.
:::
## Configuration
Generative AI can be enabled for all cameras or only for specific cameras. There are currently 4 providers available to integrate with Frigate.
If the provider you choose requires an API key, you may either directly paste it in your configuration, or store it in an environment variable prefixed with `FRIGATE_`.
```yaml
genai:
  enabled: True
  provider: gemini
  api_key: "{FRIGATE_GEMINI_API_KEY}"
  model: gemini-1.5-flash

cameras:
  front_camera: ...
  indoor_camera:
    genai: # <- disable GenAI for your indoor camera
      enabled: False
```
## Ollama
:::warning
Using Ollama on CPU is not recommended, high inference times make using Generative AI impractical.
:::
[Ollama](https://ollama.com/) allows you to self-host large language models and keep everything running locally. It provides a nice API over [llama.cpp](https://github.com/ggerganov/llama.cpp). It is highly recommended to host this server on a machine with an Nvidia graphics card, or on an Apple silicon Mac for best performance.
Most of the 7b parameter 4-bit vision models will fit inside 8GB of VRAM. There is also a [Docker container](https://hub.docker.com/r/ollama/ollama) available.
Parallel requests also come with some caveats. You will need to set `OLLAMA_NUM_PARALLEL=1` and choose `OLLAMA_MAX_QUEUE` and `OLLAMA_MAX_LOADED_MODELS` values that are appropriate for your hardware and preferences. See the [Ollama documentation](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-does-ollama-handle-concurrent-requests).
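A sketch of setting these for an Ollama container in docker compose; the queue and model-count values shown are placeholders to tune for your hardware:
```yaml
services:
  ollama:
    image: ollama/ollama
    environment:
      - OLLAMA_NUM_PARALLEL=1
      - OLLAMA_MAX_QUEUE=64 # placeholder; tune for your hardware
      - OLLAMA_MAX_LOADED_MODELS=1 # placeholder; tune for your hardware
```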
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their model library](https://ollama.com/library). At the time of writing, this includes `llava`, `llava-llama3`, `llava-phi3`, and `moondream`. Note that Frigate will not automatically download the model you specify in your config; you must download the model to your local instance of Ollama first, e.g. by running `ollama pull llava:7b` on your Ollama server/Docker container. Note that the model specified in Frigate's config must match the downloaded model tag.
:::note
You should have at least 8 GB of RAM available (or VRAM if running on GPU) to run the 7B models, 16 GB to run the 13B models, and 32 GB to run the 33B models.
:::
### Configuration
```yaml
genai:
  enabled: True
  provider: ollama
  base_url: http://localhost:11434
  model: llava:7b
```
## Google Gemini
Google Gemini has a free tier allowing [15 queries per minute](https://ai.google.dev/pricing) to the API, which is more than sufficient for standard Frigate usage.
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://ai.google.dev/gemini-api/docs/models/gemini). At the time of writing, this includes `gemini-1.5-pro` and `gemini-1.5-flash`.
### Get API Key
To start using Gemini, you must first get an API key from [Google AI Studio](https://aistudio.google.com).
1. Accept the Terms of Service
2. Click "Get API Key" from the right hand navigation
3. Click "Create API key in new project"
4. Copy the API key for use in your config
### Configuration
```yaml
genai:
  enabled: True
  provider: gemini
  api_key: "{FRIGATE_GEMINI_API_KEY}"
  model: gemini-1.5-flash
```
## OpenAI
OpenAI does not have a free tier for their API. With the release of gpt-4o, pricing has been reduced and each generation should cost fractions of a cent if you choose to go this route.
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://platform.openai.com/docs/models). At the time of writing, this includes `gpt-4o` and `gpt-4-turbo`.
### Get API Key
To start using OpenAI, you must first [create an API key](https://platform.openai.com/api-keys) and [configure billing](https://platform.openai.com/settings/organization/billing/overview).
### Configuration
```yaml
genai:
  enabled: True
  provider: openai
  api_key: "{FRIGATE_OPENAI_API_KEY}"
  model: gpt-4o
```
## Azure OpenAI
Microsoft offers several vision models through Azure OpenAI. A subscription is required.
### Supported Models
You must use a vision capable model with Frigate. Current model variants can be found [in their documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models). At the time of writing, this includes `gpt-4o` and `gpt-4-turbo`.
### Create Resource and Get API Key
To start using Azure OpenAI, you must first [create a resource](https://learn.microsoft.com/azure/cognitive-services/openai/how-to/create-resource?pivots=web-portal#create-a-resource). You'll need your API key and resource URL, which must include the `api-version` parameter (see the example below). The model field is not required in your configuration as the model is part of the deployment name you chose when deploying the resource.
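### Configuration
A sketch following the same pattern as the other providers, assuming the `azure_openai` provider name; replace the deployment URL and `api-version` with your own resource values:
```yaml
genai:
  enabled: True
  provider: azure_openai
  # example deployment URL; substitute your own resource and api-version
  base_url: https://example-endpoint.openai.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2023-03-15-preview
  api_key: "{FRIGATE_OPENAI_API_KEY}"
```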
Frigate's thumbnail search excels at identifying specific details about tracked objects – for example, using an "image caption" approach to find a "person wearing a yellow vest," "a white dog running across the lawn," or "a red car on a residential street." To enhance this further, Frigate’s default prompts are designed to ask your AI provider about the intent behind the object's actions, rather than just describing its appearance.
While generating simple descriptions of detected objects is useful, understanding intent provides a deeper layer of insight. Instead of just recognizing "what" is in a scene, Frigate’s default prompts aim to infer "why" it might be there or "what" it could do next. Descriptions tell you what’s happening, but intent gives context. For instance, a person walking toward a door might seem like a visitor, but if they’re moving quickly after hours, you can infer a potential break-in attempt. Detecting a person loitering near a door at night can trigger an alert sooner than simply noting "a person standing by the door," helping you respond based on the situation’s context.
### Using GenAI for notifications
Frigate provides an [MQTT topic](/integrations/mqtt), `frigate/tracked_object_update`, that is updated with a JSON payload containing `event_id` and `description` when your AI provider returns a description for a tracked object. This description could be used directly in notifications, such as sending alerts to your phone or making audio announcements. If additional details from the tracked object are needed, you can query the [HTTP API](/integrations/api/event-events-event-id-get) using the `event_id`, eg: `http://frigate_ip:5000/api/events/<event_id>`.
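As an illustrative sketch, a payload on that topic might look like the following (both values here are hypothetical):
```json
{
  "event_id": "1607123955.475377-mxklsc",
  "description": "A person walks toward the front door carrying a package."
}
```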
## Custom Prompts
Frigate sends multiple frames from the tracked object along with a prompt to your Generative AI provider asking it to generate a description. The default prompt is as follows:
```
Analyze the sequence of images containing the {label}. Focus on the likely intent or behavior of the {label} based on its actions and movement, rather than describing its appearance or the surroundings. Consider what the {label} is doing, why, and what it might do next.
```
:::tip
Prompts can use variable replacements like `{label}`, `{sub_label}`, and `{camera}` to substitute information from the tracked object as part of the prompt.
:::
You are also able to define custom prompts in your configuration.
```yaml
genai:
  enabled: True
  provider: ollama
  base_url: http://localhost:11434
  model: llava
  prompt: "Analyze the {label} in these images from the {camera} security camera. Focus on the actions, behavior, and potential intent of the {label}, rather than just describing its appearance."
  object_prompts:
    person: "Examine the main person in these images. What are they doing and what might their actions suggest about their intent (e.g., approaching a door, leaving an area, standing still)? Do not describe the surroundings or static details."
    car: "Observe the primary vehicle in these images. Focus on its movement, direction, or purpose (e.g., parking, approaching, circling). If it's a delivery vehicle, mention the company."
```
Prompts can also be overridden at the camera level to provide a more detailed prompt to the model about your specific camera, if you desire. By default, descriptions will be generated for all tracked objects and all zones. But you can also optionally specify `objects` and `required_zones` to only generate descriptions for certain tracked objects or zones.
Optionally, you can generate the description using a snapshot (if enabled) by setting `use_snapshot` to `True`. By default, this is set to `False`, which sends the uncompressed images from the `detect` stream collected over the object's lifetime to the model. Once the object lifecycle ends, only a single compressed and cropped thumbnail is saved with the tracked object. Using a snapshot might be useful when you want to _regenerate_ a tracked object's description as it will provide the AI with a higher-quality image (typically downscaled by the AI itself) than the cropped/compressed thumbnail. Using a snapshot otherwise has a trade-off in that only a single image is sent to your provider, which will limit the model's ability to determine object movement or direction.
```yaml
cameras:
  front_door:
    genai:
      use_snapshot: True
      prompt: "Analyze the {label} in these images from the {camera} security camera at the front door. Focus on the actions and potential intent of the {label}."
      object_prompts:
        person: "Examine the person in these images. What are they doing, and how might their actions suggest their purpose (e.g., delivering something, approaching, leaving)? If they are carrying or interacting with a package, include details about its source or destination."
        cat: "Observe the cat in these images. Focus on its movement and intent (e.g., wandering, hunting, interacting with objects). If the cat is near the flower pots or engaging in any specific actions, mention it."
      objects:
        - person
        - cat
      required_zones:
        - steps
```
### Experiment with prompts
Many providers also have a public facing chat interface for their models. Download a couple of different thumbnails or snapshots from Frigate and try new things in the playground to get descriptions to your liking before updating the prompt in Frigate.
- OpenAI - [ChatGPT](https://chatgpt.com)
- Gemini - [Google AI Studio](https://aistudio.google.com)
The default driver is `iHD`. You may need to change the driver to `i965` by adding the following environment variable `LIBVA_DRIVER_NAME=i965` to your docker-compose file or [in the `frigate.yaml` for HA OS users](advanced.md#environment_vars).
See [The Intel Docs](https://www.intel.com/content/www/us/en/support/articles/000005505/processors.html) to figure out what generation your CPU is.
:::
### Via VAAPI
VAAPI supports automatic profile selection so it will work automatically with both H.264 and H.265 streams. VAAPI is recommended for all generations of Intel-based CPUs if QSV does not work.
```yaml
ffmpeg:
  hwaccel_args: preset-vaapi
```
:::note
With some of the processors, like the J4125, the default driver `iHD` doesn't seem to work correctly for hardware acceleration. You may need to change the driver to `i965` by adding the following environment variable `LIBVA_DRIVER_NAME=i965` to your docker-compose file or [in the `frigate.yaml` for HA OS users](advanced.md#environment_vars).
:::
### Via Quicksync (>=10th Generation only)
QSV must be set specifically based on the video encoding of the stream.
#### H.264 streams
@@ -218,28 +231,11 @@ docker run -d \
### Setup Decoder
The decoder you need to pass in the `hwaccel_args` will depend on the input video.
For a list of supported codecs, you can run `ffmpeg -decoders | grep cuvid` in the container to see the ones your card supports.
Using `preset-nvidia`, ffmpeg will automatically select the necessary profile for the incoming video, and will log an error if the profile is not supported by your GPU.
```yaml
ffmpeg:
  hwaccel_args: preset-nvidia
```
If everything is working correctly, you should see a significant improvement in performance.
@@ -362,43 +358,15 @@ that NVDEC/NVDEC1 are in use.
## Rockchip platform
Hardware accelerated video de-/encoding is supported on all Rockchip SoCs using [Nyanmisaka's FFmpeg 6.1 Fork](https://github.com/nyanmisaka/ffmpeg-rockchip) based on [Rockchip's mpp library](https://github.com/rockchip-linux/mpp).
### Prerequisites
Make sure that you use a linux distribution that comes with the rockchip BSP kernel 5.10 or 6.1 and rkvdec2 driver. To check, enter the following commands:
```
$ uname -r
5.10.xxx-rockchip # or 6.1.xxx; the -rockchip suffix is important
$ ls /dev/dri
by-path card0 card1 renderD128 renderD129 # should list renderD128
```
I recommend [Joshua Riek's Ubuntu for Rockchip](https://github.com/Joshua-Riek/ubuntu-rockchip), if your board is supported.
### Setup
Follow Frigate's default installation instructions, but use a docker image with `-rk` suffix for example `ghcr.io/blakeblackshear/frigate:stable-rk`.
Next, you need to grant docker permissions to access your hardware:
- During the configuration process, you should run docker in privileged mode to avoid any errors due to insufficient permissions. To do so, add `privileged: true` to your `docker-compose.yml` file or the `--privileged` flag to your docker run command.
- After everything works, you should only grant necessary permissions to increase security. Add the lines below to your `docker-compose.yml` file or the following options to your docker run command: `--security-opt systempaths=unconfined --security-opt apparmor=unconfined --device /dev/dri:/dev/dri --device /dev/dma_heap:/dev/dma_heap --device /dev/rga:/dev/rga --device /dev/mpp_service:/dev/mpp_service`:
```yaml
security_opt:
  - apparmor=unconfined
  - systempaths=unconfined
devices:
  - /dev/dri:/dev/dri
  - /dev/dma_heap:/dev/dma_heap
  - /dev/rga:/dev/rga
  - /dev/mpp_service:/dev/mpp_service
```
Make sure to follow the [Rockchip specific installation instructions](/frigate/installation#rockchip-platform).
### Configuration
Add one of the following FFmpeg presets to your `config.yml` to enable hardware video processing:
Here are some common starter configuration examples. Refer to the [reference config](./reference.md) for detailed information about all the config values.
@@ -67,7 +72,7 @@ Here are some common starter configuration examples. Refer to the [reference con
- Hardware acceleration for decoding video
- USB Coral detector
- Save all video with any detectable motion for 7 days regardless of whether any objects were detected or not
- Continue to keep all video if it qualified as an alert or detection for 30 days
- Save snapshots for 30 days
- Motion mask for the camera timestamp
@@ -90,10 +95,12 @@ record:
  retain:
    days: 7
    mode: motion
  alerts:
    retain:
      days: 30
  detections:
    retain:
      days: 30
snapshots:
  enabled: True
@@ -123,7 +130,7 @@ cameras:
- VAAPI hardware acceleration for decoding video
- USB Coral detector
- Save all video with any detectable motion for 7 days regardless of whether any objects were detected or not
- Continue to keep all video if it qualified as an alert or detection for 30 days
- Save snapshots for 30 days
- Motion mask for the camera timestamp
@@ -144,10 +151,12 @@ record:
  retain:
    days: 7
    mode: motion
  alerts:
    retain:
      days: 30
  detections:
    retain:
      days: 30
snapshots:
  enabled: True
@@ -177,7 +186,7 @@ cameras:
- VAAPI hardware acceleration for decoding video
- OpenVino detector
- Save all video with any detectable motion for 7 days regardless of whether any objects were detected or not
- Continue to keep all video if it qualified as an alert or detection for 30 days
@@ -7,13 +7,25 @@ Frigate intelligently displays your camera streams on the Live view dashboard. Y
## Live View technologies
Frigate intelligently uses three different streaming technologies to display your camera streams on the dashboard and the single camera view, switching between available modes based on network bandwidth, player errors, or required features like two-way talk. The highest quality and fluency of the Live view requires the bundled `go2rtc` to be configured as shown in the [step by step guide](/guides/configuring_go2rtc).
| Source | Frame Rate | Resolution | Audio | Two-Way Talk | Other Limitations |
| ------ | ---------- | ---------- | ----- | ------------ | ----------------- |
| jsmpeg | same as `detect -> fps`, capped at 10 | 720p | no | no | Resolution is configurable, but go2rtc is recommended if you want higher resolutions and better frame rates. jsmpeg is Frigate's default without go2rtc configured. |
| mse | native | native | yes (depends on audio codec) | yes | iPhone requires iOS 17.1+, Firefox is h.264 only. This is Frigate's default when go2rtc is configured. |
| webrtc | native | native | yes (depends on audio codec) | yes | Requires extra configuration, doesn't support h.265. Frigate attempts to use WebRTC when MSE fails or when using a camera's two-way talk feature. |
### Camera Settings Recommendations
If you are using go2rtc, you should adjust the following settings in your camera's firmware for the best experience with Live view:
- Video codec: **H.264** - provides the most compatible video codec with all Live view technologies and browsers. Avoid any kind of "smart codec" or "+" codec like _H.264+_ or _H.265+_, as these non-standard codecs remove keyframes (see below).
- Audio codec: **AAC** - provides the most compatible audio codec with all Live view technologies and browsers that support audio.
- I-frame interval (sometimes called the keyframe interval, the interframe space, or the GOP length): match your camera's frame rate, or choose "1x" (for interframe space on Reolink cameras). For example, if your stream outputs 20fps, your i-frame interval should be 20 (or 1x on Reolink). Values higher than the frame rate will cause the stream to take longer to begin playback. See [this page](https://gardinal.net/understanding-the-keyframe-interval/) for more on keyframes.
The default video and audio codec on your camera may not always be compatible with your browser, which is why setting them to H.264 and AAC is recommended. See the [go2rtc docs](https://github.com/AlexxIT/go2rtc?tab=readme-ov-file#codecs-madness) for codec support information.
### Audio Support
@@ -30,6 +42,15 @@ go2rtc:
- "ffmpeg:http_cam#audio=opus"# <- copy of the stream which transcodes audio to the missing codec (usually will be opus)
```
If your camera does not have audio and you are having problems with Live view, you should have go2rtc send video only:
```yaml
go2rtc:
  streams:
    no_audio_camera:
      - ffmpeg:rtsp://192.168.1.5:554/live0#video=copy
```
### Setting Stream For Live UI
There may be some cameras for which you would prefer to use the sub stream for live view, but the main stream for recording. This can be done via `live -> stream_name`, as shown below.
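A sketch, assuming a go2rtc stream named `test_cam_sub` that restreams the camera's sub stream:
```yaml
cameras:
  test_cam:
    ...
    live:
      stream_name: test_cam_sub # name of the go2rtc stream to use for live view
```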
@@ -78,7 +78,7 @@ It is, but the definition of "unnecessary" varies. I want to ignore areas of mot
> For me, giving my masks ANY padding results in a lot of people detection I'm not interested in. I live in the city and catch a lot of the sidewalk on my camera. People walk by my front door all the time and the margin between the sidewalk and actually walking onto my stoop is very thin, so I basically have everything but the exact contours of my stoop masked out. This results in very tidy detections but this info keeps throwing me off. Am I just overthinking it?
This is what `required_zones` are for. You should define a zone (remember this is evaluated based on the bottom center of the bounding box) and make it required to save snapshots and clips (previously events in 0.9.0 to 0.13.0 and review items in 0.14.0 and later). You can also use this in your conditions for a notification.
> Maybe my specific situation just warrants this. I've just been having a hard time understanding the relevance of this information - it seems to be that it's exactly what would be expected when "masking out" an area of ANY image.
@@ -13,11 +13,11 @@ Once motion is detected, it tries to group up nearby areas of motion together in
The default motion settings should work well for the majority of cameras, however there are cases where tuning motion detection can lead to better and more optimal results. Each camera has its own environment with different variables that affect motion, this means that the same motion settings will not fit all of your cameras.
Before tuning motion it is important to understand the goal. In an optimal configuration, motion from people and cars would be detected, but not grass moving, lighting changes, timestamps, etc. If your motion detection is too sensitive, you will experience higher CPU loads and greater false positives from the increased rate of object detection. If it is not sensitive enough, you will miss objects that you want to track.
## Create Motion Masks
First, mask areas with regular motion not caused by the objects you want to detect. The best way to find candidates for motion masks is by watching the debug stream with motion boxes enabled. Good use cases for motion masks are timestamps or tree limbs and large bushes that regularly move due to wind. When possible, avoid creating motion masks that would block motion detection for objects you want to track **even if they are in locations where you don't want alerts or detections**. Motion masks should not be used to avoid detecting objects in specific areas. More details can be found [in the masks docs.](/configuration/masks.md).
## Prepare For Testing
@@ -29,7 +29,7 @@ Now that things are set up, find a time to tune that represents normal circumsta
:::note
Remember that motion detection is just used to determine when object detection should be used. You should aim to have motion detection sensitive enough that you won't miss objects you want to detect with object detection. The goal is to prevent object detection from running constantly for every small pixel change in the image. Windy days are still going to result in lots of motion being detected.
:::
@@ -92,9 +92,15 @@ motion:
lightning_threshold:0.8
```
:::warning
Some cameras like doorbell cameras may have missed detections when someone walks directly in front of the camera and the lightning_threshold causes motion detection to be re-calibrated. In this case, it may be desirable to increase the `lightning_threshold` to ensure these objects are not missed.
:::
:::note
Lightning threshold does not stop motion-based recordings from being saved.
Frigate offers native notifications using the [WebPush Protocol](https://web.dev/articles/push-notifications-web-push-protocol) which uses the [VAPID spec](https://tools.ietf.org/html/draft-thomson-webpush-vapid) to deliver notifications to web apps using encryption.
## Setting up Notifications
In order to use notifications the following requirements must be met:
- Frigate must be accessed via a secure https connection
- A supported browser must be used. Currently Chrome, Firefox, and Safari are known to be supported.
- In order for notifications to be usable externally, Frigate must be accessible externally
### Configuration
To configure notifications, go to the Frigate WebUI -> Settings -> Notifications and enable, then fill out the fields and save.
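For reference, the equivalent config file entry is sketched below; the `email` serves as the contact for the WebPush VAPID keys, and the address shown is a placeholder:
```yaml
notifications:
  enabled: True
  email: "admin@example.com"
```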
### Registration
Once notifications are enabled, press the `Register for Notifications` button on all devices that you would like to receive notifications on. This will register the background worker. After this, Frigate must be restarted, and notifications will then begin to be sent.
## Supported Notifications
Currently notifications are only supported for review alerts. More notifications will be supported in the future.
:::note
Currently, only Chrome supports images in notifications. Safari and Firefox will only show a title and message in the notification.
:::
## Reduce Notification Latency
Different platforms handle notifications differently, some settings changes may be required to get optimal notification delivery.
### Android
Most Android phones have battery optimization settings. To get reliable Notification delivery the browser (Chrome, Firefox) should have battery optimizations disabled. If Frigate is running as a PWA then the Frigate app should have battery optimizations disabled as well.
:::info
Frigate supports multiple different detectors that work on different types of hardware:
**Most Hardware**
- [Coral EdgeTPU](#edge-tpu-detector): The Google Coral EdgeTPU is available in USB and m.2 format allowing for a wide range of compatibility with devices.
- [Hailo](#hailo-8l): The Hailo8 AI Acceleration module is available in m.2 format with a HAT for RPi devices, offering a wide range of compatibility with devices.
**AMD**
- [ROCm](#amdrocm-gpu-detector): ROCm can run on AMD Discrete GPUs to provide efficient object detection.
- [ONNX](#onnx): ROCm will automatically be detected and used as a detector in the `-rocm` Frigate image when a supported ONNX model is configured.
**Intel**
- [OpenVino](#openvino-detector): OpenVino can run on Intel Arc GPUs, Intel integrated GPUs, and Intel CPUs to provide efficient object detection.
- [ONNX](#onnx): OpenVINO will automatically be detected and used as a detector in the default Frigate image when a supported ONNX model is configured.
**Nvidia**
- [TensortRT](#nvidia-tensorrt-detector): TensorRT can run on Nvidia GPUs and Jetson devices, using one of many default models.
- [ONNX](#onnx): TensorRT will automatically be detected and used as a detector in the `-tensorrt` or `-tensorrt-jp(4/5)` Frigate images when a supported ONNX model is configured.
**Rockchip**
- [RKNN](#rockchip-platform): RKNN models can run on Rockchip devices with included NPUs.
**For Testing**
- [CPU Detector (not recommended for actual use)](#cpu-detector-not-recommended): Use a CPU to run a tflite model; this is not recommended, and in most cases OpenVINO can be used in CPU mode with better results.
:::
# Officially Supported Detectors
Frigate provides the following builtin detector types: `cpu`, `edgetpu`, `hailo8l`, `onnx`, `openvino`, `rknn`, `rocm`, and `tensorrt`. By default, Frigate will use a single CPU detector. Other detectors may require additional configuration as described below. When using multiple detectors they will run in dedicated processes, but pull from a common queue of detection requests from across all cameras.
## CPU Detector (not recommended)
The CPU detector type runs a TensorFlow Lite model utilizing the CPU without hardware acceleration. It is recommended to use a hardware accelerated detector type instead for better performance. To configure a CPU based detector, set the `"type"` attribute to `"cpu"`.
:::tip
If you do not have GPU or Edge TPU hardware, using the [OpenVINO Detector](#openvino-detector) is often more efficient than using the CPU detector.
:::
The number of threads used by the interpreter can be specified using the `"num_threads"` attribute, and defaults to `3`.
A TensorFlow Lite model is provided in the container at `/cpu_model.tflite` and is used by this detector type by default. To provide your own model, bind mount the file into the container and provide the path with `model.path`.
```yaml
detectors:
  cpu1:
    type: cpu
    num_threads: 3
    model:
      path: "/custom_model.tflite"
  cpu2:
    type: cpu
    num_threads: 3
```
When using CPU detectors, you can add one CPU detector per camera. Adding more detectors than the number of cameras should not improve performance.
## Edge TPU Detector
@@ -81,6 +83,15 @@ detectors:
device:""
```
### Single PCIE/M.2 Coral
```yaml
detectors:
  coral:
    type: edgetpu
    device: pci
```
### Multiple PCIE/M.2 Corals
```yaml
@@ -109,9 +120,29 @@ detectors:
The OpenVINO detector type runs an OpenVINO IR model on AMD and Intel CPUs, Intel GPUs and Intel VPU hardware. To configure an OpenVINO detector, set the `"type"` attribute to `"openvino"`.
The OpenVINO device to be used is specified using the `"device"` attribute according to the naming conventions in the [Device Documentation](https://docs.openvino.ai/2024/openvino-workflow/running-inference/inference-devices-and-modes.html). The most common devices are `CPU` and `GPU`. Currently, there is a known issue with using `AUTO`. For backwards compatibility, Frigate will attempt to use `GPU` if `AUTO` is set in your configuration.
OpenVINO is supported on 6th Gen Intel platforms (Skylake) and newer. It will also run on AMD CPUs despite having no official support for it. A supported Intel platform is required to use the `GPU` device with OpenVINO. For detailed system requirements, see [OpenVINO System Requirements](https://docs.openvino.ai/2024/about-openvino/release-notes-openvino/system-requirements.html)
:::tip
When using many cameras one detector may not be enough to keep up. Multiple detectors can be defined assuming GPU resources are available. An example configuration would be:
```yaml
detectors:
  ov_0:
    type: openvino
    device: GPU
  ov_1:
    type: openvino
    device: GPU
```
:::
### Supported Models
#### SSDLite MobileNet v2
An OpenVINO model is provided in the container at `/openvino-model/ssdlite_mobilenet_v2.xml` and is used by this detector type by default. The model comes from Intel's Open Model Zoo [SSDLite MobileNet V2](https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/public/ssdlite_mobilenet_v2) and is converted to an FP16 precision IR model. Use the model configuration shown below when using the OpenVINO detector with the default model.
@@ -119,66 +150,52 @@ An OpenVINO model is provided in the container at `/openvino-model/ssdlite_mobil
detectors:
  ov:
    type: openvino
    device: GPU

model:
  width: 300
  height: 300
  input_tensor: nhwc
  input_pixel_format: bgr
  path: /openvino-model/ssdlite_mobilenet_v2.xml
  labelmap_path: /openvino-model/coco_91cl_bkgr.txt
```
#### YOLOX
This detector also supports YOLOX. Frigate does not come with any YOLOX models preloaded, so you will need to supply your own models.
#### YOLO-NAS
[YOLO-NAS](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md) models are supported, but not included by default. You can build and download a compatible model with pre-trained weights using [this notebook](https://github.com/blakeblackshear/frigate/blob/dev/notebooks/YOLO_NAS_Pretrained_Export.ipynb) [](https://colab.research.google.com/github/blakeblackshear/frigate/blob/dev/notebooks/YOLO_NAS_Pretrained_Export.ipynb).
:::warning
The pre-trained YOLO-NAS weights from DeciAI are subject to their license and can't be used commercially. For more information, see: https://docs.deci.ai/super-gradients/latest/LICENSE.YOLONAS.html
:::
The input image size in this notebook is set to 320x320. This results in lower CPU usage and faster inference times without impacting performance in most cases due to the way Frigate crops video frames to areas of interest before running detection. The notebook and config can be updated to 640x640 if desired.
After placing the downloaded onnx model in your config folder, you can use the following configuration:
```yaml
detectors:
  ov:
    type: openvino
    device: GPU

model:
  model_type: yolonas
  width: 320 # <--- should match whatever was set in notebook
  height: 320 # <--- should match whatever was set in notebook
  input_tensor: nchw
  input_pixel_format: bgr
  path: /config/yolo_nas_s.onnx
  labelmap_path: /labelmap/coco-80.txt
```
Note that the labelmap uses a subset of the complete COCO label set that has only 80 objects.
## NVidia TensorRT Detector
@@ -206,7 +223,7 @@ The model used for TensorRT must be preprocessed on the same hardware platform t
The Frigate image will generate model files during startup if the specified model is not found. Processed models are stored in the `/config/model_cache` folder. Typically the `/config` path is mapped to a directory on the host already and the `model_cache` does not need to be mapped separately unless the user wants to store it in a different location on the host.
By default, no models will be generated, but this can be overridden by specifying the `YOLO_MODELS` environment variable in Docker. One or more models may be listed in a comma-separated format, and each one will be generated. Models will only be generated if the corresponding `{model}.trt` file is not present in the `model_cache` folder, so you can force a model to be regenerated by deleting it from your Frigate data folder.
If you have a Jetson device with DLAs (Xavier or Orin), you can generate a model that will run on the DLA by appending `-dla` to your model name, e.g. specify `YOLO_MODELS=yolov7-320-dla`. The model will run on DLA0 (Frigate does not currently support DLA1). DLA-incompatible layers will fall back to running on the GPU.
@@ -247,7 +264,7 @@ An example `docker-compose.yml` fragment that converts the `yolov4-608` and `yol
```yml
frigate:
  environment:
    - YOLO_MODELS=yolov7-320,yolov7x-640
    - USE_FP16=false
```
@@ -279,6 +296,220 @@ model:
height:320
```
## AMD/ROCm GPU detector
### Setup
The `rocm` detector supports running YOLO-NAS models on AMD GPUs. Use a frigate docker image with `-rocm` suffix, for example `ghcr.io/blakeblackshear/frigate:stable-rocm`.
### Docker settings for GPU access
ROCm needs access to the `/dev/kfd` and `/dev/dri` devices. When docker or frigate is not run under root then also `video` (and possibly `render` and `ssl/_ssl`) groups should be added.
When running docker directly the following flags should be added for device access:
```bash
$ docker run --device=/dev/kfd --device=/dev/dri \
...
```
When using docker compose:
```yaml
services:
  frigate:
    ...
    devices:
      - /dev/dri
      - /dev/kfd
```
For reference on recommended settings see [running ROCm/pytorch in Docker](https://rocm.docs.amd.com/projects/install-on-linux/en/develop/how-to/3rd-party/pytorch-install.html#using-docker-with-pytorch-pre-installed).
### Docker settings for overriding the GPU chipset
Your GPU might work just fine without any special configuration, but in many cases manual settings are needed. The AMD/ROCm software stack comes with a limited set of GPU drivers, and for newer or missing models you will have to override the chipset version to an older/generic version to get things working.
Also, AMD/ROCm does not "officially" support integrated GPUs. It still works with most of them just fine, but this requires special settings. One has to configure the `HSA_OVERRIDE_GFX_VERSION` environment variable. See the [ROCm bug report](https://github.com/ROCm/ROCm/issues/1743) for context and examples.
For the rocm frigate build there is some automatic detection:
- gfx90c -> 9.0.0
- gfx1031 -> 10.3.0
- gfx1103 -> 11.0.0
If you have something else you might need to override the `HSA_OVERRIDE_GFX_VERSION` at Docker launch. Suppose the version you want is `9.0.0`, then you should configure it from command line as:
```bash
$ docker run -e HSA_OVERRIDE_GFX_VERSION=9.0.0 \
...
```
When using docker compose:
```yaml
services:
  frigate:
    ...
    environment:
      HSA_OVERRIDE_GFX_VERSION: "9.0.0"
```
Figuring out what version you need can be complicated as you can't tell the chipset name and driver from the AMD brand name.
- first make sure that rocm environment is running properly by running `/opt/rocm/bin/rocminfo` in the frigate container -- it should list both the CPU and the GPU with their properties
- find the chipset version you have (gfxNNN) from the output of the `rocminfo` (see below)
- use a search engine to query what `HSA_OVERRIDE_GFX_VERSION` you need for the given gfx name ("gfxNNN ROCm HSA_OVERRIDE_GFX_VERSION")
- override the `HSA_OVERRIDE_GFX_VERSION` with relevant value
- if things are not working check the frigate docker logs
#### Figuring out if AMD/ROCm is working and found your GPU
```bash
$ docker exec -it frigate /opt/rocm/bin/rocminfo
```
#### Figuring out your AMD GPU chipset version:
We unset the `HSA_OVERRIDE_GFX_VERSION` to prevent an existing override from messing up the result:
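A sketch of such a check (assuming the container is named `frigate`):
```bash
# unset any override just for this invocation, then list the gfx identifiers
$ docker exec -it frigate /bin/bash -c 'unset HSA_OVERRIDE_GFX_VERSION && /opt/rocm/bin/rocminfo | grep gfx'
```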
### Supported Models
There is no default model provided; the following formats are supported:
#### YOLO-NAS
[YOLO-NAS](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md) models are supported, but not included by default. You can build and download a compatible model with pre-trained weights using [this notebook](https://github.com/blakeblackshear/frigate/blob/dev/notebooks/YOLO_NAS_Pretrained_Export.ipynb), which can also be [opened in Google Colab](https://colab.research.google.com/github/blakeblackshear/frigate/blob/dev/notebooks/YOLO_NAS_Pretrained_Export.ipynb).
:::warning
The pre-trained YOLO-NAS weights from DeciAI are subject to their license and can't be used commercially. For more information, see: https://docs.deci.ai/super-gradients/latest/LICENSE.YOLONAS.html
:::
The input image size in this notebook is set to 320x320. This results in lower CPU usage and faster inference times without impacting performance in most cases due to the way Frigate crops video frames to areas of interest before running detection. The notebook and config can be updated to 640x640 if desired.
After placing the downloaded onnx model in your config folder, you can use the following configuration:
```yaml
detectors:
  rocm:
    type: rocm

model:
  model_type: yolonas
  width: 320 # <--- should match whatever was set in notebook
  height: 320 # <--- should match whatever was set in notebook
  input_pixel_format: bgr
  path: /config/yolo_nas_s.onnx
  labelmap_path: /labelmap/coco-80.txt
```
Note that the labelmap uses a subset of the complete COCO label set that has only 80 objects.
## ONNX
ONNX is an open format for building machine learning models. Frigate supports running ONNX models on CPU, OpenVINO, and TensorRT. On startup Frigate will automatically try to use a GPU if one is available.
:::info
If the correct build is used for your GPU then the GPU will be detected and used automatically.
- **AMD**
- ROCm will automatically be detected and used with the ONNX detector in the `-rocm` Frigate image.
- **Intel**
- OpenVINO will automatically be detected and used with the ONNX detector in the default Frigate image.
- **Nvidia**
- Nvidia GPUs will automatically be detected and used with the ONNX detector in the `-tensorrt` Frigate image.
- Jetson devices will automatically be detected and used with the ONNX detector in the `-tensorrt-jp(4/5)` Frigate image.
:::
:::tip
When using many cameras, one detector may not be enough to keep up. Multiple detectors can be defined, assuming GPU resources are available. An example configuration would be:
```yaml
detectors:
  onnx_0:
    type: onnx
  onnx_1:
    type: onnx
```
:::
### Supported Models
There is no default model provided; the following formats are supported:
#### YOLO-NAS
[YOLO-NAS](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md) models are supported, but not included by default. You can build and download a compatible model with pre-trained weights using [this notebook](https://github.com/blakeblackshear/frigate/blob/dev/notebooks/YOLO_NAS_Pretrained_Export.ipynb), which can also be [opened in Google Colab](https://colab.research.google.com/github/blakeblackshear/frigate/blob/dev/notebooks/YOLO_NAS_Pretrained_Export.ipynb).
:::warning
The pre-trained YOLO-NAS weights from DeciAI are subject to their license and can't be used commercially. For more information, see: https://docs.deci.ai/super-gradients/latest/LICENSE.YOLONAS.html
:::
The input image size in this notebook is set to 320x320. This results in lower CPU usage and faster inference times without impacting performance in most cases due to the way Frigate crops video frames to areas of interest before running detection. The notebook and config can be updated to 640x640 if desired.
After placing the downloaded onnx model in your config folder, you can use the following configuration:
```yaml
detectors:
  onnx:
    type: onnx

model:
  model_type: yolonas
  width: 320 # <--- should match whatever was set in notebook
  height: 320 # <--- should match whatever was set in notebook
  input_pixel_format: bgr
  input_tensor: nchw
  path: /config/yolo_nas_s.onnx
  labelmap_path: /labelmap/coco-80.txt
```
Note that the labelmap uses a subset of the complete COCO label set that has only 80 objects.
## CPU Detector (not recommended)
The CPU detector type runs a TensorFlow Lite model utilizing the CPU without hardware acceleration. It is recommended to use a hardware accelerated detector type instead for better performance. To configure a CPU based detector, set the `"type"` attribute to `"cpu"`.
:::danger
The CPU detector is not recommended for general use. If you do not have GPU or Edge TPU hardware, using the [OpenVINO Detector](#openvino-detector) in CPU mode is often more efficient than using the CPU detector.
:::
The number of threads used by the interpreter can be specified using the `"num_threads"` attribute, and defaults to `3`.
A TensorFlow Lite model is provided in the container at `/cpu_model.tflite` and is used by this detector type by default. To provide your own model, bind mount the file into the container and provide the path with `model.path`.
```yaml
detectors:
  cpu1:
    type: cpu
    num_threads: 3
    model:
      path: "/custom_model.tflite"
  cpu2:
    type: cpu
    num_threads: 3
```
When using CPU detectors, you can add one CPU detector per camera. Adding more detectors than the number of cameras should not improve performance.
## Deepstack / CodeProject.AI Server Detector
The Deepstack / CodeProject.AI Server detector for Frigate allows you to integrate Deepstack and CodeProject.AI object detection capabilities into Frigate. CodeProject.AI and DeepStack are open-source AI platforms that can be run on various devices such as the Raspberry Pi, Nvidia Jetson, and other compatible hardware. It is important to note that the integration is performed over the network, so the inference times may not be as fast as native Frigate detectors, but it still provides an efficient and reliable solution for object detection and tracking.
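A minimal configuration sketch (the `api_url` host and port are placeholders for your own server):
```yaml
detectors:
  deepstack:
    api_url: http://<your_ip>:<port>/v1/vision/detection
    type: deepstack
    api_timeout: 0.1 # seconds
```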
## Rockchip platform
Hardware accelerated object detection is supported on the following SoCs:
- RK3576
- RK3588
This implementation uses [Rockchip's RKNN-Toolkit2](https://github.com/airockchip/rknn-toolkit2/), version v2.0.0.beta0. Currently, only [Yolo-NAS](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md) is supported as an object detection model.
### Prerequisites
Make sure that you use a Linux distribution that comes with the Rockchip BSP kernel 5.10 or 6.1 and the rknpu driver. To check, enter the following commands:
```
$ uname -r
5.10.xxx-rockchip # or 6.1.xxx; the -rockchip suffix is important
$ ls /dev/dri
by-path card0 card1 renderD128 renderD129 # should list renderD129
$ sudo cat /sys/kernel/debug/rknpu/version
RKNPU driver: v0.9.2 # or later version
```
I recommend [Joshua Riek's Ubuntu for Rockchip](https://github.com/Joshua-Riek/ubuntu-rockchip) if your board is supported.
### Setup
Follow Frigate's default installation instructions, but use a docker image with the `-rk` suffix, for example `ghcr.io/blakeblackshear/frigate:stable-rk`.
Next, you need to grant docker permissions to access your hardware:
- During the configuration process, you should run docker in privileged mode to avoid any errors due to insufficient permissions. To do so, add `privileged: true` to your `docker-compose.yml` file or the `--privileged` flag to your docker run command.
- After everything works, you should only grant the necessary permissions to increase security. Add the lines below to your `docker-compose.yml` file, or add the following options to your docker run command: `--security-opt systempaths=unconfined --security-opt apparmor=unconfined --device /dev/dri:/dev/dri`
```yaml
security_opt:
  - apparmor=unconfined
  - systempaths=unconfined
devices:
  - /dev/dri:/dev/dri
```
Make sure to follow the [Rockchip specific installation instructions](/frigate/installation#rockchip-platform).
### Configuration
```yaml
model: # required
  input_pixel_format: bgr # required
  # shape of detection frame
  input_tensor: nhwc
  # needs to be adjusted to model, see below
  labelmap_path: /labelmap.txt # required
```
The correct labelmap must be loaded for each model. If you use a custom model (see notes below), you must make sure to provide the correct labelmap. The table below lists the correct paths for the bundled models:
| `path` | `labelmap_path` |
| --------------------- | --------------------- |
| deci-fp16-yolonas\_\* | /labelmap/coco-80.txt |
### Choosing a model
:::warning
The pre-trained YOLO-NAS weights from DeciAI are subject to their license and can't be used commercially. For more information, see: https://docs.deci.ai/super-gradients/latest/LICENSE.YOLONAS.html
:::
- All models are automatically downloaded and stored in the folder `config/model_cache/rknn_cache`. After upgrading Frigate, you should remove older models to free up space.
- You can also provide your own `.rknn` model. You should not save your own models in the `rknn_cache` folder; store them directly in the `model_cache` folder or another subfolder. To convert a model to `.rknn` format, see the `rknn-toolkit2` (requires an x86 machine). Note that there is only post-processing for the supported models. A configuration sketch follows below.
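Putting this together, a minimal sketch using the bundled `deci-fp16-yolonas_s` model from the table above (the `num_cores` value is illustrative, not a recommendation):
```yaml
detectors:
  rknn:
    type: rknn
    # number of NPU cores to use; illustrative value
    num_cores: 3

model:
  path: deci-fp16-yolonas_s
  labelmap_path: /labelmap/coco-80.txt
```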
## Hailo-8l
This detector is available for use with the Hailo-8 AI Acceleration Module.
See the [installation docs](../frigate/installation.md#hailo-8l) for information on configuring the hailo8.
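A minimal detector configuration sketch (the `device` value is an assumption for a PCIe-attached module; check the installation docs for your variant):
```yaml
detectors:
  hailo8l:
    type: hailo8l
    device: PCIe
```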
For object filters in your configuration, any single detection below `min_score` will be ignored as a false positive.
In frame 2, the score is below the `min_score` value, so Frigate ignores it and it becomes a 0.0. The computed score is the median of the score history (padding to at least 3 values), and only when that computed score crosses the `threshold` is the object marked as a true positive. That happens in frame 4 in the example.
*(Image placeholder: snapshot vs. event with differing scores)*
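To make the scoring behavior concrete, here is a toy sketch (not Frigate's actual code) of how `min_score` and the median-based computed score interact with `threshold`; all numbers are illustrative:
```python
from statistics import median

def computed_score(history: list[float], min_score: float) -> float:
    # detections below min_score count as 0.0 in the history
    scores = [s if s >= min_score else 0.0 for s in history]
    # pad the history to at least 3 values before taking the median
    while len(scores) < 3:
        scores.append(0.0)
    return median(scores)

MIN_SCORE, THRESHOLD = 0.5, 0.72
history: list[float] = []
for frame, score in enumerate([0.70, 0.40, 0.75, 0.90], start=1):
    history.append(score)
    cs = computed_score(history, MIN_SCORE)
    print(f"frame {frame}: computed score {cs:.3f}, true positive: {cs >= THRESHOLD}")
# frame 2 is ignored (below MIN_SCORE), and the computed score
# first crosses THRESHOLD in frame 4
```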
### Minimum Score
Any detection below `min_score` will be immediately thrown out and never tracked because it is considered a false positive. If `min_score` is too low then false positives may be detected and tracked which can confuse the object tracker and may lead to wasted resources. If `min_score` is too high then lower scoring true positives like objects that are further away or partially occluded may be thrown out which can also confuse the tracker and cause valid tracked objects to be lost or disjointed.
### Threshold
`threshold` is used to determine that the object is a true positive. Once an object is detected with a score >= `threshold`, the object is considered a true positive. If `threshold` is too low then some higher scoring false positives may create a tracked object. If `threshold` is too high then true positive tracked objects may be missed due to the object never scoring high enough.
## Object Shape
False positives can also be reduced by filtering a detection based on its shape.
### Object Area
`min_area` and `max_area` filter on the area of an object's bounding box in pixels and can be used to reduce false positives that are outside the range of expected sizes. For example, when a leaf is detected as a dog or when a large tree is detected as a person, these can be reduced by adding a `min_area` / `max_area` filter, as shown in the sketch below.
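A sketch of such a filter (the pixel values are illustrative and depend on your camera resolution):
```yaml
objects:
  filters:
    dog:
      min_area: 1000   # ignore boxes smaller than 1000 px
      max_area: 100000 # ignore boxes larger than 100000 px
```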
### Object Proportions
Conceptually, a ratio of 1 is a square, 0.5 is a "tall skinny" box, and 2 is a "short wide" box.
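A sketch of the corresponding filter (values are illustrative):
```yaml
objects:
  filters:
    person:
      min_ratio: 0.5 # ignore boxes more than twice as tall as wide
      max_ratio: 2.0 # ignore boxes more than twice as wide as tall
```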
### Zones
[Required zones](/configuration/zones.md) can be a great tool to reduce false positives that may be detected in the sky or other areas that are not of interest. The required zones will only create tracked objects for objects that enter the zone.
Recordings can be enabled and are stored at `/media/frigate/recordings`. The folder structure for the recordings is `YYYY-MM-DD/HH/<camera_name>/MM.SS.mp4` in **UTC time**. These recordings are written directly from your camera stream without re-encoding. Each camera supports a configurable retention policy in the config. Frigate chooses the largest matching retention value between the recording retention and the tracked object retention when determining if a recording should be removed.
New recording segments are written from the camera stream to cache; they are only moved to disk if they match the configured recording retention policy.
H265 recordings can be viewed in Chrome 108+, Edge and Safari only. All other browsers do not support playing back H265 recordings.
### Most conservative: Ensure all video is saved
For users deploying Frigate in environments where it is important to have contiguous video stored even if there was no detectable motion, the following config will store all video for 3 days. After 3 days, only video containing motion and overlapping with alerts or detections will be retained until 30 days have passed.
```yaml
record:
  retain:
    days: 3
    mode: all
  alerts:
    retain:
      days: 30
      mode: motion
  detections:
    retain:
      days: 30
      mode: motion
```
```yaml
record:
  retain:
    days: 3
    mode: motion
  alerts:
    retain:
      days: 30
      mode: motion
  detections:
    retain:
      days: 30
      mode: motion
```
### Minimum: Alerts only
If you only want to retain video that occurs during a tracked object, this config will discard video unless an alert is ongoing.
```yaml
record:
  enabled: True
  retain:
    days: 0
    mode: all
  alerts:
    retain:
      days: 30
      mode: motion
```
As of Frigate 0.12, if there is less than an hour left of storage, the oldest 2 hours of recordings will be deleted.
## Configuring Recording Retention
Frigate supports both continuous and tracked object based recordings with separate retention modes and retention periods.
:::tip
Continuous recording supports different retention modes [which are described below](#what-do-the-different-retain-modes-mean)
:::
### Object Recording
The number of days to record review items can be specified for review items classified as alerts as well as tracked objects.
```yaml
record:
  enabled: True
  alerts:
    retain:
      days: 10 # <- number of days to keep alert recordings
  detections:
    retain:
      days: 10 # <- number of days to keep detection recordings
```
This configuration will retain recording segments that overlap with alerts and detections for 10 days. Because multiple tracked objects can reference the same recording segments, this avoids storing duplicate footage for overlapping tracked objects and reduces overall storage needs.
**WARNING**: Recordings still must be enabled in the config. If a camera has recordings disabled in the config, enabling via the methods listed above will have no effect.
## What do the different retain modes mean?
Frigate saves from the stream with the `record` role in 10 second segments. These options determine which recording segments are kept for continuous recording (but can also affect tracked objects).
Let's say you have Frigate configured so that your doorbell camera would retain the last **2** days of continuous recording.
- With the `motion` option, the only parts of those 48 hours that would be kept are segments in which Frigate detected motion. This is the middle ground option that won't keep all 48 hours, but will likely keep all segments of interest along with the potential for some extra segments.
- With the `active_objects` option the only segments that would be kept are those where there was a true positive object that was not considered stationary.
The same options are available with alerts and detections, except it will only save the recordings when they overlap with a review item of that type.
A configuration example of the above retain modes where all `motion` segments are stored for 7 days and `active objects` are stored for 14 days would be as follows:
```yaml
record:
  retain:
    days: 7
    mode: motion
  alerts:
    retain:
      days: 14
      mode: active_objects
  detections:
    retain:
      days: 14
      mode: active_objects
```
The above configuration example can be added globally or on a per camera basis.
### Object Specific Retention
You can also set a specific retention length for an object type. The below configuration example builds on the one above, but also specifies that recordings of dogs only need to be kept for 2 days and recordings of cars should be kept for 7 days.
```yaml
record:
  enabled: True
  retain:
    days: 7
    mode: motion
  events:
    retain:
      default: 14
      mode: active_objects
      objects:
        dog: 2
        car: 7
```
## Can I have "continuous" recordings, but only at certain times?
Using the Frigate UI, Home Assistant, or MQTT, cameras can be automated to only record in certain situations or at certain times.
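For example, recording can be toggled over MQTT via the `recordings/set` topic (the camera name and broker host below are placeholders):
```bash
# turn off recording for the hypothetical camera "front_door"
mosquitto_pub -h mqtt_host -t frigate/front_door/recordings/set -m "OFF"
```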
## How do I export recordings?
Footage can be exported from Frigate by right-clicking (desktop) or long pressing (mobile) on a review item in the Review pane or by clicking the Export button in the History view. Exported footage is then organized and searchable through the Export view, accessible from the main navigation bar.
### Time-lapse export
Time lapse exporting is available only via the [HTTP API](../integrations/api/export-recording-export-camera-name-start-start-time-end-end-time-post.api.mdx).
When exporting a time-lapse the default speed-up is 25x with 30 FPS. This means that every 25 seconds of (real-time) recording is condensed into 1 second of time-lapse video (always without audio) with a smoothness of 30 FPS.
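A sketch of such a request (the camera name, timestamps, and host are placeholders, and the `playback` field is an assumption based on the linked API reference):
```bash
curl -X POST "http://frigate_host:5000/api/export/front_door/start/1700000000/end/1700003600" \
  -H "Content-Type: application/json" \
  -d '{"playback": "timelapse_25x"}'
```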
To configure the speed-up factor, the frame rate and further custom settings, the configuration parameter `timelapse_args` can be used. The below configuration example would change the time-lapse speed to 60x (for fitting 1 hour of recording into 1 minute of time-lapse) with 25 FPS:
```yaml
record:
  export:
    timelapse_args: "-vf setpts=PTS/60 -r 25"
```