forked from GitHub/frigate
Add config option to select fp16 or quantized jina vision model (#14270)
* Add config option to select fp16 or quantized jina vision model
* requires_fp16 for text and large models only
* fix model type check
* fix cpu
* pass model size
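The selection rule in the commit message (fp16 required for the text model and for large models, quantized allowed otherwise) can be sketched as follows. The function and file names here are illustrative assumptions, not Frigate's actual implementation:

```python
# Illustrative sketch of the rule from the commit message:
# fp16 weights are required for the text model and for "large" models;
# the small vision model may fall back to the quantized variant.
# pick_model_file and the file names are hypothetical, not Frigate's API.

def pick_model_file(model_type: str, model_size: str) -> str:
    """Return which ONNX weight variant to use for a Jina embedding model."""
    requires_fp16 = model_type == "text" or model_size == "large"
    if requires_fp16:
        return "model_fp16.onnx"
    # Small vision model: the quantized weights suffice and run well on CPU.
    return "model_quantized.onnx"


assert pick_model_file("text", "small") == "model_fp16.onnx"
assert pick_model_file("vision", "large") == "model_fp16.onnx"
assert pick_model_file("vision", "small") == "model_quantized.onnx"
```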
@@ -520,6 +520,8 @@ semantic_search:
   reindex: False
   # Optional: Set device used to run embeddings, options are AUTO, CPU, GPU. (default: shown below)
   device: "AUTO"
+  # Optional: Set the model size used for embeddings. (default: shown below)
+  model_size: "small"
 
 # Optional: Configuration for AI generated tracked object descriptions
 # NOTE: Semantic Search must be enabled for this to do anything.
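With the new option documented, a `semantic_search` block opting into the larger model could look like the following. This is a minimal sketch assembled from the options shown in the diff; the `enabled` key is an assumed surrounding option, not part of this change:

```yaml
semantic_search:
  enabled: true          # assumed: semantic search must be turned on
  reindex: False
  # Optional: Set device used to run embeddings, options are AUTO, CPU, GPU. (default: shown below)
  device: "AUTO"
  # Optional: Set the model size used for embeddings. (default: shown below)
  model_size: "large"    # "large" selects the fp16 model per this commit
```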