Add config option to select fp16 or quantized jina vision model (#14270)

* Add config option to select fp16 or quantized jina vision model

* requires_fp16 for text and large models only

* fix model type check

* fix cpu

* pass model size
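
The bullets above describe a simple selection rule: the text model always needs the fp16 weights, while the vision model only needs them at the large size and can otherwise use the quantized file. Below is a minimal TypeScript sketch of that rule; the type, function, and file names are illustrative assumptions, not the code from this commit.

// Hypothetical sketch of the rule described in the commit message above.
type JinaModelType = "text" | "vision";
type ModelSize = "small" | "large";

// The text model and the large vision model require fp16 weights; the small
// vision model can fall back to the quantized file.
function requiresFp16(modelType: JinaModelType, modelSize: ModelSize): boolean {
  return modelType === "text" || modelSize === "large";
}

// Pick a weight file name based on the rule (file naming is assumed).
function modelFile(modelType: JinaModelType, modelSize: ModelSize): string {
  return requiresFp16(modelType, modelSize)
    ? `${modelType}_model_fp16.onnx`
    : `${modelType}_model_quantized.onnx`;
}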
Josh Hawkins authored on 2024-10-10 17:46:21 -05:00, committed by GitHub
Parent: dd6276e706
Commit: 54eb03d2a1
7 changed files with 44 additions and 10 deletions


@@ -417,6 +417,7 @@ export interface FrigateConfig {
   semantic_search: {
     enabled: boolean;
+    model_size: string;
   };
   snapshots: {
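
For illustration, a config fragment consistent with the interface change above; the accepted values for model_size ("small" selecting the quantized vision model, "large" the fp16 one) are inferred from the commit description rather than quoted from documentation.

// Hypothetical usage of the new field; values are assumptions for illustration.
const semanticSearch: { enabled: boolean; model_size: string } = {
  enabled: true,
  // assumed: "small" -> quantized vision model, "large" -> fp16 vision model
  model_size: "small",
};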