forked from Github/frigate
Add config option to select fp16 or quantized jina vision model (#14270)
* Add config option to select fp16 or quantized jina vision model
* requires_fp16 for text and large models only
* fix model type check
* fix cpu
* pass model size
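The selection rule described in the commit message can be illustrated with a minimal TypeScript sketch. This is not Frigate's actual implementation (the real logic lives in the Python backend); the names requiresFp16 and modelVariant, the "small"/"large" size values, and the file names are assumptions used only to show the rule: fp16 weights for the text model and for large models, the quantized variant for the small vision model.

// Minimal sketch of the selection rule from the commit message.
// Not Frigate's actual code: function and file names here are hypothetical;
// only the rule itself (fp16 for text and large models, quantized otherwise)
// comes from the commit message.
type JinaModelKind = "text" | "vision";
type ModelSize = "small" | "large";

function requiresFp16(kind: JinaModelKind, size: ModelSize): boolean {
  // The text model and any "large" model need fp16 weights;
  // only the small vision model can fall back to the quantized variant.
  return kind === "text" || size === "large";
}

function modelVariant(kind: JinaModelKind, size: ModelSize): string {
  // Hypothetical artifact naming, for illustration only.
  return requiresFp16(kind, size) ? `${kind}-fp16.onnx` : `${kind}-quantized.onnx`;
}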
@@ -417,6 +417,7 @@ export interface FrigateConfig {
 
   semantic_search: {
     enabled: boolean;
+    model_size: string;
   };
 
   snapshots: {
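For context, here is a hedged example of a config object matching the semantic_search slice shown in the hunk above. The interface fragment only declares enabled and model_size; the idea that "small" selects the quantized vision model while "large" requires fp16 is an assumption drawn from the commit message, not from the diff itself.

// Illustrative only: a config fragment matching the semantic_search slice
// in the hunk above. "small" is assumed to select the quantized vision
// model and "large" the fp16 one, per the commit message.
interface SemanticSearchConfig {
  enabled: boolean;
  model_size: string;
}

const semanticSearch: SemanticSearchConfig = {
  enabled: true,
  model_size: "small",
};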