Add config option to select fp16 or quantized jina vision model (#14270)

* Add config option to select fp16 or quantized jina vision model

* requires_fp16 for text and large models only

* fix model type check

* fix cpu

* pass model size
Author: Josh Hawkins
Date: 2024-10-10 17:46:21 -05:00
Committed by: GitHub
Parent: dd6276e706
Commit: 54eb03d2a1
7 changed files with 44 additions and 10 deletions
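The commit messages above describe the gist of the change: the configured model size picks between the fp16 and quantized Jina vision models, and fp16 is only required for text models and large models. A minimal sketch of that selection logic follows; the file and function names are hypothetical illustrations, not Frigate's actual code.

```python
# Illustrative sketch only (hypothetical names, not Frigate's actual code):
# a model_size config option choosing between the fp16 and quantized Jina
# vision models, with fp16 required only for text and large models.

def select_vision_model(model_size: str) -> str:
    """Map the configured model size to an ONNX file name (names are assumed)."""
    if model_size == "large":
        return "vision_model_fp16.onnx"  # fp16 variant for capable hardware
    return "vision_model_quantized.onnx"  # quantized variant, also runs on CPU


def requires_fp16(model_type: str, model_size: str) -> bool:
    """Per the commit message, only text models and large models need fp16."""
    return model_type == "text" or model_size == "large"
```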


@@ -520,6 +520,8 @@ semantic_search:
   reindex: False
   # Optional: Set device used to run embeddings, options are AUTO, CPU, GPU. (default: shown below)
   device: "AUTO"
+  # Optional: Set the model size used for embeddings. (default: shown below)
+  model_size: "small"
 
 # Optional: Configuration for AI generated tracked object descriptions
 # NOTE: Semantic Search must be enabled for this to do anything.
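Based on the commit title, "small" (the default) selects the quantized Jina vision model and "large" the fp16 one. An illustrative config along those lines is shown below; the `enabled` key and the small/large-to-quantized/fp16 mapping are assumptions inferred from this commit, not taken from the diff itself.

```yaml
# Illustrative example: opt into the fp16 Jina vision model for embeddings.
semantic_search:
  enabled: True          # assumed; Semantic Search must be enabled
  reindex: False
  device: "AUTO"
  model_size: "large"    # "large" -> fp16 model, "small" -> quantized (default)
```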