
Commit a326932

prabod authored and DevinTDHa committed
add notebook and documentation
1 parent 1524e88 commit a326932

File tree

3 files changed (+1532, -2 lines)


docs/en/transformer_entries/E5VEmbeddings.md

Lines changed: 1 addition & 1 deletion
@@ -5,7 +5,7 @@ E5VEmbeddings
 {%- capture description -%}
 Universal multimodal embeddings using E5-V.
 
-E5-V is a multimodal embedding model that bridges the modality gap between text and images, enabling strong performance in cross-modal retrieval, classification, clustering, and more. It supports both image+text and text-only embedding scenarios, and is fine-tuned from lmms-lab/llama3-llava-next-8b. The default model is `"e5v_1_5_7b_int4"`.
+E5-V is a multimodal embedding model that bridges the modality gap between text and images, enabling strong performance in cross-modal retrieval, classification, clustering, and more. It supports both image+text and text-only embedding scenarios, and is fine-tuned from lmms-lab/llama3-llava-next-8b. The default model is `"e5v_int4"`.
 
 Note that this annotator is only supported for Spark Versions 3.4 and up.
 
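For context, the documentation being updated describes E5VEmbeddings as a Spark NLP annotator with a pretrained default model (now named `"e5v_int4"`). The following is a minimal sketch of how such an annotator is typically wired into a pipeline, assuming it follows Spark NLP's usual `pretrained()`/`Pipeline` conventions; the ImageAssembler pairing and column names are illustrative assumptions, not taken from this commit.

```python
# Minimal usage sketch (not from the commit): assumes E5VEmbeddings follows
# Spark NLP's standard pretrained()/Pipeline API and pairs with ImageAssembler
# for image+text input. Column names below are illustrative.
import sparknlp
from sparknlp.base import ImageAssembler
from sparknlp.annotator import E5VEmbeddings
from pyspark.ml import Pipeline

spark = sparknlp.start()

# Assemble raw images (e.g. loaded via spark.read.format("image")) into
# IMAGE annotations that downstream annotators can consume.
image_assembler = ImageAssembler() \
    .setInputCol("image") \
    .setOutputCol("image_assembler")

# pretrained() with no arguments is assumed to resolve to the default model,
# "e5v_int4", as stated in the updated documentation.
e5v = E5VEmbeddings.pretrained() \
    .setInputCols(["image_assembler"]) \
    .setOutputCol("e5v")

pipeline = Pipeline(stages=[image_assembler, e5v])
```

As noted in the diff, this annotator requires Spark 3.4 or later.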