# community-help
a
Hey, quick question: is it possible to configure a vector search field so that I can import my own embeddings at index time, and then use my own model to perform semantic search at query time? (Not every one of my items is guaranteed to have an embedding field, so I'd like it to be optional.) From my tinkering, it seems like it's one or the other: either I use auto-embed (generating from a field, or importing my own embedding for an item) and get automatic semantic search, or I import my own embeddings and keep the field optional for some items, but then have to perform the query-time vector generation myself.
j
If you use vLLM, you can use your own model at query time (and at indexing time as well). Separately, when using any model with auto-embedding, if you import pre-generated embeddings into the document, we won't regenerate them.
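A minimal sketch of the auto-embedding route with the Python client (the collection name, field names, and built-in model here are illustrative, and the exact `model_config` keys for a vLLM-served model depend on your Typesense version, so check the docs):

```python
import typesense

client = typesense.Client({
    "api_key": "xyz",
    "nodes": [{"host": "localhost", "port": "8108", "protocol": "http"}],
})

# Auto-embedding field: Typesense generates the vector from `title` at index
# time and embeds the query string with the same model at search time.
client.collections.create({
    "name": "items",
    "fields": [
        {"name": "title", "type": "string"},
        {
            "name": "embedding",
            "type": "float[]",
            "embed": {
                "from": ["title"],
                # Built-in model shown here; for a self-hosted model served by
                # vLLM you would point model_config at that endpoint instead.
                "model_config": {"model_name": "ts/all-MiniLM-L12-v2"},
            },
        },
    ],
})

# Semantic search then needs no client-side vector work.
results = client.collections["items"].documents.search({
    "q": "office seating",
    "query_by": "embedding",
})
```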
a
Thanks for the quick answer! I might have to go with that first option you mentioned; I was trying to avoid the extra infra work 😂 The only issue I face with the second option is that I index my items into the db in realtime, then run my compute-intensive embedding pipeline separately, so I can end up in a state where an item is indexed for general search but has no embedding yet. In that case I'd just like it to be excluded from the vector search index until I provide the computed embedding.
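A sketch of that second route, assuming a 384-dimension model and illustrative names: with a plain `float[]` field marked `optional` and no `embed` config, a document indexed without the field should simply be absent from the vector index until its vector is upserted, and the query vector has to come from your own model:

```python
import typesense

client = typesense.Client({
    "api_key": "xyz",
    "nodes": [{"host": "localhost", "port": "8108", "protocol": "http"}],
})

# Plain vector field, no auto-embedding. `optional: True` lets a document be
# indexed for keyword search before its embedding exists; it just won't show
# up in vector searches until the vector is added.
client.collections.create({
    "name": "items_manual",
    "fields": [
        {"name": "title", "type": "string"},
        {"name": "embedding", "type": "float[]", "num_dim": 384, "optional": True},
    ],
})

# Index immediately; the embedding comes later from the offline pipeline.
client.collections["items_manual"].documents.create(
    {"id": "42", "title": "ergonomic office chair"}
)

# ...later, once the pipeline has produced the vector:
client.collections["items_manual"].documents["42"].update(
    {"embedding": [0.0] * 384}  # stand-in for the real 384-dim vector
)

# Query-time vector generation is on you: embed the query text with your own
# model, then hand Typesense the raw vector via multi_search.
query_vec = [0.0] * 384  # stand-in for your model's output
vec_str = "[" + ",".join(str(x) for x in query_vec) + "]"
results = client.multi_search.perform(
    {"searches": [{
        "collection": "items_manual",
        "q": "*",
        "vector_query": f"embedding:({vec_str}, k:10)",
    }]},
    {},
)
```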
j
Ah yeah, you'd have to insert the document with the embedding, otherwise auto-embedding will kick in.
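Concretely, with the auto-embedding schema from the first sketch above, that means every import has to carry the vector up front, roughly like this (names and vectors are placeholders):

```python
import typesense

client = typesense.Client({
    "api_key": "xyz",
    "nodes": [{"host": "localhost", "port": "8108", "protocol": "http"}],
})

# Each document already carries an `embedding`, so the configured
# auto-embedding model is never invoked for these imports; a document
# imported *without* the field would get auto-embedded instead.
docs = [
    {"id": "1", "title": "standing desk", "embedding": [0.0] * 384},
    {"id": "2", "title": "ergonomic office chair", "embedding": [0.0] * 384},
]
client.collections["items"].documents.import_(docs, {"action": "upsert"})
```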