# community-help
I haven't deployed the search yet, as I wanted to load test it first, so I don't really have a baseline of searches/second. But I did some experimenting and found some interesting results (and perhaps a bug?):

- With just vector search, latency was ~20 ms at 20 QPS.
- With keyword search, latency was 2–3 ms (or thereabouts).
- With hybrid search, latency jumped to 500+ ms.

I don't understand what caused this. It's obviously a huge difference, one that can't be explained by just the added computation of comparing ranking scores. Now, I have specified the Typesense schema to have an embed field, i.e. auto-embedding is enabled, but my understanding was that if I provide a `vector_query`, Typesense doesn't do the auto-embedding. This experiment suggests that it does do the auto-embedding and ignores the `vector_query` IF you do hybrid search. I could confirm this by adding `remote_embedding_timeout_ms=100`, which was indeed triggered, meaning Typesense attempted the auto-embedding itself and ignored the provided vector.

I then tried changing the schema so that the embedding field didn't specify a `model_config`, so that it wouldn't attempt auto-embedding. When I tried hybrid search with that schema, I got this error message:
```
{'code': 400, 'error': 'Vector field `embedding` is not an auto-embedding field, do not use `query_by` with it, use `vector_query` instead.'}
```
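For reference, the two schema variants I tried look roughly like this (collection, field, and model names are placeholders for my actual setup; the second variant is the one that produced the error above):

```python
# Sketch of the two schema variants (field/model names are mine, not real).
# Variant 1: "embed" block present, so Typesense auto-embeds from "title".
schema_with_auto_embed = {
    "name": "docs",
    "fields": [
        {"name": "title", "type": "string"},
        {
            "name": "embedding",
            "type": "float[]",
            "embed": {
                "from": ["title"],
                "model_config": {"model_name": "openai/text-embedding-ada-002"},
            },
        },
    ],
}

# Variant 2: no "embed" block, just a plain vector field with a fixed
# dimensionality -- this is the schema that triggered the 400 error on
# hybrid search.
schema_plain_vector = {
    "name": "docs",
    "fields": [
        {"name": "title", "type": "string"},
        {"name": "embedding", "type": "float[]", "num_dim": 1536},
    ],
}
```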
This error seems to contradict the part of the docs that says you can do hybrid search with your own provided vector embedding plus a keyword query. Any ideas? Is it actually true that you can't do hybrid search with your own vector embeddings?
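And for completeness, the hybrid search request I'm making looks roughly like this (a parameter-dict sketch; the query text and the vector values are placeholders, and I pass the dict to the Python client's `documents.search`):

```python
# Sketch of my hybrid search parameters. Listing both a keyword field
# and the vector field in "query_by" is what makes the search hybrid.
hybrid_search_params = {
    "q": "example keyword query",
    "query_by": "title,embedding",
    # Pre-computed query vector (truncated placeholder values) -- my
    # understanding was that providing this should skip auto-embedding.
    "vector_query": "embedding:([0.04, -0.12, 0.33], k:100)",
    # This timeout fired during hybrid search, which is how I know
    # Typesense attempted remote auto-embedding anyway.
    "remote_embedding_timeout_ms": 100,
}
# e.g. client.collections["docs"].documents.search(hybrid_search_params)
```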