# community-help
l
Hi, I have a small collection with ~630 documents in it. I have an embedding field filled by Typesense using `openai/text-embedding-3-large`. Performing a keyword search returns results in less than 100 ms. A semantic search on the embedding field takes more than a second and a half. What can I do to speed it up? I'm on a Typesense Cloud instance with 0.5 GB RAM and 2 vCPUs.
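(For context, an auto-embedding field backed by a remote OpenAI model is declared in the collection schema along these lines. This is a minimal sketch, not the poster's actual schema; the `title` source field is an assumption:)

```json
{
  "name": "courses",
  "fields": [
    { "name": "title", "type": "string" },
    {
      "name": "embedding",
      "type": "float[]",
      "embed": {
        "from": ["title"],
        "model_config": {
          "model_name": "openai/text-embedding-3-large",
          "api_key": "..."
        }
      }
    }
  ]
}
```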
k
Are you using a very large `k` value for the search? With 630 documents, the vector search should be < 10 ms.
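(For context, `k` is the number of nearest neighbors requested from the vector index, set inside the `vector_query` search parameter; the empty `[]` tells Typesense to embed the `q` text itself. A minimal sketch, assuming the same `courses` collection:)

```json
{
  "searches": [
    {
      "q": "some question...?",
      "collection": "courses",
      "query_by": "embedding",
      "vector_query": "embedding:([], k: 10)"
    }
  ]
}
```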
l
this is the query I'm using:
```json
{
  "searches": [
    {
      "q": "some question...?",
      "collection": "courses",
      "query_by": "embedding",
      "exclude_fields": "embedding,rendered_item",
      "prefix": false
    }
  ]
}
```
k
It's because of the OpenAI API call: the query text first has to be sent to OpenAI to be embedded into a vector before the nearest-neighbor search can run, and that network round trip dominates the response time.
If the latency is an issue, you can use a local model.
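(Switching to one of Typesense's built-in local models is a small change to `model_config` in the schema, so queries are embedded on the instance itself with no external API call. A minimal sketch; the `title` source field is an assumption:)

```json
{
  "name": "courses",
  "fields": [
    { "name": "title", "type": "string" },
    {
      "name": "embedding",
      "type": "float[]",
      "embed": {
        "from": ["title"],
        "model_config": { "model_name": "ts/all-MiniLM-L12-v2" }
      }
    }
  ]
}
```

(Note that the two models produce embeddings of different dimensions, so the collection has to be re-indexed after switching.)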
l
that's what I suspected, thanks for the confirmation