Thanks Kishore for fixing that bug and creating a new release.
I've noticed that the response time of search requests can be anywhere from 1-4s when using the OpenAI embeddings. Has that been your experience as well?
Jason Bosco
06/28/2023, 4:50 PM
Unfortunately that’s what we’ve seen with a few other users as well. Sometimes even more - OpenAI API response times seem to vary widely
Walter Cavinaw
06/28/2023, 4:53 PM
Ah ok, thanks. Do you know how Vertex compares?
Jason Bosco
06/28/2023, 6:06 PM
Haven’t heard any feedback about Vertex
Manish Rai Jain
06/28/2023, 7:05 PM
I switched from OpenAI embeddings to the E5 model that's built into the Typesense RC. Works much better, latency is really good. See https://threads.typesense.org/kb
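(For context, a minimal sketch of what that switch looks like in a collection schema, assuming the auto-embedding API in the Typesense RC; collection and field names here are illustrative, not from this thread.)

```python
# Sketch: defining an auto-embedding field backed by the built-in E5 model
# instead of the OpenAI API. Names ("docs", "title", "embedding") are illustrative.
import typesense

client = typesense.Client({
    "nodes": [{"host": "localhost", "port": "8108", "protocol": "http"}],
    "api_key": "xyz",
    "connection_timeout_seconds": 5,
})

schema = {
    "name": "docs",
    "fields": [
        {"name": "title", "type": "string"},
        {
            "name": "embedding",
            "type": "float[]",
            "embed": {
                "from": ["title"],
                # Built-in model: embeddings are generated inside Typesense,
                # so search latency doesn't depend on an external API round trip.
                "model_config": {"model_name": "ts/e5-small"},
                # OpenAI alternative (what the thread moved away from):
                # "model_config": {
                #     "model_name": "openai/text-embedding-ada-002",
                #     "api_key": "<OPENAI_API_KEY>",
                # },
            },
        },
    ],
}

client.collections.create(schema)
```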
Walter Cavinaw
06/28/2023, 7:22 PM
Thanks for your comment, Manish. That's really helpful.