#community-help

Discussion on OpenAI Embeddings Response Time and Alternatives

TLDR Walter notes that search response times vary widely when using OpenAI embeddings. Jason confirms the issue, and Manish suggests the built-in E5 model in the Typesense RC as a faster alternative.


Solved
Jun 28, 2023
Walter
04:42 PM
Thanks Kishore for fixing that bug and creating a new release.

I've noticed that the response time of search requests can be anywhere from 1-4s when using the OpenAI embeddings. Has that been your experience as well?
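
For context, here's a rough sketch of the kind of setup under discussion, using the Typesense Python client and the auto-embedding schema introduced around the 0.25 RC; the collection and field names are made up. Because the query text also has to be embedded through a remote OpenAI API call at search time, that round trip sits on the critical path of every search request, which is where the 1-4s variance shows up.

```python
import typesense

# Hypothetical client pointing at a local Typesense node.
client = typesense.Client({
    "nodes": [{"host": "localhost", "port": "8108", "protocol": "http"}],
    "api_key": "xyz",
    "connection_timeout_seconds": 10,
})

# Collection with an auto-embedding field backed by OpenAI
# (collection and field names are illustrative).
schema = {
    "name": "articles",
    "fields": [
        {"name": "title", "type": "string"},
        {
            "name": "embedding",
            "type": "float[]",
            "embed": {
                "from": ["title"],
                "model_config": {
                    "model_name": "openai/text-embedding-ada-002",
                    "api_key": "OPENAI_API_KEY",  # placeholder
                },
            },
        },
    ],
}
client.collections.create(schema)

# At search time the query string is embedded via a remote OpenAI call
# before the vector lookup, so OpenAI's latency adds directly to the
# search response time.
results = client.collections["articles"].documents.search(
    {"q": "slow search requests", "query_by": "embedding"}
)
```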
Jason
04:50 PM
Unfortunately that’s what we’ve seen with a few other users as well, sometimes even more. OpenAI API response times seem to vary widely.
Walter
04:53 PM
Ah ok, thanks. Do you know how Vertex compares?
Jason
06:06 PM
Haven’t heard any feedback about Vertex
Manish
07:05 PM
I switched from OpenAI embeddings to the E5 model that’s built into the Typesense RC. It works much better, and latency is really good. See https://threads.typesense.org/kb
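
A minimal sketch of the switch Manish describes, reusing the same hypothetical schema as above and assuming the `ts/e5-small` model name shipped with the 0.25 RC builds; only the `model_config` changes. Since the E5 model runs inside the Typesense process, documents and queries are embedded locally with no remote API call.

```python
# Same hypothetical collection, but pointing the embed config at the
# built-in E5 model instead of OpenAI. Embedding happens locally inside
# Typesense, so no remote API call is made at index or search time.
schema = {
    "name": "articles",
    "fields": [
        {"name": "title", "type": "string"},
        {
            "name": "embedding",
            "type": "float[]",
            "embed": {
                "from": ["title"],
                "model_config": {"model_name": "ts/e5-small"},  # built-in model
            },
        },
    ],
}
```

The trade-off is that embedding work now runs on the Typesense node itself, consuming its CPU (or GPU, when available) instead of calling out to a hosted API.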
Walter
07:22 PM
Thanks for your comment, Manish, that's really helpful.
