#community-help

Typesense Performance in Cloud Run

TLDR Bill asked about the performance of Typesense in Cloud Run. CaptainCodeman and Kishore Nallan expressed doubts about this setup, while Thomas proposed using flexsearch and AWS Lambdas as a suitable alternative.

Feb 26, 2022 (20 months ago)
Bill
02:32 PM
Hello, I have already deployed a Typesense cluster and it works perfectly (3 droplets, load balancer, etc.), but another idea just came to mind. How would Typesense perform in Cloud Run? For example, if I have a fixed number of documents, ~3k-5k (requiring at most 50MB of RAM), and deploy it on a Cloud Run instance with 512MB of RAM and 2-4 CPUs, would that be suitable to serve unlimited clients without the need for clustering, etc.? Would the data be corrupted across multiple instances? If I set only 1 read (warm) instance, would the data be shared with other respawned instances if required?
Kishore Nallan
02:43 PM
I'm not familiar with cloud run but where would the data be stored persistently?
Bill
03:21 PM
Yes, that's an issue. As I read in their docs, each instance is destroyed after request execution.
Masahiro
03:21 PM
Cloud Run is stateless. Maybe you can use GAE.

CaptainCodeman
04:26 PM
AFAIK Cloud Run instances can't communicate with each other and you can't really control their lifetimes, so IMO it's unsuitable for Typesense
CaptainCodeman
04:28 PM
Container-optimized compute instances seem like the easiest solution to use, and they also give the most CPU + memory bang for the buck
Feb 28, 2022 (20 months ago)
Thomas
07:10 AM
If you want to do FTS and filtering on that small a number of products with unlimited scaling, I'd suggest using https://github.com/nextapps-de/flexsearch with AWS Lambdas; they allow 500MB of tmp space, and the instances stay alive as long as they keep receiving requests. I've tried up to 10K products and had latencies below 300ms
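A minimal sketch of what that flexsearch-plus-Lambda setup might look like in TypeScript. The Product shape, the loadProducts() helper, and the handler wiring are illustrative assumptions, not something described in the thread:

```typescript
// Hypothetical Lambda handler: build a flexsearch Document index once per
// container (at module scope) and serve searches from it on each invocation.
import FlexSearch from "flexsearch";
import type { APIGatewayProxyHandler } from "aws-lambda";

type Product = { id: string; name: string; description: string };

// Module-level state survives between invocations while the instance is warm,
// so the index is only built on a cold start.
const index = new FlexSearch.Document({
  document: { id: "id", index: ["name", "description"] },
  tokenize: "forward",
});
let loaded = false;

// Placeholder for fetching the ~3-5k products (bundled JSON, S3, a DB, ...).
async function loadProducts(): Promise<Product[]> {
  return [];
}

export const handler: APIGatewayProxyHandler = async (event) => {
  if (!loaded) {
    for (const p of await loadProducts()) index.add(p);
    loaded = true;
  }
  const q = event.queryStringParameters?.q ?? "";
  // Document indexes return matches grouped per indexed field.
  const results = index.search(q, { limit: 20 });
  return { statusCode: 200, body: JSON.stringify(results) };
};
```

Because the index lives at module scope, it is built only on a cold start and then reused for every request the warm instance serves.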
Thomas
07:11 AM
The actual index can be stored in S3, which gives read speeds of about 700MB/s when accessed from Lambdas
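One way to read "the index can be stored in S3": keep the document dump (or an exported flexsearch index) in a bucket and pull it on cold start. The bucket name, object key, and document shape below are made up for illustration; flexsearch's export()/import() methods could be used instead of rebuilding from raw JSON.

```typescript
// Hypothetical cold-start loader: pull the product dump from S3 and rebuild
// the flexsearch index in memory. Bucket, key, and document shape are made up.
import FlexSearch from "flexsearch";
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";

type Product = { id: string; name: string; description: string };

const s3 = new S3Client({});

export async function buildIndexFromS3() {
  const res = await s3.send(
    new GetObjectCommand({ Bucket: "my-search-data", Key: "products.json" })
  );
  // transformToString() is available on the SDK v3 response stream in Node.
  const products: Product[] = JSON.parse(await res.Body!.transformToString());

  const index = new FlexSearch.Document({
    document: { id: "id", index: ["name", "description"] },
  });
  for (const p of products) index.add(p);
  return index;
}
```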
Thomas
07:14 AM
3M searches would cost about $5/mo
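As a rough sanity check on that figure (assuming 512MB of memory, ~100ms per search, and public us-east-1 Lambda list prices, which vary by region and over time): 3M requests × $0.20 per 1M ≈ $0.60, plus 3M × 0.1s × 0.5GB = 150,000 GB-seconds × ~$0.0000167 ≈ $2.50 of compute, i.e. a few dollars a month before any API Gateway or data transfer charges, which is in the same ballpark as Thomas's estimate.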