# community-help
m
Hi everyone, I have a cluster with 32 GB of RAM available. I noticed that we get a lot of timeouts when using the API to upsert/update documents in our collections. Since the cluster has 32 GB of memory, is this split into System Memory Used and Typesense Memory? After summing both graphs, we get pretty close to 32 GB of RAM used, yet we have not received any warning. Earlier, when CPU usage went above 70%, we did receive a warning, and we optimized for that. Could memory pressure be causing these spikes, is it a networking issue, or is there an internal process that takes up too much time per collection?
j
we get a lot of timeouts when using the API to upsert/update documents in our collections
From what I've seen, these timeouts usually happen when the client-side library's timeout is set too low, so it fires before a large batch of imports is fully indexed. I also don't see any capacity issues on your prod cluster, so the client-side timeout is most likely the culprit. You'd want to increase it when instantiating the library.
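For example, with the official Python client the timeout is set in the client configuration. This is a minimal sketch; the host and API key are placeholders, and the right timeout value depends on how long your batches take to index:

```python
import typesense

client = typesense.Client({
    'nodes': [{
        'host': 'xxx.a1.typesense.net',  # placeholder: your cluster hostname
        'port': 443,
        'protocol': 'https',
    }],
    'api_key': 'YOUR_ADMIN_API_KEY',  # placeholder
    # Bulk imports of large batches can take much longer than the
    # default of a few seconds; raise this well above your expected
    # indexing time so the client doesn't give up mid-import.
    'connection_timeout_seconds': 300,
})
```

The equivalent option exists in the other client libraries (e.g. `connectionTimeoutSeconds` in the JavaScript client).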
is this split into System Memory Used and Typesense Memory
No. System Memory Used is the physical RAM in use on the machine; Typesense Memory is the sum of physical RAM and swap used by the Typesense process.
👍 1
m
Hi Jason, thanks for the reply. We are also receiving timeouts on API key creation. We want to create scoped API keys programmatically. Is there a limit on the number of scoped API keys?
j
Hmmm, scoped API keys are generated cryptographically on the client side and are not stored on the server side, so there is no limit to how many keys you can create
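To illustrate why there's no server-side limit: a scoped key is just an HMAC-SHA256 signature over the embedded search parameters, computed locally. Below is a stdlib-only sketch of that scheme; the helper name `generate_scoped_search_key` mirrors the client libraries, but this standalone version is an illustration, not the library code itself:

```python
import base64
import hashlib
import hmac
import json

def generate_scoped_search_key(search_key: str, params: dict) -> str:
    """Derive a scoped search key entirely on the client side.

    The embedded params (e.g. filter_by, expires_at) are signed with
    HMAC-SHA256 using the parent search-only key; nothing is sent to
    or stored on the Typesense server.
    """
    params_json = json.dumps(params)
    # Base64-encoded HMAC-SHA256 digest of the params, keyed by the parent key.
    digest = base64.b64encode(
        hmac.new(search_key.encode(), params_json.encode(), hashlib.sha256).digest()
    ).decode()
    # Scoped key = base64(digest + first 4 chars of parent key + params JSON).
    return base64.b64encode(f"{digest}{search_key[:4]}{params_json}".encode()).decode()
```

Since it's pure local computation, you can generate as many scoped keys as you like without any API call, which also means key creation itself can't be the source of your API timeouts.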
Could you share a full code snippet that's causing a timeout?