What could be the reason to receive read timeouts ...
# community-help
s
What could be the reason to receive read timeouts on my cluster? Locally I can still reach it, and search works too, but from my server I get a read timeout. I piped in a lot of data; might I have run into some kind of AWS ban?
k
Are you saying that the same call that succeeds on your local machine fails when run from your server?
s
yes
requests.exceptions.ReadTimeout: HTTPSConnectionPool(host='6guhtikya0cf785np.a1.typesense.net', port=443): Read timed out. (read timeout=10)
k
Does a simple health check work? Hitting the /health endpoint.
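For example, something along these lines (just a sketch; the hostname is taken from the error above, and /health should not need an API key):
```python
import requests

# Plain health check against the cluster endpoint from the ReadTimeout error above.
# The /health endpoint does not require an API key.
resp = requests.get("https://6guhtikya0cf785np.a1.typesense.net/health", timeout=10)
print(resp.status_code, resp.text)  # a healthy node returns 200 and {"ok": true}
```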
s
like that?
k
Yes, let me also take a look at metrics/logs on our end.
s
cloud dashboard says all good
does not succeed, neither locally nor on the server
okay, I had put http instead of https 😄
locally I get "ok": true, from the server a timeout
k
I think the 6guhtikya0cf785np-3 node is not quite alright. The other two are fine. Taking a look.
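If you want to check the nodes yourself, something like this (rough sketch; the per-node hostnames are assumed to follow the usual -1/-2/-3 pattern for your cluster) will show which one times out:
```python
import requests

# Probe each node's /health endpoint individually to spot the unhealthy one.
# Node hostnames are assumed from the cluster name; adjust if yours differ.
for n in (1, 2, 3):
    url = f"https://6guhtikya0cf785np-{n}.a1.typesense.net/health"
    try:
        r = requests.get(url, timeout=5)
        print(url, r.status_code, r.text)
    except requests.exceptions.RequestException as e:
        print(url, "failed:", e)
```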
When you use the universal endpoint (as opposed to the individual "-x" node endpoints), it will try to hit the node that's closest to you.
Which might explain why it works locally but not from AWS.
Yup, that node is unhealthy. Thanks to HA your other two nodes will absorb the searches if the client is configured correctly. It will be replaced automatically if it does not recover soon.
s
The universal endpoint alone, that's not enough for HA?
k
No, you have to give the full list of nodes. See the example here: https://typesense.org/docs/0.20.0/api/authentication.html#search-delivery-network
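With the Python client it would look roughly like this (a sketch based on that docs page; the API key is a placeholder, the node hostnames assume the -1/-2/-3 pattern, and nearest_node needs a client version that supports it):
```python
import typesense

client = typesense.Client({
    "api_key": "YOUR_SEARCH_ONLY_API_KEY",  # placeholder
    # Optional: the universal endpoint as the preferred "nearest" node.
    "nearest_node": {
        "host": "6guhtikya0cf785np.a1.typesense.net",
        "port": "443",
        "protocol": "https",
    },
    # Full node list, so the client can fail over when one node is unhealthy.
    "nodes": [
        {"host": f"6guhtikya0cf785np-{n}.a1.typesense.net",
         "port": "443",
         "protocol": "https"}
        for n in (1, 2, 3)
    ],
    "connection_timeout_seconds": 2,
})
```
With the full node list the client can retry against the other nodes when one of them times out, instead of failing the request outright.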
s
ah damn it
okay thank you
Maybe add a note about that in the cloud dashboard 😄
k
That's because for the Search Delivery Network we can't use a load balancer, since a load balancer can't itself be geo-distributed 🙂
@Stefan Hesse Cluster has recovered now. https://6guhtikya0cf785np-3.a1.typesense.net/health is fine.
You might want to keep an eye on the free memory. Once you get within 50-75 MB on a small instance like this, anything can happen. I don't know if the previous node became unhealthy because of such an issue.
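One way to keep an eye on it (sketch; needs your admin API key, placeholder below, and the field names are worth double-checking against the actual response) is the /metrics.json endpoint:
```python
import requests

# Fetch cluster metrics, which include system memory usage figures.
resp = requests.get(
    "https://6guhtikya0cf785np.a1.typesense.net/metrics.json",
    headers={"X-TYPESENSE-API-KEY": "YOUR_ADMIN_API_KEY"},  # placeholder key
    timeout=10,
)
metrics = resp.json()
print(metrics.get("system_memory_total_bytes"),
      metrics.get("system_memory_used_bytes"))
```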
s
great, thank you! Also upgraded my config
Could be the case; it seems to hold more memory now than it used to with the same amount of data, but I should probably upgrade to 1 GB anyhow. Can you do that, or should I create a new cluster?
k
As of now, create a new cluster. We're working on a round of much-needed self-serve improvements to the cloud UI.