#community-help

Troubleshooting Typesense 503 Errors and Usage Queries

TLDR: Kevin encountered 503s from Typesense's /health endpoint. Jason asked for logs, explained why the 503s occur, and recommended raising the healthy read/write lag thresholds; he also resolved Kevin's confusion about the return_id import parameter. Kevin was asked to open a GitHub issue so the Python client accepts booleans for return_id.


Solved
Jul 13, 2023 (2 months ago)
Kevin
08:35 PM
hello- I was using typesense without any issues just a couple hours ago. we have it deployed on digitalocean and i was able to successfully insert roughly 6m documents into my collection. all of a sudden, i started getting 503s when i hit the /health endpoint. we have not made any updates to digitalocean within that timespan. i am wondering if this is related to rebuilding the index or some other expected issue?
Jason
08:38 PM
Could you post say the last 100 lines from the logs?
08:39
Kevin
08:43 PM
ueue_size: 0, local_sequence: 24476859
I20230713 20:40:18.999783 26938 raft_server.h:60] Peer refresh succeeded!
E20230713 20:40:24.000121 26902 raft_server.cpp:624] 4800 queued writes > healthy read lag of 1000
E20230713 20:40:24.000198 26902 raft_server.cpp:636] 4800 queued writes > healthy write lag of 500
I20230713 20:40:29.000618 26902 raft_server.cpp:546] Term: 2, last_index index: 34920, committed_index: 34920, known_applied_index: 34920, applying_index: 0, queued_writes: 4789, pending_queue_size: 0, local_sequence: 24476859
I20230713 20:40:29.000923 26928 raft_server.h:60] Peer refresh succeeded!
E20230713 20:40:33.001021 26902 raft_server.cpp:624] 4781 queued writes > healthy read lag of 1000
E20230713 20:40:33.001092 26902 raft_server.cpp:636] 4781 queued writes > healthy write lag of 500
I20230713 20:40:39.001595 26902 raft_server.cpp:546] Term: 2, last_index index: 34920, committed_index: 34920, known_applied_index: 34920, applying_index: 0, queued_writes: 4768, pending_queue_size: 0, local_sequence: 24476859
I20230713 20:40:39.001729 26938 raft_server.h:60] Peer refresh succeeded!
E20230713 20:40:42.001917 26902 raft_server.cpp:624] 4761 queued writes > healthy read lag of 1000
E20230713 20:40:42.001986 26902 raft_server.cpp:636] 4761 queued writes > healthy write lag of 500
I20230713 20:40:42.239393 26903 batched_indexer.cpp:284] Running GC for aborted requests, req map size: 1
I20230713 20:40:49.002573 26902 raft_server.cpp:546] Term: 2, last_index index: 34920, committed_index: 34920, known_applied_index: 34920, applying_index: 0, queued_writes: 4747, pending_queue_size: 0, local_sequence: 24476859
I20230713 20:40:49.002897 26928 raft_server.h:60] Peer refresh succeeded!
E20230713 20:40:51.002811 26902 raft_server.cpp:624] 4743 queued writes > healthy read lag of 1000
E20230713 20:40:51.002892 26902 raft_server.cpp:636] 4743 queued writes > healthy write lag of 500
I20230713 20:40:59.003530 26902 raft_server.cpp:546] Term: 2, last_index index: 34920, committed_index: 34920, known_applied_index: 34920, applying_index: 0, queued_writes: 4726, pending_queue_size: 0, local_sequence: 24476859
I20230713 20:40:59.003659 26938 raft_server.h:60] Peer refresh succeeded!
E20230713 20:41:00.003693 26902 raft_server.cpp:624] 4724 queued writes > healthy read lag of 1000
E20230713 20:41:00.003772 26902 raft_server.cpp:636] 4724 queued writes > healthy write lag of 500
I20230713 20:41:09.004508 26902 raft_server.cpp:546] Term: 2, last_index index: 34920, committed_index: 34920, known_applied_index: 34920, applying_index: 0, queued_writes: 4705, pending_queue_size: 0, local_sequence: 24476859
E20230713 20:41:09.004582 26902 raft_server.cpp:624] 4705 queued writes > healthy read lag of 1000
E20230713 20:41:09.004607 26902 raft_server.cpp:636] 4705 queued writes > healthy write lag of 500
I20230713 20:41:09.004624 26928 raft_server.h:60] Peer refresh succeeded!
E20230713 20:41:18.005333 26902 raft_server.cpp:624] 4687 queued writes > healthy read lag of 1000
E20230713 20:41:18.005399 26902 raft_server.cpp:636] 4687 queued writes > healthy write lag of 500
I20230713 20:41:19.005517 26902 raft_server.cpp:546] Term: 2, last_index index: 34920, committed_index: 34920, known_applied_index: 34920, applying_index: 0, queued_writes: 4685, pending_queue_size: 0, local_sequence: 24476859
I20230713 20:41:19.005776 26938 raft_server.h:60] Peer refresh succeeded!
E20230713 20:41:27.006239 26902 raft_server.cpp:624] 4668 queued writes > healthy read lag of 1000
E20230713 20:41:27.006325 26902 raft_server.cpp:636] 4668 queued writes > healthy write lag of 500
I20230713 20:41:29.006515 26902 raft_server.cpp:546] Term: 2, last_index index: 34920, committed_index: 34920, known_applied_index: 34920, applying_index: 0, queued_writes: 4664, pending_queue_size: 0, local_sequence: 24476859
I20230713 20:41:29.006635 26928 raft_server.h:60] Peer refresh succeeded!
E20230713 20:41:36.007102 26902 raft_server.cpp:624] 4650 queued writes > healthy read lag of 1000
E20230713 20:41:36.007184 26902 raft_server.cpp:636] 4650 queued writes > healthy write lag of 500
I20230713 20:41:39.007444 26902 raft_server.cpp:546] Term: 2, last_index index: 34920, committed_index: 34920, known_applied_index: 34920, applying_index: 0, queued_writes: 4644, pending_queue_size: 0, local_sequence: 24476859
I20230713 20:41:39.007699 26938 raft_server.h:60] Peer refresh succeeded!
I20230713 20:41:43.246860 26903 batched_indexer.cpp:284] Running GC for aborted requests, req map size: 1
E20230713 20:41:45.007987 26902 raft_server.cpp:624] 4632 queued writes > healthy read lag of 1000
E20230713 20:41:45.008046 26902 raft_server.cpp:636] 4632 queued writes > healthy write lag of 500
I20230713 20:41:49.008385 26902 raft_server.cpp:546] Term: 2, last_index index: 34920, committed_index: 34920, known_applied_index: 34920, applying_index: 0, queued_writes: 4624, pending_queue_size: 0, local_sequence: 24476859
I20230713 20:41:49.008605 26928 raft_server.h:60] Peer refresh succeeded!
E20230713 20:41:54.008860 26902 raft_server.cpp:624] 4613 queued writes > healthy read lag of 1000
E20230713 20:41:54.008942 26902 raft_server.cpp:636] 4613 queued writes > healthy write lag of 500
I20230713 20:41:59.009361 26902 raft_server.cpp:546] Term: 2, last_index index: 34920, committed_index: 34920, known_applied_index: 34920, applying_index: 0, queued_writes: 4603, pending_queue_size: 0, local_sequence: 24476859
I20230713 20:41:59.009531 26938 raft_server.h:60] Peer refresh succeeded!
E20230713 20:42:03.009764 26902 raft_server.cpp:624] 4595 queued writes > healthy read lag of 1000
E20230713 20:42:03.009831 26902 raft_server.cpp:636] 4595 queued writes > healthy write lag of 500
I20230713 20:42:09.010331 26902 raft_server.cpp:546] Term: 2, last_index index: 34920, committed_index: 34920, known_applied_index: 34920, applying_index: 0, queued_writes: 4583, pending_queue_size: 0, local_sequence: 24476859
I20230713 20:42:09.010445 26928 raft_server.h:60] Peer refresh succeeded!
E20230713 20:42:12.010648 26902 raft_server.cpp:624] 4577 queued writes > healthy read lag of 1000
E20230713 20:42:12.010730 26902 raft_server.cpp:636] 4577 queued writes > healthy write lag of 500
I20230713 20:42:19.011272 26902 raft_server.cpp:546] Term: 2, last_index index: 34920, committed_index: 34920, known_applied_index: 34920, applying_index: 0, queued_writes: 4563, pending_queue_size: 0, local_sequence: 24476859
I20230713 20:42:19.011497 26938 raft_server.h:60] Peer refresh succeeded!
E20230713 20:42:21.011502 26902 raft_server.cpp:624] 4559 queued writes > healthy read lag of 1000
E20230713 20:42:21.011561 26902 raft_server.cpp:636] 4559 queued writes > healthy write lag of 500
I20230713 20:42:29.012213 26902 raft_server.cpp:546] Term: 2, last_index index: 34920, committed_index: 34920, known_applied_index: 34920, applying_index: 0, queued_writes: 4543, pending_queue_size: 0, local_sequence: 24476859
I20230713 20:42:29.012432 26928 raft_server.h:60] Peer refresh succeeded!
E20230713 20:42:30.012383 26902 raft_server.cpp:624] 4541 queued writes > healthy read lag of 1000
E20230713 20:42:30.012452 26902 raft_server.cpp:636] 4541 queued writes > healthy write lag of 500
I20230713 20:42:39.013185 26902 raft_server.cpp:546] Term: 2, last_index index: 34920, committed_index: 34920, known_applied_index: 34920, applying_index: 0, queued_writes: 4524, pending_queue_size: 0, local_sequence: 24476859
E20230713 20:42:39.013257 26902 raft_server.cpp:624] 4524 queued writes > healthy read lag of 1000
E20230713 20:42:39.013283 26902 raft_server.cpp:636] 4524 queued writes > healthy write lag of 500
I20230713 20:42:39.013476 26938 raft_server.h:60] Peer refresh succeeded!
I20230713 20:42:44.254422 26903 batched_indexer.cpp:284] Running GC for aborted requests, req map size: 1
08:43
Kevin
08:43 PM
ah i didn't know that- i assume the 6 million documents are still being written and i can't do anything until that's finished?
Jason
08:44 PM
You can change the healthy-read-lag and healthy-write-lag server params to high values and restart the server
08:45
Jason
08:45 PM
May I know how many CPU cores you have and if you have sufficient RAM to hold the entire dataset in RAM?
08:45
Jason
08:45 PM
Also, you want to use the import endpoint to index a large number of documents instead of the single document endpoint
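For context, a rough sketch of the difference being described, using the Python SDK (collection name, file name, and connection details below are placeholders, not Kevin's actual setup):

import typesense

ts = typesense.Client({
    'nodes': [{'host': 'localhost', 'port': '8108', 'protocol': 'http'}],
    'api_key': 'xyz',
})

# One HTTP request per document -- slow for millions of documents:
# for doc in docs:
#     ts.collections['entities'].documents.create(doc)

# A single bulk request to the import endpoint instead:
with open('entities.jsonl', 'rb') as f:
    ts.collections['entities'].documents.import_(f.read(), {'action': 'upsert'})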
David
08:47 PM
8 vCPUs
64GB / 1200GB Disk
08:48
David
08:48 PM
The dataset is 3 gb, so I don’t think it’s a RAM limitation

08:48
David
08:48 PM
What would you recommend values wise for the read and write lag params?
Jason
08:49 PM
You could try say 15K for write lag and 10K for read lag
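For reference, one way those values might be applied, assuming the server is started directly via the typesense-server binary (the same keys can also go in the server's config file; the data-dir and api-key values are placeholders):

typesense-server --data-dir=/var/lib/typesense --api-key=xyz \
  --healthy-write-lag=15000 --healthy-read-lag=10000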

Kevin
09:04 PM
great, we will try that. one other thing i wanted to ask was if return_id is a parameter that works with the jsonl import in the python sdk?

here is how we are running the import:
res = ts.collections['entities'].documents.import_(
    jsonl_file.read().encode('utf-8'), {'action': 'upsert', "return_id": True})

however i get a 400 saying: Parameter return_id must be a true|false.
Jason
09:09 PM
Hmm, could you check if "return_id": "true" works?
09:10
Jason
09:10 PM
On a related side note, could you also make sure you’ve increased your client-side timeout when instantiating the client to as high as say 60 minutes (we never want the import api call to timeout and retry)
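For reference, a sketch of where that timeout is set when instantiating the Python client (node details and API key are placeholders):

import typesense

ts = typesense.Client({
    'nodes': [{'host': 'localhost', 'port': '8108', 'protocol': 'http'}],
    'api_key': 'xyz',
    # ~60 minutes, so a long-running bulk import is never timed out and retried client-side
    'connection_timeout_seconds': 3600,
})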
Kevin
09:11 PM
oh that worked, thanks!
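(For reference, the call that ended up working is identical to the earlier snippet except that return_id is passed as the string 'true' rather than a boolean:)

res = ts.collections['entities'].documents.import_(
    jsonl_file.read().encode('utf-8'), {'action': 'upsert', 'return_id': 'true'})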
Jason
09:12 PM
Hmmm, could you open an issue in the typesense-python github repo mentioning this? We should accept booleans as well
Kevin
09:14 PM
sure thing