# community-help
a
Hi everyone - typesense newbie here. I've got a python app and I've just tried to load approx 1 million nodes into my test typesense env. Doing so apparently overloaded something and now in my logs I see this sort of pattern:
E20250911 07:31:34.140895 1174056 raft_server.cpp:783] 622 queued writes > healthy write lag of 500
I20250911 07:31:37.149612 1174056 raft_server.cpp:692] Term: 15, pending_queue: 0, last_index: 143407, committed: 143407, known_applied: 143407, applying: 0, pending_writes: 0, queued_writes: 622, local_sequence: 48889904
I20250911 07:31:37.149675 1174153 raft_server.h:60] Peer refresh succeeded!
E20250911 07:31:43.174098 1174056 raft_server.cpp:783] 622 queued writes > healthy write lag of 500
I20250911 07:31:47.192770 1174056 raft_server.cpp:692] Term: 15, pending_queue: 0, last_index: 143407, committed: 143407, known_applied: 143407, applying: 0, pending_writes: 0, queued_writes: 622, local_sequence: 48889904
I20250911 07:31:47.192821 1174143 raft_server.h:60] Peer refresh succeeded!
The 622 is not going down. And if I now try to post any more updates to typesense (even with a much smaller batch size than before), I get a
typesense.exceptions.ServiceUnavailable: [Errno 503] Not Ready or Lagging
Any guidance on what to do in this situation?
j
Could you make sure you have enough disk space, cpu and ram?
a
There is plenty of disk space, RAM etc as far as I can tell. I'm using batch loading of docs with batch size = 40 and a 0.5 second sleep between batches. At some point the server logs show:
I20250911 18:38:12.625999 1987628 raft_server.h:60] Peer refresh succeeded!
I20250911 18:38:14.904073 1987634 log.cpp:536] close a full segment. Current first_index: 2358 last_index: 2369 raft_sync_segments: 0 will_sync: 1 path: /path/to/typesense/data/state/log/log_00000000000000002358_00000000000000002369
I20250911 18:38:14.907173 1987634 log.cpp:550] Renamed `/path/to/typesense/data/state/log/log_inprogress_00000000000000002358' to `/path/to/typesense/data/state/log/log_00000000000000002358_00000000000000002369'
I20250911 18:38:14.907274 1987634 log.cpp:114] Created new segment `/path/to/typesense/data/state/log/log_inprogress_00000000000000002370' with fd=245
I20250911 18:38:22.676370 1987541 raft_server.cpp:692] Term: 2, pending_queue: 0, last_index: 2377, committed: 2377, known_applied: 2377, applying: 0, pending_writes: 0, queued_writes: 3, local_sequence: 301249
I20250911 18:38:22.676440 1987638 raft_server.h:60] Peer refresh succeeded!
E20250911 18:38:24.646929 1987228 default_variables.cpp:109] Fail to read stat
E20250911 18:38:24.647040 1987228 default_variables.cpp:232] Fail to read memory state
E20250911 18:38:24.647075 1987228 default_variables.cpp:294] Fail to read loadavg
E20250911 18:38:25.651759 1987228 default_variables.cpp:109] Fail to read stat
E20250911 18:38:25.651875 1987228 default_variables.cpp:232] Fail to read memory state
E20250911 18:38:25.651912 1987228 default_variables.cpp:294] Fail to read loadavg
E20250911 18:38:26.655475 1987228 default_variables.cpp:109] Fail to read stat
Any guidance as to what's going on here?
(I'm on a Mac)
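For reference, the import loop is roughly this shape (a simplified sketch; the API key and collection name are placeholders, using the official typesense Python client):

import time
import typesense  # official typesense Python client

# Minimal sketch of the loader; key and collection name are illustrative.
client = typesense.Client({
    'nodes': [{'host': 'localhost', 'port': '8108', 'protocol': 'http'}],
    'api_key': 'my_secret_key',
    'connection_timeout_seconds': 10,
})

BATCH_SIZE = 40

def load_documents(docs):
    for start in range(0, len(docs), BATCH_SIZE):
        batch = docs[start:start + BATCH_SIZE]
        # Bulk-import one batch, then pause so the write queue can drain.
        client.collections['my_collection'].documents.import_(
            batch, {'action': 'upsert'}
        )
        time.sleep(0.5)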
a
E20250911 07:31:34.140895 1174056 raft_server.cpp:783] 622 queued writes > healthy write lag of 500
I20250911 07:31:37.149612 1174056 raft_server.cpp:692] Term: 15, pending_queue: 0, last_index: 143407, committed: 143407, known_applied: 143407, applying: 0, pending_writes: 0, queued_writes: 622, local_sequence: 48889904
I20250911 07:31:37.149675 1174153 raft_server.h:60] Peer refresh succeeded!
E20250911 07:31:43.174098 1174056 raft_server.cpp:783] 622 queued writes > healthy write lag of 500
These logs usually show up when your server is overloaded trying to process writes. If you have metrics, they will likely show the CPU being heavily utilized. Now, these ones:
E20250911 18:38:24.646929 1987228 default_variables.cpp:109] Fail to read stat
E20250911 18:38:24.647040 1987228 default_variables.cpp:232] Fail to read memory state
E20250911 18:38:24.647075 1987228 default_variables.cpp:294] Fail to read loadavg
E20250911 18:38:25.651759 1987228 default_variables.cpp:109] Fail to read stat
E20250911 18:38:25.651875 1987228 default_variables.cpp:232] Fail to read memory state
E20250911 18:38:25.651912 1987228 default_variables.cpp:294] Fail to read loadavg
E20250911 18:38:26.655475 1987228 default_variables.cpp:109] Fail to read stat
I've never seen these before. Could you try restarting your Typesense instance?
a
Guess what? I can't replicate that when restarting right now. What I am seeing is a lot of these in the logs:
E20250915 12:36:39.520720 2641817 collection.cpp:768] Write to disk failed. Will restore old document
E20250915 12:36:39.520793 2641817 collection.cpp:768] Write to disk failed. Will restore old document
E20250915 12:36:39.520865 2641817 collection.cpp:768] Write to disk failed. Will restore old document
E20250915 12:36:39.520937 2641817 collection.cpp:768] Write to disk failed. Will restore old document
E20250915 12:36:39.521010 2641817 collection.cpp:768] Write to disk failed. Will restore old document
E20250915 12:36:39.521085 2641817 collection.cpp:768] Write to disk failed. Will restore old document
E20250915 12:36:39.521157 2641817 collection.cpp:768] Write to disk failed. Will restore old document
E20250915 12:36:39.521229 2641817 collection.cpp:768] Write to disk failed. Will restore old document
E20250915 12:36:39.521299 2641817 collection.cpp:768] Write to disk failed. Will restore old document
E20250915 12:36:39.521371 2641817 collection.cpp:768] Write to disk failed. Will restore old document
E20250915 12:36:39.521445 2641817 collection.cpp:768] Write to disk failed. Will restore old document
E20250915 12:36:39.521517 2641817 collection.cpp:768] Write to disk failed. Will restore old document
And then a large number of "queued writes" that never goes down:
E20250915 12:40:04.091602 2641798 raft_server.cpp:771] 1447 queued writes > healthy read lag of 1000
E20250915 12:40:04.091656 2641798 raft_server.cpp:783] 1447 queued writes > healthy write lag of 500
I20250915 12:40:13.133855 2641798 raft_server.cpp:692] Term: 12, pending_queue: 0, last_index: 4273, committed: 4273, known_applied: 4273, applying: 0, pending_writes: 0, queued_writes: 1447, local_sequence: 1294612
E20250915 12:40:13.133998 2641798 raft_server.cpp:771] 1447 queued writes > healthy read lag of 1000
E20250915 12:40:13.134014 2641798 raft_server.cpp:783] 1447 queued writes > healthy write lag of 500
I20250915 12:40:13.134050 2641891 raft_server.h:60] Peer refresh succeeded!
E20250915 12:40:22.177606 2641798 raft_server.cpp:771] 1447 queued writes > healthy read lag of 1000
E20250915 12:40:22.177685 2641798 raft_server.cpp:783] 1447 queued writes > healthy write lag of 500
I20250915 12:40:23.182776 2641798 raft_server.cpp:692] Term: 12, pending_queue: 0, last_index: 4273, committed: 4273, known_applied: 4273, applying: 0, pending_writes: 0, queued_writes: 1447, local_sequence: 1294612
I20250915 12:40:23.183512 2641891 raft_server.h:60] Peer refresh succeeded!
E20250915 12:40:31.222455 2641798 raft_server.cpp:771] 1447 queued writes > healthy read lag of 1000
E20250915 12:40:31.222540 2641798 raft_server.cpp:783] 1447 queued writes > healthy write lag of 500
I20250915 12:40:33.234589 2641798 raft_server.cpp:692] Term: 12, pending_queue: 0, last_index: 4273, committed: 4273, known_applied: 4273, applying: 0, pending_writes: 0, queued_writes: 1447, local_sequence: 1294612
I20250915 12:40:33.234967 2641891 raft_server.h:60] Peer refresh succeeded!
The "queued writes" number is not decreasing. Each time I restart the server I get a different number of queued_writes. Just now, for example, the logs are showing me:
E20250915 12:48:32.260331 2656619 raft_server.cpp:771] 1603 queued writes > healthy read lag of 1000
E20250915 12:48:32.260416 2656619 raft_server.cpp:783] 1603 queued writes > healthy write lag of 500
I20250915 12:48:33.265466 2656619 raft_server.cpp:692] Term: 13, pending_queue: 0, last_index: 4274, committed: 4274, known_applied: 4274, applying: 0, pending_writes: 0, queued_writes: 1603, local_sequence: 1286521
I20250915 12:48:33.265599 2656710 raft_server.h:60] Peer refresh succeeded!
E20250915 12:48:41.304512 2656619 raft_server.cpp:771] 1603 queued writes > healthy read lag of 1000
E20250915 12:48:41.304574 2656619 raft_server.cpp:783] 1603 queued writes > healthy write lag of 500
I20250915 12:48:43.314658 2656619 raft_server.cpp:692] Term: 13, pending_queue: 0, last_index: 4274, committed: 4274, known_applied: 4274, applying: 0, pending_writes: 0, queued_writes: 1603, local_sequence: 1286521
Restarted it again and this time it's 1569:
E20250915 12:53:17.801573 2668546 raft_server.cpp:771] 1569 queued writes > healthy read lag of 1000
E20250915 12:53:17.801613 2668546 raft_server.cpp:783] 1569 queued writes > healthy write lag of 500
I20250915 12:53:26.843848 2668546 raft_server.cpp:692] Term: 14, pending_queue: 0, last_index: 4275, committed: 4275, known_applied: 4275, applying: 0, pending_writes: 0, queued_writes: 1569, local_sequence: 1288065
E20250915 12:53:26.844029 2668546 raft_server.cpp:771] 1569 queued writes > healthy read lag of 1000
E20250915 12:53:26.844046 2668546 raft_server.cpp:783] 1569 queued writes > healthy write lag of 500
I20250915 12:53:26.844117 2668633 raft_server.h:60] Peer refresh succeeded!
In any event, at this point the server can't be used. Any attempt to do anything gets a "Not Ready or Lagging" message:
✗ curl 'http://localhost:8108/collections' -H 'Content-Type: application/json' -H 'X-TYPESENSE-API-KEY: my_secret_key'
{ "message": "Not Ready or Lagging"}%
At this point I can't do anything other than delete all the server files and start again, which is obviously undesirable. So my question really is: when I get the server into this sort of situation by overloading it, what can I do to make it usable again?
a
When you restart, Typesense loads all your data from disk into RAM. Once it finishes, it stops showing the "Not Ready or Lagging" message. Are you sending writes after starting the server? If so, and the queued writes never go down, it might mean your CPU capacity (at least what you've allocated to Typesense) can't keep up with the incoming writes on top of loading the data. If you are not sending writes, it might mean some data corruption on disk, so it is not able to load your data into RAM. You can test this by ensuring no more writes are sent until the server is operational. If it's not the first case, the only fix for the second one is deleting your Typesense data dir and letting it start anew.
A good way to know that your server is loading data is to plot the metrics Typesense exports into some graphs. You'll clearly see a line going up until it levels off in the RAM metrics. If it never goes up, it's not loading data at all.
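For example, a small poll on the /health endpoint can make sure no writes are sent until the server is operational (a minimal sketch; the host/port and the use of the requests library are assumptions):

import time
import requests  # assumption: requests is available in your client app

def wait_until_ready(url="http://localhost:8108/health", timeout_s=900):
    """Poll Typesense's /health endpoint until it reports ok."""
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        try:
            resp = requests.get(url, timeout=2)
            # /health returns {"ok": true} once the data is loaded into RAM.
            if resp.status_code == 200 and resp.json().get("ok"):
                return True
        except requests.RequestException:
            pass  # server not reachable yet / still starting up
        time.sleep(5)
    return False

# e.g. call wait_until_ready() before resuming your import loop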
a
I've had a look at the stats and I can see a line increasing until eventually the server stops functioning. It looks like on my machine (macOS Sequoia), Typesense dies when typesense_memory_active_bytes is about 600,000,000, so well under even 1 GB. There is plenty of spare memory capacity. I can't see in the docs how to set the memory limit that Typesense should use. What can I do to allow Typesense to use more memory?
a
You can increase the Docker container's RAM. If memory isn't the limit, then it means it's hitting something while loading the data and crashing. Could you make sure you are on the latest version? Also, are you using embeddings?
a
I wasn't using Docker, just installed it via brew; it's typesense-server@29.0. Yes, I am using embeddings generated from SentenceTransformers. The schema looks something like this:
{
    'fields': [
        {'name': 'uri', 'type': 'string'},
        {'name': 'name', 'type': 'string[]'},
        {'name': 'internal_id', 'type': 'int64'},
        {'name': 'embedding', 'type': 'float[]', 'num_dim': 768, 'optional': True},
    ],
    'default_sorting_field': 'internal_id'
}
I have about 3.9 million records I'm trying to load, and it dies every time at a similar place, around 350k documents. Could it be something in a document that I'm trying to index that is breaking things? How best to troubleshoot? (I've put a simplified version of the per-batch import and error checking after the server logs below.) My client app appears to hang while processing batch 8709 in this case (batch size is 40):
2025-09-22 19:29:09,534 - topics.management.commands.refresh_typesense - INFO - Processed 348240/3858329 documents
2025-09-22 19:29:09,536 - topics.services.typesense_service - INFO - {'system_disk_total_bytes': '994662584320', 'system_disk_used_bytes': '827832455168', 'system_memory_total_bytes': '25769803776', 'system_memory_used_bytes': '11712086016', 'typesense_memory_active_bytes': '681328640', 'typesense_memory_allocated_bytes': '606053400', 'typesense_memory_fragmentation_ratio': '0.11', 'typesense_memory_mapped_bytes': '866893824', 'typesense_memory_metadata_bytes': '19777728', 'typesense_memory_resident_bytes': '681328640', 'typesense_memory_retained_bytes': '0'}
2025-09-22 19:29:09,537 - topics.services.typesense_service - INFO - {'cache_hit_ratio': 0.0, 'delete_latency_ms': 0, 'delete_requests_per_second': 0, 'import_70Percentile_latency_ms': 12.0, 'import_95Percentile_latency_ms': 18.0, 'import_99Percentile_latency_ms': 24.0, 'import_latency_ms': 10.821052631578947, 'import_max_latency_ms': 24, 'import_min_latency_ms': 4, 'import_requests_per_second': 9.5, 'latency_ms': {'GET /metrics.json': 1.0, 'GET /stats.json': 0.0, 'POST /collections/organizations/documents/import': 10.821052631578947}, 'overloaded_requests_per_second': 0, 'pending_write_batches': 0, 'requests_per_second': {'GET /metrics.json': 9.5, 'GET /stats.json': 9.5, 'POST /collections/organizations/documents/import': 9.5}, 'search_latency_ms': 0, 'search_requests_per_second': 0, 'total_requests_per_second': 28.5, 'write_latency_ms': 0, 'write_requests_per_second': 0}
2025-09-22 19:29:09,579 - topics.management.commands.refresh_typesense - INFO - Processing batch 8707...
2025-09-22 19:29:09,702 - topics.management.commands.refresh_typesense - INFO - Processed 348280/3858329 documents
2025-09-22 19:29:09,704 - topics.services.typesense_service - INFO - {'system_disk_total_bytes': '994662584320', 'system_disk_used_bytes': '827976687616', 'system_memory_total_bytes': '25769803776', 'system_memory_used_bytes': '11712364544', 'typesense_memory_active_bytes': '678281216', 'typesense_memory_allocated_bytes': '603128672', 'typesense_memory_fragmentation_ratio': '0.11', 'typesense_memory_mapped_bytes': '866893824', 'typesense_memory_metadata_bytes': '19777728', 'typesense_memory_resident_bytes': '678281216', 'typesense_memory_retained_bytes': '0'}
2025-09-22 19:29:09,705 - topics.services.typesense_service - INFO - {'cache_hit_ratio': 0.0, 'delete_latency_ms': 0, 'delete_requests_per_second': 0, 'import_70Percentile_latency_ms': 12.0, 'import_95Percentile_latency_ms': 18.0, 'import_99Percentile_latency_ms': 24.0, 'import_latency_ms': 10.821052631578947, 'import_max_latency_ms': 24, 'import_min_latency_ms': 4, 'import_requests_per_second': 9.5, 'latency_ms': {'GET /metrics.json': 1.0, 'GET /stats.json': 0.0, 'POST /collections/organizations/documents/import': 10.821052631578947}, 'overloaded_requests_per_second': 0, 'pending_write_batches': 0, 'requests_per_second': {'GET /metrics.json': 9.5, 'GET /stats.json': 9.5, 'POST /collections/organizations/documents/import': 9.5}, 'search_latency_ms': 0, 'search_requests_per_second': 0, 'total_requests_per_second': 28.5, 'write_latency_ms': 0, 'write_requests_per_second': 0}
2025-09-22 19:29:09,750 - topics.management.commands.refresh_typesense - INFO - Processing batch 8708...
2025-09-22 19:29:09,835 - topics.management.commands.refresh_typesense - INFO - Processed 348320/3858329 documents
2025-09-22 19:29:09,837 - topics.services.typesense_service - INFO - {'system_disk_total_bytes': '994662584320', 'system_disk_used_bytes': '828202520576', 'system_memory_total_bytes': '25769803776', 'system_memory_used_bytes': '11712528384', 'typesense_memory_active_bytes': '679460864', 'typesense_memory_allocated_bytes': '604075640', 'typesense_memory_fragmentation_ratio': '0.11', 'typesense_memory_mapped_bytes': '866893824', 'typesense_memory_metadata_bytes': '19777728', 'typesense_memory_resident_bytes': '679460864', 'typesense_memory_retained_bytes': '0'}
2025-09-22 19:29:09,838 - topics.services.typesense_service - INFO - {'cache_hit_ratio': 0.0, 'delete_latency_ms': 0, 'delete_requests_per_second': 0, 'import_70Percentile_latency_ms': 12.0, 'import_95Percentile_latency_ms': 18.0, 'import_99Percentile_latency_ms': 24.0, 'import_latency_ms': 10.821052631578947, 'import_max_latency_ms': 24, 'import_min_latency_ms': 4, 'import_requests_per_second': 9.5, 'latency_ms': {'GET /metrics.json': 1.0, 'GET /stats.json': 0.0, 'POST /collections/organizations/documents/import': 10.821052631578947}, 'overloaded_requests_per_second': 0, 'pending_write_batches': 0, 'requests_per_second': {'GET /metrics.json': 9.5, 'GET /stats.json': 9.5, 'POST /collections/organizations/documents/import': 9.5}, 'search_latency_ms': 0, 'search_requests_per_second': 0, 'total_requests_per_second': 28.5, 'write_latency_ms': 0, 'write_requests_per_second': 0}
2025-09-22 19:29:09,897 - topics.management.commands.refresh_typesense - INFO - Processing batch 8709...
And in the Typesense server logs it seems to act normally, and then the dreaded "Fail to read" messages appear and I have to terminate the server:
I20250922 20:31:04.126479 96134 raft_server.h:60] Peer refresh succeeded!
I20250922 20:31:14.172351 96043 raft_server.cpp:692] Term: 2, pending_queue: 0, last_index: 8730, committed: 8730, known_applied: 8730, applying: 0, pending_writes: 0, queued_writes: 3, local_sequence: 1338378
I20250922 20:31:14.172611 96134 raft_server.h:60] Peer refresh succeeded!
I20250922 20:31:21.564889 96044 batched_indexer.cpp:432] Running GC for aborted requests, req map size: 0, reference_q.size: 0
I20250922 20:31:24.210670 96043 raft_server.cpp:692] Term: 2, pending_queue: 0, last_index: 8730, committed: 8730, known_applied: 8730, applying: 0, pending_writes: 0, queued_writes: 3, local_sequence: 1338378
I20250922 20:31:24.210937 96134 raft_server.h:60] Peer refresh succeeded!
I20250922 20:31:34.259162 96043 raft_server.cpp:692] Term: 2, pending_queue: 0, last_index: 8730, committed: 8730, known_applied: 8730, applying: 0, pending_writes: 0, queued_writes: 3, local_sequence: 1338378
I20250922 20:31:34.259389 96134 raft_server.h:60] Peer refresh succeeded!
I20250922 20:31:44.301548 96043 raft_server.cpp:692] Term: 2, pending_queue: 0, last_index: 8730, committed: 8730, known_applied: 8730, applying: 0, pending_writes: 0, queued_writes: 3, local_sequence: 1338378
I20250922 20:31:44.301795 96134 raft_server.h:60] Peer refresh succeeded!
I20250922 20:31:54.343938 96043 raft_server.cpp:692] Term: 2, pending_queue: 0, last_index: 8730, committed: 8730, known_applied: 8730, applying: 0, pending_writes: 0, queued_writes: 3, local_sequence: 1338378
I20250922 20:31:54.344185 96134 raft_server.h:60] Peer refresh succeeded!
I20250922 20:32:04.387189 96043 raft_server.cpp:692] Term: 2, pending_queue: 0, last_index: 8730, committed: 8730, known_applied: 8730, applying: 0, pending_writes: 0, queued_writes: 3, local_sequence: 1338378
I20250922 20:32:04.387455 96134 raft_server.h:60] Peer refresh succeeded!
E20250922 20:32:09.988420 95738 default_variables.cpp:109] Fail to read stat
E20250922 20:32:09.988538 95738 default_variables.cpp:232] Fail to read memory state
E20250922 20:32:09.988580 95738 default_variables.cpp:294] Fail to read loadavg
E20250922 20:32:10.994004 95738 default_variables.cpp:109] Fail to read stat
E20250922 20:32:10.994156 95738 default_variables.cpp:232] Fail to read memory state
E20250922 20:32:10.994225 95738 default_variables.cpp:294] Fail to read loadavg
E20250922 20:32:11.999470 95738 default_variables.cpp:109] Fail to read stat
E20250922 20:32:11.999624 95738 default_variables.cpp:232] Fail to read memory state
E20250922 20:32:11.999691 95738 default_variables.cpp:294] Fail to read loadavg
E20250922 20:32:13.004289 95738 default_variables.cpp:109] Fail to read stat
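For reference, each batch is sent and checked for per-document errors roughly like this (a simplified sketch; the logging is illustrative):

import json
import logging

logger = logging.getLogger(__name__)

def import_batch(client, batch):
    # import_ returns one result dict per document; failed ones carry an
    # 'error' field, which should point at any document that breaks indexing.
    results = client.collections['organizations'].documents.import_(
        batch, {'action': 'upsert'}
    )
    for doc, result in zip(batch, results):
        if not result.get('success'):
            logger.error("Import failed: %s | doc: %s",
                         result.get('error'), json.dumps(doc)[:200])
    return results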
Thanks so much for your continued investigations into this
a
Looking at the metrics, it seems like you have 25 GB of RAM available. To store the embeddings (768 dimensions) for 3.9 million documents, you'll need around 20 GB alone: 7 bytes x 3.9 million documents x 768 dimensions. And that's just for the embeddings of this collection. Could you make sure you have this amount of free RAM when indexing?
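As a quick back-of-the-envelope check (the ~7 bytes per dimension is a rough rule of thumb that includes index overhead, not an exact figure):

# Rough RAM estimate for the embedding field alone.
bytes_per_dim = 7          # rule-of-thumb figure, includes index overhead
num_docs = 3_900_000
num_dims = 768

ram_gb = bytes_per_dim * num_docs * num_dims / 1e9
print(f"~{ram_gb:.0f} GB for embeddings alone")  # ~21 GB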
a
Thanks for this. I've noticed that there is quite a lot of duplicate data within my dataset, so there's probably also a memory saving I can make by putting only unique records into Typesense. I'll let you know how I get on, but it will take some time while I do more testing.
I've done some refactoring to handle the embeddings more efficiently, so now I'm looking at about 250k nodes, each with one embedding field and a few string fields. It should comfortably fit into the 1-2 GB that's available on this machine. I'm still seeing the problem with Typesense 29 installed via brew on my M4 Mac. After a while the Typesense server shows these errors:
I20250928 14:48:39.631742 126404 raft_server.h:60] Peer refresh succeeded!
I20250928 14:48:49.665649 126306 raft_server.cpp:692] Term: 2, pending_queue: 0, last_index: 3888, committed: 3888, known_applied: 3888, applying: 0, pending_writes: 0, queued_writes: 5, local_sequence: 326000
I20250928 14:48:49.665894 126397 raft_server.h:60] Peer refresh succeeded!
I20250928 14:48:59.711536 126306 raft_server.cpp:692] Term: 2, pending_queue: 0, last_index: 3888, committed: 3888, known_applied: 3888, applying: 0, pending_writes: 0, queued_writes: 5, local_sequence: 326000
I20250928 14:48:59.711771 126404 raft_server.h:60] Peer refresh succeeded!
E20250928 14:49:03.139144 125995 default_variables.cpp:109] Fail to read stat
E20250928 14:49:03.139271 125995 default_variables.cpp:232] Fail to read memory state
E20250928 14:49:03.139314 125995 default_variables.cpp:294] Fail to read loadavg
E20250928 14:49:04.144349 125995 default_variables.cpp:109] Fail to read stat
E20250928 14:49:04.144511 125995 default_variables.cpp:232] Fail to read memory state
E20250928 14:49:04.144579 125995 default_variables.cpp:294] Fail to read loadavg
E20250928 14:49:05.149319 125995 default_variables.cpp:109] Fail to read stat
E20250928 14:49:05.149487 125995 default_variables.cpp:232] Fail to read memory state
typesense-memory.png shows the memory stats. The x-axis is batches; I'm using 40 source objects from the database per batch. Each object might generate zero or more Typesense docs, but overall it's roughly a 1-to-1 relationship.
It always dies at around this volume, as if the 0.8 GB of RAM is some kind of limit. Any ideas?
a
Alan, could you kindly try using Docker?
a
Well, what do you know. In Docker it works fine. Here is the memory usage graph: it fits nicely inside 1.5 GB. So for my purposes I guess I'll stick to using Docker from now on. But I wonder what the issue was with memory management in the version installed with brew?
a
Alan, that's great. It's just that the API for checking resources on a Mac is different.
a
Very glad you helped me get it working - it's much more fun to use than previous search tools I've used ;) Quick question - this was working on my local Mac for testing purposes. When I come to run this on my server (an Ubuntu VM), do you usually see better results running it in Docker or installing it via the DEB package?
a
If you are already running in a Linux VM, there should be no reason to use Docker, except for the easier setup. Also, not using Docker would save you some trouble managing resources or sorting out network quirks if you plan to move to a 3-node cluster in the future. There should be no difference in performance, but since Docker itself uses some resources, it's expected that slightly less capacity will be available to Typesense.
a
Thanks Alan
a
Glad to help!