Hi , i am running typesense on eks with 3 node clu...
# community-help
m
Hi, I am running Typesense on EKS with a 3-node cluster. I see different memory usage for each pod. Any idea why this might be happening?
NAME                         CPU(cores)   MEMORY(bytes)            
typesense-0                  5m           47082Mi         
typesense-1                  4m           105839Mi        
typesense-2                  4m           85150Mi
I see these logs on each pod:
I20241130 04:00:55.227258   454 raft_server.cpp:706] Term: 12, pending_queue: 0, last_index: 686477, committed: 686477, known_applied: 686477, applying: 0, pending_writes: 0, queued_writes: 0, local_sequence: 1532529754
I20241130 04:01:04.699625   455 batched_indexer.cpp:428] Running GC for aborted requests, req map size: 0
I20241130 04:01:05.231527   454 raft_server.cpp:706] Term: 12, pending_queue: 0, last_index: 686477, committed: 686477, known_applied: 686477, applying: 0, pending_writes: 0, queued_writes: 0, local_sequence: 1532529754
I20241130 04:01:15.235275   454 raft_server.cpp:706] Term: 12, pending_queue: 0, last_index: 686477, committed: 686477, known_applied: 686477, applying: 0, pending_writes: 0, queued_writes: 0, local_sequence: 1532529754
There is no traffic on this cluster right now.
k
Can you hit the /metrics.json API on each node and report the typesense_memory_active_bytes value?
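For reference, a minimal way to pull that value from every node, assuming the default port 8108; the pod addresses and API key below are placeholders, not values from this thread:

import json
import urllib.request

# Placeholder node addresses and admin API key -- replace with your own values.
NODES = ["typesense-0.typesense", "typesense-1.typesense", "typesense-2.typesense"]
API_KEY = "REPLACE_ME"

for node in NODES:
    req = urllib.request.Request(
        f"http://{node}:8108/metrics.json",
        headers={"X-TYPESENSE-API-KEY": API_KEY},
    )
    with urllib.request.urlopen(req) as resp:
        metrics = json.load(resp)
    # The endpoint reports byte counts as strings, so convert before scaling to GB.
    active_gb = int(metrics["typesense_memory_active_bytes"]) / 1024**3
    print(f"{node}: typesense_memory_active_bytes = {active_gb:.1f} GB")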
m
Memory
active : 69.8 GB
allocated : 50.9 GB
fragmentation : 0.27
mapped : 71.9 GB
metadata : 1.66 GB
resident : 69.8 GB
retained : 126 GB
Memory
active : 53.5 GB
allocated : 50.9 GB
fragmentation : 0.05
mapped : 55.2 GB
metadata : 1.45 GB
resident : 53.5 GB
retained : 114 GB
Memory
active : 31.9 GB
allocated : 30.5 GB
fragmentation : 0.04
mapped : 33.1 GB
metadata : 1.01 GB
resident : 31.9 GB
retained : 99.7 GB
These are the values for each node.
Is this issue related to compaction?
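(Side note: the fragmentation value reported for each node lines up with 1 - allocated/active; this is inferred from the numbers above rather than from Typesense documentation:)

# active and allocated (GB) as reported by the three nodes above
reports = [(69.8, 50.9), (53.5, 50.9), (31.9, 30.5)]
for active, allocated in reports:
    print(f"1 - {allocated}/{active} = {1 - allocated / active:.2f}")
# Prints 0.27, 0.05, 0.04 -- matching the fragmentation values shown.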
k
Have you verified that all nodes actually have the same record count?
One node has high fragmentation, which indicates memory "holes" in the memory pages allocated by the application.
In general, it's a bit tricky to measure actual memory usage when you do extensive writes and deletes, because the memory allocator we use (jemalloc) tends to reserve memory and not release it back to the OS, in anticipation of reusing those memory blocks.
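A quick way to compare record counts node by node is to list the collections on each node and compare num_documents; a sketch, again with placeholder node addresses and API key:

import json
import urllib.request

NODES = ["typesense-0.typesense", "typesense-1.typesense", "typesense-2.typesense"]  # placeholders
API_KEY = "REPLACE_ME"

for node in NODES:
    req = urllib.request.Request(
        f"http://{node}:8108/collections",
        headers={"X-TYPESENSE-API-KEY": API_KEY},
    )
    with urllib.request.urlopen(req) as resp:
        collections = json.load(resp)
    # Each collection object carries a num_documents count for that node's copy of the data.
    print(node, {c["name"]: c["num_documents"] for c in collections})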
m
But there are no write or delete operations happening right now. Is there any way to free this memory?
Yes, I am getting the same record count on each fetch, even after hitting it multiple times.
k
You can restart the pod with high fragmentation; that tends to help with fragmentation because bulk loading during a restart is more efficient.
👀 1
m
Will it lead to downtime if I restart one pod at a time?
k
No, a 3-node cluster can survive a pod going down.
👍 1
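For anyone doing this later, a rough sketch of a one-pod-at-a-time restart with the Kubernetes Python client; the pod name, namespace, and readiness polling are assumptions, not an official procedure:

import time
from kubernetes import client, config

POD = "typesense-1"      # assumed: the pod showing high fragmentation
NAMESPACE = "default"    # assumed namespace

config.load_kube_config()
v1 = client.CoreV1Api()

# Deleting a StatefulSet pod causes it to be recreated with the same name.
v1.delete_namespaced_pod(name=POD, namespace=NAMESPACE)

# Wait until the recreated pod reports Ready before restarting the next one,
# so only one node is ever down at a time.
while True:
    time.sleep(10)
    try:
        pod = v1.read_namespaced_pod(name=POD, namespace=NAMESPACE)
    except client.exceptions.ApiException:
        continue  # the pod object may briefly be missing while it is recreated
    conditions = pod.status.conditions or []
    if any(c.type == "Ready" and c.status == "True" for c in conditions):
        print(f"{POD} is Ready again")
        break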