# community-help
s
Already got
{"success":true}{"message": "Rejecting write: running out of resource type: OUT_OF_MEMORY"}
after the first 150 documents. As far as I can see, I did not hit a single limit (lowest package of 0.5 GB). The complete dataset size is merely 80 MB.
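The pasted response is the bulk-import style output, where the server answers with one JSON object per submitted document. A small helper can separate successes from rejections so a retry loop knows where the batch stopped. This is a sketch against that per-line JSON format; the exact field names (`success`, `error`, `message`) are assumptions based on the output shown above, so verify them against your server's actual responses.

```python
import json

def summarize_import(response_text: str):
    """Summarize a JSONL bulk-import response: one JSON object per line.

    Returns (ok_count, list_of_error_messages). Lines without
    {"success": true} are treated as rejected writes.
    """
    ok, errors = 0, []
    for line in response_text.splitlines():
        line = line.strip()
        if not line:
            continue
        result = json.loads(line)
        if result.get("success"):
            ok += 1
        else:
            # rejected lines carry an "error" or "message" field (assumed)
            errors.append(result.get("error") or result.get("message", "unknown"))
    return ok, errors

sample = ('{"success":true}\n'
          '{"message": "Rejecting write: running out of resource type: OUT_OF_MEMORY"}')
print(summarize_import(sample))
```

Counting how many lines succeeded before the first rejection also tells you the resume offset for re-importing the remainder once capacity is available.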
k
Sometimes, if you keep recreating collections, the allocator is slow to return memory to the OS in anticipation of future allocations. We have improved this memory-checking logic in recent builds. Would you be able to try on
28.0.rc23
to see if it helps?
s
Uhm, but this is the first collection. What I saw was clearly:
⚠ Total Memory (RAM + Swap) Usage exceeded available capacity at least once during this time period, given the amount of data you've indexed. We highly recommend upgrading your cluster's RAM capacity to handle the data you've indexed.
So my only concern is how we can estimate it better. For now, we have upgraded to 5 GB just to do the initial upsert, and will watch the stats to see what happens. I feel the estimate of "size of the JSONL/CSV for RAM" is inadequate, since our dataset is so massively under 0.5 GB and yet a few upserts ate it all up.
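One reason raw JSONL/CSV size can underestimate RAM is that vector fields dominate memory: each embedding is stored as float32 values plus index overhead, regardless of how compact the source file is. The sketch below illustrates that kind of back-of-envelope estimate; the 1536-dimension default matches OpenAI's common embedding size, but the overhead factors are purely illustrative assumptions, not an official sizing formula.

```python
def estimate_ram_bytes(jsonl_bytes: int, n_docs: int,
                       embedding_dims: int = 1536,
                       index_overhead: float = 2.0,
                       vector_index_factor: float = 1.5) -> int:
    """Rough RAM estimate for a collection that includes vector fields.

    jsonl_bytes ........ size of the raw JSONL/CSV on disk
    n_docs ............. number of documents
    embedding_dims ..... vector length per document (1536 is a common
                         OpenAI embedding size; adjust to your model)
    index_overhead and vector_index_factor are illustrative fudge
    factors for text indexing and the vector index, respectively.
    """
    text_ram = jsonl_bytes * index_overhead
    vector_ram = n_docs * embedding_dims * 4 * vector_index_factor  # float32
    return int(text_ram + vector_ram)

# 80 MiB of JSONL plus vectors for 100k documents:
print(estimate_ram_bytes(80 * 1024**2, 100_000) / 1024**2, "MiB")
```

With these assumptions, the embedding vectors alone can account for far more RAM than the source file size suggests, which is consistent with blowing past a 0.5 GB plan while the dataset on disk stays small.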
I am currently waiting for
Configuration Change is in progress...
to complete, which seems to take fairly long. After that I can run further tests.
k
Are you using a local embedding model?
s
GPT
(I mean OpenAI.) I also notice that node 3 still says out of memory, even though upserting has stopped. Can I see somehow whether there are still processes ongoing? I assume it is 0, according to this stat?
The config change still says it is in progress. Is it normal for this to take so long? It has been about 15 minutes since the change.
(we had set it to change "immediately")
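To check whether writes are still ongoing, one option is to poll the server's stats endpoint and look at the write request rate. Typesense exposes a `GET /stats.json` endpoint authenticated with the `X-TYPESENSE-API-KEY` header; the specific field name `write_requests_per_second` used below is an assumption based on its documented stats payload, so confirm it against your server version.

```python
import json
import urllib.request

def cluster_is_idle(stats: dict) -> bool:
    """Heuristic: no writes in flight if the reported write rate is ~0.

    The "write_requests_per_second" field name is assumed from the
    Typesense /stats.json payload; verify against your version.
    """
    return float(stats.get("write_requests_per_second", 0)) == 0.0

def fetch_stats(host: str, api_key: str) -> dict:
    """Fetch /stats.json from a node (host like "https://node1:8108")."""
    req = urllib.request.Request(
        f"{host}/stats.json",
        headers={"X-TYPESENSE-API-KEY": api_key},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Offline check against a sample payload (no network involved):
print(cluster_is_idle({"write_requests_per_second": 0}))
```

In practice you would call `cluster_is_idle(fetch_stats(host, key))` per node before retrying the import, so you only resume once the rejected writes have fully drained.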
k
DM me the cluster id
k
It's proceeding node by node. The second node is in progress now.
s
Oh, ok, so it is expected to be slow-ish. Thanks for confirming.
I have to wait until all is done to re-try, correct?
k
Automated cluster rotation rotates node by node to be safe.