# community-help
Hello everyone! We are running into a `422 running out of resource type: OUT_OF_MEMORY` error. My first guess is obviously to increase the RAM, but our dataset is rather small (~600k docs) and our cluster currently has 2 GB. I am trying to understand how I can calculate the size of my dataset in memory. Is there a tool or algorithm I can use?

As far as I understand from the documentation, it is recommended to have a cluster memory size 2x to 3x the size of the dataset, and only indexed fields count against memory; unindexed fields are stored on disk. My typical document weighs about 800 B, which gives a max dataset size of roughly 480 MB. Does that mean my cluster will use 3x that amount of memory anyway, or is the 3x just the recommendation so we don't run into 422 issues?

Thank you all for your help! Much appreciated 😊
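For reference, here is the back-of-the-envelope math I have been doing. This is only a rough sketch: the document count and the 800 B average come from our own data, the 2x-3x multiplier is the guideline from the docs, and I am assuming every field is indexed.

```python
# Rough RAM estimate for our Typesense cluster. Assumes every field is
# indexed (unindexed fields are stored on disk, not in memory).
DOC_COUNT = 600_000   # documents in the collection
AVG_DOC_BYTES = 800   # typical document size in our data

raw_dataset_bytes = DOC_COUNT * AVG_DOC_BYTES  # ~480 MB of raw data

# The docs recommend provisioning 2x to 3x the dataset size in RAM.
low_estimate = 2 * raw_dataset_bytes   # ~0.96 GB
high_estimate = 3 * raw_dataset_bytes  # ~1.44 GB

for label, value in [
    ("raw dataset", raw_dataset_bytes),
    ("2x guideline", low_estimate),
    ("3x guideline", high_estimate),
]:
    print(f"{label}: {value / 1e9:.2f} GB")
```

Even the 3x estimate (~1.44 GB) fits inside our 2 GB cluster, which is why the 422 surprises me.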
Here is my `metrics.json` for a 300k-doc dataset (I deleted half of my dataset between the message above and this output):

```json
[
    {
        "system_cpu1_active_percentage": "0.00",
        "system_cpu2_active_percentage": "0.00",
        "system_cpu_active_percentage": "0.00",
        "system_disk_total_bytes": "10726932480",
        "system_disk_used_bytes": "695521280",
        "system_memory_total_bytes": "1936801792",
        "system_memory_total_swap_bytes": "2147479552",
        "system_memory_used_bytes": "1103896576",
        "system_memory_used_swap_bytes": "173015040",
        "system_network_received_bytes": "19335404729",
        "system_network_sent_bytes": "14861285830",
        "typesense_memory_active_bytes": "815525888",
        "typesense_memory_allocated_bytes": "642774600",
        "typesense_memory_fragmentation_ratio": "0.21",
        "typesense_memory_mapped_bytes": "883200000",
        "typesense_memory_metadata_bytes": "50424256",
        "typesense_memory_resident_bytes": "815525888",
        "typesense_memory_retained_bytes": "2597023744",
        "ok": true
    }
]
```
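For context, I pulled those numbers from Typesense's `GET /metrics.json` endpoint. A minimal sketch of the call; the host URL and API key below are placeholders, not our real ones:

```python
# Fetch cluster metrics from Typesense's metrics endpoint.
import json

import requests

TYPESENSE_HOST = "https://xxx.a1.typesense.net"  # placeholder cluster URL
API_KEY = "REPLACE_ME"                           # placeholder admin API key

resp = requests.get(
    f"{TYPESENSE_HOST}/metrics.json",
    headers={"X-TYPESENSE-API-KEY": API_KEY},
    timeout=10,
)
resp.raise_for_status()
print(json.dumps(resp.json(), indent=4))
```

If I am reading `typesense_memory_resident_bytes` right, the 300k docs take ~815 MB resident, i.e. roughly 2.7 KB per document, which is already more than 3x my 800 B average, so maybe my per-document figure undercounts what actually ends up in memory.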