# community-help
Thanks @Jason Bosco! The experience was about having big fields indexed in one collection, and the concern wasn't performance but operational pain: having a node down for multiple hours during upgrades or incidents means:

• You lose HA for several hours while upgrading or troubleshooting, which is high risk. Or you pay for an extra node while the process runs, which adds both the hours to spin it up and its cost. In the hundreds of GB, that gets expensive.
• The resync time between nodes is long.
• If you ever need to restore from the primary source's backup for any catastrophic reason, you're looking at several hours to several days of work and downtime at the very least.
• Any change to add or modify fields becomes a painful, very long process. If you do it with a parallel-changes approach (also known as expand and contract), it takes a long time: you first add the new field and update all the records, then deploy the change that starts using it, and finally drop the old field, and each step takes several hours. (A rough sketch of what those steps look like is below.)

With 32 GB or 64 GB for three very big fields and millions of records, that's the sweet spot: roughly one hour of downtime, and you can make changes within the work day given the one-core-per-index limit (at most 3 cores in use in this case). In his case, with 5-10 fields he would benefit from 8-16 cores, so he can probably go to 128 GB or even 256 GB configurations. But it needs testing. I just wanted to share the whole experience and the learnings we already have.
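For anyone following along, here's a minimal sketch of the expand-and-contract steps against Typesense using the official Python client. The collection and field names (`products`, `description`, `description_v2`) are placeholders, the "transform" is a trivial stand-in, and on hundreds of GB you'd batch/stream the backfill instead of holding everything in memory; treat this as an illustration of the step ordering, not a drop-in script.

```python
import json
import typesense

# Placeholder connection details; adjust to your cluster.
client = typesense.Client({
    'nodes': [{'host': 'localhost', 'port': '8108', 'protocol': 'http'}],
    'api_key': 'xyz',
    'connection_timeout_seconds': 600,  # schema changes can run for hours on big collections
})

COLLECTION = 'products'  # hypothetical collection name

# Step 1 (expand): add the new field alongside the old one.
# An in-place schema change reindexes the collection, which is the multi-hour part.
client.collections[COLLECTION].update({
    'fields': [{'name': 'description_v2', 'type': 'string', 'optional': True}]
})

# Step 2: backfill the new field on every existing record.
# export() returns JSONL; 'emplace' upserts partial documents by id.
exported = client.collections[COLLECTION].documents.export()
patches = []
for line in exported.splitlines():
    doc = json.loads(line)
    # Placeholder transformation from the old field to the new one.
    patches.append({'id': doc['id'], 'description_v2': doc['description'].strip()})
client.collections[COLLECTION].documents.import_(patches, {'action': 'emplace'})

# Step 3: deploy the application change that reads/writes description_v2.

# Step 4 (contract): drop the old field once nothing uses it anymore.
client.collections[COLLECTION].update({
    'fields': [{'name': 'description', 'drop': True}]
})
```

Each of steps 1, 2 and 4 reindexes or rewrites a large chunk of the data, which is why the whole migration stretches across several hours per step on a big collection.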