Hi everyone,
We’ve recently set up semantic search using an open-source model. We’re building a proof of concept, and our collection currently contains only around 70 records. Even so, memory usage keeps spiking to the limit of our instance (1 GB RAM, 2 vCPUs).
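For context, our indexing flow is roughly equivalent to the sketch below. The specific names are stand-ins (sentence-transformers with all-MiniLM-L6-v2 as the model, Qdrant as the vector store), not necessarily what we actually run:

```python
# Rough shape of our current per-record indexing (stand-in model and store).
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in 384-dim embedding model
client = QdrantClient(":memory:")                # stand-in vector store
client.create_collection(
    collection_name="docs",
    vectors_config=VectorParams(size=384, distance=Distance.COSINE),
)

records = [{"id": i, "text": f"record {i}"} for i in range(70)]

# One encode call and one upsert call per record.
for rec in records:
    vector = model.encode(rec["text"])  # 384-dim numpy array for a single string
    client.upsert(
        collection_name="docs",
        points=[PointStruct(id=rec["id"], vector=vector.tolist(),
                            payload={"text": rec["text"]})],
    )
```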
We’re looking for suggestions to reduce memory usage and keep it from repeatedly hitting that cap.
Additionally, we’re exploring ways to update the collection in bulk instead of processing records one at a time (a sketch of what we have in mind follows below).
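Concretely, the bulk version we’re aiming for might look something like this (same stand-ins as above; shown self-contained so it runs on its own):

```python
# Bulk variant: encode all texts in one call, then upsert all points at once.
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in 384-dim embedding model
client = QdrantClient(":memory:")                # stand-in vector store
client.create_collection(
    collection_name="docs",
    vectors_config=VectorParams(size=384, distance=Distance.COSINE),
)

records = [{"id": i, "text": f"record {i}"} for i in range(70)]
texts = [rec["text"] for rec in records]

# batch_size bounds how many texts are embedded at a time, which also caps
# peak memory during encoding.
vectors = model.encode(texts, batch_size=16, convert_to_numpy=True)

# A single upsert for the whole collection instead of 70 separate calls.
client.upsert(
    collection_name="docs",
    points=[
        PointStruct(id=rec["id"], vector=vec.tolist(),
                    payload={"text": rec["text"]})
        for rec, vec in zip(records, vectors)
    ],
)
```

Does a pattern like this make sense, or is there a better approach for batched updates?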
Any insights or recommendations would be greatly appreciated!
Thanks in advance!