# community-help
r
Hi everyone! We’ve recently set up semantic search using an open-source model. We’re building a proof of concept, and our collection currently contains around 70 records. However, we’ve noticed that memory usage keeps spiking to its limit (1 GB RAM and 2 vCPUs). We’re looking for suggestions to optimize memory usage and keep it from hitting that ceiling. We’re also exploring ways to update the collection in bulk instead of processing records one at a time. Any insights or recommendations would be greatly appreciated! Thanks in advance!
j
When using semantic search, the model itself requires RAM to run, and how much depends on the size of the model: https://typesense.org/docs/guide/system-requirements.html#for-semantic-and-hybrid-search
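If RAM is the bottleneck, one option is to choose a smaller built-in model when you define the embedding field. Here's a rough sketch with the Python client; the collection name, field names, and connection details are placeholders, and `ts/all-MiniLM-L12-v2` is just one example of a smaller model:

```python
import typesense

# Placeholder connection details -- replace with your own node and API key.
client = typesense.Client({
    'nodes': [{'host': 'localhost', 'port': '8108', 'protocol': 'http'}],
    'api_key': 'xyz',
    'connection_timeout_seconds': 5,
})

# A smaller embedding model keeps the in-process RAM footprint lower.
schema = {
    'name': 'records',
    'fields': [
        {'name': 'title', 'type': 'string'},
        {
            'name': 'embedding',
            'type': 'float[]',
            'embed': {
                'from': ['title'],
                'model_config': {'model_name': 'ts/all-MiniLM-L12-v2'},
            },
        },
    ],
}
client.collections.create(schema)
```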
You can update documents in a collection in bulk using the import endpoint and `action=update`: https://typesense.org/docs/27.1/api/documents.html#index-multiple-documents
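With the Python client that could look something like this (collection name, ids, and fields are placeholders):

```python
import typesense

client = typesense.Client({
    'nodes': [{'host': 'localhost', 'port': '8108', 'protocol': 'http'}],
    'api_key': 'xyz',
    'connection_timeout_seconds': 5,
})

# Placeholder documents: with action=update, ids must match existing documents.
docs = [
    {'id': '1', 'title': 'updated title one'},
    {'id': '2', 'title': 'updated title two'},
]

# One import request updates everything, instead of one write per record.
results = client.collections['records'].documents.import_(
    docs, {'action': 'update'}
)
print(results)  # per-document success/error objects
```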