Hi, I have an issue with vector search performance. I want to search for 10 titles at once, so I use multi_search, but the response time is about 1-2 seconds. My collection has at most 200 docs, and I suspect the bottleneck is multi_search itself. Is there another way to run multiple embedding searches in one go? Tested on a 2 vCPU / 8 GB RAM droplet, with only ~15% CPU and ~20% RAM usage during the queries.
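For reference, here is a simplified sketch of the kind of request body I mean, assuming Typesense's `POST /multi_search` endpoint with one `vector_query` sub-search per title. The collection name, the `embedding` field name, and the dummy vectors are placeholders; my real vectors come from an embedding model.

```python
def build_multi_search_payload(query_vectors, collection="titles", k=5):
    """Build a Typesense-style POST /multi_search body with one
    vector search per query embedding (placeholder names)."""
    searches = []
    for vec in query_vectors:
        vec_str = ",".join(str(x) for x in vec)
        searches.append({
            "collection": collection,
            "q": "*",  # match-all; ranking comes from vector distance
            "vector_query": f"embedding:([{vec_str}], k:{k})",
        })
    return {"searches": searches}

# Ten dummy 3-dim vectors stand in for my real title embeddings.
payload = build_multi_search_payload([[0.1, 0.2, 0.3]] * 10)
print(len(payload["searches"]))  # one sub-search per title
```

I send this whole body in a single HTTP call to `/multi_search`, so it's one round trip for all 10 titles, and the latency still ends up at 1-2 seconds.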
Edit: After many more tests, even on an 8 vCPU / 16 GB RAM machine, the performance is still disappointing. Is there any way to connect vector search to Hugging Face Inference Endpoints?