Indexing Time in Concurrent API Requests
TLDR Somesh noticed no improvement in indexing time when making concurrent API requests. Kishore Nallan suggested checking whether the server's CPU was already saturated, but Somesh asked for clarification.
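For context on the kind of setup under discussion, here is a minimal sketch of issuing concurrent bulk-import requests against Typesense's documents import endpoint. The host, API key, collection name (`products`), batch sizes, and worker count are placeholder assumptions rather than details from the thread; whether extra concurrency shortens indexing time depends on whether the Typesense server's CPU cores are already saturated.

```python
import json
from concurrent.futures import ThreadPoolExecutor

import requests

TYPESENSE_URL = "http://localhost:8108"  # placeholder host/port
API_KEY = "xyz"                          # placeholder admin API key
COLLECTION = "products"                  # hypothetical collection name


def import_batch(batch):
    """POST one batch of documents to the bulk import endpoint as JSONL."""
    body = "\n".join(json.dumps(doc) for doc in batch)
    resp = requests.post(
        f"{TYPESENSE_URL}/collections/{COLLECTION}/documents/import",
        params={"action": "upsert"},
        headers={"X-TYPESENSE-API-KEY": API_KEY},
        data=body,
    )
    resp.raise_for_status()
    return resp.text


# Split 10,000 toy documents into batches of 1,000.
batches = [
    [{"id": str(i), "title": f"doc {i}"} for i in range(start, start + 1000)]
    for start in range(0, 10_000, 1000)
]

# Send several import requests in parallel. If the Typesense process is
# already using all available cores, the extra concurrency will not
# reduce total indexing time.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(import_batch, batches))
```

On the server side, watching per-core usage with a tool such as `htop` or `top` while the imports run is a quick way to tell whether indexing is already CPU-bound.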
Nov 15, 2023
Somesh (09:36 AM)
Kishore Nallan (09:38 AM)
Somesh (09:46 AM)
Kishore Nallan (09:47 AM)
Similar Threads
Discussion on Performance and Scalability for Multiple Term Search
Bill asked about the best way to run multi-term searches in a recommendation system they had developed. Kishore Nallan suggested using embeddings with a remote embedder, or storing and averaging vectors. Despite testing several of the suggested solutions, Bill continued to face performance issues, leaving questions about scalability and recommendation performance unresolved.
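As a rough illustration of the vector-averaging idea mentioned in that thread, the sketch below averages per-term embeddings into a single query vector and sends one nearest-neighbour search. The collection name, the `embedding` field, the tiny 4-dimensional vectors, and `k:10` are all made-up placeholders.

```python
import numpy as np
import requests

TYPESENSE_URL = "http://localhost:8108"  # placeholder host/port
API_KEY = "xyz"                          # placeholder search API key

# Hypothetical per-term embeddings (tiny 4-d vectors purely for illustration).
# Averaging them produces one query vector that represents all terms at once.
term_vectors = np.array([
    [0.1, 0.3, 0.5, 0.7],
    [0.2, 0.1, 0.4, 0.9],
    [0.0, 0.6, 0.2, 0.3],
])
query_vector = term_vectors.mean(axis=0).tolist()

# One nearest-neighbour search against a float[] field named "embedding".
payload = {
    "searches": [{
        "collection": "items",  # hypothetical collection
        "q": "*",
        "vector_query": f"embedding:({query_vector}, k:10)",
    }]
}
resp = requests.post(
    f"{TYPESENSE_URL}/multi_search",
    headers={"X-TYPESENSE-API-KEY": API_KEY},
    json=payload,
)
print(resp.json())
```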
Enhancing Vector Search Performance and Response Time using Multi-Search Feature
Bill faced performance issues with vector search using the multi_search feature. Jason and Kishore Nallan suggested running models on a GPU and excluding large fields from the search. The discussion established that adding more CPUs and enabling server-side caching could improve performance, and the thread ended with Bill reaching a resolution.
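To make the tweaks mentioned above concrete, here is a hedged sketch of a multi_search request that keeps a large `embedding` field out of the response via `exclude_fields` and opts into server-side caching with `use_cache`/`cache_ttl`; the collection, fields, queries, and TTL values are placeholder assumptions.

```python
import requests

TYPESENSE_URL = "http://localhost:8108"  # placeholder host/port
API_KEY = "xyz"                          # placeholder search API key

# Two keyword searches in one round trip. `exclude_fields` keeps the large
# `embedding` field out of each response payload, and `use_cache` asks the
# server to cache results (TTL in seconds via `cache_ttl`).
payload = {
    "searches": [
        {
            "collection": "articles",  # hypothetical collection
            "q": "vector databases",
            "query_by": "title,body",
            "exclude_fields": "embedding",
            "use_cache": True,
            "cache_ttl": 60,
        },
        {
            "collection": "articles",
            "q": "semantic search",
            "query_by": "title,body",
            "exclude_fields": "embedding",
            "use_cache": True,
            "cache_ttl": 60,
        },
    ]
}

resp = requests.post(
    f"{TYPESENSE_URL}/multi_search",
    headers={"X-TYPESENSE-API-KEY": API_KEY},
    json=payload,
)
for result in resp.json()["results"]:
    print(result.get("found"))
```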
Implementing Semantic Search with Typesense
Erik sought advice on implementing semantic search in Typesense and raised issues with slow document imports and excessive latency. After following Kishore Nallan's advice to try different models, Erik reported faster import times and ultimately decided to rate-limit imports.
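As one way to picture the rate-limited import Erik settled on, the sketch below sends documents in fixed-size batches with a pause between requests; the collection name, batch size, and delay are arbitrary placeholders rather than values from the thread.

```python
import json
import time

import requests

TYPESENSE_URL = "http://localhost:8108"  # placeholder host/port
API_KEY = "xyz"                          # placeholder admin API key
COLLECTION = "docs"                      # hypothetical collection name


def rate_limited_import(documents, batch_size=500, delay_seconds=1.0):
    """Send documents in fixed-size batches, pausing between requests so the
    server (and any embedding generation) is not flooded all at once."""
    for start in range(0, len(documents), batch_size):
        batch = documents[start:start + batch_size]
        body = "\n".join(json.dumps(doc) for doc in batch)
        resp = requests.post(
            f"{TYPESENSE_URL}/collections/{COLLECTION}/documents/import",
            params={"action": "upsert"},
            headers={"X-TYPESENSE-API-KEY": API_KEY},
            data=body,
        )
        resp.raise_for_status()
        time.sleep(delay_seconds)  # simple fixed pause between batches


docs = [{"id": str(i), "text": f"document {i}"} for i in range(2000)]
rate_limited_import(docs)
```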