Discussing the Server-Side Cache Limit for Optimising CPU Usage
TL;DR: Aljosa found their queries too varied to benefit from the server-side cache and suggested increasing the cache limit. Kishore Nallan offered potential solutions and agreed to make the cache limit configurable. After adjusting their queries, Aljosa saw improved CPU usage.


Feb 18, 2022 (19 months ago)
Aljosa
09:18 PM
Feb 19, 2022 (19 months ago)
Kishore Nallan
07:46 AM
I can't think of a scenario where this will be counterproductive, though. Have you noticed any strange behaviors?
Aljosa
06:09 PM
I believe our queries are much too varied to benefit from the current cache implementation with its 128-query response limit, since we have over 1,000 collections with unique facets for each collection. Definitely explains why we haven't seen a reduction in CPU usage with the server-side cache 😅 (and Cloudflare doesn't (and can't) cache the POST requests used for multi_search queries).
128 does seem very conservative though; perhaps the option to configure it could be exposed in a future build? Wondering if we should set up Redis, but that feels wrong since Typesense is already in-memory!
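For context, a minimal sketch of what opting a multi_search request into the server-side cache can look like over the plain HTTP API. The host, API key, collection, and field names below are placeholders; use_cache and cache_ttl are the per-request caching parameters described in the Typesense docs.
```python
import requests

TYPESENSE_HOST = "http://localhost:8108"  # placeholder host
API_KEY = "xyz"                           # placeholder search-only API key

# Each search in the batch opts into the server-side cache via use_cache;
# cache_ttl (seconds) controls how long a cached response is reused.
body = {
    "searches": [
        {
            "collection": "products_store_1",  # hypothetical collection name
            "q": "shoes",
            "query_by": "name,description",
            "facet_by": "brand,colour",
            "use_cache": True,
            "cache_ttl": 60,
        },
        {
            "collection": "products_store_2",
            "q": "shoes",
            "query_by": "name,description",
            "use_cache": True,
        },
    ]
}

resp = requests.post(
    f"{TYPESENSE_HOST}/multi_search",
    headers={"X-TYPESENSE-API-KEY": API_KEY},
    json=body,
    timeout=5,
)
resp.raise_for_status()
print([r.get("found") for r in resp.json()["results"]])
```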
I think for CPU usage optimisation, our main concern may be max_facet_values, which we've set to 1000 (!?) from the default value of 10 to ensure all facet values are returned, due to the dynamic nature of our collections. I would imagine that it's one of the most computationally expensive operations, if not the most expensive one. I'll be making our max_facet_values call dynamic, which should help there.
Feb 20, 2022 (19 months ago)
Kishore Nallan
01:57 AMIf you had a mapping of which facets are applicable to which collection, then you can reduce the number of facets called for. I wonder if allowing an approximate facet counting mode will also help, but so far nobody has actually asked for that.
Feb 21, 2022 (19 months ago)
Aljosa
12:46 AM
As far as making the cache entries configurable, I guess it will depend on what the cost is to increase the number. Would going from 128 values to 10,000 simply be an overhead of 10k values * the record size?
Kishore Nallan
02:24 AM
size_of_record * num_results_returned * 10,000
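Plugging illustrative numbers into that formula gives a feel for the overhead of a 10,000-entry cache; the ~2 KB per record and 10 hits per response below are assumptions, not measurements.
```python
# Back-of-the-envelope sizing for Kishore's formula:
#   size_of_record * num_results_returned * num_cache_entries
size_of_record_bytes = 2 * 1024   # assume ~2 KB of JSON per returned document
num_results_returned = 10         # assume 10 hits cached per query response
num_cache_entries = 10_000        # the proposed cache limit

total_bytes = size_of_record_bytes * num_results_returned * num_cache_entries
print(f"~{total_bytes / 1024 ** 2:.0f} MB of cached responses")  # ~195 MB
```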

Feb 28, 2022 (19 months ago)
Aljosa
03:47 PM
Kishore Nallan
03:49 PM
Similar Threads
Improving System Performance and Typesense Query Efficiency
SamHendley was experiencing performance issues with Typesense's large-scale system testing and proposed several improvements. Both Jason and Kishore Nallan addressed the suggestions and corrected some misconceptions. They provided further clarification and recommended upgrades for better performance.
Discussions on Typesense, Collections, and Dynamic Fields
Tugay shares plans to use Typesense for their SaaS platform and asks about collection sizes and sharding. Jason clarifies Typesense's capabilities and shares a beta feature. They discuss using unique collections per customer and new improvements. Kishore Nallan and Gabe comment on threading and data protection respectively.


Dynamic Facets in Typesense Cloud Version and Optimizing Performance
Andrew asked about dynamic facets in Typesense. Jason gave in-depth explanations and confirmed that Typesense API already supports dynamic facets. Alex inquired on improving performance and SSR support, and Jason suggested server-side rendering and caching for optimized performance.

Understanding Indexing and Search-As-You-Type In Typesense
Steven had queries about indexing and search-as-you-type in Typesense. Jason clarified that bulk updates are faster and search-as-you-type is resource intensive but worth it. The discussion also included querying benchmarks and Typesense's drop_tokens_threshold parameter, with participation from bnfd.

Production Typesense Issue with Unexpected Filter Behavior
Ankit flagged a problem with a specific filter on the production server of Typesense. After several exchanges regarding optimisation and version checks, Kishore Nallan provided latest builds to troubleshoot. The filtering within facets issue persists and potential edge cases are being investigated.

