# community-help
Hi all, could I get some advice on how to optimize a collection/query? For context, the collection holds about 2.2M documents. As you can tell, there are a number of `facet_by` fields in use, which is one part of the issue, performance-wise. We've managed to mitigate this with the facet sampling parameters. However, some facets have high cardinality, which means the sample may not include them, leaving users confused. The second problem is the `filter_by`, particularly the `(city:[...])` array filter. This is how we perform authorization at the moment, which we intend to replace with scoped API keys in the near future. Performance is quite bad for large arrays (~50 cities). Any advice on overcoming these issues? Sample query attached.