# community-help
e
Having an issue with the Geo load-balanced endpoint: responses are often very slow. We have had issues in the past with the load-balanced endpoint, but we recently upgraded to SDN to support our global offices and it doesn’t seem to be working correctly. The first screenshot is with the `nearest_node` option and the second is just reverting to using the 3 separate node endpoints; there’s a clear performance difference, which I suspect isn’t right. The third screenshot may be related, in that sometimes the requests fail, but it seems to only happen on the load-balanced endpoint.
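For context, the two setups being compared roughly correspond to the following typesense-js client configurations; the hostnames, API key, and timeout value below are placeholders rather than the poster's actual values:

```ts
import Typesense from 'typesense';

// Load-balanced / SDN-style setup: requests go to the nearest_node endpoint
// first, with the individual nodes as fallbacks. Hostnames are placeholders.
const lbClient = new Typesense.Client({
  nearestNode: { host: 'xxx.a1.typesense.net', port: 443, protocol: 'https' },
  nodes: [
    { host: 'xxx-1.a1.typesense.net', port: 443, protocol: 'https' },
    { host: 'xxx-2.a1.typesense.net', port: 443, protocol: 'https' },
    { host: 'xxx-3.a1.typesense.net', port: 443, protocol: 'https' },
  ],
  apiKey: 'SEARCH_ONLY_API_KEY',
  connectionTimeoutSeconds: 2,
});

// Non-load-balanced setup: the same three node endpoints, no nearestNode.
const directClient = new Typesense.Client({
  nodes: [
    { host: 'xxx-1.a1.typesense.net', port: 443, protocol: 'https' },
    { host: 'xxx-2.a1.typesense.net', port: 443, protocol: 'https' },
    { host: 'xxx-3.a1.typesense.net', port: 443, protocol: 'https' },
  ],
  apiKey: 'SEARCH_ONLY_API_KEY',
  connectionTimeoutSeconds: 2,
});
```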
k
👋 Are you using any proxy, or do your requests get routed via some firewall?
e
Yes, we do have Cloudflare installed. Could that be it?
k
Yes, it's possible that the outgoing IP is somehow not getting matched to the nearest geo node, OR that the outgoing IP is actually somewhere farther away than you expect it to be.
Try using a machine that is not connected to your corporate network and make the same requests.
e
There’s no corporate network as such, but all requests go through Cloudflare, which acts as a WAF for the application.
k
Yes, that will proxy your requests
But wait, this is for the application only right? Those XHR requests are hitting Typesense directly?
e
yes, and we were just discussing that we were experiencing this in local development too
k
Can you also verify the `search_time_ms` value in the responses, just to be sure?
Total time shown by the browser is a sum of `search_time_ms` and the actual network latency.
To truly test this, pick a really fast query that gets executed within a few milliseconds as per `search_time_ms`, and then try running that via both the LB and non-LB endpoints.
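A minimal way to check this, reusing the hypothetical `lbClient` from the sketch above and assuming a `products` collection with a `title` field:

```ts
// Compare browser-measured wall-clock time against the server-side
// search_time_ms reported in the response body.
async function checkOverhead(): Promise<void> {
  const start = performance.now();
  const result = await lbClient.collections('products').documents().search({
    q: 'shirt',          // any fast, representative query
    query_by: 'title',
  });
  const totalMs = performance.now() - start;
  // Whatever remains after subtracting search_time_ms is network latency
  // plus any proxy overhead (Cloudflare, TLS handshakes, DNS, etc.).
  console.log(`total: ${Math.round(totalMs)}ms, search_time_ms: ${result.search_time_ms}`);
}
```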
e
We’ll check now. The `search_time_ms` on the non-load-balanced configuration is about 100ms lower than the total time.
t
Initial load of non-query results with `nearest_node` enabled: 1.82s network call, `search_time_ms`: 1691. This was tested in a local dev environment.
e
This was run locally too
k
Can we totally eliminate `search_time_ms` from the equation? Pick a query that produces no results, e.g. a keyword like `afkjfkjdskjfsd` that won't exist.
Or, if the endpoint and the app are already public, DM me the cluster ID and I can debug this for you.
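The suggested test could look something like the sketch below, again reusing the hypothetical clients and `products` collection from above; with a query that matches nothing, `search_time_ms` stays near zero, so any remaining difference between the two timings is network and routing overhead:

```ts
// Time the same no-result query against both configurations so that
// search_time_ms is negligible and only the network path differs.
async function timeNoResultQuery(client: typeof lbClient, label: string): Promise<void> {
  const start = performance.now();
  const result = await client.collections('products').documents().search({
    q: 'afkjfkjdskjfsd', // keyword that matches nothing
    query_by: 'title',
  });
  console.log(
    `${label}: total ${Math.round(performance.now() - start)}ms, ` +
      `search_time_ms ${result.search_time_ms}, found ${result.found}`
  );
}

(async () => {
  await timeNoResultQuery(lbClient, 'LB / nearest_node endpoint');
  await timeNoResultQuery(directClient, 'direct node endpoint');
})();
```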
e
We ran that search and it was pretty much the same across both configurations: 8-9 ms.
k
Then I don't think the LB endpoint is the issue, because once you remove the actual query latency, everything else is network latency.
e
There is still a significant difference between them which is odd.
k
With the normal queries?
e
Our search is much more geared towards facets and filters than the actual search bar, too. Not sure if that would change things?
Yes, with normal queries and initial load
k
I don't think the LB configuration can be the culprit if the issue isn't reproduced with simple queries. The good news regarding general query latency with facets is that we are actively working on a fix, so you should see drastic improvements in a few weeks.
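For reference, the facet- and filter-heavy query shape being discussed would look roughly like this (field names and values are made up; run inside an async function):

```ts
// Browse-style query that leans on facets and filters rather than a keyword;
// latency here is dominated by faceting, not text matching.
const browseResults = await lbClient.collections('products').documents().search({
  q: '*',                      // match all documents
  query_by: 'title',
  facet_by: 'brand,category',  // facet counts drive the UI filters
  filter_by: 'price:<100',
});
console.log(browseResults.facet_counts, browseResults.search_time_ms);
```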
e
oooh. That’s good to hear.