# community-help
j
Hi everyone, while trying to load test an API that my team built, I'm running into an issue that looks like it could be a memory leak, and was curious about the community's thoughts. It appears that Typesense's memory footprint on our server is increasing with every search we execute, and never seems to go back down again. Eventually, it gets to the point where the server becomes unresponsive and needs to be rebooted. Has anyone ever seen this sort of behavior?
To get slightly more specific, I just ran a test that executed one search per second for fifteen minutes, and "typesense_memory_active_bytes" increased by about 10MB. It looks like it may be going back down again, but far too slowly for me to be comfortable with.
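(For anyone reproducing this, a minimal sketch of how that metric can be sampled over time; the port and the API key environment variable below are assumptions, not details from this thread.)

```python
import os
import time

import requests

# Poll Typesense's cluster metrics endpoint once a minute and report active memory.
# Assumes a local server on port 8108 and an admin key in TYPESENSE_API_KEY.
BASE_URL = "http://localhost:8108"
HEADERS = {"X-TYPESENSE-API-KEY": os.environ["TYPESENSE_API_KEY"]}

for _ in range(15):
    metrics = requests.get(f"{BASE_URL}/metrics.json", headers=HEADERS).json()
    # Metric values are reported as strings, so convert before doing arithmetic.
    active_mb = int(metrics["typesense_memory_active_bytes"]) / (1024 * 1024)
    print(f"typesense_memory_active_bytes: {active_mb:.1f} MB")
    time.sleep(60)
```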
j
@Jim Murtha we did have a memory leak in one of the earlier RC builds of v0.22.0. Could you try this again on 0.22.0.rcs19?
j
I believe our server is running v0.21.0 (is there an easy way to double-check this though?), so would that issue still apply?
j
Hmm, no, the memory leak I’m referring to was only in an early RC build of v0.22.0, so this is something different. You can verify which version you’re running by doing a GET /debug request.
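(For reference, a quick sketch of that check; the port and the API key environment variable are assumptions.)

```python
import os

import requests

# GET /debug returns the running server version, e.g. {"state": 1, "version": "0.21.0"}.
resp = requests.get(
    "http://localhost:8108/debug",
    headers={"X-TYPESENSE-API-KEY": os.environ["TYPESENSE_API_KEY"]},
)
print(resp.json()["version"])
```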
j
GET /debug tells me it is indeed 0.21.0
j
Oh ok, could you check if the memory increases beyond 10MB over longer periods of time? Depending on your dataset size, a 10MB variation could be within the realm of expected variation…
j
Sure. I expect that it will go beyond 10MB based on the behavior I saw in some earlier tests, but those were before I was actively monitoring memory, so I will try it out
The memory usage has decreased by just over 1MB in the ~30 minutes since my last test finished.
The same test as before, but run for 30 minutes instead of 15, saw about a 23MB increase in "typesense_memory_active_bytes"
j
Could you increase the concurrency or requests per minute to see if that causes memory consumption to increase more steeply?
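(A rough sketch of what such a load generator could look like. The collection name "items", the query, and the query_by field are placeholders, not details from this thread.)

```python
import os
import time
from concurrent.futures import ThreadPoolExecutor

import requests

# Fire a fixed number of searches per second so memory growth can be compared
# across request rates. Collection name, query, and query_by are placeholders.
BASE_URL = "http://localhost:8108"
HEADERS = {"X-TYPESENSE-API-KEY": os.environ["TYPESENSE_API_KEY"]}
SEARCHES_PER_SECOND = 10
DURATION_SECONDS = 600

def one_search() -> None:
    requests.get(
        f"{BASE_URL}/collections/items/documents/search",
        headers=HEADERS,
        params={"q": "example", "query_by": "title"},
    )

with ThreadPoolExecutor(max_workers=SEARCHES_PER_SECOND) as pool:
    for _ in range(DURATION_SECONDS):
        for _ in range(SEARCHES_PER_SECOND):
            pool.submit(one_search)
        time.sleep(1)
```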
j
Increasing it to 10 searches per second caused an 8MB increase in only one minute. Unfortunately, I don't have time to test any more right now, but I'll do more in the morning and share the results. Thanks for your help so far
👍 1
k
We are running hundreds of clusters on 0.21 in production and I haven't seen any memory leaks. The builds are also tested with Valgrind for memory leaks. However, I will be happy to take a look if it's possible for you to share a sample dataset and query on which you observe this behaviour.
j
I believe I've isolated what is causing the memory issue in my searches. I ran 10 searches per second for 10 minutes and saw a 94MB increase in memory usage. When I removed one filter that was on all of my searches, memory usage increased by only a couple of MB and, most importantly, went back down again after a minute or two.
The filter in question looks like "resolution:!=[merge,reject]". "resolution" in this case is an optional string field.
k
That's great, can you please post a sample collection schema here? I can try reproducing with it. Thanks!
j
In my API's real-world usage, this filter will probably be on most, but not all, searches. I can probably accomplish this in another, less memory-intensive way.
👍 1
And sure, give me a moment and I'll provide the schema
ts_data_schema.json
My API has a few optional parameters which add other filters to the search. I'll need to do more testing to see if any of those filters cause issues, but I feel confident in saying that the filter I mentioned before was the root of what I was seeing
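(The attached ts_data_schema.json isn't reproduced in this thread. Purely for illustration, a hypothetical minimal schema with an optional "resolution" string field, created via the collections endpoint, might look like this.)

```python
import os

import requests

# Hypothetical minimal schema for illustration only -- not the attached ts_data_schema.json.
# The relevant detail is the optional string field "resolution" used in the negated filter.
schema = {
    "name": "items",
    "fields": [
        {"name": "title", "type": "string"},
        {"name": "resolution", "type": "string", "optional": True},
    ],
}

resp = requests.post(
    "http://localhost:8108/collections",
    headers={"X-TYPESENSE-API-KEY": os.environ["TYPESENSE_API_KEY"]},
    json=schema,
)
print(resp.json())
```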
k
@Jim Murtha It looks like we have fixed this on the 0.22 RC builds.
Would you be able to give that a spin?
Do you use Linux binary or Docker container?
j
The Linux binary
k
j
I'll need to regroup with my team about trying any other builds. Thanks for your help though.
k
Thank you. I went through the changelog and found a Valgrind warning we addressed that was specific to the use of the NOT operator, so I am confident that the 0.22 RC builds are free of this.
👏 1
👍 1
If you do try the build, here are the things that have changed in 0.22 RC to be aware of: https://github.com/typesense/typesense-website/tree/v0.22.0-docs/docs-site/content/0.22.0/api#deprecations--behavior-changes
a
@Kishore Nallan when do you think this build will be officially out for use?
k
There are already customers using these RC builds. We have been identifying a few smaller last-mile issues to close. These are some rough edges / usability issues in the newer features in this release. It is pretty stable otherwise.