Jason Bosco
06/30/2025, 7:43 PM
v29.0 is now out!
Here are some key new features:
⭐️ Natural Language Search - Using the magic of LLMs, Typesense can now detect user intent in search queries automatically, and convert parts of the query into structured filters and sorts.
For example, if you have a cars dataset and a user types in "A Honda or BMW with at least 200 hp", Typesense can convert that automatically into: filter_by: make:[Honda, BMW] && engine_hp:>=200. This leads to much higher-quality, less-noisy results compared to a simple full-text search, or even semantic / hybrid search.
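To make the conversion concrete, here is a toy rule-based stand-in for the LLM step (the real feature uses an LLM; the function name, brand list, and regex below are illustrative assumptions, not part of Typesense):

```python
import re

# Toy stand-in for the LLM conversion step described above.
# The brand list and regex are illustrative only.
KNOWN_MAKES = ("Honda", "BMW", "Toyota")

def toy_text_to_filter(query: str) -> str:
    """Turn a natural-language car query into a Typesense-style filter_by string."""
    makes = [m for m in KNOWN_MAKES if m in query]
    hp = re.search(r"at least (\d+)\s*hp", query)
    parts = []
    if makes:
        parts.append(f"make:[{', '.join(makes)}]")
    if hp:
        parts.append(f"engine_hp:>={hp.group(1)}")
    return " && ".join(parts)

print(toy_text_to_filter("A Honda or BMW with at least 200 hp"))
# prints: make:[Honda, BMW] && engine_hp:>=200
```

The LLM does this extraction against your actual schema, so it generalizes far beyond what hand-written rules like these could cover.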
Just like Text-to-SQL, think of this as "Text to Typesense Query". You can also combine this with Conversational Search to create novel search experiences.
The best part about this feature is that you do not need to generate embeddings! You can use this with your existing JSON documents as is and there are no additional storage requirements.
Check out the docs.
⭐️ Filtering on attributes of Nested Array of Objects: a commonly requested feature. Docs.
⭐ Streaming Responses in Conversational Search: Previously you had to wait until the entire response was formed before Typesense would return the results. Now you can stream the response in real time, as it is being generated by the LLM.
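As a sketch of what streaming buys you over waiting for the full response (the chunk source below is simulated; the actual streaming client API isn't shown in this thread):

```python
def consume_stream(chunks):
    # Accumulate streamed tokens, rendering the partial answer as it grows,
    # instead of blocking until the full LLM response is ready.
    answer = ""
    for chunk in chunks:
        answer += chunk
        print(answer)  # in a UI you would re-render the partial answer here
    return answer

# Simulated token stream standing in for the LLM's streamed output:
full = consume_stream(["Streaming ", "beats ", "waiting."])
```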
⭐ Add Tags to Popular and No-Hits searches in Search Analytics: You can now add tags to searches and track those tags in Typesense's built-in analytics. A few use cases we've heard where this could be helpful: tracking analytics per web store, per locale, per platform, etc. You can also track the values of the filters used.
⭐ Ability to upload any user-provided images at search time, to do dynamic image similarity searches
⭐ Performance improvements when using group_by on high cardinality fields.
⭐ Various stability and quality-of-life improvements in the JOIN and use_cache features.
Those are just some of the highlights. The complete changelog is here: https://typesense.org/docs/29.0/api/#what-s-new
As always, thank you for all your feedback, questions, feature requests, bug reports and contributions that helped ship this release 🙏 🙌

Ankur Gupta
06/30/2025, 7:44 PM
> ⭐️ Natural Language Search - Using the magic of LLMs, Typesense can now detect user intent in search queries automatically, and convert parts of the query into structured filters and sorts.
This is huge.

Ankur Gupta
06/30/2025, 7:44 PM

Jason Bosco
06/30/2025, 7:45 PM

Sam Schelfhout
06/30/2025, 7:49 PM

Vamshi Aruru
06/30/2025, 8:41 PM
On "Filtering Nested Array Objects": the release notes say "Filter for two properties within a nested array of objects", but the docs don't mention a limitation of filtering on only two properties. I imagine the docs are correct here?

Jason Bosco
06/30/2025, 8:46 PM

Mac McCabe
06/30/2025, 10:53 PM

Jason Bosco
06/30/2025, 10:55 PM

Charley Carriero
07/01/2025, 11:24 AM

Óscar Vicente
07/01/2025, 12:42 PM

Aditya Verma
07/04/2025, 5:57 AM

Jason Bosco
07/14/2025, 7:03 PM
Only searches with nl_query=true will go through the LLM, so you can control when to trigger this feature. For example, you could show instant keyword search results, and then trigger nl_query search results in parallel as needed.
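That keyword-first, NL-in-parallel pattern can be sketched like this (the `search` function below is a stub standing in for a real Typesense client call; the real client and collection setup are not shown in this thread):

```python
from concurrent.futures import ThreadPoolExecutor

def search(params):
    # Stub standing in for a real Typesense client call.
    return {"params": params, "hits": []}

def keyword_plus_nl(q):
    # Fire a fast keyword search and a slower nl_query=true search in
    # parallel; show keyword hits immediately, swap in NL hits when ready.
    with ThreadPoolExecutor(max_workers=2) as pool:
        keyword = pool.submit(search, {"q": q})
        nl = pool.submit(search, {"q": q, "nl_query": True})
        return keyword.result(), nl.result()

kw_results, nl_results = keyword_plus_nl("honda 200 hp")
```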
Jason Bosco
07/14/2025, 7:03 PM

Gauthier PLM
08/07/2025, 10:03 AM

new_in_town
08/16/2025, 3:02 PM
curl "https://api.novita.ai/openai/v1/chat/completions" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $NOVITA_API_KEY" \
-d @- << 'EOF'
{
  "model": "qwen/qwen3-8b-fp8",
  "messages": [
    {
      "role": "system",
      "content": "Be a helpful assistant"
    },
    {
      "role": "user",
      "content": "Hi there!"
    }
  ],
  "response_format": { "type": "text" },
  "max_tokens": 10000,
  "temperature": 1,
  "top_p": 1,
  "min_p": 0,
  "top_k": 50,
  "presence_penalty": 0,
  "frequency_penalty": 0,
  "repetition_penalty": 1
}
EOF
So, you do not need to list Novita + Qwen3-8B as a "Supported Model Type". Just make the URL https://api.novita.ai/openai/v1/chat/completions fully configurable (all such URLs end in /chat/completions), as well as max_tokens, temperature, top_p...
That's it. Just write in the documentation: "We tested this feature with this-and-that model/provider with these parameters."
P.S.
You can compare with, say, OpenRouter: https://openrouter.ai/docs/quickstart#using-the-openrouter-api-directly - click on the "Shell" tab: same /chat/completions at the end and the same set of parameters!