# community-help
  • m

    Mohsin Malik

    11/27/2025, 10:46 AM
    Hi, is there a way to increase group_limit beyond 99 for grouping? Basically I want to compute an AVG over some value of the documents grouped by a column, e.g. 'date', but there can be many documents in one date.
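For reference, if the goal is just an average per group, Typesense also returns facet_stats (min/max/sum/avg) for numeric facet fields, which avoids the group_limit cap entirely. A minimal sketch of the search parameters, with hypothetical field names (`title`, `date`, `value`):

```python
# Sketch, assuming the goal is an average per date: numeric facet fields return
# facet_stats (min/max/sum/avg) in the search response, with no group_limit
# involved. Field names "title", "date", "value" are illustrative.
search_params = {
    "q": "*",
    "query_by": "title",
    "facet_by": "value",               # numeric field -> response includes facet_stats.avg
    "filter_by": "date:=`2025-11-27`", # one request per date bucket of interest
}
# The average would be read from response["facet_counts"][0]["stats"]["avg"].
```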
  • h

    Hariharan Palani

    11/28/2025, 10:11 AM
    Hi, I am using Typesense version 27. Up to 26, the multi_search API call returned correct values. In 27, we get output for only one search query even though we send multiple queries in the request. The multi_search query object and the result follow for your reference.
  • h

    Hariharan Palani

    11/28/2025, 10:14 AM
    Hi, @Kishore Nallan @Alan Martini @Shiva Dixith I am using Typesense version 27. Up to 26, the multi_search API call returned correct values. In 27, we get output for only one search query even though we send multiple queries in the request. Here are the multi_search query object and the result for your reference.
    Query:
    Copy code
    {
      "searches": [
        {
          "collection": "components",
          "q": "*",
          "query_by": "component_name",
          "drop_tokens_threshold": 0,
          "num_typos": 0,
          "group_limit": 1,
          "sort_by": "creationtime:desc",
          "highlight_full_fields": "component_name",
          "facet_by": "Attribute_Make,Attribute_Model,attributes.Attr_Width,component_name,system_name",
          "filter_by": "(languagecode=en&&packagenamekey=[Tree_Disassembly]) && component_name:=[A/C Gas]",
          "page": 1,
          "per_page": 24
        },
        {
          "collection": "components",
          "q": "*",
          "query_by": "component_name",
          "drop_tokens_threshold": 0,
          "num_typos": 0,
          "group_limit": 1,
          "sort_by": "creationtime:desc",
          "exhaustive_search": true,
          "highlight_full_fields": "component_name",
          "facet_by": "component_name",
          "filter_by": "(languagecode=en&&packagenamekey=[Tree_Disassembly])",
          "page": 1
        }
      ]
    }
    Output: shown in the attachment. Expected behavior: the results array should contain 2 items.
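For context, a condensed sketch of the failing request shape: a multi_search body with two entries, where the expectation is one entry in the response's `results` array per search (collection and field names taken from the message above; most parameters omitted for brevity):

```python
# Condensed sketch of the request: two searches in one multi_search body.
# The expectation is that the response's "results" array has one entry per
# search, i.e. two entries here.
payload = {
    "searches": [
        {
            "collection": "components",
            "q": "*",
            "query_by": "component_name",
            "group_limit": 1,
            "page": 1,
            "per_page": 24,
        },
        {
            "collection": "components",
            "q": "*",
            "query_by": "component_name",
            "exhaustive_search": True,
            "page": 1,
        },
    ]
}
```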
  • h

    Himanshu Seth

    11/28/2025, 10:39 AM
    Hi, we are evaluating Typesense for our Shopify store. Has anyone had any success with this integration?
  • o

    Ollie J

    11/28/2025, 12:54 PM
    After doing a schema migration (drop and change type), I seem to have my cluster stuck at 100%, and it has been for over 1 hr now; I can't search it or anything.
  • p

    Pream Pinbut

    11/28/2025, 5:48 PM
    Hello, I am new to Typesense (and to search engines in general). I have a container running with 1 collection and 4 documents inside. I let it run overnight, and at one point memory increased drastically. Is this something that is not supposed to happen, or is it expected behavior of memory optimization?
  • f

    fudon

    11/29/2025, 4:46 AM
    Hello everyone, I am thinking of introducing Typesense to an X-like social media app, and trying to remove posts by blocked users from the search results. However, ! in the query filter is not working as I expected:
    "filter_by": "!$blocks(blocker_user_id := 1)"
    and
    "filter_by": "$blocks(blocker_user_id := 1)"
    return the same result. Can anyone let me know how to solve this? I'm using version 29.0.
    Copy code
    blocks_schema = {
      'name': 'blocks',
      'fields': [
        {'name': 'id', 'type': 'string'},
        {'name': 'blocker_user_id', 'type': 'int32'},
        {'name': 'blockee_user_id', 'type': 'int32'},
      ],
    }
    
    posts_schema = {
      'name': 'posts',
      'fields': [
        {'name': 'id', 'type': 'string'},
        {'name': 'name', 'type': 'string', 'stem': True},
        {'name': 'description', 'type': 'string'},
        {'name': 'created_at', 'type': 'int64'},
        {'name': 'created_by', 'type': 'int32', 'reference': 'blocks.blockee_user_id', 'async_reference': True},
      ],
    }
  • t

    Thomas Andersson

    11/29/2025, 2:30 PM
    V30 changelog: "Support for ! as a standalone negation operator in filters, allowing field:![value] syntax as an alternative to field:!=[value]." Is this related?
  • t

    Thomas Andersson

    11/29/2025, 2:30 PM
    I.e., is the = missing in v29?
  • t

    Thomas Andersson

    11/29/2025, 2:30 PM
    I'm on my phone right now so this might be the wrong idea; I cannot check easily.
  • v

    Vinay Varma

    11/29/2025, 10:34 PM
    Hi all! I noticed stopwords are currently removed only from the query, not from the documents we score against. Is there a way to apply a similar stopword-handling (or equivalent normalization) on the document side? The issue I’m trying to solve: users often omit common words like “the” in their queries, but documents containing them seem to get scored lower even when they’re the most relevant match. Any workaround or recommended approach?
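One possible client-side workaround (not a built-in Typesense feature; the stopword list and field names are illustrative): strip the stopwords from a copy of the document field before indexing, index it alongside the original, and include it in query_by, so scoring runs against text shaped like the filtered queries:

```python
# Client-side normalization sketch: strip stopwords from a copy of the field
# before indexing. Stopword list and field names are illustrative.
STOPWORDS = {"the", "a", "an", "of", "and", "in"}

def strip_stopwords(text: str) -> str:
    """Remove stopword tokens, preserving the order of the remaining words."""
    return " ".join(w for w in text.split() if w.lower() not in STOPWORDS)

doc = {"title": "The Lord of the Rings"}
doc["title_normalized"] = strip_stopwords(doc["title"])
# index title_normalized alongside title and add it to query_by
```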
  • a

    Assad Yousuf

    11/30/2025, 5:37 PM
    Hi all, who do I reach out to regarding billing anomalies? Currently I'm just going through support@typesense.com but unsure if that is the right channel.
  • h

    Hung-wei Chuang

    11/30/2025, 6:33 PM
    so with the query Florida State vs Florida and prioritizeExactMatch=true
    • doc 1: Florida State vs Florida mens basketball
    • doc 2: Alabamda State vs Florida State basketball
    these two documents have the same text match score. Is this because prioritizeExactMatch only checks for occurrences of token matches without regard to phrase order? If so, is there a way to factor in phrase order to give doc 1 a higher score? doc 2 having the same text match score as doc 1 is extremely problematic.
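One client-side workaround sketch for the tie (purely illustrative, not a Typesense feature): re-rank hits with equal text_match scores by the longest contiguous run of the query's leading tokens found in the document, which rewards phrase order:

```python
# Tie-break sketch: among hits with equal text_match scores, prefer the
# document containing the longest contiguous run of the query's leading
# tokens, so in-order phrase matches rank higher.
def longest_phrase_run(query: str, text: str) -> int:
    q = query.lower().split()
    t = text.lower().split()
    best = 0
    for i in range(len(t)):
        run = 0
        # count how many leading query tokens match contiguously from t[i]
        while run < len(q) and i + run < len(t) and t[i + run] == q[run]:
            run += 1
        best = max(best, run)
    return best

query = "Florida State vs Florida"
doc1 = "Florida State vs Florida mens basketball"
doc2 = "Alabamda State vs Florida State basketball"
# doc1 contains the full 4-token query phrase; doc2 only a 2-token run
```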
  • v

    Vamshi Aruru

    12/01/2025, 4:54 AM
    Hi everyone, I am trying to use nested array search as mentioned in the docs here https://typesense.org/docs/29.0/api/search.html#filter-parameters under "Filtering Nested Arrays of Objects". However, when I use the following filter_by:
    Copy code
    variants.{ageGroup := [`4-6 Y`] && availability := `IN_STOCK`}
    Typesense takes very long to respond and eventually times out. Our Typesense version is v29; we are on Typesense Cloud. The cluster id is f1kdzbg6o7i5n2sxp. Here's the entire curl for reference:
    Copy code
    curl --location --globoff 'https://f1kdzbg6o7i5n2sxp-1.a1.typesense.net/collections/includ_products/documents/search?q=*&query_by=searchText0%2CsearchText1%2CsearchText2&filter_by=variants.{ageGroup%20%3A%3D%20[%604-6%20Y%60]%20%26%26%20availability%20%3A%3D%20%60IN_STOCK%60}' \
    --header 'x-typesense-api-key: ••••••'
    Please let me know if I am doing anything wrong or how I can use this feature. Thank you.
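As a sanity check on the hand-encoded curl above, the filter string can be URL-encoded programmatically; this does not address the timeout itself, but it rules out encoding mistakes in the query string:

```python
# Sanity-check sketch: URL-encode the nested-array filter programmatically
# rather than by hand, then compare against the hand-built curl URL.
from urllib.parse import quote

filter_by = "variants.{ageGroup := [`4-6 Y`] && availability := `IN_STOCK`}"
encoded = quote(filter_by, safe="")  # encode everything, including backticks and spaces
url = (
    "https://f1kdzbg6o7i5n2sxp-1.a1.typesense.net/collections/includ_products"
    "/documents/search?q=*&query_by=searchText0%2CsearchText1%2CsearchText2"
    "&filter_by=" + encoded
)
```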
  • m

    Mitul Savaliya

    12/01/2025, 6:45 AM
    Hi everyone, quick question — I see Typesense releases using both rc and rca tags. I understand rc = Release Candidate, but what does rca mean in this context? How is it different from rc? Just trying to understand the tagging convention. Thanks!
  • s

    Siddharth Mahesh Tiwari

    12/01/2025, 12:49 PM
    Hi all, we have recently migrated from Algolia to Typesense, with 22M docs in our index. We are using a 3-node cluster of 32 GB / 8 vCPU servers for our use case. We are facing an issue with certain search terms: if we search "TRIVOLIB FORTE", the search is fast and within our timeout limit, with a response time of 300 ms. But if we search "TRIVOLIB FORTE 1", the search becomes very latent, the response time grows to 2.5 s, and CPU utilisation goes to 100%, with both writes and reads taking a long time. Sharing the preset config we are using in our searches; more details are attached in the doc below. Has anyone faced this before? Any help would be appreciated. https://docs.google.com/document/d/1tdjRe3RRJ8EtraqBFTaqjGARlYIQsZj1tC6zO7_jrE4/edit?tab=t.17s1n6bifxqm
  • g

    Georgi Nachev

    12/01/2025, 3:36 PM
    Hi, I wanted to ask if there is any plan or expected timeline for supporting full-text search (query_by) on referenced/joined fields in Typesense (like vendor.name when searching products)? Currently, it seems that query_by works only on the main collection, and I'm wondering if native support for searching across related collections will be added soon. Thank you!
  • j

    Josh Handley

    12/01/2025, 4:20 PM
    Ran into this crash while uploading synonyms on a new cluster we just set up. We just created the collection and had not added any documents yet. It is the synonym list we uploaded successfully in another cluster.
  • m

    Mustafa Kilic

    12/02/2025, 12:01 AM
    Hello guys, I am very new to open-source search engine tools and would like an answer to a simple question: why can't these tools run without Docker on Windows? What is the technical reason behind it? Why couldn't it be an .exe file, a library, API access, or anything else 🙂? I wanted to use any of these tools, but as far as I understand, Typesense and OpenSearch need Docker, and Elastic... I would really appreciate any sources of information you could give me about this.
  • g

    Georgi Nachev

    12/02/2025, 8:14 AM
    Hi @Harpreet Sangar! Since full-text search on referenced fields is not available yet, could you share what the Typesense team considers the recommended approach for handling this use case today? Specifically: • Is denormalizing related fields into the main collection the preferred method? • Or is a multi-step search (search vendors → filter products) the approach you recommend? Any guidance or examples on best practices would be greatly appreciated!
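A minimal sketch of the first option (denormalizing), with illustrative field names: copy the searchable vendor fields onto each product document at index time, and keep the vendor id so a vendor's products can be re-indexed when the vendor changes:

```python
# Denormalization sketch: flatten the searchable vendor fields into each
# product document at index time. Field names are illustrative.
def denormalize(product: dict, vendor: dict) -> dict:
    doc = dict(product)
    doc["vendor_name"] = vendor["name"]  # now usable in query_by on products
    doc["vendor_id"] = vendor["id"]      # kept so products can be re-indexed on vendor updates
    return doc

vendor = {"id": "v1", "name": "Acme Tools"}
product = {"id": "p1", "name": "Hammer"}
doc = denormalize(product, vendor)
# when a vendor is renamed, re-index its products, e.g. those matching vendor_id:=v1
```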
  • c

    Chandar Venkata Rama

    12/03/2025, 5:25 AM
    Hello, I have to search for a string by name `*Hey`di AS*` (with a backtick inside the value). When I filter with it as given in the example I get zero results, whereas when I remove `*Hey`di AS*` the other values are returned. "filter_by": "name:=[*Heydi*, *Hey´di*, `*Hey`di AS*`, *Heydi AS*]" — is the backtick in the string causing the issue, and how can I escape it?
  • l

    Luca Lusso

    12/03/2025, 3:01 PM
    Hi, I'm using an instance of LiteLLM as a remote embedder. Due to a network misconfiguration, the LiteLLM instance went down for a while. It seems that this event caused Typesense to delete the embedding field. Was it just a coincidence? In any case, the embedding field disappeared from all collections at some point.
  • m

    Mateusz Buśkiewicz

    12/03/2025, 6:15 PM
    I'm having issues with setting a custom num_dim for OpenAI-compatible APIs. I'm using typesense/typesense:30.0.rca34 with these field settings:
    Copy code
    {
          name: "embeddings_q",
          type: "float[]",
          embed: {
            from: ["q"],
            model_config: {
              model_name: "openai/mymodel",
              api_key: "",
          url: "http://debug-proxy:8080/v1",
            },
          },
          num_dim: 2,
        },
    The API is served with ghcr.io/huggingface/text-embeddings-inference:hopper-1.8.3, and then I use mitmproxy/mitmproxy for debugging purposes. Creating the collection works fine, and Typesense is passing dimensions properly:
    Copy code
    debug-proxy  | 172.18.0.2:54786: POST http://embeddings/v1/embeddings
    debug-proxy  |     Host: embeddings
    debug-proxy  |     User-Agent: Typesense/1.0
    debug-proxy  |     Accept: */*
    debug-proxy  |     Content-Type: application/json
    debug-proxy  |     Authorization: Bearer
    debug-proxy  |     Content-Length: 55
    debug-proxy  | 
    debug-proxy  |     {
    debug-proxy  |         "dimensions": 2,
    debug-proxy  |         "input": "typesense",
    debug-proxy  |         "model": "my-model"
    debug-proxy  |     }
    debug-proxy  | 
    debug-proxy  |  << 200 OK 190b
    debug-proxy  |     content-type: application/json
    debug-proxy  |     x-compute-type: gpu+optimized
    debug-proxy  |     x-compute-time: 2
    debug-proxy  |     x-compute-characters: 9
    debug-proxy  |     x-compute-tokens: 4
    debug-proxy  |     x-total-time: 2
    debug-proxy  |     x-tokenization-time: 0
    debug-proxy  |     x-queue-time: 0
    debug-proxy  |     x-inference-time: 1
    debug-proxy  |     vary: origin, access-control-request-method, access-control-request-headers
    debug-proxy  |     access-control-allow-origin: *
    debug-proxy  |     content-length: 190
    debug-proxy  |     date: Wed, 03 Dec 2025 18:14:27 GMT
    debug-proxy  | 
    debug-proxy  |     {
    debug-proxy  |         "object": "list",
    debug-proxy  |         "data": [
    debug-proxy  |             {
    debug-proxy  |                 "object": "embedding",
    debug-proxy  |                 "embedding": [
    debug-proxy  |                     -0.8041298,
    debug-proxy  |                     0.59445375
    debug-proxy  |                 ],
    debug-proxy  |                 "index": 0
    debug-proxy  |             }
    debug-proxy  |         ],
    debug-proxy  |         "model": "Snowflake/snowflake-arctic-embed-l-v2.0",
    debug-proxy  |         "usage": {
    debug-proxy  |             "prompt_tokens": 4,
    debug-proxy  |             "total_tokens": 4
    debug-proxy  |         }
    debug-proxy  |     }
    debug-proxy  |
    But then, when I actually try to index something, it throws:
    Copy code
    curl -X POST "http://localhost:8108/collections/blocks/documents"   -H "X-TYPESENSE-API-KEY: key"   -H "Content-Type: application/json"   -d '{
        "q": "example query text",
        "organics_text": "example organics text content"
      }'
    {"message":"Vector size mismatch."}
    And mitmproxy shows that an incorrect num_dim is passed:
    Copy code
    debug-proxy  | 172.18.0.2:46254: POST http://embeddings/v1/embeddings
    debug-proxy  |     Host: embeddings
    debug-proxy  |     User-Agent: Typesense/1.0
    debug-proxy  |     Accept: */*
    debug-proxy  |     Content-Type: application/json
    debug-proxy  |     Authorization: Bearer
    debug-proxy  |     Content-Length: 70
    debug-proxy  | 
    debug-proxy  |     {
    debug-proxy  |         "dimensions": 1024,
    debug-proxy  |         "input": [
    debug-proxy  |             "example query text "
    debug-proxy  |         ],
    debug-proxy  |         "model": "my-model"
    debug-proxy  |     }
    debug-proxy  | 
    debug-proxy  |  << 200 OK 12.6k
    debug-proxy  |     content-type: application/json
    debug-proxy  |     x-compute-type: gpu+optimized
    debug-proxy  |     x-compute-time: 2
    debug-proxy  |     x-compute-characters: 19
    debug-proxy  |     x-compute-tokens: 7
    debug-proxy  |     x-total-time: 2
    debug-proxy  |     x-tokenization-time: 0
    debug-proxy  |     x-queue-time: 0
    debug-proxy  |     x-inference-time: 1
    debug-proxy  |     vary: origin, access-control-request-method, access-control-request-headers
    debug-proxy  |     access-control-allow-origin: *
    debug-proxy  |     content-length: 12885
    debug-proxy  |     date: Wed, 03 Dec 2025 18:15:04 GMT
    debug-proxy  | 
    debug-proxy  |     {
    debug-proxy  |         "object": "list",
    debug-proxy  |         "data": [
    debug-proxy  |             {
    debug-proxy  |                 "object": "embedding",
    debug-proxy  |                 "embedding": [
    debug-proxy  |                     -0.0013086506,
    debug-proxy  |                     0.114735976,
    debug-proxy  |                     0.028122982,
    // rest omitted for brevity
  • m

    Mohsin Malik

    12/03/2025, 8:47 PM
    Hi, is there a way I can filter data based on the presence of some key in a doc? E.g. only return documents where a prices: [{}] field is present?
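One workaround sketch, in case a direct presence filter isn't available (field names illustrative, not a built-in feature): materialize the key's presence as a boolean at index time, then filter on has_prices:=true:

```python
# Presence-flag sketch: add a boolean field at index time that records whether
# the key exists and is non-empty, so searches can filter on it directly.
def with_presence_flag(doc: dict, key: str) -> dict:
    out = dict(doc)
    out[f"has_{key}"] = bool(doc.get(key))  # False for missing or empty values
    return out

d1 = with_presence_flag({"id": "1", "prices": [{"amount": 5}]}, "prices")
d2 = with_presence_flag({"id": "2"}, "prices")
# then search with filter_by=has_prices:=true
```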
  • n

    Nik Spyratos

    12/04/2025, 8:50 AM
    Hey peeps! Odd one I'm encountering that I haven't been able to replicate on my local Typesense vs production. For a handful of document updates in a large data set, I'm getting a 400: Field <fieldname> must be an array. As far as I can tell, an array with values is always being sent through in my payload. The field is not set to optional, but passing empty arrays seems to work locally. The schema definition from the cluster is:
    Copy code
    {
          "facet": true,
          "index": true,
          "infix": false,
          "locale": "<foreign locale>",
          "name": "fieldname",
          "optional": false,
          "sort": false,
          "stem": false,
          "stem_dictionary": "",
          "store": true,
          "type": "string[]"
        },
    This is only happening for a small subset of data, and again, from inspecting what should be sent through, there is definitely data inside the field array being passed.
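A small client-side guard that may help isolate the failing documents: validate or coerce the field before the import call, so the offending payloads are caught locally instead of as a 400 from the server (field name taken from the schema above; the coercion rules are illustrative):

```python
# Pre-import guard sketch: make sure a string[] field really is a list before
# the document is sent, catching or fixing bad payloads client-side.
def ensure_string_array(doc: dict, field: str) -> dict:
    out = dict(doc)
    value = out.get(field)
    if value is None:
        out[field] = []                # treat a missing field as an empty array
    elif isinstance(value, str):
        out[field] = [value]           # wrap a stray scalar into an array
    elif not isinstance(value, list):
        raise TypeError(f"{field} must be a string[] value, got {type(value).__name__}")
    return out

fixed = ensure_string_array({"id": "1", "fieldname": "x"}, "fieldname")
```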
  • d

    Diego Chacón Sanchiz

    12/04/2025, 11:17 AM
    Hello guys! We've noticed that without any changes (no indexing, no searches, no infrastructure changes), starting last night at 12:00 AM, we began seeing this log on the leader:
    Copy code
    E20251204 12:10:17.259342 3351530 raft_server.cpp:779] 527 lagging entries > healthy write lag of 500
    E20251204 12:10:26.260561 3351530 raft_server.cpp:779] 527 lagging entries > healthy write lag of 500
    E20251204 12:10:35.261900 3351530 raft_server.cpp:779] 527 lagging entries > healthy write lag of 500
    I20251204 12:10:42.761341 3351531 batched_indexer.cpp:441] Stuck req_key: 1764803434843672
    I20251204 12:10:42.761355 3351531 batched_indexer.cpp:441] Stuck req_key: 1764803434846285
    I20251204 12:10:42.761365 3351531 batched_indexer.cpp:441] Stuck req_key: 1764803434846395
    I20251204 12:10:42.761377 3351531 batched_indexer.cpp:441] Stuck req_key: 1764803434846420
    I20251204 12:10:42.761391 3351531 batched_indexer.cpp:441] Stuck req_key: 1764803434847389
    E20251204 12:10:44.263202 3351530 raft_server.cpp:779] 527 lagging entries > healthy write lag of 500
    Why does this happen? And how can we mitigate it?
  • d

    Diego Chacón Sanchiz

    12/04/2025, 12:02 PM
    The curious thing, Alan, is that this happened to us in a non-production environment and at night; that is, nobody was operating on the cluster at that time.
  • g

    Gauthier Robe

    12/04/2025, 7:34 PM
    Hi! I am on Typesense Cloud v30.rca34. I was hoping it would allow me to define a Natural Language Search model that is a local llama.cpp OpenAI-format server. I am able to define a conversational model (RAG) that way using the openai_url property; it doesn't seem to be available for NLS? I was hoping to use something like this:
    Copy code
    {
      "id": "NL-llama.cpp-ministral-3-14B",
      "model_name": "openai/Ministral-3-14B-Instruct-2512-Q4_K_M.gguf",
      "api_key": "NOT_NEEDED",
    "openai_url": "http://xxxxxxxx:5000",
      "max_bytes": 16000,
      "temperature": 0,
      "system_prompt": "Be precise and accurate in parsing queries"
    }
    Any other options? Thank you!
  • p

    Praneeth Patlola

    12/05/2025, 3:25 AM
    Has anyone built tracing on top of Typesense's OpenAI integration? We are seeing a huge burn on our tokens and are trying to understand where they are being spent.
  • p

    Pratiksha Bhosle

    12/05/2025, 9:56 AM
    👋 Hi everyone! I am currently using Algolia and want to shift to Typesense. My main concern is whether I can get the same quality of results that Algolia gives me now. If someone has done this migration, please let me know.