# community-help
  • Julian

    10/14/2025, 10:18 AM
    Hi all ✌️ We are facing massive performance issues when filtering nested array objects. Applying a filter like the following (taken from our code; it's a recipe database) results in a response time of ~5 seconds on an up-to-date MacBook with 12 CPUs and 64 GB of RAM.
    nutritionalValues.{field:=energyCal && perPortion:>200}
    Are there any optimization strategies, or are we simply using it wrong?
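    One workaround that might be worth benchmarking (a sketch only; the collection name "recipes" and the flattened field names are assumptions, not taken from the thread): index each nutritional value as its own top-level numeric field at write time, so the filter becomes a plain numeric comparison instead of a nested-array object match.

    import typesense

    client = typesense.Client({
        'nodes': [{'host': 'localhost', 'port': '8108', 'protocol': 'http'}],
        'api_key': 'xyz',
        'connection_timeout_seconds': 5,
    })

    # Hypothetical flattened document shape: one numeric field per nutrient,
    # derived from the nested nutritionalValues array during ingestion.
    client.collections['recipes'].documents.upsert({
        'id': 'recipe-42',
        'title': 'Lentil curry',
        'nutrition_energyCal_perPortion': 230,
        'nutrition_protein_perPortion': 18,
    })

    # The nested filter from the question,
    #   nutritionalValues.{field:=energyCal && perPortion:>200}
    # then becomes a simple numeric range filter:
    results = client.collections['recipes'].documents.search({
        'q': '*',
        'query_by': 'title',
        'filter_by': 'nutrition_energyCal_perPortion:>200',
    })
    print(results['found'])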
  • Vikas Chawla

    10/14/2025, 5:22 PM
    Is there any recommended Typesense library available for performing CRUD operations: searching/filtering/reading/pushing/updating/deleting JSONL records from and into a Typesense collection? Is there a library or repo where all these functions are available?
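    For what it's worth, the official clients (typesense-python, typesense-js, and so on) cover exactly these operations. A minimal sketch with typesense-python, assuming a local server and an illustrative "books" collection (names and schema here are placeholders):

    import typesense

    client = typesense.Client({
        'nodes': [{'host': 'localhost', 'port': '8108', 'protocol': 'http'}],
        'api_key': 'xyz',
        'connection_timeout_seconds': 5,
    })

    # Create a collection (raises if it already exists).
    client.collections.create({
        'name': 'books',
        'fields': [
            {'name': 'title', 'type': 'string'},
            {'name': 'year', 'type': 'int32'},
        ],
    })

    # Push: bulk-import JSONL records (one JSON object per line).
    jsonl = '{"id": "1", "title": "Dune", "year": 1965}\n{"id": "2", "title": "Hyperion", "year": 1989}'
    client.collections['books'].documents.import_(jsonl, {'action': 'upsert'})

    # Read / search / filter.
    hits = client.collections['books'].documents.search({
        'q': 'dune', 'query_by': 'title', 'filter_by': 'year:>1900',
    })

    # Update and delete individual records by id.
    client.collections['books'].documents['1'].update({'year': 1966})
    client.collections['books'].documents['2'].delete()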
  • James King

    10/15/2025, 10:08 AM
    Mornings all, we are looking to monitor TS cluster stats and metrics using the respective endpoints. How frequently should we be hitting those endpoints to make sure we do not miss any data?
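    For reference, a bare-bones polling sketch against the /metrics.json and /stats.json endpoints (the 10-second interval, and the assumption that the stats are averaged over a short rolling window, are mine; verify the window for your version before relying on it):

    import time
    import requests

    BASE = 'http://localhost:8108'
    HEADERS = {'X-TYPESENSE-API-KEY': 'xyz'}

    def poll(interval_seconds: int = 10) -> None:
        """Scrape cluster metrics and stats on a fixed interval."""
        while True:
            metrics = requests.get(f'{BASE}/metrics.json', headers=HEADERS, timeout=5).json()
            stats = requests.get(f'{BASE}/stats.json', headers=HEADERS, timeout=5).json()
            # Ship these to your monitoring system instead of printing.
            print(metrics.get('system_memory_used_bytes'), stats.get('search_requests_per_second'))
            time.sleep(interval_seconds)

    poll()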
  • Vidar Brudvik

    10/15/2025, 8:11 PM
    Hi! We’re building a candidate search with Typesense (similar to LinkedIn). We want to allow users to search for multiple values in the same field, e.g. q: “developer designer”, query_by: “Title”. But Typesense applies AND logic between words in q. We want OR logic, so that results match either “developer” OR “designer”. We want to avoid filter_by, since we rely on typo tolerance, synonyms, and relevance scoring from q/query_by. Is there a way to get OR behavior in q? Would comma-separated terms work? Or do we need to run multiple queries and merge results manually? Any suggestions welcome!
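    One workaround sketch for the "multiple queries and merge" route (the collection name "candidates" is assumed; "Title" is from the message): run one sub-query per term through multi_search, so each keeps typo tolerance, synonyms, and text-match scoring, then de-duplicate client-side.

    import typesense

    client = typesense.Client({
        'nodes': [{'host': 'localhost', 'port': '8108', 'protocol': 'http'}],
        'api_key': 'xyz',
        'connection_timeout_seconds': 5,
    })

    terms = ['developer', 'designer']
    searches = {'searches': [{'collection': 'candidates', 'q': term} for term in terms]}
    common_params = {'query_by': 'Title', 'per_page': 50}

    response = client.multi_search.perform(searches, common_params)

    # Keep each document once, with its best text_match score across the sub-queries.
    best = {}
    for result in response['results']:
        for hit in result.get('hits', []):
            doc_id = hit['document']['id']
            if doc_id not in best or hit['text_match'] > best[doc_id]['text_match']:
                best[doc_id] = hit
    merged = sorted(best.values(), key=lambda h: h['text_match'], reverse=True)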
  • Óscar Vicente

    10/16/2025, 9:02 AM
    We just got this error:
    W20251016 083017.783157 348334 replicator.cpp:397] Group default_group fail to issue RPC to 10.0.1.481078108 _consecutive_error_times=5551, [E2][10.0.1.4:8107][E2]peer_id not exist [R1][E2][10.0.1.4:8107][E2]peer_id not exist [R2][E2][10.0.1.4:8107][E2]peer_id not exist [R3][E2][10.0.1.4:8107][E2]peer_id not exist
    E20251016 083018.985527 348057 backward.hpp:4200] Stack trace (most recent call last) in thread 348057:
    #12 Object "/usr/lib/x86_64-linux-gnu/ld-linux-x86-64.so.2", at 0xffffffffffffffff
    #11 Object "/usr/lib/x86_64-linux-gnu/libc.so.6", at 0x7f295e929c6b
    #10 Object "/usr/lib/x86_64-linux-gnu/libc.so.6", at 0x7f295e89caa3
    #9  Object "/usr/bin/typesense-server", at 0x6324d8932753, in execute_native_thread_routine
    #8  Source "include/threadpool.h", line 59, in operator(); "/usr/include/c++/10/future", line 1592, in ThreadPool [0x6324d5c45acc]
    #7  "/usr/include/c++/10/future", line 1459, in _M_set_result (std::call_once / __gthread_once internals); gthr-default.h, line 700, in _M_run [0x6324d5e9ee53]
    #6  Object "/usr/lib/x86_64-linux-gnu/libc.so.6", at 0x7f295e8a1ed2
    #5  "/usr/include/c++/10/future", line 572, in operator(); std_function.h, line 622, in _M_do_set [0x6324d5c44ce2]
    #4  std::function/std::bind internals invoking Index::batch_memory_index(...)::<lambda()>; invoke.h, line 60, in _M_invoke [0x6324d5edeea6]
    #3  Source "src/index.cpp", line 677, in operator() [0x6324d5edebb2]
    #2  Source "src/index.cpp", line 876, in index_field_in_memory [0x6324d5edc22c]
    #1  Source "src/art.cpp", line 750, in recursive_insert; line 641, in add_document_to_leaf; line 428, in art_inserts [0x6324d5c08475]
    #0  Source "src/posting.cpp", line 58, in upsert [0x6324d5f9dcd0]
    I20251016 083019.793485 348057 batched_indexer.cpp:539] Saving currently applying index: 1756860
    I20251016 083019.793681 348057 housekeeper.cpp:96] No in-flight search queries were found.
    E20251016 083019.793690 348057 typesense_server.cpp:159] Typesense 29.0.rc30 is terminating abruptly.
    It took the entire cluster down, one by one, over the span of an hour. It kept working until the whole cluster was restarting. Is this a known error for rc30, and has it been fixed, so that we should upgrade? This is production, so we would like to know before upgrading.
  • Óscar Vicente

    10/16/2025, 3:55 PM
    Is there any plan to implement diversification algorithms? I'm using Typesense as a vector store for LLMs, and it would be great, since the top results are sometimes practically the same and I would like to get results that are less relevant but different. Here's how Elasticsearch does it: Diversifying search results with Maximum Marginal Relevance - Elasticsearch Labs https://share.google/UG7BwcEJ1SkRv02LZ
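    Until something like that exists server-side, MMR can be approximated client-side over the returned hits. A rough sketch (it assumes the hits still carry their embedding vectors in a field called "embedding", which is not guaranteed for every setup):

    import numpy as np

    def mmr_rerank(query_vec, hits, k=10, lambda_mult=0.7, field='embedding'):
        """Greedy Maximal Marginal Relevance re-ranking over Typesense hits."""
        def cos(a, b):
            return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

        query_vec = np.asarray(query_vec, dtype=float)
        vectors = [np.asarray(hit['document'][field], dtype=float) for hit in hits]
        remaining = list(range(len(hits)))
        selected = []
        while remaining and len(selected) < k:
            def score(i):
                relevance = cos(query_vec, vectors[i])
                redundancy = max((cos(vectors[i], vectors[j]) for j in selected), default=0.0)
                return lambda_mult * relevance - (1 - lambda_mult) * redundancy
            best = max(remaining, key=score)
            selected.append(best)
            remaining.remove(best)
        return [hits[i] for i in selected]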
  • Jochem Top

    10/17/2025, 9:46 AM
    Hi all! Does anyone know of a way to see the average number of search calls being made in a cluster over a month?
  • smileBeda

    10/17/2025, 10:40 AM
    Hi - has anyone come across a good self-hosted Typesense dashboard? I don't need much, and I have tried some available on GitHub, which are however all quite "work in progress"... I wanted to see whether the community here has some input.
  • Dave

    10/17/2025, 2:23 PM
    Hey, we are upgrading a cluster that was using 2h burst mode, and the upgrade seems to be stuck. Can someone please verify on your end?
  • Santhosh

    10/17/2025, 3:50 PM
    Hi everyone, what is the required filter_by syntax to include only documents that contain a specific field (must_have_field), excluding any document where the field is entirely missing? Does Typesense have an operator that is equivalent to checking for a non-null or existent field?
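    If it turns out there is no built-in exists/non-null operator in the version in use, one common workaround pattern (a sketch; collection and field names below are made up) is to index an explicit boolean presence flag at write time and filter on that:

    import typesense

    client = typesense.Client({
        'nodes': [{'host': 'localhost', 'port': '8108', 'protocol': 'http'}],
        'api_key': 'xyz',
    })

    # At write time, record whether the optional field is present on this document.
    client.collections['docs'].documents.upsert({
        'id': '1',
        'title': 'Example',
        'must_have_field': 'abc',
        'has_must_have_field': True,
    })

    # "Only documents where the field exists" then becomes a plain boolean filter.
    results = client.collections['docs'].documents.search({
        'q': '*',
        'query_by': 'title',
        'filter_by': 'has_must_have_field:=true',
    })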
  • Balaji SM

    10/18/2025, 12:43 PM
    Question: dynamic field schema with regex and sort not working. Hi all, I'm trying to define dynamic fields in my collection schema using a regex pattern like this:
    {
      "name": "^cs_.+(_sort)$",
      "type": "int32",
      "sort": true,
      "optional": true,
      "index": true,
      "facet": false
    }
    My goal is to allow multiple fields like cs_price_range_float_sort, cs_rating_sort, etc., to be sortable without explicitly defining each one. But when I index a document with:
    {
      "cs_price_range_float_sort": 100
    }
    …it doesn't get picked up by the schema unless I define the exact field name in the schema. Is regex-based dynamic field matching supported for sortable fields? Or is the only option to explicitly define each cs_*_sort field in the schema? Thanks in advance!
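    One variant that may be worth trying (this is an assumption on my part, not a confirmed fix: the dynamic-field examples I have seen use un-anchored, wildcard-style patterns such as cs_.*_sort rather than ^...$ with a capture group):

    import typesense

    client = typesense.Client({
        'nodes': [{'host': 'localhost', 'port': '8108', 'protocol': 'http'}],
        'api_key': 'xyz',
    })

    client.collections.create({
        'name': 'products',
        'fields': [
            {'name': 'title', 'type': 'string'},
            # Dynamic field pattern: no anchors, no capture group.
            {'name': 'cs_.*_sort', 'type': 'int32', 'sort': True, 'optional': True},
        ],
    })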
  • John B

    10/18/2025, 2:24 PM
    Is it recommended to have a separate lib/typesense-server.ts (admin client, management, indexing; never imported from the client) and lib/typesense-client.ts? Or is having them in one file OK?
  • John B

    10/20/2025, 11:46 AM
    Is it normal to wait for 4-5 minutes after starting Typesense? If I try loading the page before ~5 minutes I get: Request to Node 0 failed due to "Request failed with HTTP code 503 | Server said: Not Ready or Lagging"
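    While the root cause of the slow startup is being worked out, a small readiness gate at least keeps the app from querying the node too early. A sketch (the /health endpoint is standard; the timings and key are placeholders):

    import time
    import requests

    def wait_until_ready(base='http://localhost:8108', api_key='xyz', timeout_s=600):
        """Poll /health until the node reports ok, or give up after timeout_s."""
        deadline = time.time() + timeout_s
        while time.time() < deadline:
            try:
                response = requests.get(f'{base}/health',
                                        headers={'X-TYPESENSE-API-KEY': api_key}, timeout=2)
                if response.ok and response.json().get('ok'):
                    return True
            except requests.RequestException:
                pass  # still starting up / catching up on the write-ahead log
            time.sleep(5)
        return False

    print(wait_until_ready())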
  • John B

    10/20/2025, 11:47 AM
    Container memory usage is ~4 GB.
  • new_in_town

    10/20/2025, 3:34 PM
    Guys, I know you have documentation describing your API, namely this: https://github.com/typesense/typesense-api-spec/blob/master/openapi.yml, and per-programming-language libs like Java: https://github.com/typesense/typesense-java. Do you have some form of llms.txt? My idea is to build some client code, talking to Typesense, in Java. I know that LLMs (GPT-5, Sonnet) can be very effective for this, and they produce great results when they see the most up-to-date API documentation; llms.txt is a well-known hack to feed such documentation into an LLM...
  • John B

    10/20/2025, 10:55 PM
    What are some recommended EC2 instances for Typesense? I need 32 GB of RAM or more.
  • Patrick Gray

    10/20/2025, 11:00 PM
    f2.48xlarge 🙂
    😂 1
  • John B

    10/20/2025, 11:03 PM
    lol, that's way too much
  • John B

    10/20/2025, 11:05 PM
    Also, what about non-EC2 picks? ChatGPT says Hetzner is affordable and good value, but I had a corrupted DB with a Hetzner box once...
  • Abtin Okhovat

    10/21/2025, 7:33 AM
    I have a Typesense cluster, and a question came up: Does it matter which node I write to — the leader or a follower? If it doesn’t make a difference, how does Typesense handle it internally? For example, does a follower automatically forward write requests to the leader?
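    For what it's worth, the official clients are meant to be configured with all of the nodes rather than just the leader; whether a follower then forwards the write internally is exactly the question above, so treat the comment in this sketch as an assumption to verify, not a confirmed answer (hostnames are placeholders):

    import typesense

    client = typesense.Client({
        'nodes': [
            {'host': 'ts-0.internal', 'port': '8108', 'protocol': 'http'},
            {'host': 'ts-1.internal', 'port': '8108', 'protocol': 'http'},
            {'host': 'ts-2.internal', 'port': '8108', 'protocol': 'http'},
        ],
        'api_key': 'xyz',
        'connection_timeout_seconds': 5,
    })

    # The client simply rotates/fails over across the configured nodes; it does not
    # track which node is currently the Raft leader, so any node may receive this write.
    client.collections['docs'].documents.upsert({'id': '1', 'title': 'hello'})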
  • John B

    10/21/2025, 2:07 PM
    After a reindexing I'm getting: the host can't reach http://localhost:8108/health; each curl attempt returns "connection refused". The logs (docker compose logs --tail=50 typesense) show the node is still replaying a large Raft backlog (tens of thousands of lagging entries) and repeatedly reporting "lagging entries > healthy … lag", which keeps the API from serving traffic.
  • John B

    10/21/2025, 2:07 PM
    Do I have to nuke it/reset, or is there an alternative?
  • Rushil Srivastava

    10/21/2025, 11:51 PM
    Are there any plans for an async Python client? We are running into a bunch of issues and wondering if we need to scope out our own client. GH issue: https://github.com/typesense/typesense-python/issues/12
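    As a stopgap until an official async client exists, the sync client can be pushed onto a worker thread so it does not block the event loop. A rough sketch (collection and parameters are placeholders):

    import asyncio
    import typesense

    client = typesense.Client({
        'nodes': [{'host': 'localhost', 'port': '8108', 'protocol': 'http'}],
        'api_key': 'xyz',
    })

    async def search_async(collection: str, params: dict) -> dict:
        # Run the blocking client call in a thread so the event loop stays free.
        return await asyncio.to_thread(client.collections[collection].documents.search, params)

    async def main():
        result = await search_async('books', {'q': 'dune', 'query_by': 'title'})
        print(result['found'])

    asyncio.run(main())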
  • Aditya Verma

    10/22/2025, 7:52 AM
    Hi, is there any way to get similar items from multiple Typesense IDs? https://typesense.org/docs/29.0/api/vector-search.html#querying-for-similar-documents This only accepts a single ID.
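    One possible approximation (a sketch, not an official pattern; it assumes the documents store a float[] field called "embedding" that is returned on retrieve): fetch each document's vector, average them, and run a single nearest-neighbour query with the averaged vector.

    import numpy as np
    import typesense

    client = typesense.Client({
        'nodes': [{'host': 'localhost', 'port': '8108', 'protocol': 'http'}],
        'api_key': 'xyz',
    })

    ids = ['doc-1', 'doc-7', 'doc-42']
    vectors = [
        client.collections['items'].documents[doc_id].retrieve()['embedding']
        for doc_id in ids
    ]
    centroid = np.mean(np.asarray(vectors, dtype=float), axis=0)

    vector_str = ','.join(f'{value:.6f}' for value in centroid)
    results = client.multi_search.perform({
        'searches': [{
            'collection': 'items',
            'q': '*',
            'vector_query': f'embedding:([{vector_str}], k: 20)',
        }]
    }, {})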
  • Luca Lusso

    10/22/2025, 9:13 AM
    Hi, I have a Typesense cluster deployed on our K8S. The cluster has 3 nodes. When I call the /collections endpoint, sometimes the embedding field is present and sometimes it is missing. Embedding is configured to use a LiteLLM instance to compute the vectors. What can it be?
  • Melvin Brem

    10/22/2025, 11:46 AM
    Hi all, We've recently integrated Natural Language Search on our site, and the requests using it are now around ~3000ms (about equal search and parse time) instead of the ~20ms for a regular request. Is that an expected response time, or is it higher than it should be? (using openai/gpt-4.1-nano)
  • Kiran Gopalakrishnan

    10/22/2025, 3:16 PM
    I'm running a Typesense query on a string[] field which has some values that use -, for example: Ford F-150, F-250, etc. I want them to match for any of the following: f150, f-150, f 150. I am using token_separators: "-" on a field level, but this doesn't match for F-150 (it seems to work for F150). I tried symbols_to_index: "-" in combination with token_separators: "-", and it still doesn't match for F-150. Am I doing this wrong? I'm on Typesense Cloud v29.
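    For comparison, this is the collection-level form of those settings; whether collection-level and field-level behave differently here is exactly the open question, so the schema below is only an illustration (names are made up), plus a tiny harness to check the three spellings:

    import typesense

    client = typesense.Client({
        'nodes': [{'host': 'localhost', 'port': '8108', 'protocol': 'http'}],
        'api_key': 'xyz',
    })

    client.collections.create({
        'name': 'vehicles',
        # Collection-level setting: split tokens on '-' at index and query time.
        'token_separators': ['-'],
        'fields': [
            {'name': 'models', 'type': 'string[]'},
        ],
    })

    client.collections['vehicles'].documents.upsert({'id': '1', 'models': ['Ford F-150', 'F-250']})

    for query in ['f150', 'f-150', 'f 150']:
        result = client.collections['vehicles'].documents.search({'q': query, 'query_by': 'models'})
        print(query, result['found'])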
  • Dominik Henneke

    10/23/2025, 8:26 AM
    We are using the highlight_full_fields setting in Typesense 29.0 and wonder if we are interpreting the results correctly. When I enable that for a field, the "snippet" value still has the shortened content, but there is a new "value" field that contains the full string. I didn't find any documentation for this, but is the observation correct that we should use something like highlight.name.value ?? highlight.name.snippet if we want to use the full title?
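    For what it's worth, a small sketch of that fallback read (the response shape follows what the message above describes; treat it as an illustration rather than documented behaviour):

    def full_or_snippet(hit, field):
        """Prefer the full highlighted value, fall back to the shortened snippet."""
        highlight = (hit.get('highlight') or {}).get(field) or {}
        return highlight.get('value') or highlight.get('snippet')

    # Example, assuming `results` came from documents.search(...):
    # title = full_or_snippet(results['hits'][0], 'name')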
  • John Doisneau

    10/23/2025, 5:08 PM
    Hey my friends, for the first time we got a HEALTH issue on our Typesense Cloud cluster. It stopped responding. I decided to perform a restart, which is still ongoing. From the graph, what we saw is that the CPU usage went from 20% to nearly 100%, even though we did not notice anything abnormal on our side... What would you do in order to understand this better?
  • V Parthiban

    10/24/2025, 5:32 AM
    Need help: search taking 1-2 seconds on 7.8M documents, need <100ms. Hi! We have a self-hosted Typesense collection with about 7.8 million documents, and our searches are taking 1-2 seconds. We need to get this down to under 100ms for production use. I am using React InstantSearch in the frontend.
    Setup:
    • Typesense version: 28.0
    • Server: 269 GB RAM, 8 CPUs
    • Collection size: 7,885,266 documents
    • Typesense is using about 57 GB of memory
    What we're seeing:
    • Searches with *** (show all): ~2 seconds
    • Regular text searches: 1-1.5 seconds
    Our query looks like this:
    {
      "q": "*",
      "query_by": "properties.EntryName_str,template_fields.Client_sp_ID_arr,template_fields.Client_sp_Name_sp_List_arr,template_fields",
      "infix": "always,off,off,off",
      "facet_by": "properties.CreatedBy_str,properties.CreatedDate,properties.Extension_str,properties.ModifiedDate,template_fields.Client_sp_ID_arr,template_fields.Client_sp_Name_sp_List_arr,template_fields.Document_sp_Date_str,template_fields.Document_sp_Sub_sp_Type_sp_II_str,template_fields.Document_sp_Sub_sp_Type_sp_II_str,template_fields.Document_sp_Type_str",
      "max_facet_values": 20,
      "sort_by": "properties.file_modified_date:desc",
      "highlight_full_fields": "properties.EntryName_str,template_fields.Client_sp_ID_arr,template_fields.Client_sp_Name_sp_List_arr,template_fields",
      "per_page": 100,
      "page": 1
    }
    Schema details:
    • All string fields have sort: true enabled in the schema
    • Infix is enabled on the first query field (properties.EntryName_str)
    System metrics from our instance. Debug endpoint:
    {
      "state": 4,
      "version": "28.0"
    }
    Health endpoint:
    {
      "ok": true
    }
    Metrics endpoint:
    {
      "system_cpu1_active_percentage": "0.00",
      "system_cpu2_active_percentage": "0.00",
      "system_cpu3_active_percentage": "0.00",
      "system_cpu4_active_percentage": "0.00",
      "system_cpu5_active_percentage": "0.00",
      "system_cpu6_active_percentage": "0.00",
      "system_cpu7_active_percentage": "0.00",
      "system_cpu8_active_percentage": "0.00",
      "system_cpu_active_percentage": "0.00",
      "system_disk_total_bytes": "105089261568",
      "system_disk_used_bytes": "9131921408",
      "system_memory_total_bytes": "269206294528",
      "system_memory_total_swap_bytes": "511700992",
      "system_memory_used_bytes": "64942874624",
      "system_memory_used_swap_bytes": "0",
      "system_network_received_bytes": "0",
      "system_network_sent_bytes": "0",
      "typesense_memory_active_bytes": "57420935168",
      "typesense_memory_allocated_bytes": "57133960256",
      "typesense_memory_fragmentation_ratio": "0.00",
      "typesense_memory_mapped_bytes": "59525619712",
      "typesense_memory_metadata_bytes": "1849614416",
      "typesense_memory_resident_bytes": "57420935168",
      "typesense_memory_retained_bytes": "42020577280"
    }
    Stats endpoint:
    {
      "delete_latency_ms": 0,
      "delete_requests_per_second": 0,
      "import_latency_ms": 0,
      "import_requests_per_second": 0,
      "latency_ms": {
        "GET /health": 0
      },
      "overloaded_requests_per_second": 0,
      "pending_write_batches": 0,
      "requests_per_second": {
        "GET /health": 0.5
      },
      "search_latency_ms": 0,
      "search_requests_per_second": 0,
      "total_requests_per_second": 0.5,
      "write_latency_ms": 0,
      "write_requests_per_second": 0
    }
    Search response details:
    {
      "facet_counts": [
        {
          "counts": [{"count": 1905194, "highlighted": "ADMIN", "value": "ADMIN"}],
          "field_name": "properties.CreatedBy_str"
        }
      ],
      "found": 7885266,
      "hits": [...],
      "out_of": 7885518,
      "page": 1,
      "request_params": {
        "collection_name": "ActiveClientDocuments",
        "first_q": "*",
        "per_page": 100,
        "q": "***"
      },
      "search_cutoff": false,
      "search_time_ms": 2041
    }
    Question: What's the best way to configure Typesense to get search responses under 100ms for a collection of this size? Are there specific settings or approaches we should be using? Any help would be appreciated!
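    Not a definitive answer, but a couple of cheap experiments against the query shown above (all assumptions on my part): drop infix: always, since infix matching is comparatively expensive; request fewer rows and no facets on the interactive query; and compute the heavy facet counts in a separate request that can be cached. A sketch using multi_search:

    import typesense

    client = typesense.Client({
        'nodes': [{'host': 'localhost', 'port': '8108', 'protocol': 'http'}],
        'api_key': 'xyz',
        'connection_timeout_seconds': 5,
    })

    response = client.multi_search.perform({
        'searches': [
            {   # Interactive query: no infix, fewer rows, no facets.
                'collection': 'ActiveClientDocuments',
                'q': '*',
                'query_by': 'properties.EntryName_str,template_fields.Client_sp_ID_arr,'
                            'template_fields.Client_sp_Name_sp_List_arr,template_fields',
                'sort_by': 'properties.file_modified_date:desc',
                'per_page': 25,
            },
            {   # Facet-only query: minimal hits, trimmed facet list; cacheable.
                'collection': 'ActiveClientDocuments',
                'q': '*',
                'query_by': 'properties.EntryName_str',
                'facet_by': 'properties.CreatedBy_str,properties.Extension_str,'
                            'template_fields.Document_sp_Type_str',
                'max_facet_values': 20,
                'per_page': 1,
            },
        ]
    }, {})

    If the interactive query is still slow after that, the wide query_by list (which includes a whole nested object, template_fields) would be the next thing worth testing in isolation.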