# community-help
a
I am trying to search in name and slug. Once it finds a match, I get the facet_by values. Is there a way to group a facet element with the other values I want it grouped by? For example:
```json
{
  "count": 295,
  "highlighted": "union bank of india",
  "value": "union bank of india"
}
```
This is what I am getting right now. Along with this, I want another field, e.g. name: "union_bank_name", that gives the value of the name field here.
k
Please provide a complete example document for reference. Also, please refrain from `@`ing individual users; we take turns responding to community support.
a
OK, I will not mention anyone now.
Here is my payload:
```json
{
  "searches": [
    {
      "collection": "properties",
      "query_by": "dump.location.state_name,dump.location.state_slug",
      "facet_by": "dump.bank.bank_name,dump.bank.bank_slug,dump.location.state_name,dump.location.state_slug,dump.asset.asset_type_name,dump.asset.asset_type_slug",
      "q": "andhra-pradesh",
      "filter_by": "",
      "infix": "always"
    }
  ]
}
```
k
In 0.26 RC we support this:
```
Fetching parent of faceted field: When you facet on a nested field like `color.name`, you can
now set `"facet_return_parent": "color.name"`. This will return the parent color object as a
`parent` property in the facet response.
```
a
OK, let me check.
Is it not added to the documentation yet?
k
Not yet, it's in an upcoming version.
a
That means I can't use it right now.
If that's the case, is there any other approach?
k
You can use the RC version, very close to release
a
Can you tell me how I can use the RC version, since it is not in the documentation? As in, inside any testing section.
Also, since I have to discuss using a testing version with the other developers, can you tell me when I can expect the 0.26 version to release?
k
How are you deploying Typesense?
Use this version: `0.26.0.rc66` -- you can replace the version number in any URL with this one.
a
I am using Typesense Cloud.
k
We are in a code freeze; it should be released once the doc work is over.
`0.26.0.rc66` is available in Typesense Cloud.
a
OK, thanks.
Is indexing expected to be a bit slower in version 0.26?
k
It should not be too slow; what are you noticing?
a
While running the following, the API call is taking longer than ever, and it is also not importing all of the documents:
```js
const returnData = await typesense
  .collections(currentSchema.name)
  .documents()
  .import(tableData);
```
k
Typesense Cloud?
a
yes
k
Did you launch a new cluster?
a
ja6ilv4d8oqwys1bp
Yes, this is the new host.
k
What's your client timeout? You might have to increase it for imports; the default is only a few seconds.
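For reference, a minimal sketch of raising the timeout in the JS client (the node details and API key are placeholders for your own cluster):
```js
import Typesense from 'typesense';

const typesense = new Typesense.Client({
  nodes: [{ host: 'xxx.a1.typesense.net', port: 443, protocol: 'https' }],
  apiKey: 'YOUR_API_KEY',
  // raise this well above the default for large imports
  connectionTimeoutSeconds: 120,
});
```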
a
OK, checking.
Even though there is still space available, it is showing out of space. I was using the same data size with the old version; my actual data size is around 80 MB.
k
When are you getting the error? Can you post the actual error you received?
I actually don't see any collections in your cluster.
a
ok
message has been deleted
k
But I still see no collection created. First hit the collections endpoint to ensure that the collection exists.
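For example, a quick sketch with the JS client (assuming the same `typesense` client instance as above):
```js
// List all collections to confirm the one you just created exists
const collections = await typesense.collections().retrieve();
console.log(collections.map((c) => c.name));
```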
a
Can you check again now?
message has been deleted
k
Ok I see it now.
a
The same error is coming up.
k
How many documents are you sending? Are you using the import API?
a
Yes.
The record count is around 33k.
k
Maybe this is an infra issue with this instance. Let me look.
We have many customers on 0.26 who have seamlessly switched.
a
ok
Could it be related to the field types? Just asking.
k
The memory issue should not occur
a
ok
k
Which cluster were you using with the older version?
a
No, I am using the latest version (0.26) cluster.
k
I mean, what's the cluster ID running the version you had no problem with?
a
moq6kx24sj7yrn51p
This one is running Typesense v0.25.2.
k
Can you try again but with 50% of your data first?
a
OK, let me try.
message has been deleted
Same error, though there are only 16k records.
k
Yes, I can see it rejecting writes because of memory, even though there is memory available. We made some improvements to low-memory detection to protect against OOM crashes; I have to investigate what's happening here and get back to you.
There's memory, but somehow that flag is getting tripped.
a
OK, sure.
k
Meanwhile, can you try with just 1,000 documents to see what happens?
a
It worked.
I tested with other sizes; it started failing around 8k.
k
I suspect that the older cluster was just hovering near the limit, and with some additional data structures in 0.26 it's tipping over. I recommend trying with a 1 GB node to test what happens; it should go through.
The documents are buffered during import, so the error happens then; when the import is terminated, memory falls back.
You could also try importing the documents in batches of 5K docs.
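A minimal sketch of that batching approach with the JS client (`tableData` and `currentSchema` are the names from your snippet above):
```js
const batchSize = 5000;

for (let i = 0; i < tableData.length; i += batchSize) {
  const batch = tableData.slice(i, i + batchSize);
  // import() resolves with one result object per document
  const results = await typesense
    .collections(currentSchema.name)
    .documents()
    .import(batch);
  const failed = results.filter((r) => !r.success);
  if (failed.length > 0) {
    console.error(`batch starting at ${i}: ${failed.length} failures`, failed[0]);
  }
}
```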
a
OK.
Just a suggestion: could you also add support for Typesense to monitor the most-searched fields and values, so that based on that we can assign popularity to our data?