# community-help
Hi guys, need some assistance. Recently, I generated image embeddings using the ts/clip-vit-b-p32 model. Image/similar search is not working as expected. Is there anything I can do to improve the results? Sharing an example in the thread.
The request:
```bash
curl --location 'https://xyz:443/multi_search' \
--header 'X-TYPESENSE-API-KEY: xyz' \
--header 'Content-Type: application/json' \
--data '{
    "searches": [
        {
            "collection": "variantV2",
            "q": "*",
            "group_by": "visualOptionId",
            "per_page": "250",
            
            "page": 1,
            "max_facet_values": 999,
            "exclude_fields": "imageEmbedding,textEmbedding",
            
            "vector_query": "imageEmbedding:([], id:684fac9a992a3e58256a570a-684fac9a992a3e58256a574f"
        }
    ]
}'
```
Attaching the dress that's in the document. The collection has a lot of similar red dresses, but the response doesn't contain many red dresses.
This is the result that I'm getting.
Hi @Aditya Verma, CLIP (ts/clip-vit-b-p32) optimizes for overall semantics (pose, silhouette, style, setting) more than strict color. You can still try tweaking the k (>= per_page) and distance_threshold vector_query parameters; a sketch follows below. It would also be worth temporarily disabling group_by to ensure that similar results aren't being grouped together.
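For example (a minimal sketch based on the request above; the k and distance_threshold values are placeholders to tune, group_by is dropped for debugging, and the threshold assumes a distance metric where smaller means closer):

```bash
# Same search as above, without group_by and with explicit
# k / distance_threshold inside the vector_query (placeholder values)
curl --location 'https://xyz:443/multi_search' \
--header 'X-TYPESENSE-API-KEY: xyz' \
--header 'Content-Type: application/json' \
--data '{
    "searches": [
        {
            "collection": "variantV2",
            "q": "*",
            "per_page": 250,
            "exclude_fields": "imageEmbedding,textEmbedding",
            "vector_query": "imageEmbedding:([], id:684fac9a992a3e58256a570a-684fac9a992a3e58256a574f, k:500, distance_threshold:0.50)"
        }
    ]
}'
```

Raising k widens the candidate pool, and lowering distance_threshold cuts off weaker matches, so the two are worth tuning together.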
You can also try generating the embedding from multiple fields, not just the image but also category and colour, for example; see the sketch below.
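A rough schema sketch of that idea (the image, category and colour field names are assumptions, and whether your Typesense version supports mixing an image field with text fields in embed.from for a CLIP model is worth verifying in the docs):

```bash
# Hypothetical schema: embedding auto-generated from image + text attributes
curl --location 'https://xyz:443/collections' \
--header 'X-TYPESENSE-API-KEY: xyz' \
--header 'Content-Type: application/json' \
--data '{
    "name": "variantV2",
    "fields": [
        {"name": "image", "type": "image", "store": false},
        {"name": "category", "type": "string"},
        {"name": "colour", "type": "string"},
        {
            "name": "imageEmbedding",
            "type": "float[]",
            "embed": {
                "from": ["image", "category", "colour"],
                "model_config": {"model_name": "ts/clip-vit-b-p32"}
            }
        }
    ]
}'
```

Documents would then need to be re-indexed so the stored embeddings reflect the extra fields.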