# community-help
i
Hey all! I'm looking to create an experience like https://shop.app/search/results?query=laptop. I've got some back and forth going with ChatGPT but haven't been able to have it successfully update the results grid (yet).

Essentially, the user will be able to have a conversation with an AI shopping assistant and, ideally, have the results grid show relevant results based on the evolving conversation. I imagine the search query adapts as the AI determines which additional filters it deems necessary to apply based on the conversation, like price, brand, features, etc. Is this possible, and/or has anyone done something like this before? Either way, would you happen to have any tips or advice on a way to do this? Thanks! 🙏

Screenshots show an example of Shop.App and what I have so far.
I’ve made some progress and will update this with a more specific question about my vector search setup during lunch. My collection shows an embedding field for each document, but retrieving a document doesn’t show it in the response (even with the exclude_fields search param not present).
Background with more context on my problem
Yesterday I decided to send my ~24,000 products from Supabase to OAI's Embeddings API in batches, using the `text-embedding-3-small` model, which has 1536 dimensions. Once I retrieved the embeddings, I combined them with my product information from Supabase and created a new collection in Typesense with this schema for the embedding field:
{
  name: 'embedding',
  type: 'float[]',
  num_dim: 1536, // model uses 1536 dimensions
  optional: true
}
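The batch-embedding step described above can be sketched roughly as follows. This is a hedged sketch, not my exact code: it assumes the official `openai` Node client, a `products` array with `name` and `description` fields, and a batch size of 100 — all illustrative assumptions.

```javascript
// Hypothetical sketch: send products to OpenAI's Embeddings API in batches
// and attach the returned vectors to each product record.

// Split an array into fixed-size batches.
function chunk(items, size) {
  const batches = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

// Request embeddings batch by batch and merge them into the product records.
async function embedProducts(openai, products) {
  const embedded = [];
  for (const batch of chunk(products, 100)) {
    const res = await openai.embeddings.create({
      model: 'text-embedding-3-small', // 1536 dimensions
      input: batch.map((p) => `${p.name}. ${p.description}`),
    });
    res.data.forEach((d, i) => {
      embedded.push({ ...batch[i], embedding: d.embedding });
    });
  }
  return embedded;
}
```

The resulting documents would then be imported into the collection, e.g. via typesense-js: `client.collections('product-embeddings-v2').documents().import(embedded, { action: 'upsert' })`.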
When I view my current schema in Typesense, I see:
{
  "facet": false,
  "hnsw_params": {
    "M": 16,
    "ef_construction": 200
  },
  "index": true,
  "infix": false,
  "locale": "",
  "name": "embedding",
  "num_dim": 1536,
  "optional": true,
  "sort": false,
  "stem": false,
  "store": true,
  "type": "float[]",
  "vec_dist": "cosine"
}
Current Testing / Debugging
In my current process that isn't working, when a user inputs a query, I make a call to OAI with the `text-embedding-3-small` model to get the query vector, which I insert into the search query object. Every time I've sent this request or similar (see below), I keep getting 0 products returned.
Complete Typesense Request:
{
  "searches": [
    {
      "collection": "product-embeddings-v2",
      "q": "*",
      "vector_query": "embedding:([ {vector query from OAI} ], k:200)",
      "exclude_fields": "embedding",
      "sort_by": "averageRating:desc",
      "per_page": 24
    }
  ]
}
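For reference, the request above could be composed from the raw embedding array with a small helper like the following sketch. The helper names are hypothetical; `sort_by` is deliberately omitted here so that ordering by vector distance applies (an assumption worth verifying against the Typesense docs, since a `sort_by` can override distance ordering).

```javascript
// Hypothetical helper: format an embedding array into Typesense's
// vector_query string and build one entry of a multi_search request.
function buildVectorSearch(embedding, k = 200) {
  return {
    collection: 'product-embeddings-v2',
    q: '*',
    vector_query: `embedding:([${embedding.join(', ')}], k:${k})`,
    exclude_fields: 'embedding', // keep the large vector out of the response
    per_page: 24,
  };
}

// The full multi_search body would then be:
function buildMultiSearchBody(embedding) {
  return { searches: [buildVectorSearch(embedding)] };
}
```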
I then decided to query the products themselves directly to see what fields they have, and I do not see `embedding` being returned as a field, which is probably why I'm getting 0 products when attempting a vector_query search. Does anyone know what might be happening here? Thanks!
Below is a console log that shows the various steps. The bottom shows the search query object and the output saying no items were found with multi_search.

🟦 Starting chat request processing
📨 Latest user message: i'm looking to purchase a tv
💭 Using full conversation history of 1 messages
🤖 Requesting AI response
GET /shop-with-ai 200 in 221ms
GET /favicon.ico 200 in 10ms
AI response: Great! I can help you with that. What size TV are you considering, and do you have a specific budget in mind? Also, will you primarily be using it for movies, gaming, or sports?
🧬 Generating semantic embedding from relevant context...
🔢 Query tokens: 7
🔤 Getting embedding for user query:
Query (length): 28
Query (full): i'm looking to purchase a tv
Generated embedding with 1536 dimensions

🔎 Typesense Search Details:

🔍 Testing collection with simple search...
Collection test: Found 1 documents
First document sample: {
  "document": {
    "available_in_stores": 1,
    "averageRating": 3,
    "brand": "Yardbird®",
    "categoryNames": [
      "Outdoor Living",
      "Patio Furniture & Decor",
      "Patio Furniture Accessories"
    ],
    "categoryNames.lvl0": [
      "Outdoor Living"
    ],
    "categoryNames.lvl1": [
      "Outdoor Living > Patio Furniture & Decor"
    ],
    "categoryNames.lvl2": [
      "Outdoor Living > Patio Furniture & Decor > Patio Furniture Accessories"
    ],
    "condition": "New",
    "customerPrice": 38.4,
    "description": "Yardbird's outdoor throw pillows are 100% solution dyed acrylic outdoor...",
    "id": "975123070141",
    "modelNumber": "PILLSLA",
    "name": "Yardbird® - Pillow - Simplicity Lagoon",
    "name_ngram": [
      "Yardbird®",
      "Pillow",
      "Simplicity",
      "Lagoon"
    ],
    "onSale": false,
    "slug": "yardbird-pillow-simplicity-lagoon",
    "spec.brand": [
      "Yardbird®"
    ],
    "spec.protective_qualities": [
      "Fade resistant",
      "Stain resistant",
      "Water resistant",
      "Weather resistant"
    ],
    "spec.quantity": [
      "1"
    ],
    "spec.removable_cover": [
      "No"
    ],
    "spec.reversible": [
      "No"
    ],
    "spec.shape": [
      "Square"
    ],
    "spec.upc": [
      "975123070141"
    ],
    "upc": "975123070141"
  },
  "highlight": {},
  "highlights": []
}

📋 Complete search request (with truncated vector):
{
  "searches": [
    {
      "collection": "product-embeddings-v2",
      "q": "*",
      "vector_query": "embedding:([-0.028074, -0.060718, -0.050412, -0.047544, -0.003309, ...], k:50)",
      "sort_by": "averageRating:desc",
      "per_page": 24
    }
  ]
}

🔍 Executing Typesense multi-search...
Search complete: Found 0 results

POST /api/chat 200 in 4388ms
j
For your use case, instead of using RAG, I would recommend having the LLM generate Typesense `filter_by` queries for you, so you don't even have to use embeddings inside of Typesense. That will produce much better results. This article describes the concept: https://typesense.org/docs/guide/natural-language-search.html We're using Gemini in that example, but the same concept can be used with any LLM.
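A rough sketch of that approach: have the LLM return structured filters (e.g. via function calling or structured output) and compose them into a `filter_by` string. The filter shape and field names below are assumptions based on the product fields shown earlier in the thread; string values containing spaces or special characters may need escaping per the Typesense filter syntax.

```javascript
// Hypothetical mapping from LLM-extracted filters to a Typesense
// filter_by string. Field names mirror the sample document above.
function toFilterBy({ maxPrice, brand, minRating } = {}) {
  const clauses = [];
  if (maxPrice != null) clauses.push(`customerPrice:<=${maxPrice}`);
  if (brand) clauses.push(`brand:=${brand}`);
  if (minRating != null) clauses.push(`averageRating:>=${minRating}`);
  return clauses.join(' && ');
}
```

For example, `toFilterBy({ maxPrice: 500, brand: 'Samsung' })` could go into the search request as `filter_by`, alongside a keyword `q` derived from the conversation, and the filters can be re-derived on every turn as the conversation evolves.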
🔥 1
i
Thanks @Jason Bosco ! Can’t wait to set it up this weekend 🙏