# community-help
k
Hey all - I started working with Typesense last week and love it so far! I'm facing a bug with grouping documents in conversational RAG. Any way around this? I'd love to reference grouped documents in the LLM response. I built out a conversational RAG flow in JavaScript (note that requests failed until I added `prefix: false`), but I noticed that grouping documents (while technically allowed) breaks the AI conversation response: it looks like the grouped hits are not passed as context. Here's my code:
const searchParams: any = {
      q: query,
      query_by: DEFAULT_QUERY_BY,
      exclude_fields: EXCLUDE_FIELDS,
      conversation_model_id: CONVERSATION_MODEL_ID,
      conversation: true,
      per_page: DEFAULT_PER_PAGE,
      prefix: CONVERSATION_PREFIX,
      // group_by: 'slug',
      // group_limit: 3, // we can add these and the query itself succeeds, but the LLM response is inadequate
      filter_by: 'status:active'
    }

    // Add conversation ID for follow-up questions
    if (conversationId) {
      searchParams.conversation_id = conversationId
    }

    const response = await typesense
      .collections(OPPORTUNITIES_COLLECTION)
      .documents()
      .search(searchParams)
f
Could you share your constants as well? There are `DEFAULT_QUERY_BY` and `EXCLUDE_FIELDS` constants referenced in the snippet. Also, are you using Typesense Cloud or a self-hosted instance?
k
Constants below - using Cloud:
// Typesense Conversation Model Configuration

// Query Parameters
export const DEFAULT_QUERY_BY = 'embedding' // Auto-embedding field for semantic search
export const EXCLUDE_FIELDS = 'embedding' // Fields to exclude from search results
export const CONVERSATION_PREFIX = false // Disable prefix matching for conversations
f
Could you share your cluster ID in a DM?