# community-help
s
Hey there, I have some questions about Conversational Search that I was hoping someone could help me out with. I followed the documentation, created a model using Azure OpenAI, and created an invoices collection with embeddings for certain fields. Apparently these fields can only be strings, so I can't embed a total_amount number, for example? I left total_amount out of the embedding and created 3,000 invoice documents that I then wanted to search through using Conversational Search. Asking general questions does work, but it doesn't seem to work on all the data. When I ask, for example, "give me the most expensive invoice", it doesn't find the correct invoice; instead it returns a seemingly random invoice that isn't anywhere near the most expensive one. I tried adding a sort_by to sort the query by total_amount, but that didn't help either, so I'm confused about whether I did something wrong or am misunderstanding how it's supposed to work. I was hoping my users could enter search questions and Typesense / my LLM would turn them into a proper query, sort/filter everything for me, and return a proper result. I then looked at Natural Language Search, but that doesn't support Azure OpenAI yet. What would be the best approach to achieve this goal of semantic searching?
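For reference, a conversational search request along the lines described above might look like this sketch. The parameter names follow the Typesense conversational-search docs as I understand them; the field and model names are placeholders, not the poster's actual setup.

```python
# Sketch of a Typesense conversational search request (placeholder values).
search_params = {
    "q": "give me the most expensive invoice",
    "query_by": "embedding",                    # the auto-embedded field
    "conversation": True,
    "conversation_model_id": "azure-model-id",  # placeholder id
    "sort_by": "total_amount:desc",             # the attempted fix
}

# The documents handed to the LLM are retrieved by semantic (vector)
# similarity to the question text, which is why sort_by alone doesn't
# guarantee the highest total_amount ends up in the context.
```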
f
Embeddings are floating-point vectors generated from strings, so using a number there wouldn't be valuable for vector search. Natural Language Search would be the solution here. I've just posted a PR to support Azure OpenAI models in Natural Language Search.
s
Thank you for the quick response! So for my use case, I would use Natural Language Search to generate the "search terms" (filter_by, sort_by, etc.) and then find the documents with Typesense. But that would just return the documents in the collection, not user-friendly text like Conversational Search returns. So I would need to send the result from the Natural Language Search to an LLM after receiving it from Typesense, to format it in a user-friendly way?
f
Yup. Conversational Search happens after semantic search; no filters or sort orders are generated. Natural Language Search uses the LLM to generate Typesense queries, not to iterate on Typesense results.
s
Can I combine them, first doing a Natural Language Search and then a Conversational Search with its result, or would it be easier to just send the documents returned by the Natural Language Search to my own LLM without another Typesense call?
f
The Typesense call would help with keeping the context of previous conversations, but Conversational Search is a thin wrapper around the LLM's completions API, using the results from Typesense.
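As a rough mental model of that "thin wrapper" (a sketch, not Typesense's actual implementation; the function and field names here are made up), the hits returned by the search become the context of a completions prompt:

```python
def build_prompt(question: str, hits: list[dict]) -> str:
    """Fold retrieved Typesense hits into a completions-style prompt."""
    context = "\n".join(str(h["document"]) for h in hits)
    return (
        "Answer the question using only these documents:\n"
        f"{context}\n\nQuestion: {question}"
    )
```

The key takeaway: the LLM only sees whatever documents the search retrieved, so a document that was never retrieved can't appear in the answer.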
s
Okay, because I tried setting sorting when calling Conversational Search, but that didn't seem to have an impact. My problem was that I had 3k invoices and wanted to find the most expensive one. When I asked "which invoice was the most expensive one?", it returned a very cheap one and never found the expensive one, so I tried adding sort_by: total_amount:desc, but that did nothing. So I'm a bit confused about how I would take the search terms generated by Natural Language Search and pass them to Conversational Search.
f
You'd filter by the IDs of the documents and sort using the sort order generated.
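Concretely, that chaining step might look like the following sketch: take the hits from the Natural Language Search response and turn them into a `filter_by` clause for the conversational call. The helper name is made up; the `id:[...]` syntax is Typesense's standard multi-value filter.

```python
def id_filter(nl_hits: list[dict]) -> str:
    """Build a Typesense filter_by clause restricted to the given hits."""
    ids = [h["document"]["id"] for h in nl_hits]
    return "id:[" + ",".join(ids) + "]"

# The second (conversational) call would then pass something like:
#   filter_by=id_filter(hits), sort_by="total_amount:desc",
#   conversation=True, conversation_model_id=...
```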
s
Ahh, I just tried that, and it seems to work perfectly! So: Natural Language Search first, then filter by the resulting IDs in the Conversational Search. Got it! Looking forward to testing it with Natural Language Search once the PR is merged. Curious how long the request will take with two LLM calls, but it should be fine. Thank you for creating it and for the quick help!