Hi guys, I'm trying to connect Typesense with an e...
# community-help
l
Hi guys, I'm trying to connect Typesense with an embeddings model deployed in Azure. I've followed the instructions here that explain exactly this use case. I'm seeing this problem when I update my schema and include the vector field:
Error: t: Request failed with HTTP code 400 | Server said: OpenAI API error: Resource not found
I've tried calling the model from my machine and it returns the embedding correctly. I think the problem is that Typesense expects the endpoint to be
POST /v1/embeddings
but Azure's provided endpoint doesn't have that structure. A solution could be creating a custom server that handles the calls from Typesense and forwards them to Azure's endpoint. Is this the right approach? Isn't there a simpler way of connecting Typesense directly with the deployed model? Here's the payload used to update the schema:
```json
{
  "fields": [
    {
      "name": "embedding",
      "type": "float[]",
      "embed": {
        "from": [
          "fullName",
          "username"
        ],
        "model_config": {
          "model_name": "openai/text-embedding-3-small",
          "api_key": "---",
          "url": "<http://MY-URL-AAA.COM/embeddings?api-version=2023-05-15|MY-URL-AAA.COM/embeddings?api-version=2023-05-15>"
        }
      }
    }
  ]
}
```
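For context on the mismatch described above: Typesense's OpenAI integration sends requests in the OpenAI convention, while Azure OpenAI expects a deployment-scoped path, an api-version query parameter, and an api-key header. Here is a rough sketch of the two request shapes (the deployment name below is a placeholder, not something stated in the thread):
```python
import requests

TEXTS = ["Jane Doe", "jdoe"]  # sample values for the fullName/username fields

# OpenAI convention, which Typesense's integration follows:
# POST <base>/v1/embeddings with a Bearer token and a `model` in the body.
openai_style = requests.post(
    "https://api.openai.com/v1/embeddings",
    headers={"Authorization": "Bearer OPENAI_KEY"},
    json={"model": "text-embedding-3-small", "input": TEXTS},
)

# Azure OpenAI convention: the resource and deployment are in the path,
# the API version is a query parameter, and auth uses an `api-key` header.
# "text-embedding-3-small" here is a hypothetical deployment name.
azure_style = requests.post(
    "https://embedding-test-openai.openai.azure.com/openai/deployments/"
    "text-embedding-3-small/embeddings",
    params={"api-version": "2023-05-15"},
    headers={"api-key": "AZURE_KEY"},
    json={"input": TEXTS},
)
```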
🙏 1
j
Could you try adding https:// before the URL in the model_config?
l
It has https://, the start of the URL is https://embedding-test-openai.openai.azure.com/openai/
j
CC: @Ozan Armağan
o
Hi @Luis Gestoso Muñoz, since we expect full compatibility with the OpenAI server, both the path and the body should follow the OpenAI server convention. As you suggested, the best solution for your case is to have a custom server in between to proxy the requests to the Azure server.
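A minimal sketch of such a proxy, assuming Flask and the hypothetical Azure resource/deployment names from the sketch above (no error handling or retries): it accepts OpenAI-style POST /v1/embeddings requests from Typesense and forwards them to the Azure deployment, swapping the auth header. The model_config url would then point at this server.
```python
# Minimal OpenAI-to-Azure embeddings proxy sketch (hypothetical names).
import os

import requests
from flask import Flask, jsonify, request

app = Flask(__name__)

AZURE_ENDPOINT = (
    "https://embedding-test-openai.openai.azure.com/openai/deployments/"
    "text-embedding-3-small/embeddings?api-version=2023-05-15"
)
AZURE_API_KEY = os.environ["AZURE_OPENAI_KEY"]


@app.route("/v1/embeddings", methods=["POST"])
def embeddings():
    body = request.get_json()
    # Azure identifies the model by the deployment in the URL, so only the
    # `input` field needs to be forwarded; the response body follows the
    # OpenAI format and can be returned to Typesense as-is.
    resp = requests.post(
        AZURE_ENDPOINT,
        headers={"api-key": AZURE_API_KEY},
        json={"input": body["input"]},
    )
    return jsonify(resp.json()), resp.status_code


if __name__ == "__main__":
    app.run(port=8080)
```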
j
Hello @Ozan Armağan and @Jason Bosco! Luis is on my team. The issue we are having is the same one described here 6 months ago: https://github.com/typesense/typesense/issues/1828 Any idea if and when this will be prioritized and fixed? Even if it isn't the final solution, it would be nice if you could fix the documentation, as it misled us (and probably other customers) into spending time on a dead end that was reported 6 months ago already 😅. For now, the proxy adds some effort to our development, so we will consider going with Google instead.
j
We've now prioritized this - we'll work on it in the next two weeks and keep you posted in that GitHub issue
🙌 1
j
Amazing Jason! Thanks a lot, we appreciate it! Typesense rocks 😄 🚀
😄 1
🙌 2
l
Hi @Jason Bosco @Ozan Armağan, thanks for the quick action!! I see this PR was merged into v29. Do you have any estimate of when it will be released, so we can adjust our timelines? Thanks!
k
We have just published 29.0.rc1, which contains the fix.
@Luis Gestoso Muñoz Please try it out and let us know.
l
We are using Typesense Cloud; should I be able to see that version here?
k
Ah wait, hold on
Please refresh, it will be available now.
l
on it
It succeeded in adding the embedding field to the existing collection. Will it automatically trigger calls to calculate the embeddings for existing items, or will it only work for new ones?
k
Did you do an alter?
l
I did "Update schema"
k
Yes, so I don't think we update existing documents. @Ozan Armağan can confirm.
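For reference, the dashboard's "Update schema" action corresponds to Typesense's collection alter endpoint (PATCH /collections/:name). A rough equivalent of the same change via the HTTP API, assuming a hypothetical collection called users and reusing the placeholder values from the payload earlier in the thread:
```python
import requests

TYPESENSE_HOST = "https://xyz.a1.typesense.net"  # placeholder Typesense Cloud host
TYPESENSE_API_KEY = "ADMIN_API_KEY"              # placeholder admin API key

# Same embedding field as in the schema-update payload above, applied to an
# existing (hypothetical) `users` collection via the alter endpoint.
schema_change = {
    "fields": [
        {
            "name": "embedding",
            "type": "float[]",
            "embed": {
                "from": ["fullName", "username"],
                "model_config": {
                    "model_name": "openai/text-embedding-3-small",
                    "api_key": "---",
                    "url": "https://MY-URL-AAA.COM/embeddings?api-version=2023-05-15",
                },
            },
        }
    ]
}

resp = requests.patch(
    f"{TYPESENSE_HOST}/collections/users",
    headers={"X-TYPESENSE-API-KEY": TYPESENSE_API_KEY},
    json=schema_change,
)
print(resp.status_code, resp.json())
```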
l
I think it just started to make calls to the model! thanks so much
🚀 1
👍 1
o
> Yes, so I don't think we update existing documents. @Ozan Armağan can confirm.
Yes, correct.
j
Thanks for the support guys!!