# community-help
Hi guys, I'm trying to connect Typesense with an embeddings model deployed in Azure. I've followed the instructions here that explain exactly this use case, but I'm seeing this error when I update my schema to include the vector field:
Error: t: Request failed with HTTP code 400 | Server said: OpenAI API error: Resource not found
I've tried calling the model from my machine and it returns the embedding correctly. I think the problem is that Typesense expects the endpoint to be
POST /v1/embeddings
but Azure's provided endpoint doesn't have that structure. A possible solution would be a custom server that accepts the calls from Typesense and forwards them to Azure's endpoint. Is this the right approach? Isn't there a simpler way of connecting Typesense directly to the deployed model? Here's the payload used to update the schema:
```json
{
  "fields": [
    {
      "name": "embedding",
      "type": "float[]",
      "embed": {
        "from": [
          "fullName",
          "username"
        ],
        "model_config": {
          "model_name": "openai/text-embedding-3-small",
          "api_key": "---",
          "url": "http://MY-URL-AAA.COM/embeddings?api-version=2023-05-15"
        }
      }
    }
  ]
}
```
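If the proxy route turns out to be necessary, the core of it is just translating the request shape: Azure OpenAI puts the deployment name in the URL path (`/openai/deployments/{deployment}/embeddings?api-version=...`) and authenticates with an `api-key` header instead of `Authorization: Bearer`, while the request body's `input` field is the same as OpenAI's. A minimal sketch of that translation, with placeholder resource and deployment names (not values from this thread):

```python
# Sketch of the request translation a proxy between Typesense and Azure
# OpenAI would perform. "myres" / "embed-small" below are hypothetical
# placeholders for your Azure resource and deployment names.
from urllib.parse import urlencode


def azure_embeddings_url(resource: str, deployment: str,
                         api_version: str = "2023-05-15") -> str:
    """Build the Azure OpenAI embeddings URL, which differs from the
    plain POST /v1/embeddings path that OpenAI-style clients expect."""
    query = urlencode({"api-version": api_version})
    return (f"https://{resource}.openai.azure.com/openai/"
            f"deployments/{deployment}/embeddings?{query}")


def translate_request(openai_body: dict) -> dict:
    """Azure selects the model via the deployment in the URL, so the
    'model' field from the OpenAI-style body can be dropped; 'input'
    passes through unchanged."""
    body = dict(openai_body)
    body.pop("model", None)
    return body
```

A proxy server would then listen for Typesense's `POST /v1/embeddings` calls, rewrite each request with these two helpers, attach the Azure key as an `api-key` header, and relay the JSON response back unchanged (the embeddings response shape is the same on both sides).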
šŸ™ 1