Luis Gestoso Muñoz
01/23/2025, 4:38 PMError: t: Request failed with HTTP code 400 | Server said: OpenAI API error: Resource not found
I've tried calling the model from my machine and it returns the embedding correctly. I think the problem is that Typesense expects the endpoint to be POST /v1/embeddings,
but Azure's provided endpoint doesn't have that structure.
One solution could be to create a custom server that receives the calls from Typesense and forwards them to Azure's endpoint. Is this the right approach, or is there a simpler way of connecting Typesense directly with the deployed model?
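For reference, this is roughly the kind of proxy I had in mind: a small stdlib-only Python server that accepts the OpenAI-style POST /v1/embeddings call from Typesense, drops the "model" field (Azure encodes the deployment in the URL), swaps the auth header to Azure's "api-key" style, and forwards the request. AZURE_URL and AZURE_API_KEY are placeholders, and the exact request shape Azure expects may need adjusting; this is a sketch, not a tested integration.

```python
import json
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Placeholders -- substitute your real Azure endpoint and key.
AZURE_URL = "http://MY-URL-AAA.COM/embeddings?api-version=2023-05-15"
AZURE_API_KEY = "---"


def to_azure_request(openai_body: dict) -> tuple[dict, dict]:
    """Translate an OpenAI-style embeddings body/headers to Azure's shape.

    Azure identifies the model via the deployment in the URL, so the
    "model" field is removed, and authentication uses an "api-key"
    header instead of "Authorization: Bearer ...".
    """
    body = {k: v for k, v in openai_body.items() if k != "model"}
    headers = {"Content-Type": "application/json", "api-key": AZURE_API_KEY}
    return body, headers


class EmbeddingsProxy(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/v1/embeddings":
            self.send_error(404, "unknown path")
            return
        length = int(self.headers.get("Content-Length", 0))
        openai_body = json.loads(self.rfile.read(length))
        body, headers = to_azure_request(openai_body)
        req = urllib.request.Request(
            AZURE_URL,
            data=json.dumps(body).encode(),
            headers=headers,
            method="POST",
        )
        # Relay Azure's response (already OpenAI-shaped JSON) back to Typesense.
        with urllib.request.urlopen(req) as resp:
            payload = resp.read()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)


# To run the proxy:
# HTTPServer(("0.0.0.0", 8080), EmbeddingsProxy).serve_forever()
```

Then the `url` in model_config would point at this proxy instead of the Azure endpoint.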
Here's the payload used to update the schema:
{
  "fields": [
    {
      "name": "embedding",
      "type": "float[]",
      "embed": {
        "from": [
          "fullName",
          "username"
        ],
        "model_config": {
          "model_name": "openai/text-embedding-3-small",
          "api_key": "---",
          "url": "http://MY-URL-AAA.COM/embeddings?api-version=2023-05-15"
        }
      }
    }
  ]
}