# community-help
Mehti
Hello! I am writing regarding my post on GitHub. Is there any known solution for auto-embedding generation directly from a SageMaker AI Inference Endpoint to Typesense? Any help would be highly appreciated. Thank you in advance. BR, Mehti
Jason Bosco
Not at the moment. But we do support vLLM, so if there's an adapter for SageMaker, then it might work.
👍 1
Mehti
@Jason Bosco thanks for your quick response! Is there any documentation or are there examples of using vLLM to populate embeddings in Typesense?
Jason Bosco
vLLM provides OpenAI-compatible API endpoints. Once you deploy vLLM, you can point the base URL that Typesense uses for embedding generation at your vLLM server: https://typesense.org/docs/27.1/api/vector-search.html#using-openai-compatible-apis
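Roughly, something like this should work with the Python client (a sketch, not tested: the host, port, model name, and keys below are placeholders for your own setup, and the exact model_config parameters are described on that docs page):

```python
# Sketch: create a Typesense collection whose embedding field is auto-populated
# by calling an OpenAI-compatible embeddings endpoint served by vLLM.
# Hostnames, ports, keys and the model name are placeholders.
import typesense

client = typesense.Client({
    "nodes": [{"host": "localhost", "port": "8108", "protocol": "http"}],
    "api_key": "TYPESENSE_API_KEY",       # placeholder
    "connection_timeout_seconds": 10,
})

schema = {
    "name": "articles",
    "fields": [
        {"name": "title", "type": "string"},
        {
            "name": "embedding",
            "type": "float[]",
            "embed": {
                "from": ["title"],  # field(s) to embed automatically
                "model_config": {
                    # "openai/" prefix tells Typesense to call an OpenAI-style
                    # embeddings API; the rest is whatever model your vLLM
                    # server was started with (placeholder).
                    "model_name": "openai/intfloat/e5-small-v2",
                    # Base URL of your vLLM server, used instead of api.openai.com.
                    "url": "http://your-vllm-host:8000",
                    # Only needed if you started vLLM with an API key.
                    "api_key": "not-needed-for-local-vllm",
                },
            },
        },
    ],
}

client.collections.create(schema)
```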
🙏 1