#community-help

API Key Permissions for Typesense Docsearch Scraper

TL;DR: JP asked about configuring API key actions for reindexing a site using Typesense. Kishore Nallan clarified the required permission. Then, JP and Jason discussed specific permission configurations, and JP made a documentation update to illustrate their findings.

Solved
Jan 10, 2023 (11 months ago)
JP
01:07 AM
Hi there, I’m looking for some help with API key permissions (actions) 🧵
Any guidance would be much appreciated 🙂
JP
01:07 AM
I’m using the docker typesense/docsearch-scraper to reindex a documentation site every time it’s built.
I want to create an API key which only has permission for the actions needed to reindex a collection (rather than using the ‘admin’ key).
I can’t see a way to do this in the API documentation (have looked under ‘Create an API Key’ & ‘Collections’).
My best guess is I either use
actions: ["collections:*"] (which would also allow delete) or
actions: ["*"] (also allows deleting and likely other actions too)
Neither of those seem ideal. Any pointers would be much appreciated. Thanks 🙂
Kishore Nallan
10:40 AM
documents:create is the permission that is needed for indexing documents. See here: https://typesense.org/docs/0.23.1/api/api-keys.html#document-actions
JP
08:08 PM
Ahh for some reason my browser defaulted to an older version of the docs (0.22.2) which didn’t have the expanded permission tables (perhaps that was the version I last viewed a few months ago?). I was also looking for references to ‘indexing’/‘reindexing’ rather than document (I should have realised that).
That’s great thanks Kishore Nallan!
JP
09:18 PM
I had a play around with the scraper and creating some different action permissions.
What I came out with for a minimal permission config was:
{
  "description": "Scraper Indexing Key (2023/01/11)",
  "actions": [
    "aliases:*",
    "collections:delete",
    "collections:create",
    "documents:import"
  ],
  "collections": [
    "myprefix-.*"
  ]
}

Initially I used the same as above but with aliases:create & aliases:get (instead of aliases:*), however the scraper kept failing on the final alias step, trying to do a PUT (after an initial GET).
DEBUG:urllib3.connectionpool: "PUT /aliases/myprefix-master HTTP/1.1" 401 None

I tried adding aliases:delete but to no avail.
I suspect there is an aliases:update action missing from the docs?
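For reference, creating a key with the schema above can be sketched against Typesense’s `/keys` endpoint using only the Python standard library (the host and admin key are placeholders; creating keys requires the admin key):

```python
import json
import urllib.request

# The minimal scraper key schema from the message above.
key_schema = {
    "description": "Scraper Indexing Key (2023/01/11)",
    "actions": [
        "aliases:*",
        "collections:delete",
        "collections:create",
        "documents:import",
    ],
    "collections": ["myprefix-.*"],
}

def create_key(host: str, admin_key: str) -> dict:
    """POST the schema to /keys, authenticating with the admin key."""
    req = urllib.request.Request(
        f"{host}/keys",
        data=json.dumps(key_schema).encode(),
        headers={
            "X-TYPESENSE-API-KEY": admin_key,
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example (not run here):
# new_key = create_key("http://localhost:8108", "ADMIN_API_KEY")
# The response's "value" field is shown only once -- store it for the scraper.
```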
Jan 11, 2023 (11 months ago)
Jason
05:03 PM
JP Could you try aliases:upsert?
Jan 12, 2023 (11 months ago)
JP
10:32 PM
I’m pretty happy with how it is at the moment using aliases:* (from a security perspective).
That said, I’ll try to do this later today if I get a moment and get back to you.


Jan 13, 2023 (11 months ago)
JP
02:09 AM
This worked 👍
"aliases:get", 
"aliases:upsert", 
"aliases:create", 
"collections:delete", 
"collections:create", 
"documents:import"

I kept aliases:create in since I have a feeling that if you run the scraper for the first time and the alias doesn’t exist, it tries to create it (needs to be confirmed, and I’m not comfortable doing so in our production cluster).
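Put together, the tightened key schema (a sketch; the description string is illustrative) would be:

```json
{
  "description": "Scraper Indexing Key (least privilege)",
  "actions": [
    "aliases:get",
    "aliases:upsert",
    "aliases:create",
    "collections:delete",
    "collections:create",
    "documents:import"
  ],
  "collections": [
    "myprefix-.*"
  ]
}
```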

I have a feeling there will be a lot of people out there running their scrapers with full admin keys (including the keys actions). I think it’d be worth updating the docs for people like me who want to slim down their permissions to a least-privilege model.

https://typesense.org/docs/guide/docsearch.html (include a minimal required api key config for scraping).
https://typesense.org/docs/0.23.1/api/api-keys.html#alias-actions (add the upsert action).
Jason
02:58 AM
If you’re down for it, would be great if you could do a PR to the docs site with these additions. You’ll find an edit button at the bottom of the page
Jason
02:59 AM
PUT methods get translated to upsert in the auth layer… And aliases only have a PUT method and not a POST. So you shouldn’t need create
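The alias step the scraper runs is that PUT against `/aliases/<name>`, which is why `aliases:upsert` is the action that unblocks it. A minimal stdlib sketch (host, alias, and collection names are illustrative):

```python
import json
import urllib.request

TYPESENSE_HOST = "http://localhost:8108"  # placeholder
ALIAS_NAME = "myprefix-master"            # alias seen in the scraper log above

# PUT /aliases/<name> upserts the alias, so the key needs aliases:upsert.
alias_body = json.dumps({"collection_name": "myprefix-20230113"})

def upsert_alias(api_key: str) -> dict:
    """Point the alias at the freshly built collection."""
    req = urllib.request.Request(
        f"{TYPESENSE_HOST}/aliases/{ALIAS_NAME}",
        data=alias_body.encode(),
        headers={
            "X-TYPESENSE-API-KEY": api_key,
            "Content-Type": "application/json",
        },
        method="PUT",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```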
Jan 15, 2023 (11 months ago)
JP
07:16 PM
Sure thing, I’ll try and find some time over the next couple of days.


Jan 16, 2023 (11 months ago)
Jason
07:12 PM
Merged. Thank you!

