# community-help
a
Hello everyone, I have deployed Typesense v27 in cluster mode with peering enabled (3 replicas) on an EKS cluster. A Load Balancer distributes the load across the replicas, and I have also configured a headless service so the pods can communicate with each other on the peering port. I have a couple of questions about how Typesense handles consistency across the cluster:
1. When I create a collection and ingest documents through the Load Balancer, I understand the request is sent to a single replica. Does the peering mechanism then propagate the collection creation and the ingested documents to the other replicas to ensure consistency?
2. Given a scenario where I ingest 10,000 documents, how long can I expect it to take for the other peers to sync and reach a consistent state across all replicas?
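For reference, this is roughly how I create the collection and ingest the documents through the Load Balancer (the hostname, API key, and schema below are placeholders, not my real values):

```python
import typesense

# Client pointed at the Load Balancer in front of the 3 replicas.
# Host, port, and API key are placeholders.
client = typesense.Client({
    "nodes": [{"host": "typesense-lb.example.com", "port": "8108", "protocol": "http"}],
    "api_key": "PLACEHOLDER_API_KEY",
    "connection_timeout_seconds": 5,
})

# Create the collection (the request lands on whichever replica the LB picks).
client.collections.create({
    "name": "products",
    "fields": [
        {"name": "name", "type": "string"},
        {"name": "price", "type": "float"},
    ],
})

# Bulk-import 10,000 documents in a single request.
documents = [{"name": f"item-{i}", "price": float(i)} for i in range(10_000)]
client.collections["products"].documents.import_(documents, {"action": "create"})
```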
j
1) That's correct. To clarify terminology, in a clustered setup we call them the leader and followers: the nodes internally and automatically elect one of them as the leader, and the others become followers. Reads and writes can be sent to any of the nodes in the cluster, regardless of whether it is the leader or a follower. The cluster takes care of replicating writes to all the nodes, and reads are serviced fully by the node that receives them.
2) It should take about 5 seconds to achieve eventual consistency for 10K docs across all the nodes, from the time the write API call succeeds.
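If you want to verify convergence yourself, one rough way (a sketch only; the per-pod hostnames, API key, and collection name below are placeholders based on a typical headless-service DNS layout) is to query each node directly and compare `num_documents` until all replicas report the same count:

```python
import time
import typesense

# Per-pod hostnames are examples only; with a headless service they would
# typically resolve as <pod>.<headless-svc>.<namespace>.svc.cluster.local.
node_hosts = [
    "typesense-0.typesense-headless.default.svc.cluster.local",
    "typesense-1.typesense-headless.default.svc.cluster.local",
    "typesense-2.typesense-headless.default.svc.cluster.local",
]

def num_documents(host: str) -> int:
    # One client per node, so the request bypasses the Load Balancer.
    node_client = typesense.Client({
        "nodes": [{"host": host, "port": "8108", "protocol": "http"}],
        "api_key": "PLACEHOLDER_API_KEY",
        "connection_timeout_seconds": 5,
    })
    return node_client.collections["products"].retrieve()["num_documents"]

# Poll until every replica reports the same document count.
while True:
    counts = [num_documents(h) for h in node_hosts]
    print(counts)
    if len(set(counts)) == 1:
        break
    time.sleep(1)
```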
a
Hi, I’m facing an issue with my Typesense deployment on EKS and need help understanding a few things. Here are the details:
1. Headless service configuration: Could you verify whether my headless service configuration is correct? I’ve attached the YAML. Is it necessary to also include port 8108 in this configuration?
2. Replication and consistency issue: I have set up multiple replicas (3), but writes are only being executed on one replica, and the others never become consistent. However, I’m seeing this log message:
raft_server.h:60] Peer refresh succeeded!
Does this indicate that peering is functioning correctly? Could the replicas not be aligning because consistency is eventual (i.e., they will align eventually, but there’s no guarantee when)? What steps can I take to ensure that the replicas synchronize properly? (A rough sketch of how I’m checking each pod directly is below.) Thank you in advance!
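For context, this is roughly how I’m checking the role and document count of each pod directly (the hostnames, API key, and collection name are placeholders, not my real values):

```python
import requests

API_KEY = "PLACEHOLDER_API_KEY"
# Pod hostnames behind the headless service; placeholders, not my real names.
node_hosts = [
    "typesense-0.typesense-headless.default.svc.cluster.local",
    "typesense-1.typesense-headless.default.svc.cluster.local",
    "typesense-2.typesense-headless.default.svc.cluster.local",
]

for host in node_hosts:
    base = f"http://{host}:8108"
    headers = {"X-TYPESENSE-API-KEY": API_KEY}
    # /debug reports the node's Raft state (per the Typesense docs, 1 = leader, 4 = follower).
    debug = requests.get(f"{base}/debug", headers=headers, timeout=5).json()
    # The collection endpoint reports num_documents as seen by this node.
    coll = requests.get(f"{base}/collections/products", headers=headers, timeout=5).json()
    print(host, "state:", debug.get("state"), "num_documents:", coll.get("num_documents"))
```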
j
Like I mentioned before, this is a Kubernetes-specific issue and we are unable to help any further given the number of additional variables at play.