We have rack awareness enabled on our cluster. If I use kafkactl to increase the replication factor on a topic, do the new replication factor and assignment maintain rack awareness?
We used kafkactl last week, and it now appears that one of our topics is no longer rack-safe. I'm checking whether the topic that is no longer rack-safe is the one we used kafkactl on.
I browsed through the code for kafkactl (kafkactl/internal/topic/topic-operation.go, lines 501 to 545 at 105b481) and couldn't tell whether broker racks are taken into account when increasing the replication factor.
Thanks!
We currently do not take the rack information into account when altering replicas.
Since I currently only have limited time to spend on this project, I will not implement this in the near future.
If someone is interested in implementing this I can give some advice.
The rack information can be retrieved with: https://pkg.go.dev/github.com/shopify/sarama#Broker.Rack
Then all that's needed is logic that spreads the partition replicas across brokers in different racks.
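For anyone picking this up, here is a minimal sketch of what such rack-spreading logic could look like. It assumes the broker → rack mapping has already been fetched (e.g. via sarama's Broker.Rack() linked above); the function name assignReplicas and its signature are purely illustrative and not part of kafkactl.

```go
package rackaware

import "sort"

// assignReplicas spreads replicationFactor replicas for each of numPartitions
// partitions across brokers in different racks, round-robin style.
// brokerRacks maps broker ID -> rack name; in kafkactl this could be obtained
// via sarama's Broker.Rack(). Assumes replicationFactor <= len(brokerRacks)
// and reasonably balanced racks; a real implementation would need to validate
// the inputs and handle edge cases.
func assignReplicas(brokerRacks map[int32]string, numPartitions, replicationFactor int) [][]int32 {
	// Group broker IDs by rack, sorted for deterministic output.
	byRack := map[string][]int32{}
	for id, rack := range brokerRacks {
		byRack[rack] = append(byRack[rack], id)
	}
	rackNames := make([]string, 0, len(byRack))
	for name := range byRack {
		ids := byRack[name]
		sort.Slice(ids, func(i, j int) bool { return ids[i] < ids[j] })
		rackNames = append(rackNames, name)
	}
	sort.Strings(rackNames)

	// Interleave brokers rack by rack: rackA[0], rackB[0], rackC[0], rackA[1], ...
	// Consecutive entries of this list are then (mostly) on different racks, so
	// taking replicationFactor consecutive brokers yields a rack-spread replica set.
	var ordered []int32
	for i := 0; ; i++ {
		added := false
		for _, name := range rackNames {
			if i < len(byRack[name]) {
				ordered = append(ordered, byRack[name][i])
				added = true
			}
		}
		if !added {
			break
		}
	}

	// Give each partition a sliding window of consecutive brokers, offset by
	// the partition number so leadership and load are also spread out.
	assignments := make([][]int32, numPartitions)
	for p := 0; p < numPartitions; p++ {
		replicas := make([]int32, 0, replicationFactor)
		for r := 0; r < replicationFactor; r++ {
			replicas = append(replicas, ordered[(p+r)%len(ordered)])
		}
		assignments[p] = replicas
	}
	return assignments
}
```

With, say, three racks of two brokers each and replication factor 3, every partition ends up with one replica per rack. When racks are unbalanced, the simple interleaving can still place two replicas of a partition on the same rack, so a production version would need an explicit per-partition rack check.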
Thanks @d-rk! I also don't have time to make the change myself, but I will leave this issue open for tracking purposes, in case someone else has a similar question. And if someone is able to implement it, they can refer to this issue.