This repository has been archived by the owner on Feb 12, 2022. It is now read-only.
When a producer is instantiated and populateProducerPool is set to true (the default), a socket is opened for every broker. If the client fails to connect to a broker other than the first one it tries (i.e., not the broker with id == 0), an exception is thrown, but none of the connections that were already established are closed. If the client keeps retrying, it will eventually exhaust the available connections, with unintended consequences for other parts of the process.
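A minimal sketch of the cleanup the pool is missing, in Python as an illustration (the `connect` callable and `open_broker_pool` helper are hypothetical, standing in for whatever the client uses to open broker sockets): if any connection attempt fails, close every connection opened so far before re-raising.

```python
def open_broker_pool(brokers, connect):
    """Connect to every broker; on any failure, close the
    connections opened so far before re-raising, so a partial
    failure does not leak sockets."""
    opened = []
    try:
        for broker in brokers:
            opened.append(connect(broker))
        return opened
    except Exception:
        for conn in opened:
            try:
                conn.close()
            except Exception:
                pass  # best-effort cleanup; the original error matters more
        raise
```

With this pattern, a failure connecting to broker 1 still disposes of the already-open connection to broker 0 instead of leaking it.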
Repro:
1. Configure two brokers and start them.
2. ZK broker nodes are supposed to be ephemeral, but in practice this isn't always the case, and the client needs to account for that. Check the brokers in ZK under the path /brokers/ids. Pick a broker id that is not the first one (likely broker id == 1) and copy the value of its znode. Then stop the Kafka server with broker id == 1; this should remove the broker from /brokers/ids. Add the znode back manually to replicate a case where the node does not get cleaned up properly.
3. Create a new Producer object.
Result: An exception is thrown because the client cannot connect to the broker. Further, the connection to broker 0 is not disposed.
Expected: At the very least, the constructor should clean up the open connections when it fails. For extra credit, a connection to a broker should not be opened until the client is actually sending data to that particular broker. Connections should also respect the kafka configurations reconnectInterval and reconnectTimeInterval, which refresh the connection periodically.
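The lazy-connection behavior described above could look roughly like the following sketch (Python for illustration; `LazyBrokerPool` and its `connect` callable are hypothetical, and the constructor parameters are merely modeled on the reconnectInterval / reconnectTimeInterval config names): a connection is opened only on the first send to a broker, and is refreshed after a given number of sends or a given age.

```python
import time

class LazyBrokerPool:
    """Open a connection to a broker only on the first send to it,
    and refresh the connection after `reconnect_interval` sends or
    `reconnect_time_interval` seconds, whichever comes first."""

    def __init__(self, connect, reconnect_interval=1000,
                 reconnect_time_interval=600.0):
        self._connect = connect
        self._interval = reconnect_interval
        self._time_interval = reconnect_time_interval
        self._conns = {}  # broker -> (conn, sends_on_conn, opened_at)

    def send(self, broker, payload):
        conn, sends, opened = self._conns.get(broker, (None, 0, 0.0))
        stale = conn is not None and (
            sends >= self._interval
            or time.monotonic() - opened >= self._time_interval)
        if conn is None or stale:
            if conn is not None:
                conn.close()               # retire the aged connection
            conn = self._connect(broker)   # opened lazily, on demand
            sends, opened = 0, time.monotonic()
        conn.send(payload)
        self._conns[broker] = (conn, sends + 1, opened)
```

Because no socket is opened at construction time, an unreachable broker only fails the sends addressed to it, and connections to healthy brokers are never leaked by the constructor.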