If, between the time the client requests a block location from the namenode and the time it tries to read, the datanode it was given drops out of the cluster, an exception bubbles all the way up to the caller. The exception is the result of us failing to connect to that datanode to fetch the block.
In this specific case, we should detect the datanode failure and ask the namenode again for a new location for the block. I'm not sure how many times we should retry this process; I think the hadoop-client does it indefinitely.
Let's consult the Java implementation and follow that.
Note: this filesystem implementation is still sitting in review via #2