Generalize sync ActiveRequests #6398
Conversation
Force-pushed from 2e502a9 to 043d52b
@@ -42,11 +37,19 @@ pub enum SyncRequestId {
     /// Request searching for a set of blobs given a hash.
     SingleBlob { id: SingleLookupReqId },
     /// Request searching for a set of data columns given a hash and list of column indices.
-    DataColumnsByRoot(DataColumnsByRootRequestId, DataColumnsByRootRequester),
+    DataColumnsByRoot(DataColumnsByRootRequestId),
Having the requester inside the ID just simplifies downstream code and makes it symmetrical with the other variants.
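A minimal sketch of the idea (all names here are simplified stand-ins, not the actual Lighthouse types): moving the requester into the ID struct keeps every `SyncRequestId` variant a single-field tuple, so downstream match arms stay uniform and the requester can always be recovered from the ID alone.

```rust
// Hypothetical requester kinds for a data-columns-by-root request.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum Requester {
    Custody,
    Sampling,
}

// After the change: the requester lives inside the ID itself,
// instead of riding alongside it in the enum variant.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct DataColumnsByRootRequestId {
    id: u32,
    requester: Requester,
}

#[derive(Debug)]
enum SyncRequestId {
    SingleBlob { id: u32 },
    DataColumnsByRoot(DataColumnsByRootRequestId),
}

fn main() {
    let req = SyncRequestId::DataColumnsByRoot(DataColumnsByRootRequestId {
        id: 7,
        requester: Requester::Custody,
    });
    // Downstream code recovers the requester from the ID alone.
    if let SyncRequestId::DataColumnsByRoot(id) = req {
        assert_eq!(id.requester, Requester::Custody);
        println!("request {} from {:?}", id.id, id.requester);
    }
}
```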
    // Network context returns "download success" because the request has enough blobs + it
    // downscores the peer for returning too many.
    .expect_no_block_request();
}
This test needs a scenario where sync receives more blob chunks than originally requested. This can never happen in practice, so I chose not to handle this scenario to reduce complexity.
lighthouse/beacon_node/network/src/sync/network_context/requests.rs
Lines 114 to 119 in 9446569
// Should never happen, ReqResp network behaviour enforces a max count of chunks.
// When `max_remaining_chunks <= 1` the inbound stream is terminated in
// `rpc/handler.rs`. Handling this case adds complexity for no gain. Even if an
// attacker could abuse this, there's no gain in sending garbage chunks that
// will be ignored anyway.
State::CompletedEarly => None,
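The invariant behind that comment — the ReqResp layer terminates the stream once all expected chunks arrive, so a chunk can never land on an already-completed request — can be sketched as a small state machine. This is an illustrative simplification, not the actual `ActiveRequests` code; all names are hypothetical.

```rust
// Tracks how many response chunks a request still expects.
enum State {
    Active { remaining: usize },
    Completed,
}

struct ActiveRequest {
    state: State,
}

impl ActiveRequest {
    fn new(expected: usize) -> Self {
        Self {
            state: State::Active { remaining: expected },
        }
    }

    /// Returns Ok(true) when the final expected chunk arrives.
    fn on_chunk(&mut self) -> Result<bool, &'static str> {
        match self.state {
            State::Active { remaining } if remaining > 1 => {
                self.state = State::Active { remaining: remaining - 1 };
                Ok(false)
            }
            State::Active { .. } => {
                // Last expected chunk: the transport closes the stream here,
                // so no further chunks can be delivered for this request.
                self.state = State::Completed;
                Ok(true)
            }
            // Mirrors the "should never happen" branch quoted above.
            State::Completed => Err("chunk after completion"),
        }
    }
}

fn main() {
    let mut req = ActiveRequest::new(2);
    assert_eq!(req.on_chunk(), Ok(false));
    assert_eq!(req.on_chunk(), Ok(true));
    // A third chunk can only be an error: the stream is already closed.
    assert!(req.on_chunk().is_err());
}
```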
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Just had some nits, but looks good, overall like the direction 👌
Co-authored-by: realbigsean <[email protected]>
@realbigsean sir I fix the test sir

looks like there's another test broken, sorry @dapplion !

@mergify queue

🛑 The pull request has been removed from the queue

@mergify dequeue

✅ The pull request has been removed from the queue

@mergify queue

🛑 The pull request has been removed from the queue

@mergify queue

🛑 The pull request has been removed from the queue

looks like CI here is hung on b9cd5ad ?

@mergify cancel

❌ Sorry but I didn't understand the command. Please consult the commands documentation 📚.

…nc-active-request-generalize

@mergify queue

✅ The pull request has been merged automatically at a074e9e
* Generalize sync ActiveRequests
* Remove impossible to hit test
* Update beacon_node/lighthouse_network/src/service/api_types.rs
  Co-authored-by: realbigsean <[email protected]>
* Update beacon_node/network/src/sync/network_context.rs
  Co-authored-by: realbigsean <[email protected]>
* Update beacon_node/network/src/sync/network_context.rs
  Co-authored-by: realbigsean <[email protected]>
* Simplify match
* Fix display
* Merge remote-tracking branch 'sigp/unstable' into sync-active-request-generalize
* Sampling requests should not expect all responses
* Merge remote-tracking branch 'sigp/unstable' into sync-active-request-generalize
* Fix sampling_batch_requests_not_enough_responses_returned test
* Merge remote-tracking branch 'sigp/unstable' into sync-active-request-generalize
* Merge branch 'unstable' of https://github.com/sigp/lighthouse into sync-active-request-generalize
Part of
To address PeerDAS sync issues we need to make individual by_range requests within a batch retriable. We should adopt the same pattern for lookup sync, where each request (block/blobs/columns) is tracked individually within a "meta" request that groups them all and handles retry logic.
The first step is to add `ActiveBlocksByRangeRequest`, `ActiveBlobsByRangeRequest` and `ActiveDataColumnsByRangeRequest`. I'm in the process of doing that in this branch https://github.com/sigp/lighthouse/compare/unstable...dapplion:lighthouse:sync-active-requests?expand=1 but I noted a lot of repetition in sensitive code.

This PR is a pre-step: it adds `ActiveRequest` to our existing `ActiveRequest*ByRoot` types (the ones for by_root). The next PR will add the `ActiveRequest*ByRange` goodies :)
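The "meta" request direction described above can be sketched roughly as follows. This is a hand-wavy illustration of the retry bookkeeping, not code from this PR or the linked branch; every name (`RangeMetaRequest`, `Component`, and friends) is hypothetical. The point is that each component request fails and retries independently, so one failed by_range request does not restart the whole batch.

```rust
use std::collections::HashMap;

// Hypothetical component requests that make up one by_range batch.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
enum Component {
    Blocks,
    Blobs,
    DataColumns(u64), // column index
}

#[derive(Debug, PartialEq)]
enum ComponentState {
    Active { retries: u8 },
    Completed,
}

// "Meta" request: groups the component requests and owns retry logic.
struct RangeMetaRequest {
    components: HashMap<Component, ComponentState>,
    max_retries: u8,
}

impl RangeMetaRequest {
    fn new(components: Vec<Component>, max_retries: u8) -> Self {
        Self {
            components: components
                .into_iter()
                .map(|c| (c, ComponentState::Active { retries: 0 }))
                .collect(),
            max_retries,
        }
    }

    fn on_success(&mut self, c: Component) {
        self.components.insert(c, ComponentState::Completed);
    }

    /// Retry only the failed component; returns false once retries are exhausted.
    fn on_failure(&mut self, c: Component) -> bool {
        match self.components.get_mut(&c) {
            Some(ComponentState::Active { retries }) if *retries < self.max_retries => {
                *retries += 1;
                true
            }
            _ => false,
        }
    }

    fn is_complete(&self) -> bool {
        self.components
            .values()
            .all(|s| *s == ComponentState::Completed)
    }
}

fn main() {
    let mut batch = RangeMetaRequest::new(
        vec![Component::Blocks, Component::Blobs, Component::DataColumns(3)],
        2,
    );
    batch.on_success(Component::Blocks);
    batch.on_success(Component::DataColumns(3));
    // The blobs request failed once: retry it alone; the others stay completed.
    assert!(batch.on_failure(Component::Blobs));
    batch.on_success(Component::Blobs);
    assert!(batch.is_complete());
}
```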