Internal refactor to enable using 100% of PacketBuf as response buffer #70
We can store the […]
Some relevant discussion regarding this issue: #72 (comment)
Summary of discussion in #72 (comment): Because of the existence of […]
I prefer the third way. Environments that can't allocate a big stack buffer are really rare, and it's very likely the […]
This is a great summary of our discussion - thank you! Personally, I prefer the first way. It gives the end user the ultimate flexibility wrt. where they'd like to allocate this additional response buffer, and exactly how large it should be. Much of the "complexity" of this approach will be hidden within […].

That said, as I've stated earlier: I believe the current […]
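To make the "first way" a bit more concrete, here is a purely illustrative sketch. None of these names exist in gdbstub; the point is only that the end user constructs the stub with an additional, caller-owned response buffer, and thereby decides where it lives and how large it is:

```rust
// Purely illustrative, NOT gdbstub's actual API: the user supplies a second,
// caller-owned response buffer alongside the packet buffer.
struct DebugStub<'a> {
    packet_buf: &'a mut [u8],
    // Extra scratch space used when building large outgoing responses.
    response_buf: &'a mut [u8],
}

impl<'a> DebugStub<'a> {
    fn new(packet_buf: &'a mut [u8], response_buf: &'a mut [u8]) -> Self {
        DebugStub { packet_buf, response_buf }
    }

    fn response_capacity(&self) -> usize {
        self.response_buf.len()
    }
}

fn main() {
    // The caller picks the allocation strategy; here, two stack buffers of
    // different sizes.
    let mut packet_buf = [0u8; 4096];
    let mut response_buf = [0u8; 8192];
    let stub = DebugStub::new(&mut packet_buf, &mut response_buf);
    assert_eq!(stub.response_capacity(), 8192);
}
```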
I think the first way is quite meaningless. We must know why we want to use the whole PacketBuf. If we use the first way, our implementation is only perfect when the user provides […]. But if we choose the second or the third way, we can ensure our implementation is always perfect.
I really agree with that! The situation that […]
We may need to do something like what `bytes` does:

```rust
use bytes::Bytes;

let mut mem = Bytes::from("Hello world");

// Cheap, zero-copy view of the first 5 bytes.
let a = mem.slice(0..5);
assert_eq!(a, "Hello");

// Split the buffer in two: `b` takes the first 6 bytes, `mem` keeps the rest.
let b = mem.split_to(6);
assert_eq!(mem, "world");
assert_eq!(b, "Hello ");
```

(But […]
I'm not entirely sure what you're suggesting with that […]
Isn't this the way used by […]? And I have a new idea: we can only ensure the code is perfect (as described above) when […]
Ahh, yes, sure, that's certainly a valid way of doing it (i.e: instead of storing a slice reference directly, store the offsets into […]).

FWIW, that approach would enable using 100% of the PacketBuf for any fields of target-dependent size (e.g: […]).

The real complexity is how to handle packets like […]
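A minimal sketch of the "store offsets instead of slices" idea, using made-up types rather than gdbstub's actual internals: the parsed packet records *where* a target-dependent field lives in the packet buffer instead of borrowing it, so the whole buffer stays free to be reused as response scratch space.

```rust
use core::ops::Range;

/// Hypothetical parsed form of a memory-read request: offsets into the
/// packet buffer where the address/length digits live.
struct ParsedMemRead {
    addr: Range<usize>,
    len: Range<usize>,
}

fn handle(buf: &mut [u8], pkt: ParsedMemRead) {
    // Late decode: resolve the offsets against the buffer only once the
    // concrete target types are known...
    let addr_bytes = &buf[pkt.addr.clone()];
    let _addr = u64::from_str_radix(core::str::from_utf8(addr_bytes).unwrap(), 16).unwrap();
    let len_bytes = &buf[pkt.len.clone()];
    let _len = usize::from_str_radix(core::str::from_utf8(len_bytes).unwrap(), 16).unwrap();

    // ...after which the *entire* buffer is free to be overwritten with
    // response data.
    let _response_buf: &mut [u8] = buf;
}

fn main() {
    // Made-up packet layout, purely for illustration: "m a0,10"
    let mut buf = *b"m a0,10\0\0\0\0\0\0\0\0\0";
    handle(&mut buf, ParsedMemRead { addr: 2..4, len: 5..7 });
}
```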
We may need to figure out what to do if there is only […]; see gdbstub/src/gdbstub_impl/ext/base.rs, lines 232 to 251 at 4e46b72.
If what […]
So, to reiterate - it is entirely possible to rewrite the current […].

I agree - now that we have the extra protocol constraint of "responses must fit in the packet buffer", we can absolutely remove that gnarly […].

Could you rephrase your comment regarding […]?

Also, I should probably clarify something important about […]. The rationale for this is to support use cases like gz's kernel debugger - a use-case where […].

Right now, the only time […]
I know we can solve it with […]. I think […]
Again - and I really cannot stress this enough: using […]. Period. No exceptions.

With […]
FWIW, I actually lean towards the first option, as while it results in a slightly less "elegant" API, it doesn't require the end user to allocate a buffer that will be left unused almost all of the time.

Also, some more thoughts on […]: if […], it would probably look a lot closer to https://github.com/luser/rust-gdb-remote-protocol, with plenty of […]. But […]. The constraints of the platform that […].

I honestly feel like I should put this explanation somewhere more visible, like a top-level […].

To really summarize the point: […]
I know this, so I said that if what bytes does is what we need, we could suggest that upstream not use […]
Didn't you say it's impossible for us to allocate a local buffer with an unknown size?
Ah, apologies - I didn't catch the part about wanting to upstream these changes... I don't believe that […].

For […]: yes, "us" as in "us who write the code in […]".

If they want to allocate some buffer on the heap, sure. If they want to stack allocate it, also cool. If they want to write data into some long-lived global buffer, that's also fine. The important thing is we don't make the decision for them.
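A tiny, hypothetical illustration of "we don't make the decision for them": an API that only asks for a `&mut [u8]` works equally well whether the caller backs it with heap, stack, or static storage. `run_debug_session` is a made-up function, not part of gdbstub's API.

```rust
use std::sync::Mutex;

fn run_debug_session(response_buf: &mut [u8]) {
    // ...fill `response_buf` with outgoing packet data...
    response_buf.iter_mut().for_each(|b| *b = 0);
}

// Long-lived global buffer (in a bare-metal kernel debugger this might be a
// plain static buffer handed out by the platform).
static GLOBAL_BUF: Mutex<[u8; 4096]> = Mutex::new([0; 4096]);

fn main() {
    // Heap allocation, for hosts where `alloc`/`std` is available.
    let mut heap_buf = vec![0u8; 4096];
    run_debug_session(&mut heap_buf);

    // Stack allocation.
    let mut stack_buf = [0u8; 4096];
    run_debug_session(&mut stack_buf);

    // Global allocation.
    run_debug_session(&mut *GLOBAL_BUF.lock().unwrap());
}
```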
As discussed in #69, we'll gradually want to move various callback-based APIs over to "here is a `&mut [u8]` buffer, please fill it with data" style APIs.

Instead of allocating a whole new outgoing packet buffer for this, it'd be nice to reuse the existing `PacketBuf`. This could work, since packets that request data are almost always able to be parsed into fixed-size, lifetime-free structs, which would leave the packet buf available to be used as scratch space.

Unfortunately, the current implementation of the packet parser includes a "late decode" step, whereby target-dependent fields (such as memory addresses / breakpoint kinds / etc...) are parsed into `&[u8]` bufs in the packet parsing code, and are only converted into their concrete types later on, in the handler code (where the type of `Target` is known). This is an important property to maintain, as eventually, we'd want to support debugging multiple `Target` types at the same time (e.g: on macOS, a single gdbserver can debug both x86 and ARM code), and the only way to do this would be by having the packet parsing code be `Target`-agnostic.

Instead, there should be some way of obtaining a reference to the entire, raw, underlying `&mut [u8]` `PacketBuf` after the late decode step has been completed, but this is harder than it seems. Getting the lifetimes to line up here will probably be tricky, and I suspect getting this working will require some real code-contortion, and possibly even a sprinkle of `unsafe`.

In the meantime, we'll be going with the approach used by the `m` packet, whereby the packet parsing code will "stash" a `&mut [u8]` pointing to the trailing unused bit of the buffer as part of the parsed struct.

This works, but is a bit wasteful (as not 100% of the packet buffer is being utilized), and also a bit annoying to implement on a per-packet basis. Nonetheless, the GDB RSP allows targets to return less than the requested amount of data without that necessarily being an error, specifically because certain implementations might be using different-sized buffers for incoming / outgoing data.
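A rough, simplified sketch of the "stash the trailing unused bit of the packet buffer" approach described above, using made-up types and a made-up packet layout rather than gdbstub's actual internals: the parser keeps its borrowed fields from the front of the buffer, and whatever is left over at the tail is handed to the handler as the response buffer.

```rust
/// Hypothetical parsed form of a memory-read packet.
struct MemRead<'a> {
    // Fields still borrowed from the front of the packet buffer ("late decode").
    addr_hex: &'a [u8],
    len_hex: &'a [u8],
    // The trailing, unused portion of the same packet buffer, stashed by the
    // parser for the handler to fill with response data.
    response_buf: &'a mut [u8],
}

/// Split the packet buffer into the parsed packet and the leftover scratch
/// space. `parsed_len` is however many bytes the incoming packet occupied.
fn parse_mem_read(buf: &mut [u8], parsed_len: usize) -> MemRead<'_> {
    let (packet, rest) = buf.split_at_mut(parsed_len);
    // Made-up layout: "m<addr>,<len>" with fixed 2-digit fields, purely for
    // illustration.
    MemRead {
        addr_hex: &packet[1..3],
        len_hex: &packet[4..6],
        response_buf: rest,
    }
}

fn main() {
    let mut packet_buf = [0u8; 32];
    packet_buf[..6].copy_from_slice(b"ma0,10");

    let pkt = parse_mem_read(&mut packet_buf, 6);
    assert_eq!(pkt.addr_hex, b"a0");
    assert_eq!(pkt.len_hex, b"10");

    // The handler can now write up to `pkt.response_buf.len()` bytes of
    // response data - less than the full packet buffer, hence the waste.
    assert_eq!(pkt.response_buf.len(), 32 - 6);
}
```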
Given that this "workaround" works pretty well, and that losing ~30 bytes of a ~4096-byte PacketBuf isn't particularly noticeable, getting to 100% efficiency isn't a super high priority, but it's still something to think about, and potentially implement at some point.