detach on linux #95
From [email protected] on March 25, 2009 08:21:15

we'll also need to get each thread to run its own os_tls_exit() since we …
From [email protected] on September 27, 2012 15:22:50

we also need each thread to do its own os_switch_lib_tls(dc, true/to …
From [email protected] on August 21, 2013 09:45:10

Re: running os_switch_lib_tls in each suspended target thread: we can just have the suspender set an ostd flag which is read in …

Owner: peter.goodman
From [email protected] on August 30, 2013 10:09:49

** TODO complications with setting the context

Although Windows has complications with stacked callbacks, Windows supports …
At first, we considered a two-step solution where the master resumes each …
We have to store the sigreturn context somewhere: either mmapped memory …
Given that we have that problem, we may as well avoid the regular …

** TODO set mask on final sigreturn to app value
From [email protected] on September 03, 2013 08:12:07

We might be over-designing. So we do not even need synch_all, as the master only needs to notify the slave to perform some action.

Owner: [email protected]
From [email protected] on September 03, 2013 08:32:16

Yes, the extra slave wait may not be necessary. You do need the synchall, as it's the mechanism to get a thread to a safe spot; to avoid it you'd end up duplicating essentially the same logic elsewhere. And you have to use signals to interrupt the target thread, and ensure it's at a safe spot, for critical reasons: …
From [email protected] on September 10, 2013 09:46:53

master can do xl8 while in synchall
slave can copy sigcontext to app stack
should leverage: …

slave must do: …

signal arrives during detach: how to handle? …

Owner: peter.goodman
From [email protected] on September 10, 2013 12:47:20

Continuing the final thought above about handling signals during the detach process: we'll probably want to mask all signals in the slaves (the current mask + the few we usually let through: SIGSEGV and the suspend signal itself, or something), have the master restore the app signal handlers during the synchall (file a separate issue on handling signal handlers not shared by the entire thread group -- won't happen for pthreads), and have the sigreturn restore the mask.
From [email protected] on September 19, 2013 13:54:46

** TODO discussion on start/stop all-thread vs single-thread

Discussion about how to handle changes to the address space while threads …
Stop should not free anything that start is not prepared to re-initialize.
For client: thread_exit should be at dr_app_cleanup. => add go-native and …
What if a thread is native and it exits? Feasible to we hook all calls to …
=> Challenges to figure out later? For our purposes we limit our API …
But, there's a problem if we remove our signal handler at stop: have to …
=> Conclusion: two usage models: single-thread start and stop with separate …

API routine renames: …
From [email protected] on September 20, 2013 12:21:53

I've done a bit of work on the API renaming. It mostly works on Linux, but I have not done extensive Windows testing (the first test failed because I didn't update CMakeLists.txt to export the renamed dr_app_start routines). I've renamed according to what's described, with the addition of a dr_app_start_all for temporary convenience. dr_app_start_thread and dr_app_start_all differ only in how they invoke dynamo_start, passing true or false for whether to take over all threads. Not much testing beyond this, though. Going back to working on detach.
From [email protected] on September 23, 2013 12:57:39

** TODO dynamo_shared_exit discussion

Should still follow the Windows detach general flow of the detach code doing …

So: …
From [email protected] on October 08, 2013 06:36:26

Revisiting the conclusions of comment …
Supporting re-takeover when stopping is tied to full cleanup is problematic …
Plus, one of our use cases for attach/detach is to do instrumentation over …
In fact, we already have code for re-examining the process: …
For Linux we would want to add hooks for thread init+exit, mmap-related, …
From [email protected] on October 08, 2013 10:26:23

More comments on start/stop vs cleanup: it seems that for repeated …
From [email protected] on October 08, 2013 10:27:18

Linux syscall hooks: issue #721
Re: the ongoing back-and-forth argument over having dr_app_stop do full …
It seems like we can design start/stop and attach/detach to permanently …

However, it is much easier to remove hooks while all threads are suspended. For going native, because each thread has to restore its own segment, it's not …
For taking over known-but-native threads we actually need a synchall-style loop, as we see when we run the new api.startstop test with -loglevel 3: …

It looks like SIGUSR2 came in while DR code was handling the sysenter hook …
Seems like we need a synchall-style loop there. How about we just remove the sysenter hook while all-native? And we have a …
We should also remove the signal handlers.
*** TODO translate threads at syscalls: assume the app will re-try the interrupted syscall? Doesn't seem good enough. Xref #1145 and adjust_syscall_for_restart(). …
This is a big feature with a lot of pieces and corner cases. Recent CLs have the basics working for an app-triggered detach:

a84214e i#95 Linux detach: post-sysenter is a safe spot …
dr_app_stop_and_cleanup() does a full detach today. A separate dr_app_stop() followed later by dr_app_cleanup() is different: today a separate cleanup essentially assumes it's terminating the app. I have a forthcoming CL to at least remove the thread termination in the final synch. But it seems better to make it look like dr_app_stop_and_cleanup() and do a full detach: worst case we can insert a dr_app_start() first. Or we could just drop support for dr_app_cleanup().
Nudge-based detach is only supported on Windows due to the assumption of a separate thread stack. If we implement a scheme to create a temporary separate stack for a client-triggered detach (#2644), we could use the same mechanism for a nudge-based detach on UNIX, which I believe is the only reason this issue is still open.
Pasting notes from re-examining the Windows detach status today:

The existing Windows detach nudge feature is internal to DR and does not go through a client. It looks like it is not exposed in drconfig, and the old tests for it are not all enabled. The old way to trigger it is the no-longer-supported "drcontrol" front-end, but it's just a different nudge type, so it should be feasible to tweak drconfig to enable it and try it out.

You can see the code paths in libutil/detach.c triggering the nudge to be sent, and in core/nudge.c handling it: "TEST(NUDGE_GENERIC(detach), nudge_action_mask)" calls detach_helper(), which calls the shared-with-all-detach-types detach_on_permanent_stack().
We'd want to merge nudgeunix into drdeploy.c so we can invoke nudges in general, and detach nudges in particular, from the drconfig front-end: that's part of #840.
Re-exposing the Windows detach from drconfig is part of #725.
The relevant code for detach on Linux was submitted in #6513.
Enable the feature of detach on Linux:

- I set up a temporary separate stack for the nudged thread, and resume that thread's normal execution after the detach.
- A new variable named `nudged_sigcxt` saves the `sigcxt` of the nudged thread.
- An extra `dcontext == GLOBAL_DCONTEXT` check covers the new code path during thread exit that was not exercised before.
- I turn off the `sigreturn_setcontext` option for smooth resumption of the nudged thread on the x64 ISA.
- Finally, the front-end, automated test cases, and documentation are modified accordingly.

Issue: [#95](#95)
I think we can close this now. Any problems or enhancements can be filed as separate issues.
From [email protected] on March 24, 2009 00:47:16

This was PR 212088.

We support detach on Windows but not on Linux. We don't support detach with clients: that was PR 199120 (not filed here yet). Xref attach: issue #38.

For detach we'll need a scheme to free the sigstack for suspended threads, as on Linux suspended threads sit on their sigstack.

Original issue: http://code.google.com/p/dynamorio/issues/detail?id=95