exposure pipeline stops processing association on first fully saturated input #1523
Thanks Brett. @ddavis-stsci probably knows the history here best; I imagine there is a requirement somewhere saying that we have to handle fully saturated inputs, and for those cases most of the pipeline will not do anything sane, so it makes sense to exit early. IMO this should not happen very often; I feel like if a device is fully saturated we have done something very bad and are running the risk of damaging something. Absent requirements, we should just take the simplest route to exit early for these exposures. I agree with you that it makes more sense to stop the individual fully saturated detector than to stop all of the detectors; I think we expect SDP to process one detector at a time anyway. I suspect that any tests we have only operate on a single detector, so this behavior could be changed without affecting existing tests. But because this is such a rare case we should focus on whatever the cleanest / easiest / lowest-maintenance solution is, rather than trying to save many cycles here.
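To make the per-detector early exit concrete, here is a minimal sketch of how a fully saturated exposure could be detected; the `groupdq` layout and the SATURATED bit value are assumptions based on typical ramp datamodels, not the pipeline's actual implementation:

```python
import numpy as np

# Conventional bit value for the SATURATED group-DQ flag (assumed here;
# real code would import it from the project's own dqflags definitions).
SATURATED = 2

def is_fully_saturated(ramp_model):
    """Return True when every pixel is flagged saturated from the start.

    Assumes a ramp-style datamodel with a `groupdq` array shaped
    (n_groups, n_rows, n_cols); if the very first group is saturated
    everywhere, the exposure contains no usable signal.
    """
    return bool(np.all(ramp_model.groupdq[0] & SATURATED))
```

With a check like this, the pipeline could skip the remaining steps for just that detector's exposure and move on to the next association member, rather than stopping the whole run.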
Thanks. Skipping steps for a saturated input makes sense; the skipping of the remaining inputs was more surprising. For processing one detector at a time, what will the input association look like (or will the elp only be given a single uncal file)? Skipping inputs might not always be a problem (if the expectation is that a single saturated uncal will always be followed by equally saturated uncals).
For the ELP I think the early steps are just run on the single uncal files in the current SDP implementation. I don't think we should code around that assumption---I'm mostly saying that that would be a sane implementation, and agreeing with you that it's awkward if that implementation gives different results than passing the whole association (except for tweakreg, where the behavior is conceptually different).

I am having trouble deciding when to expect fully saturated images, since, as I mentioned, if this happens I worry it's instrument-damaging. In my other experience this happens, e.g., when you accidentally open the shutter during the day, or start doing twilight flats too early, or don't notice that there's a cloud illuminated by the setting sun during twilight making the twilight sky much brighter than usual. In these kinds of instances it's likely that the whole array would be hit. In real science observations we will end up putting very bright stars on the actual detectors and not their neighbors. It doesn't seem likely to me that that ever results in fully saturating the detectors. For example, we pointed DECam at alpha Cen and blasted a big portion of one detector, but much of that detector was fine. From a raw-number-of-photons perspective Roman won't ever be hit more badly than that.

But ultimately I don't think we should assume very much about what fully saturated data will look like, since it seems to me that if we ever see such data it will be a weird event. Maybe there's some kind of calibration data where they want to figure out what fully saturated is and they intentionally fully saturate the detector?? That wouldn't be science data we would want to reduce, though.
Thanks. Perhaps INS has some input about if/when a fully saturated input might be expected (more on that below). I traced some of the changes back to #541, which references ticket https://jira.stsci.edu/browse/RCAL-353, which mentions:
Looking at the truth file for the "all saturated" test https://bytesalad.stsci.edu/ui/repos/tree/General/roman-pipeline/dev/truth/WFI/image/r0000101001001001001_0001_wfi01_ALL_SATURATED_cal.asdf the result is a RampModel stored in a "cal"-suffix file, and the arrays aren't zeroed:

```
> m = rdm.open("test_bigdata/roman-pipeline/dev/truth/WFI/image/r0000101001001001001_0001_wfi01_ALL_SATURATED_cal.asdf")
> type(m)
roman_datamodels.datamodels._datamodels.RampModel
> m.data[0, 0, 0]
65535.0
```

The association processing was added to elp at a later date in #802, which dropped the usage of
Good archaeology! I can ping around about when we might get fully saturated images, but it's ultimately going to be a weird corner case. I agree that the current saturated behavior seems like garbage to me, and we'd rather restore the old "make a zeroed L2 image" behavior than the current "save whatever we happen to have in memory with the wrong suffix" behavior.
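For reference, a rough sketch of what the "zeroed L2 image" fallback could look like; the array names mirror a typical L2 image-model schema and are assumptions, not the actual datamodel fields:

```python
import numpy as np

def zeroed_l2_arrays(shape, dtype=np.float32):
    """Build zeroed science arrays for a fully saturated exposure.

    `shape` is the 2-D (rows, cols) detector shape. The keys here are
    illustrative; the real pipeline would populate whatever arrays the
    L2 ImageModel schema actually requires.
    """
    return {
        "data": np.zeros(shape, dtype=dtype),
        "err": np.zeros(shape, dtype=dtype),
        "var_poisson": np.zeros(shape, dtype=dtype),
        "var_rnoise": np.zeros(shape, dtype=dtype),
    }
```

Writing zeroed arrays out with the normal "cal" suffix would at least make the all-saturated product self-consistent, instead of carrying leftover ramp data under the wrong suffix.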
Would it be "overstuffing" if I put that change in #1525?
I don't mind overstuffing. It will be good to get @ddavis-stsci's eyes on this though, since he has some of the history.
If I generate 2 associations using 1 fully saturated and 1 non-saturated input, in 2 different orders, and run each through the `ExposurePipeline`, I get 2 different results. I failed to find a description of this in the documentation.
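For concreteness, a sketch of the two associations; the file names, member layout, and association schema here are placeholders, not the exact files used:

```python
# Hypothetical member file names; only the ordering differs between
# the two associations.
saturated = "r0000101001001001001_0001_wfi01_ALL_SATURATED_uncal.asdf"
nominal = "r0000101001001001001_0002_wfi01_uncal.asdf"

def make_asn(members):
    """Build a single-product association dict from an ordered member list."""
    return {
        "asn_type": "image",
        "products": [{
            "name": "test_product",
            "members": [{"expname": m, "exptype": "science"} for m in members],
        }],
    }

asn_saturated_first = make_asn([saturated, nominal])
asn_saturated_last = make_asn([nominal, saturated])
```

With the behavior described in the title, the saturated-first association stops at its first member, while the saturated-last association processes the non-saturated member before stopping; hence the two different results.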
Is it expected that the elp will stop processing inputs when it encounters the first saturated input?