Remove `register_interrupt_flags` and `PeripheralInterruptFlags` from the interpreter and friends #180
Labels: ➕ improvement (chores and fixes: the small things), P-low (low priority), T-peripheral traits (topic: peripheral traits)
what
Background
One of our more debated early design decisions was the question of how to support interrupts from peripherals in the interpreter. We went through a few different designs before arriving at `AtomicBool`s to represent interrupt flags.

The initial motivation for using a callback based model for interrupts was performance, but we quickly ran into issues getting such an approach to work in a `no_std` friendly way (for example, having your callback push into an event queue that the interpreter drains works fine on hosted platforms but is tricky on embedded for the usual reasons: synchronization, allocation, etc.) and decided that we'd worry about performance if it proved to be an issue.

From there we pivoted to a polling based approach, but did so in a somewhat roundabout manner:

- we added `interrupt_occurred`/`reset_interrupt` methods to the peripheral traits (good: this doesn't tie peripherals to a particular representation of interrupt state)
- we added a `register_interrupt_flags` function to each peripheral trait (well intentioned, but weird)
- because the flags that `interrupt_occurred` checks need to be `'static` in cases where peripheral implementations pass them to static interrupt service routines, we made the Interpreter generic over some lifetime (`'int`) representing the flags and allowed them to be passed into the Interpreter, so the entire interpreter wouldn't need to be `'static` and thus would (probably) remain const-constructible
- as a consequence, many types grew an `'int` lifetime parameter

A Better Solution: Removing `register_interrupt_flags`
With the benefit of hindsight, some of the discrepancies in the design we have today are apparent. Whether or not we need to invest in a more performant approach to handling interrupts than polling is still an open question, but I think we have more than enough evidence that it's possible to simplify the design we currently have.
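For concreteness, here is a rough sketch of the shape described above. The method and type names come from this issue, but the exact signatures (pin counts, orderings, the `Interpreter` fields) are assumptions made purely for illustration:

```rust
use core::sync::atomic::{AtomicBool, Ordering};

// Stand-in for the real `PeripheralInterruptFlags`; only GPIO is modeled here.
pub struct PeripheralInterruptFlags {
    pub gpio: [AtomicBool; 2],
}

// Rough shape of an interrupt-supporting peripheral trait under the current design.
pub trait Gpio<'int> {
    fn interrupt_occurred(&self, pin: usize) -> bool;
    fn reset_interrupt_flag(&mut self, pin: usize);
    // The well-intentioned-but-weird part: the interpreter hands every
    // implementation a set of `AtomicBool`s, whether it needs them or not.
    fn register_interrupt_flags(&mut self, flags: &'int [AtomicBool; 2]);
}

pub struct SomeGpio<'int> {
    // `None` until `register_interrupt_flags` is called, hence the
    // `Option<_>` unwrapping this issue complains about.
    pub flags: Option<&'int [AtomicBool; 2]>,
}

impl<'int> Gpio<'int> for SomeGpio<'int> {
    fn interrupt_occurred(&self, pin: usize) -> bool {
        self.flags.unwrap()[pin].load(Ordering::SeqCst)
    }
    fn reset_interrupt_flag(&mut self, pin: usize) {
        self.flags.unwrap()[pin].store(false, Ordering::SeqCst);
    }
    fn register_interrupt_flags(&mut self, flags: &'int [AtomicBool; 2]) {
        self.flags = Some(flags);
    }
}

// ...and the `'int` lifetime then spreads to the interpreter and to
// everything that holds one.
pub struct Interpreter<'int, G: Gpio<'int>> {
    pub flags: &'int PeripheralInterruptFlags,
    pub gpio: G,
}
```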
Additional peripheral trait implementations, like the shims and the embedded-hal based GPIO implementation, have shown that mandating the use of `AtomicBool`s to represent and synchronize interrupt state only serves to burden implementations that do not need synchronization with an ISR, while not aiding implementations that do.

All of this leads me to believe that the interpreter should not have an opinion about how interrupt state is stored or synchronized, and that this should be entirely under the purview of individual peripheral trait implementations. This is effectively the paradigm we already have with the `interrupt_occurred`/`reset_interrupt_flag` methods on interrupt-supporting traits.

It's also worth noting that the one use case all of the `PeripheralInterruptFlags` machinery was designed for (device implementations of the peripheral traits whose only way of knowing an interrupt has fired is an ISR invocation) is also better served by separating the `PeripheralInterruptFlags` from the interpreter: under the current system, device implementors would have to take care that the interpreter was passed the same set of flags that hardcoded ISRs were written to set.

The many upsides include:
- simpler peripheral trait implementations (no `register_interrupt_flags` to implement, no unwrapping an `Option<_>` of flags, etc.)
- removing the `'int` lifetime from many, many places throughout the codebase, making many APIs easier to use

steps
- remove `register_interrupt_flags` from the interrupt-supporting peripheral traits
- remove `PeripheralInterruptFlags` from the interpreter and its builder
- remove the `'int` lifetime from the codebase
- move `PeripheralInterruptFlags` to `device-support`
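Once those steps land, an interrupt-supporting trait and the interpreter could look roughly like the following. This is a sketch under assumed names and signatures (`Gpio`, `Interpreter`, the pin count), not the actual post-change API:

```rust
use core::cell::Cell;

// Polling-only trait: no `register_interrupt_flags`, no mandated storage.
pub trait Gpio {
    fn interrupt_occurred(&self, pin: usize) -> bool;
    fn reset_interrupt_flag(&mut self, pin: usize);
}

// A shim-style implementation is free to use plain, unsynchronized state,
// since it never shares its flags with an ISR.
pub struct ShimGpio {
    pub flags: [Cell<bool>; 2],
}

impl Gpio for ShimGpio {
    fn interrupt_occurred(&self, pin: usize) -> bool {
        self.flags[pin].get()
    }
    fn reset_interrupt_flag(&mut self, pin: usize) {
        self.flags[pin].set(false);
    }
}

// No `'int` lifetime anywhere: the interpreter just polls.
pub struct Interpreter<G: Gpio> {
    pub gpio: G,
}

impl<G: Gpio> Interpreter<G> {
    // Called each instruction step (for example) to find a pending interrupt.
    pub fn pending_interrupt(&self) -> Option<usize> {
        (0..2).find(|&p| self.gpio.interrupt_occurred(p))
    }
}
```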
Relevant definitions (at commit 5e0f0eb):
- `core/traits/src/peripherals/gpio.rs`, line 266
- `core/traits/src/peripherals/timers.rs`, line 257
- `core/traits/src/peripherals/input.rs`, line 23
- `core/traits/src/peripherals/output.rs`, line 18
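On the `device-support` side, the ISR-driven use case the flags were designed for might end up looking something like this hypothetical sketch (the flag name and ISR wiring are assumptions, not the crate's actual API):

```rust
use core::sync::atomic::{AtomicBool, Ordering};

// Flags that live with the device implementation, not in the interpreter:
// they can be `'static` so hardcoded ISRs can reach them, without infecting
// the interpreter with a lifetime.
pub static GPIO_INTERRUPT: AtomicBool = AtomicBool::new(false);

// What a hardcoded interrupt service routine would do on real hardware
// (here it is just a function we can call to simulate the interrupt firing).
pub fn gpio_isr() {
    GPIO_INTERRUPT.store(true, Ordering::SeqCst);
}

// A device implementation whose only way of knowing an interrupt fired is
// the ISR above; it opts into atomics because *it* needs the synchronization.
pub struct DeviceGpio {
    pub flag: &'static AtomicBool,
}

impl DeviceGpio {
    pub fn interrupt_occurred(&self) -> bool {
        self.flag.load(Ordering::SeqCst)
    }
    pub fn reset_interrupt_flag(&mut self) {
        self.flag.store(false, Ordering::SeqCst);
    }
}
```

Because the implementation and the ISR now share the same statically known flag by construction, the failure mode described above (the interpreter being passed a different set of flags than the ISRs were written to set) cannot arise.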
open questions