created | 2023-04-18T15:55:30Z
begin | 2023-04-16T13:35:58Z
end | 2023-04-16T21:19:26Z
path | src/sys
commits | 1
date | 2023-04-16T21:19:26Z
author | cheloha
files | src/sys/kern/kern_clockintr.c
      | src/sys/sys/clockintr.h
message |
clockintr: add shadow copy of running clock interrupt to clockintr_queue

cq_shadow is a private copy of the running clock interrupt passed to cl_func() during the dispatch loop. It resembles the real clockintr object, though the two are distinct (hence "shadow"). A private copy is useful for two reasons:

1. Scheduling operations performed on cq_shadow (advance, cancel, schedule) are recorded as requests with the CLST_SHADOW_PENDING flag and are normally performed on the real clockintr when cl_func() returns.

   However, if an outside thread performs a scheduling operation on the real clockintr while cl_func() is running, the CLST_IGNORE_SHADOW flag is set and any scheduling operations requested by the running clock interrupt are ignored.

   The upshot of this arrangement is that outside scheduling operations have priority over those requested by the running clock interrupt. Because there is no race, periodic clock interrupts can now be safely stopped without employing the serialization mechanisms needed to safely stop periodic timeouts or tasks.

2. &cq->cq_shadow is a unique address, so most clockintr_* API calls made while cl_func() is running don't need to enter/leave cq_mtx: the API can recognize when it is being called in the midst of clockintr_dispatch().

Tested by mlarkin@. With input from dlg@. In particular, dlg@ expressed some design concerns but then stopped responding. I have changes planned to address some of the concerns.

I think if we hit a wall with the current clockintr design we could change the allocation scheme without too much suffering. I don't anticipate there being more than ~20 distinct clock interrupts.
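To make the interplay between the shadow and the real clockintr concrete, here is a minimal userland sketch of the idea. It is not the real kern_clockintr.c: the struct layouts, the helper names shadow_advance() and shadow_dispatch(), the example handler, and the omission of cq_mtx locking are all assumptions made for illustration; only cq_shadow, cl_func(), cq_mtx, CLST_SHADOW_PENDING, and CLST_IGNORE_SHADOW are names taken from the commit message.

```c
/*
 * Minimal userland sketch of the cq_shadow idea; NOT the real
 * kern_clockintr.c.  The struct layouts, the shadow_advance() and
 * shadow_dispatch() helpers, the example handler, and the omission of
 * cq_mtx locking are assumptions made for illustration.
 */
#include <stdint.h>
#include <stdio.h>

#define CLST_SHADOW_PENDING	0x01	/* shadow requested a reschedule */
#define CLST_IGNORE_SHADOW	0x02	/* outside thread touched the real one */

struct clockintr_queue;

struct clockintr {
	uint64_t		  cl_expiration;	/* next deadline (ns) */
	uint32_t		  cl_flags;
	struct clockintr_queue	 *cl_queue;
	void			(*cl_func)(struct clockintr *, void *);
};

struct clockintr_queue {
	struct clockintr	 cq_shadow;	/* private copy handed to cl_func() */
	struct clockintr	*cq_running;	/* real clockintr being dispatched */
};

/*
 * Reschedule a clock interrupt.  On &cq->cq_shadow (i.e. from inside
 * cl_func()) the request is merely recorded: the shadow address is
 * unique to the dispatch loop, so no mutex is needed.  On the real
 * clockintr (an outside thread) the change is applied directly and the
 * shadow's eventual request is marked to be ignored.
 */
static void
shadow_advance(struct clockintr *cl, uint64_t ns)
{
	struct clockintr_queue *cq = cl->cl_queue;

	if (cl == &cq->cq_shadow) {
		cl->cl_expiration += ns;
		cl->cl_flags |= CLST_SHADOW_PENDING;
		return;
	}

	cl->cl_expiration += ns;	/* the real code does this under cq_mtx */
	if (cl == cq->cq_running)
		cl->cl_flags |= CLST_IGNORE_SHADOW;
}

/* Example handler: a periodic interrupt that re-arms itself. */
static void
example_func(struct clockintr *cl, void *frame)
{
	(void)frame;
	shadow_advance(cl, 1000000);	/* operates on the shadow copy */
}

/*
 * Dispatch one expired clock interrupt: copy it into the shadow, run
 * cl_func() on the shadow, then apply the shadow's request to the real
 * clockintr unless an outside thread intervened in the meantime.
 */
static void
shadow_dispatch(struct clockintr_queue *cq, struct clockintr *cl, void *frame)
{
	cq->cq_running = cl;
	cq->cq_shadow = *cl;		/* cl->cl_queue must point at cq */
	cq->cq_shadow.cl_flags = 0;

	cl->cl_func(&cq->cq_shadow, frame);

	/* Outside scheduling operations win over the shadow's request. */
	if (cl->cl_flags & CLST_IGNORE_SHADOW)
		cl->cl_flags &= ~CLST_IGNORE_SHADOW;
	else if (cq->cq_shadow.cl_flags & CLST_SHADOW_PENDING)
		cl->cl_expiration = cq->cq_shadow.cl_expiration;

	cq->cq_running = NULL;
}

int
main(void)
{
	struct clockintr_queue cq = { 0 };
	struct clockintr cl = {
		.cl_queue = &cq,
		.cl_func = example_func,
	};

	shadow_dispatch(&cq, &cl, NULL);
	printf("real clockintr rescheduled to %llu ns\n",
	    (unsigned long long)cl.cl_expiration);
	return 0;
}
```

In the kernel the dispatch loop and outside callers still synchronize on cq_mtx; the point of the shadow is that a handler's own clockintr_* calls can skip that mutex because &cq->cq_shadow identifies them unambiguously as coming from within clockintr_dispatch().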