Diffstat (limited to 'book/en/src/internals')
-rw-r--r--  book/en/src/internals/access.md                   |   2
-rw-r--r--  book/en/src/internals/ceilings.md                 |  58
-rw-r--r--  book/en/src/internals/critical-sections.md        | 156
-rw-r--r--  book/en/src/internals/interrupt-configuration.md  |   5
-rw-r--r--  book/en/src/internals/late-resources.md           |  23
-rw-r--r--  book/en/src/internals/non-reentrancy.md           |  22
-rw-r--r--  book/en/src/internals/tasks.md                    |   4
-rw-r--r--  book/en/src/internals/timer-queue.md              |  27
8 files changed, 152 insertions, 145 deletions
diff --git a/book/en/src/internals/access.md b/book/en/src/internals/access.md
index 513cef1..a4c9ca0 100644
--- a/book/en/src/internals/access.md
+++ b/book/en/src/internals/access.md
@@ -21,7 +21,7 @@ This makes it impossible for the user code to refer to these static variables.
Access to the resources is then given to each task using a `Resources` struct
whose fields correspond to the resources the task has access to. There's one
such struct per task and the `Resources` struct is initialized with either a
-mutable reference (`&mut`) to the static variables or with a resource proxy (see
+unique reference (`&mut-`) to the static variables or with a resource proxy (see
section on [critical sections](critical-sections.html)).
The code below is an example of the kind of source level transformation that
diff --git a/book/en/src/internals/ceilings.md b/book/en/src/internals/ceilings.md
index c13df53..6b0530c 100644
--- a/book/en/src/internals/ceilings.md
+++ b/book/en/src/internals/ceilings.md
@@ -16,61 +16,65 @@ that has a logical priority of `0` whereas `init` is completely omitted from the
analysis -- the reason for that is that `init` never uses (or needs) critical
sections to access static variables.
-In the previous section we showed that a shared resource may appear as a mutable
-reference or behind a proxy depending on the task that has access to it. Which
-version is presented to the task depends on the task priority and the resource
-ceiling. If the task priority is the same as the resource ceiling then the task
-gets a mutable reference to the resource memory, otherwise the task gets a
-proxy -- this also applies to `idle`. `init` is special: it always gets a
-mutable reference to resources.
+In the previous section we showed that a shared resource may appear as a unique
+reference (`&mut-`) or behind a proxy depending on the task that has access to
+it. Which version is presented to the task depends on the task priority and the
+resource ceiling. If the task priority is the same as the resource ceiling then
+the task gets a unique reference (`&mut-`) to the resource memory, otherwise the
+task gets a proxy -- this also applies to `idle`. `init` is special: it always
+gets a unique reference (`&mut-`) to resources.
An example to illustrate the ceiling analysis:
``` rust
#[rtfm::app(device = ..)]
const APP: () = {
- // accessed by `foo` (prio = 1) and `bar` (prio = 2)
- // CEILING = 2
- static mut X: u64 = 0;
-
- // accessed by `idle` (prio = 0)
- // CEILING = 0
- static mut Y: u64 = 0;
+ struct Resources {
+ // accessed by `foo` (prio = 1) and `bar` (prio = 2)
+ // -> CEILING = 2
+ #[init(0)]
+ x: u64,
+
+ // accessed by `idle` (prio = 0)
+ // -> CEILING = 0
+ #[init(0)]
+ y: u64,
+ }
- #[init(resources = [X])]
+ #[init(resources = [x])]
fn init(c: init::Context) {
- // mutable reference because this is `init`
- let x: &mut u64 = c.resources.X;
+ // unique reference because this is `init`
+ let x: &mut u64 = c.resources.x;
- // mutable reference because this is `init`
- let y: &mut u64 = c.resources.Y;
+ // unique reference because this is `init`
+ let y: &mut u64 = c.resources.y;
// ..
}
// PRIORITY = 0
- #[idle(resources = [Y])]
+ #[idle(resources = [y])]
fn idle(c: idle::Context) -> ! {
- // mutable reference because priority (0) == resource ceiling (0)
- let y: &'static mut u64 = c.resources.Y;
+ // unique reference because priority (0) == resource ceiling (0)
+ let y: &'static mut u64 = c.resources.y;
loop {
// ..
}
}
- #[interrupt(binds = UART0, priority = 1, resources = [X])]
+ #[interrupt(binds = UART0, priority = 1, resources = [x])]
fn foo(c: foo::Context) {
// resource proxy because task priority (1) < resource ceiling (2)
- let x: resources::X = c.resources.X;
+ let x: resources::x = c.resources.x;
// ..
}
- #[interrupt(binds = UART1, priority = 2, resources = [X])]
+ #[interrupt(binds = UART1, priority = 2, resources = [x])]
 fn bar(c: bar::Context) {
- // mutable reference because task priority (2) == resource ceiling (2)
- let x: &mut u64 = c.resources.X;
+ // unique reference because task priority (2) == resource ceiling (2)
+ let x: &mut u64 = c.resources.x;
// ..
}
diff --git a/book/en/src/internals/critical-sections.md b/book/en/src/internals/critical-sections.md
index 54f02ac..8bad6cb 100644
--- a/book/en/src/internals/critical-sections.md
+++ b/book/en/src/internals/critical-sections.md
@@ -1,12 +1,12 @@
# Critical sections
When a resource (static variable) is shared between two, or more, tasks that run
-at different priorities some form of mutual exclusion is required to access the
+at different priorities, some form of mutual exclusion is required to mutate the
memory in a data race free manner. In RTFM we use priority-based critical
-sections to guarantee mutual exclusion (see the [Immediate Priority Ceiling
-Protocol][ipcp]).
+sections to guarantee mutual exclusion (see the [Immediate Ceiling Priority
+Protocol][icpp]).
-[ipcp]: https://en.wikipedia.org/wiki/Priority_ceiling_protocol
+[icpp]: https://en.wikipedia.org/wiki/Priority_ceiling_protocol
The critical section consists of temporarily raising the *dynamic* priority of
the task. While a task is within this critical section all the other tasks that
@@ -25,7 +25,7 @@ a data race the *lower priority* task must use a critical section when it needs
to modify the shared memory. On the other hand, the higher priority task can
directly modify the shared memory because it can't be preempted by the lower
priority task. To enforce the use of a critical section on the lower priority
-task we give it a *resource proxy*, whereas we give a mutable reference
+task we give it a *resource proxy*, whereas we give a unique reference
(`&mut-`) to the higher priority task.
The example below shows the different types handed out to each task:
@@ -33,12 +33,15 @@ The example below shows the different types handed out to each task:
``` rust
#[rtfm::app(device = ..)]
const APP: () = {
- static mut X: u64 = 0;
+ struct Resources {
+ #[init(0)]
+ x: u64,
+ }
- #[interrupt(binds = UART0, priority = 1, resources = [X])]
+ #[interrupt(binds = UART0, priority = 1, resources = [x])]
fn foo(c: foo::Context) {
// resource proxy
- let mut x: resources::X = c.resources.X;
+ let mut x: resources::x = c.resources.x;
x.lock(|x: &mut u64| {
// critical section
@@ -46,9 +49,9 @@ const APP: () = {
});
}
- #[interrupt(binds = UART1, priority = 2, resources = [X])]
+ #[interrupt(binds = UART1, priority = 2, resources = [x])]
 fn bar(c: bar::Context) {
- let mut x: &mut u64 = c.resources.X;
+ let mut x: &mut u64 = c.resources.x;
*x += 1;
}
@@ -69,14 +72,14 @@ fn bar(c: bar::Context) {
}
pub mod resources {
- pub struct X {
+ pub struct x {
// ..
}
}
pub mod foo {
pub struct Resources {
- pub X: resources::X,
+ pub x: resources::x,
}
pub struct Context {
@@ -87,7 +90,7 @@ pub mod foo {
pub mod bar {
pub struct Resources<'a> {
- pub X: rtfm::Exclusive<'a, u64>, // newtype over `&'a mut u64`
+ pub x: &'a mut u64,
}
pub struct Context {
@@ -97,9 +100,9 @@ pub mod bar {
}
const APP: () = {
- static mut X: u64 = 0;
+ static mut x: u64 = 0;
- impl rtfm::Mutex for resources::X {
+ impl rtfm::Mutex for resources::x {
type T = u64;
fn lock<R>(&mut self, f: impl FnOnce(&mut u64) -> R) -> R {
@@ -111,7 +114,7 @@ const APP: () = {
unsafe fn UART0() {
foo(foo::Context {
resources: foo::Resources {
- X: resources::X::new(/* .. */),
+ x: resources::x::new(/* .. */),
},
// ..
})
@@ -121,7 +124,7 @@ const APP: () = {
unsafe fn UART1() {
bar(bar::Context {
resources: bar::Resources {
- X: rtfm::Exclusive(&mut X),
+ x: &mut x,
},
// ..
})
@@ -158,7 +161,7 @@ In this particular example we could implement the critical section as follows:
> **NOTE:** this is a simplified implementation
``` rust
-impl rtfm::Mutex for resources::X {
+impl rtfm::Mutex for resources::x {
type T = u64;
fn lock<R, F>(&mut self, f: F) -> R
@@ -170,7 +173,7 @@ impl rtfm::Mutex for resources::X {
asm!("msr BASEPRI, 192" : : : "memory" : "volatile");
// run user code within the critical section
- let r = f(&mut implementation_defined_name_for_X);
+ let r = f(&mut x);
// end of critical section: restore dynamic priority to its static value (`1`)
asm!("msr BASEPRI, 0" : : : "memory" : "volatile");
@@ -183,23 +186,23 @@ impl rtfm::Mutex for resources::X {
Here it's important to use the `"memory"` clobber in the `asm!` block. It
prevents the compiler from reordering memory operations across it. This is
-important because accessing the variable `X` outside the critical section would
+important because accessing the variable `x` outside the critical section would
result in a data race.
It's important to note that the signature of the `lock` method prevents nesting
calls to it. This is required for memory safety, as nested calls would produce
-multiple mutable references (`&mut-`) to `X` breaking Rust aliasing rules. See
+multiple unique references (`&mut-`) to `x`, breaking Rust's aliasing rules. See
below:
``` rust
-#[interrupt(binds = UART0, priority = 1, resources = [X])]
+#[interrupt(binds = UART0, priority = 1, resources = [x])]
fn foo(c: foo::Context) {
// resource proxy
- let mut res: resources::X = c.resources.X;
+ let mut res: resources::x = c.resources.x;
res.lock(|x: &mut u64| {
res.lock(|alias: &mut u64| {
- //~^ error: `res` has already been mutably borrowed
+ //~^ error: `res` has already been uniquely borrowed (`&mut-`)
// ..
});
});
@@ -223,18 +226,22 @@ Consider this program:
``` rust
#[rtfm::app(device = ..)]
const APP: () = {
- static mut X: u64 = 0;
- static mut Y: u64 = 0;
+ struct Resources {
+ #[init(0)]
+ x: u64,
+ #[init(0)]
+ y: u64,
+ }
#[init]
fn init() {
rtfm::pend(Interrupt::UART0);
}
- #[interrupt(binds = UART0, priority = 1, resources = [X, Y])]
+ #[interrupt(binds = UART0, priority = 1, resources = [x, y])]
fn foo(c: foo::Context) {
- let mut x = c.resources.X;
- let mut y = c.resources.Y;
+ let mut x = c.resources.x;
+ let mut y = c.resources.y;
y.lock(|y| {
*y += 1;
@@ -259,12 +266,12 @@ const APP: () = {
})
}
- #[interrupt(binds = UART1, priority = 2, resources = [X])]
+ #[interrupt(binds = UART1, priority = 2, resources = [x])]
 fn bar(c: bar::Context) {
// ..
}
- #[interrupt(binds = UART2, priority = 3, resources = [Y])]
+ #[interrupt(binds = UART2, priority = 3, resources = [y])]
 fn baz(c: baz::Context) {
// ..
}
@@ -279,13 +286,13 @@ The code generated by the framework looks like this:
// omitted: user code
pub mod resources {
- pub struct X<'a> {
+ pub struct x<'a> {
priority: &'a Cell<u8>,
}
- impl<'a> X<'a> {
+ impl<'a> x<'a> {
pub unsafe fn new(priority: &'a Cell<u8>) -> Self {
- X { priority }
+ x { priority }
}
pub unsafe fn priority(&self) -> &Cell<u8> {
@@ -293,7 +300,7 @@ pub mod resources {
}
}
- // repeat for `Y`
+ // repeat for `y`
}
pub mod foo {
@@ -303,34 +310,35 @@ pub mod foo {
}
pub struct Resources<'a> {
- pub X: resources::X<'a>,
- pub Y: resources::Y<'a>,
+ pub x: resources::x<'a>,
+ pub y: resources::y<'a>,
}
}
const APP: () = {
+ use cortex_m::register::basepri;
+
#[no_mangle]
- unsafe fn UART0() {
+ unsafe fn UART1() {
// the static priority of this interrupt (as specified by the user)
- const PRIORITY: u8 = 1;
+ const PRIORITY: u8 = 2;
// take a snapshot of the BASEPRI
- let initial: u8;
- asm!("mrs $0, BASEPRI" : "=r"(initial) : : : "volatile");
+ let initial = basepri::read();
let priority = Cell::new(PRIORITY);
- foo(foo::Context {
- resources: foo::Resources::new(&priority),
+ bar(bar::Context {
+ resources: bar::Resources::new(&priority),
// ..
});
// roll back the BASEPRI to the snapshot value we took before
- asm!("msr BASEPRI, $0" : : "r"(initial) : : "volatile");
+ basepri::write(initial); // same as the `asm!` block we saw before
}
- // similarly for `UART1`
+ // similarly for `UART0` / `foo` and `UART2` / `baz`
- impl<'a> rtfm::Mutex for resources::X<'a> {
+ impl<'a> rtfm::Mutex for resources::x<'a> {
type T = u64;
fn lock<R>(&mut self, f: impl FnOnce(&mut u64) -> R) -> R {
@@ -342,26 +350,24 @@ const APP: () = {
if current < CEILING {
// raise dynamic priority
self.priority().set(CEILING);
- let hw = logical2hw(CEILING);
- asm!("msr BASEPRI, $0" : : "r"(hw) : "memory" : "volatile");
+ basepri::write(logical2hw(CEILING));
- let r = f(&mut X);
+ let r = f(&mut x);
// restore dynamic priority
- let hw = logical2hw(current);
- asm!("msr BASEPRI, $0" : : "r"(hw) : "memory" : "volatile");
+ basepri::write(logical2hw(current));
self.priority().set(current);
r
} else {
// dynamic priority is high enough
- f(&mut X)
+ f(&mut x)
}
}
}
}
- // repeat for `Y`
+ // repeat for resource `y`
};
```
@@ -373,38 +379,38 @@ fn foo(c: foo::Context) {
// NOTE: BASEPRI contains the value `0` (its reset value) at this point
// raise dynamic priority to `3`
- unsafe { asm!("msr BASEPRI, 160" : : : "memory" : "volatile") }
+ unsafe { basepri::write(160) }
- // the two operations on `Y` are merged into one
- Y += 2;
+ // the two operations on `y` are merged into one
+ y += 2;
- // BASEPRI is not modified to access `X` because the dynamic priority is high enough
- X += 1;
+ // BASEPRI is not modified to access `x` because the dynamic priority is high enough
+ x += 1;
// lower (restore) the dynamic priority to `1`
- unsafe { asm!("msr BASEPRI, 224" : : : "memory" : "volatile") }
+ unsafe { basepri::write(224) }
// mid-point
// raise dynamic priority to `2`
- unsafe { asm!("msr BASEPRI, 192" : : : "memory" : "volatile") }
+ unsafe { basepri::write(192) }
- X += 1;
+ x += 1;
// raise dynamic priority to `3`
- unsafe { asm!("msr BASEPRI, 160" : : : "memory" : "volatile") }
+ unsafe { basepri::write(160) }
- Y += 1;
+ y += 1;
// lower (restore) the dynamic priority to `2`
- unsafe { asm!("msr BASEPRI, 192" : : : "memory" : "volatile") }
+ unsafe { basepri::write(192) }
- // NOTE: it would be sound to merge this operation on X with the previous one but
+ // NOTE: it would be sound to merge this operation on `x` with the previous one but
// compiler fences are coarse grained and prevent such optimization
- X += 1;
+ x += 1;
// lower (restore) the dynamic priority to `1`
- unsafe { asm!("msr BASEPRI, 224" : : : "memory" : "volatile") }
+ unsafe { basepri::write(224) }
// NOTE: BASEPRI contains the value `224` at this point
// the UART0 handler will restore the value to `0` before returning
@@ -425,7 +431,10 @@ handler through preemption. This is best observed in the following example:
``` rust
#[rtfm::app(device = ..)]
const APP: () = {
- static mut X: u64 = 0;
+ struct Resources {
+ #[init(0)]
+ x: u64,
+ }
#[init]
fn init() {
@@ -444,11 +453,11 @@ const APP: () = {
// this function returns to `idle`
}
- #[task(binds = UART1, priority = 2, resources = [X])]
+ #[task(binds = UART1, priority = 2, resources = [x])]
fn bar() {
// BASEPRI is `0` (dynamic priority = 2)
- X.lock(|x| {
+ x.lock(|x| {
// BASEPRI is raised to `160` (dynamic priority = 3)
// ..
@@ -470,7 +479,7 @@ const APP: () = {
}
}
- #[task(binds = UART2, priority = 3, resources = [X])]
+ #[task(binds = UART2, priority = 3, resources = [x])]
fn baz() {
// ..
}
@@ -493,8 +502,7 @@ const APP: () = {
const PRIORITY: u8 = 2;
// take a snapshot of the BASEPRI
- let initial: u8;
- asm!("mrs $0, BASEPRI" : "=r"(initial) : : : "volatile");
+ let initial = basepri::read();
let priority = Cell::new(PRIORITY);
bar(bar::Context {
@@ -503,7 +511,7 @@ const APP: () = {
});
// BUG: FORGOT to roll back the BASEPRI to the snapshot value we took before
- // asm!("msr BASEPRI, $0" : : "r"(initial) : : "volatile");
+    // basepri::write(initial);
}
};
```
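For reference, every generated `lock` above follows the same raise / call / restore pattern, and the tasks chapter later calls a free-function version of it (`lock(self.priority(), RQ1_CEILING, ..)`). A minimal sketch of such a helper, assuming the `cortex-m` crate's `basepri` API and a device with 3 priority bits (which is what the `224` / `192` / `160` values above correspond to):

``` rust
use core::cell::Cell;

use cortex_m::register::basepri;

// assumption: device specific; 3 priority bits matches the 224 / 192 / 160 values above
const NVIC_PRIO_BITS: u8 = 3;

fn logical2hw(logical: u8) -> u8 {
    ((1 << NVIC_PRIO_BITS) - logical) << (8 - NVIC_PRIO_BITS)
}

/// Runs `f` with the dynamic priority raised to at least `ceiling`
fn lock<R>(priority: &Cell<u8>, ceiling: u8, f: impl FnOnce() -> R) -> R {
    let current = priority.get();

    if current < ceiling {
        // start of critical section: raise the dynamic priority
        priority.set(ceiling);
        unsafe { basepri::write(logical2hw(ceiling)) }

        let r = f();

        // end of critical section: restore the previous dynamic priority
        unsafe { basepri::write(logical2hw(current)) }
        priority.set(current);

        r
    } else {
        // the dynamic priority is already high enough
        f()
    }
}
```

The generated `Mutex` implementations additionally hand the closure a `&mut` to the protected static, but the BASEPRI bookkeeping has the same shape as this sketch.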
diff --git a/book/en/src/internals/interrupt-configuration.md b/book/en/src/internals/interrupt-configuration.md
index b34b308..98a98e5 100644
--- a/book/en/src/internals/interrupt-configuration.md
+++ b/book/en/src/internals/interrupt-configuration.md
@@ -12,7 +12,7 @@ configuration is done before the `init` function runs.
This example gives you an idea of the code that the RTFM framework runs:
``` rust
-#[rtfm::app(device = ..)]
+#[rtfm::app(device = lm3s6965)]
const APP: () = {
#[init]
fn init(c: init::Context) {
@@ -39,8 +39,7 @@ The framework generates an entry point that looks like this:
unsafe fn main() -> ! {
// transforms a logical priority into a hardware / NVIC priority
fn logical2hw(priority: u8) -> u8 {
- // this value comes from the device crate
- const NVIC_PRIO_BITS: u8 = ..;
+ use lm3s6965::NVIC_PRIO_BITS;
// the NVIC encodes priority in the higher bits of a byte
// also a bigger number means lower priority
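The body of `logical2hw` falls outside this hunk; for orientation, the usual mapping for a device with 3 priority bits looks like the sketch below. Treat the exact expression as an assumption rather than a quote of the file, but note that it reproduces the `224` / `192` / `160` BASEPRI values used throughout this diff:

``` rust
use lm3s6965::NVIC_PRIO_BITS; // 3 priority bits on this device

// the NVIC stores the priority in the high bits of an 8-bit field and a
// bigger hardware value means a lower priority, hence the inversion
fn logical2hw(priority: u8) -> u8 {
    ((1 << NVIC_PRIO_BITS) - priority) << (8 - NVIC_PRIO_BITS)
}

// with NVIC_PRIO_BITS = 3:
//   logical 1 -> (8 - 1) << 5 = 224
//   logical 2 -> (8 - 2) << 5 = 192
//   logical 3 -> (8 - 3) << 5 = 160
```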
diff --git a/book/en/src/internals/late-resources.md b/book/en/src/internals/late-resources.md
index 71157f2..8724fbb 100644
--- a/book/en/src/internals/late-resources.md
+++ b/book/en/src/internals/late-resources.md
@@ -11,21 +11,22 @@ initialize late resources.
``` rust
#[rtfm::app(device = ..)]
const APP: () = {
- // late resource
- static mut X: Thing = {};
+ struct Resources {
+ x: Thing,
+ }
#[init]
fn init() -> init::LateResources {
// ..
init::LateResources {
- X: Thing::new(..),
+ x: Thing::new(..),
}
}
- #[task(binds = UART0, resources = [X])]
+ #[task(binds = UART0, resources = [x])]
fn foo(c: foo::Context) {
- let x: &mut Thing = c.resources.X;
+ let x: &mut Thing = c.resources.x;
x.frob();
@@ -50,7 +51,7 @@ fn foo(c: foo::Context) {
// Public API
pub mod init {
pub struct LateResources {
- pub X: Thing,
+ pub x: Thing,
}
// ..
@@ -58,7 +59,7 @@ pub mod init {
pub mod foo {
pub struct Resources<'a> {
- pub X: &'a mut Thing,
+ pub x: &'a mut Thing,
}
pub struct Context<'a> {
@@ -70,7 +71,7 @@ pub mod foo {
/// Implementation details
const APP: () = {
// uninitialized static
- static mut X: MaybeUninit<Thing> = MaybeUninit::uninit();
+ static mut x: MaybeUninit<Thing> = MaybeUninit::uninit();
#[no_mangle]
unsafe fn main() -> ! {
@@ -81,7 +82,7 @@ const APP: () = {
let late = init(..);
// initialization of late resources
- X.write(late.X);
+ x.as_mut_ptr().write(late.x);
cortex_m::interrupt::enable(); //~ compiler fence
@@ -94,8 +95,8 @@ const APP: () = {
unsafe fn UART0() {
foo(foo::Context {
resources: foo::Resources {
- // `X` has been initialized at this point
- X: &mut *X.as_mut_ptr(),
+ // `x` has been initialized at this point
+ x: &mut *x.as_mut_ptr(),
},
// ..
})
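Stripped of the RTFM machinery, the `MaybeUninit` pattern used above is "reserve the storage, write it exactly once before tasks can run, hand out references afterwards". A minimal sketch, with a made-up `Thing` type:

``` rust
use core::mem::MaybeUninit;

struct Thing(u32);

// storage that lives for the whole program but starts out uninitialized
static mut X: MaybeUninit<Thing> = MaybeUninit::uninit();

fn main() {
    // "init" phase: write the value exactly once, before anything reads it
    unsafe { X.as_mut_ptr().write(Thing(42)) }

    // "task" phase: after initialization it is sound to hand out a reference
    let x: &mut Thing = unsafe { &mut *X.as_mut_ptr() };
    x.0 += 1;
}
```

In the generated code the "write before anything reads it" ordering is what the compiler fence noted next to `interrupt::enable` enforces: the write to the late resource cannot be reordered past the point where interrupt handlers may start running.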
diff --git a/book/en/src/internals/non-reentrancy.md b/book/en/src/internals/non-reentrancy.md
index 408a012..f1ce2cb 100644
--- a/book/en/src/internals/non-reentrancy.md
+++ b/book/en/src/internals/non-reentrancy.md
@@ -13,24 +13,20 @@ are discouraged from directly invoking an interrupt handler.
``` rust
#[rtfm::app(device = ..)]
const APP: () = {
- static mut X: u64 = 0;
-
#[init]
fn init(c: init::Context) { .. }
- #[interrupt(binds = UART0, resources = [X])]
+ #[interrupt(binds = UART0)]
fn foo(c: foo::Context) {
- let x: &mut u64 = c.resources.X;
+ static mut X: u64 = 0;
- *x = 1;
+ let x: &mut u64 = X;
- //~ `bar` can preempt `foo` at this point
+ // ..
- *x = 2;
+ //~ `bar` can preempt `foo` at this point
- if *x == 2 {
- // something
- }
+ // ..
}
#[interrupt(binds = UART1, priority = 2)]
@@ -40,15 +36,15 @@ const APP: () = {
}
// this interrupt handler will invoke task handler `foo` resulting
- // in mutable aliasing of the static variable `X`
+ // in aliasing of the static variable `X`
unsafe { UART0() }
}
};
```
The RTFM framework must generate the interrupt handler code that calls the user
-defined task handlers. We are careful in making these handlers `unsafe` and / or
-impossible to call from user code.
+defined task handlers. We are careful in making these handlers impossible to
+call from user code.
The above example expands into:
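The expansion itself is not part of this hunk; its general shape follows the `UART0` / `foo` pattern already shown in critical-sections.md and looks roughly like the sketch below (the empty `Context` struct is a stand-in for the real generated type):

``` rust
pub mod foo {
    // stand-in for the generated context type handed to the task
    pub struct Context;
}

const APP: () = {
    // the user's `foo` task handler becomes a private item of this block,
    // so code outside `APP` cannot name it, let alone call it
    fn foo(_c: foo::Context) {
        // ..
    }

    // the real interrupt handler: `unsafe` and not nameable from user code;
    // the intended caller is the hardware, through the vector table entry
    #[no_mangle]
    unsafe fn UART0() {
        foo(foo::Context)
    }
};
```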
diff --git a/book/en/src/internals/tasks.md b/book/en/src/internals/tasks.md
index 432c2e6..dd3638a 100644
--- a/book/en/src/internals/tasks.md
+++ b/book/en/src/internals/tasks.md
@@ -19,7 +19,7 @@ task.
The ready queue is an SPSC (Single Producer Single Consumer) lock-free queue. The
task dispatcher owns the consumer endpoint of the queue; the producer endpoint
-is treated as a resource shared by the tasks that can `spawn` other tasks.
+is treated as a resource contended by the tasks that can `spawn` other tasks.
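Stripped of the RTFM specifics, that split ownership looks like the sketch below, which uses the `heapless` crate's `spsc` queue as a stand-in for the generated ready queue (the task names are illustrative):

``` rust
use heapless::spsc::Queue;

// tasks dispatched at priority 1 (illustrative names)
#[derive(Clone, Copy)]
enum T1 {
    Baz,
    Quux,
}

fn main() {
    // the generated code keeps the queue in a `static mut`; a local is enough here
    let mut rq1: Queue<T1, 4> = Queue::new();
    let (mut producer, mut consumer) = rq1.split();

    // `spawn` side: the producer endpoint is contended, so in the generated
    // code this enqueue happens inside a priority-based critical section
    producer.enqueue(T1::Baz).ok();

    // dispatcher side: the consumer endpoint has a single owner, so the
    // dispatcher can drain the queue without taking any lock
    while let Some(task) = consumer.dequeue() {
        match task {
            T1::Baz => { /* run the `baz` task handler */ }
            T1::Quux => { /* run the `quux` task handler */ }
        }
    }
}
```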
## The task dispatcher
@@ -244,7 +244,7 @@ const APP: () = {
baz_INPUTS[index as usize].write(message);
lock(self.priority(), RQ1_CEILING, || {
- // put the task in the ready queu
+ // put the task in the ready queue
RQ1.split().1.enqueue_unchecked(Ready {
task: T1::baz,
index,
diff --git a/book/en/src/internals/timer-queue.md b/book/en/src/internals/timer-queue.md
index 436f421..e0242f0 100644
--- a/book/en/src/internals/timer-queue.md
+++ b/book/en/src/internals/timer-queue.md
@@ -47,7 +47,7 @@ mod foo {
}
const APP: () = {
- use rtfm::Instant;
+ type Instant = <path::to::user::monotonic::timer as rtfm::Monotonic>::Instant;
// all tasks that can be `schedule`-d
enum T {
@@ -158,15 +158,14 @@ way it will run at the right priority.
handler; basically, `enqueue_unchecked` delegates the task of setting up a new
timeout interrupt to the `SysTick` handler.
-## Resolution and range of `Instant` and `Duration`
+## Resolution and range of `cyccnt::Instant` and `cyccnt::Duration`
-In the current implementation the `DWT`'s (Data Watchpoint and Trace) cycle
-counter is used as a monotonic timer. `Instant::now` returns a snapshot of this
-timer; these DWT snapshots (`Instant`s) are used to sort entries in the timer
-queue. The cycle counter is a 32-bit counter clocked at the core clock
-frequency. This counter wraps around every `(1 << 32)` clock cycles; there's no
-interrupt associated to this counter so nothing worth noting happens when it
-wraps around.
+RTFM provides a `Monotonic` implementation based on the `DWT`'s (Data Watchpoint
+and Trace) cycle counter. `Instant::now` returns a snapshot of this timer; these
+DWT snapshots (`Instant`s) are used to sort entries in the timer queue. The
+cycle counter is a 32-bit counter clocked at the core clock frequency. This
+counter wraps around every `(1 << 32)` clock cycles; there's no interrupt
+associated with this counter, so nothing worth noting happens when it wraps around.
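The wrap-around handling described in the next paragraph boils down to comparing the sign of the wrapping difference between two counter snapshots. A sketch, with a bare `u32` newtype standing in for `cyccnt::Instant` (it assumes the two instants are less than `1 << 31` cycles apart):

``` rust
use core::cmp::Ordering;

/// A snapshot of the 32-bit cycle counter (stand-in for `cyccnt::Instant`)
#[derive(Clone, Copy, PartialEq, Eq)]
struct Instant(u32);

impl Ord for Instant {
    fn cmp(&self, other: &Self) -> Ordering {
        // interpret the wrapping difference as a signed number: negative means
        // `self` is earlier than `other`, even across a counter wrap-around
        (self.0.wrapping_sub(other.0) as i32).cmp(&0)
    }
}

impl PartialOrd for Instant {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}

fn main() {
    let before_wrap = Instant(u32::MAX - 10);
    let after_wrap = Instant(5); // the counter has wrapped around in between
    assert!(before_wrap < after_wrap);
}
```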
To order `Instant`s in the queue we need to compare two 32-bit integers. To
account for the wrap-around behavior we use the difference between two
@@ -264,11 +263,11 @@ The ceiling analysis would go like this:
## Changes in the `spawn` implementation
-When the "timer-queue" feature is enabled the `spawn` implementation changes a
-bit to track the baseline of tasks. As you saw in the `schedule` implementation
-there's an `INSTANTS` buffers used to store the time at which a task was
-scheduled to run; this `Instant` is read in the task dispatcher and passed to
-the user code as part of the task context.
+When the `schedule` API is used, the `spawn` implementation changes a bit to
+track the baseline of tasks. As you saw in the `schedule` implementation, there's
+an `INSTANTS` buffer used to store the time at which a task was scheduled to
+run; this `Instant` is read in the task dispatcher and passed to the user code
+as part of the task context.
``` rust
const APP: () = {