From 1f51b10297e9cbb4797aa1ed8be6a2b84c9f2e07 Mon Sep 17 00:00:00 2001 From: Per Lindgren Date: Sat, 28 Jan 2023 21:57:43 +0100 Subject: Book: Major rework for RTIC v2 --- book/en/src/SUMMARY.md | 19 ++-- book/en/src/by-example.md | 12 +- book/en/src/by-example/app.md | 24 ++-- book/en/src/by-example/app_idle.md | 29 ++--- book/en/src/by-example/app_init.md | 25 ++--- book/en/src/by-example/app_minimal.md | 16 ++- book/en/src/by-example/app_priorities.md | 30 ++--- book/en/src/by-example/app_task.md | 17 ++- book/en/src/by-example/channel.md | 112 +++++++++++++++++++ book/en/src/by-example/delay.md | 116 +++++++++++++++++++ book/en/src/by-example/hardware_tasks.md | 30 ++--- book/en/src/by-example/resources.md | 136 +++++++++-------------- book/en/src/by-example/software_tasks.md | 106 +++++++++++++----- book/en/src/by-example/starting_a_project.md | 2 + book/en/src/preface.md | 159 ++++++++++++++++++++++++--- book/en/src/rtic_vs.md | 31 ++++++ 16 files changed, 629 insertions(+), 235 deletions(-) create mode 100644 book/en/src/by-example/channel.md create mode 100644 book/en/src/by-example/delay.md create mode 100644 book/en/src/rtic_vs.md (limited to 'book/en/src') diff --git a/book/en/src/SUMMARY.md b/book/en/src/SUMMARY.md index 853f3a5..407be6d 100644 --- a/book/en/src/SUMMARY.md +++ b/book/en/src/SUMMARY.md @@ -4,15 +4,13 @@ - [RTIC by example](./by-example.md) - [The `app`](./by-example/app.md) + - [Hardware tasks & `pend`](./by-example/hardware_tasks.md) + - [Software tasks & `spawn`](./by-example/software_tasks.md) - [Resources](./by-example/resources.md) - [The init task](./by-example/app_init.md) - [The idle task](./by-example/app_idle.md) - - [Defining tasks](./by-example/app_task.md) - - [Hardware tasks](./by-example/hardware_tasks.md) - - [Software tasks & `spawn`](./by-example/software_tasks.md) - - [Message passing & `capacity`](./by-example/message_passing.md) - - [Task priorities](./by-example/app_priorities.md) - - [Monotonic & `spawn_{at/after}`](./by-example/monotonic.md) + - [Channel based communication](./by-example/channel.md) + - [Tasks with delay](./by-example/delay.md) - [Starting a new project](./by-example/starting_a_project.md) - [The minimal app](./by-example/app_minimal.md) - [Tips & Tricks](./by-example/tips.md) @@ -23,13 +21,13 @@ - [Inspecting generated code](./by-example/tips_view_code.md) - [Running tasks from RAM](./by-example/tips_from_ram.md) +- [RTIC vs. the world](./rtic_vs.md) - [Awesome RTIC examples](./awesome_rtic.md) - [Migration Guides](./migration.md) - [v0.5.x to v1.0.x](./migration/migration_v5.md) - [v0.4.x to v0.5.x](./migration/migration_v4.md) - [RTFM to RTIC](./migration/migration_rtic.md) - [Under the hood](./internals.md) - - [Cortex-M architectures](./internals/targets.md) @@ -38,3 +36,10 @@ + + + \ No newline at end of file diff --git a/book/en/src/by-example.md b/book/en/src/by-example.md index 419a4ba..a2e5b27 100644 --- a/book/en/src/by-example.md +++ b/book/en/src/by-example.md @@ -1,14 +1,15 @@ # RTIC by example -This part of the book introduces the Real-Time Interrupt-driven Concurrency (RTIC) framework -to new users by walking them through examples of increasing complexity. +This part of the book introduces the RTIC framework to new users by walking them through examples of increasing complexity. All examples in this part of the book are accessible at the [GitHub repository][repoexamples]. The examples are runnable on QEMU (emulating a Cortex M3 target), thus no special hardware required to follow along. 
-[repoexamples]: https://github.com/rtic-rs/cortex-m-rtic/tree/master/examples
+[repoexamples]: https://github.com/rtic-rs/rtic/tree/master/examples
+
+## Running an example

To run the examples with QEMU you will need the `qemu-system-arm` program. Check [the embedded Rust book] for instructions on how to set up an

@@ -28,11 +29,12 @@ $ cargo run --target thumbv7m-none-eabi --example locals

Yields this output:

``` console
-{{#include ../../../ci/expected/locals.run}}
+{{#include ../../../rtic/ci/expected/locals.run}}
```

> **NOTE**: You can choose target device by passing a target
> triple to cargo (e.g. `cargo run --example init --target thumbv7m-none-eabi`) or
> configure a default target in `.cargo/config.toml`.
>
-> For running the examples, we use a Cortex M3 emulated in QEMU, so the target is `thumbv7m-none-eabi`.
\ No newline at end of file
+> For running the examples, we (typically) use a Cortex M3 emulated in QEMU, so the target is `thumbv7m-none-eabi`.
+> Since the M3 architecture is backwards compatible with the M0/M0+ architecture, you may also use the `thumbv6m-none-eabi` target, in case you want to inspect the generated assembly code for the M0/M0+ architecture.

diff --git a/book/en/src/by-example/app.md b/book/en/src/by-example/app.md
index 2c6aca7..cef8288 100644
--- a/book/en/src/by-example/app.md
+++ b/book/en/src/by-example/app.md
@@ -2,25 +2,31 @@
 ## Requirements on the `app` attribute

-All RTIC applications use the [`app`] attribute (`#[app(..)]`). This attribute
-only applies to a `mod`-item containing the RTIC application. The `app`
-attribute has a mandatory `device` argument that takes a *path* as a value.
-This must be a full path pointing to a
-*peripheral access crate* (PAC) generated using [`svd2rust`] **v0.14.x** or
-newer.
+All RTIC applications use the [`app`] attribute (`#[app(..)]`). This attribute only applies to a `mod`-item containing the RTIC application. The `app` attribute has a mandatory `device` argument that takes a *path* as a value. This must be a full path pointing to a *peripheral access crate* (PAC) generated using [`svd2rust`] **v0.14.x** or newer.

-The `app` attribute will expand into a suitable entry point and thus replaces
-the use of the [`cortex_m_rt::entry`] attribute.
+The `app` attribute will expand into a suitable entry point and thus replaces the use of the [`cortex_m_rt::entry`] attribute.

[`app`]: ../../../api/cortex_m_rtic_macros/attr.app.html
[`svd2rust`]: https://crates.io/crates/svd2rust
[`cortex_m_rt::entry`]: ../../../api/cortex_m_rt_macros/attr.entry.html

+## Structure and zero-cost concurrency
+
+An RTIC `app` is an executable system model for single-core applications, declaring a set of `local` and `shared` resources operated on by a set of `init`, `idle`, *hardware* and *software* tasks. In short, the `init` task runs before any other task, returning the set of `local` and `shared` resources. Tasks run preemptively based on their associated static priority; `idle` has the lowest priority (and can be used for background work, and/or to put the system to sleep until woken by some event). Hardware tasks are bound to underlying hardware interrupts, while software tasks are scheduled by asynchronous executors (one for each software task priority).
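To make this structure concrete, here is a minimal skeleton sketch (not one of the book's examples); the `lm3s6965` device, the `SSI0` dispatcher and the `panic-halt`/`cortex-m` dependencies are assumptions matching the QEMU setup used throughout this book:

``` rust
#![no_main]
#![no_std]

use panic_halt as _; // assumed panic handler dependency

#[rtic::app(device = lm3s6965, dispatchers = [SSI0])]
mod app {
    #[shared]
    struct Shared {
        counter: u32, // a resource shared between tasks
    }

    #[local]
    struct Local {}

    #[init]
    fn init(_cx: init::Context) -> (Shared, Local) {
        // runs first, with interrupts disabled, and hands out the resources
        foo::spawn().ok(); // request the software task to run
        (Shared { counter: 0 }, Local {})
    }

    #[idle]
    fn idle(_cx: idle::Context) -> ! {
        // lowest priority background task
        loop {
            cortex_m::asm::nop();
        }
    }

    // hardware task, bound to an interrupt vector
    #[task(binds = UART0, priority = 2, shared = [counter])]
    fn uart0(mut cx: uart0::Context) {
        cx.shared.counter.lock(|c| *c += 1);
    }

    // software task, dispatched by an async executor at priority 1
    #[task(priority = 1, shared = [counter])]
    async fn foo(mut cx: foo::Context) {
        cx.shared.counter.lock(|c| *c += 1);
    }
}
```

The `#[shared]`/`#[local]` structs, the `init` and `idle` functions, and the two flavours of `#[task]` correspond one-to-one to the elements described above.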
+ +At compile time the task/resource model is analyzed under the Stack Resource Policy (SRP) and executable code generated with the following outstanding properties: + +- guaranteed race-free resource access and deadlock-free execution on a single-shared stack + - hardware task scheduling is performed directly by the hardware, and + - software task scheduling is performed by auto generated async executors tailored to the application. + +Overall, the generated code infers no additional overhead in comparison to a hand-written implementation, thus in Rust terms RTIC offers a zero-cost abstraction to concurrency. + ## An RTIC application example To give a flavour of RTIC, the following example contains commonly used features. In the following sections we will go through each feature in detail. ``` rust -{{#include ../../../../examples/common.rs}} +{{#include ../../../../rtic/examples/common.rs}} ``` diff --git a/book/en/src/by-example/app_idle.md b/book/en/src/by-example/app_idle.md index 537902a..4856ee1 100644 --- a/book/en/src/by-example/app_idle.md +++ b/book/en/src/by-example/app_idle.md @@ -1,52 +1,47 @@ # The background task `#[idle]` -A function marked with the `idle` attribute can optionally appear in the -module. This becomes the special *idle task* and must have signature -`fn(idle::Context) -> !`. +A function marked with the `idle` attribute can optionally appear in the module. This becomes the special *idle task* and must have signature `fn(idle::Context) -> !`. -When present, the runtime will execute the `idle` task after `init`. Unlike -`init`, `idle` will run *with interrupts enabled* and must never return, -as the `-> !` function signature indicates. +When present, the runtime will execute the `idle` task after `init`. Unlike `init`, `idle` will run *with interrupts enabled* and must never return, as the `-> !` function signature indicates. [The Rust type `!` means “never”][nevertype]. [nevertype]: https://doc.rust-lang.org/core/primitive.never.html -Like in `init`, locally declared resources will have `'static` lifetimes that -are safe to access. +Like in `init`, locally declared resources will have `'static` lifetimes that are safe to access. The example below shows that `idle` runs after `init`. ``` rust -{{#include ../../../../examples/idle.rs}} +{{#include ../../../../rtic/examples/idle.rs}} ``` ``` console $ cargo run --target thumbv7m-none-eabi --example idle -{{#include ../../../../ci/expected/idle.run}} +{{#include ../../../../rtic/ci/expected/idle.run}} ``` By default, the RTIC `idle` task does not try to optimize for any specific targets. -A common useful optimization is to enable the [SLEEPONEXIT] and allow the MCU -to enter sleep when reaching `idle`. +A common useful optimization is to enable the [SLEEPONEXIT] and allow the MCU to enter sleep when reaching `idle`. ->**Caution** some hardware unless configured disables the debug unit during sleep mode. +>**Caution**: some hardware unless configured disables the debug unit during sleep mode. > >Consult your hardware specific documentation as this is outside the scope of RTIC. The following example shows how to enable sleep by setting the -[`SLEEPONEXIT`][SLEEPONEXIT] and providing a custom `idle` task replacing the -default [`nop()`][NOP] with [`wfi()`][WFI]. +[`SLEEPONEXIT`][SLEEPONEXIT] and providing a custom `idle` task replacing the default [`nop()`][NOP] with [`wfi()`][WFI]. 
[SLEEPONEXIT]: https://developer.arm.com/docs/100737/0100/power-management/sleep-mode/sleep-on-exit-bit [WFI]: https://developer.arm.com/documentation/dui0662/b/The-Cortex-M0--Instruction-Set/Miscellaneous-instructions/WFI [NOP]: https://developer.arm.com/documentation/dui0662/b/The-Cortex-M0--Instruction-Set/Miscellaneous-instructions/NOP ``` rust -{{#include ../../../../examples/idle-wfi.rs}} +{{#include ../../../../rtic/examples/idle-wfi.rs}} ``` ``` console $ cargo run --target thumbv7m-none-eabi --example idle-wfi -{{#include ../../../../ci/expected/idle-wfi.run}} +{{#include ../../../../rtic/ci/expected/idle-wfi.run}} ``` + +> **Notice**: The `idle` task cannot be used together with *software* tasks running at priority zero. The reason is that `idle` is running as a non-returning Rust function at priority zero. Thus there would be no way for an executor at priority zero to give control to *software* tasks at the same priority. diff --git a/book/en/src/by-example/app_init.md b/book/en/src/by-example/app_init.md index 5bf6200..62fb55b 100644 --- a/book/en/src/by-example/app_init.md +++ b/book/en/src/by-example/app_init.md @@ -1,35 +1,28 @@ # App initialization and the `#[init]` task An RTIC application requires an `init` task setting up the system. The corresponding `init` function must have the -signature `fn(init::Context) -> (Shared, Local, init::Monotonics)`, where `Shared` and `Local` are resource +signature `fn(init::Context) -> (Shared, Local)`, where `Shared` and `Local` are the resource structures defined by the user. -The `init` task executes after system reset, [after an optionally defined `pre-init` code section][pre-init] and an always occurring internal RTIC -initialization. - +The `init` task executes after system reset (after the optionally defined [pre-init] and internal RTIC +initialization). The `init` task runs *with interrupts disabled* and has exclusive access to Cortex-M (the +`bare_metal::CriticalSection` token is available as `cs`) while device specific peripherals are available through +the `core` and `device` fields of `init::Context`. [pre-init]: https://docs.rs/cortex-m-rt/latest/cortex_m_rt/attr.pre_init.html - -The `init` and optional `pre-init` tasks runs *with interrupts disabled* and have exclusive access to Cortex-M (the -`bare_metal::CriticalSection` token is available as `cs`). - -Device specific peripherals are available through the `core` and `device` fields of `init::Context`. - ## Example -The example below shows the types of the `core`, `device` and `cs` fields, and showcases the use of a `local` -variable with `'static` lifetime. -Such variables can be delegated from the `init` task to other tasks of the RTIC application. +The example below shows the types of the `core`, `device` and `cs` fields, and showcases the use of a `local` variable with `'static` lifetime. Such variables can be delegated from the `init` task to other tasks of the RTIC application. -The `device` field is only available when the `peripherals` argument is set to the default value `true`. +The `device` field is available when the `peripherals` argument is set to the default value `true`. In the rare case you want to implement an ultra-slim application you can explicitly set `peripherals` to `false`. ``` rust -{{#include ../../../../examples/init.rs}} +{{#include ../../../../rtic/examples/init.rs}} ``` Running the example will print `init` to the console and then exit the QEMU process. 
``` console $ cargo run --target thumbv7m-none-eabi --example init -{{#include ../../../../ci/expected/init.run}} +{{#include ../../../../rtic/ci/expected/init.run}} ``` diff --git a/book/en/src/by-example/app_minimal.md b/book/en/src/by-example/app_minimal.md index d0ff40a..f241089 100644 --- a/book/en/src/by-example/app_minimal.md +++ b/book/en/src/by-example/app_minimal.md @@ -3,5 +3,19 @@ This is the smallest possible RTIC application: ``` rust -{{#include ../../../../examples/smallest.rs}} +{{#include ../../../../rtic/examples/smallest.rs}} ``` + +RTIC is designed with resource efficiency in mind. RTIC itself does not rely on any dynamic memory allocation, thus RAM requirement is dependent only on the application. The flash memory footprint is below 1kB including the interrupt vector table. + +For a minimal example you can expect something like: +``` console +$ cargo size --example smallest --target thumbv7m-none-eabi --release +Finished release [optimized] target(s) in 0.07s + text data bss dec hex filename + 924 0 0 924 39c smallest +``` + + diff --git a/book/en/src/by-example/app_priorities.md b/book/en/src/by-example/app_priorities.md index 8cee749..f03ebf7 100644 --- a/book/en/src/by-example/app_priorities.md +++ b/book/en/src/by-example/app_priorities.md @@ -4,23 +4,18 @@ The `priority` argument declares the static priority of each `task`. -For Cortex-M, tasks can have priorities in the range `1..=(1 << NVIC_PRIO_BITS)` -where `NVIC_PRIO_BITS` is a constant defined in the `device` crate. +For Cortex-M, tasks can have priorities in the range `1..=(1 << NVIC_PRIO_BITS)` where `NVIC_PRIO_BITS` is a constant defined in the `device` crate. -Omitting the `priority` argument the task priority defaults to `1`. -The `idle` task has a non-configurable static priority of `0`, the lowest priority. +Omitting the `priority` argument the task priority defaults to `1`. The `idle` task has a non-configurable static priority of `0`, the lowest priority. > A higher number means a higher priority in RTIC, which is the opposite from what > Cortex-M does in the NVIC peripheral. > Explicitly, this means that number `10` has a **higher** priority than number `9`. -The highest static priority task takes precedence when more than one -task are ready to execute. +The highest static priority task takes precedence when more than one task are ready to execute. The following scenario demonstrates task prioritization: -Spawning a higher priority task A during execution of a lower priority task B suspends -task B. Task A has higher priority thus preempting task B which gets suspended -until task A completes execution. Thus, when task A completes task B resumes execution. +Spawning a higher priority task A during execution of a lower priority task B pends task A. Task A has higher priority thus preempting task B which gets suspended until task A completes execution. Thus, when task A completes task B resumes execution. ```text Task Priority @@ -39,23 +34,17 @@ Task Priority The following example showcases the priority based scheduling of tasks: ``` rust -{{#include ../../../../examples/preempt.rs}} +{{#include ../../../../rtic/examples/preempt.rs}} ``` ``` console $ cargo run --target thumbv7m-none-eabi --example preempt -{{#include ../../../../ci/expected/preempt.run}} +{{#include ../../../../rtic/ci/expected/preempt.run}} ``` -Note that the task `bar` does *not* preempt task `baz` because its priority -is the *same* as `baz`'s. The higher priority task `bar` runs before `foo` -when `baz`returns. 
When `bar` returns `foo` can resume. +Note that the task `bar` does *not* preempt task `baz` because its priority is the *same* as `baz`'s. The higher priority task `bar` runs before `foo` when `baz`returns. When `bar` returns `foo` can resume. -One more note about priorities: choosing a priority higher than what the device -supports will result in a compilation error. - -The error is cryptic due to limitations in the Rust language -if `priority = 9` for task `uart0_interrupt` in `example/common.rs` this looks like: +One more note about priorities: choosing a priority higher than what the device supports will result in a compilation error. The error is cryptic due to limitations in the Rust language, if `priority = 9` for task `uart0_interrupt` in `example/common.rs` this looks like: ```text error[E0080]: evaluation of constant value failed @@ -68,5 +57,4 @@ if `priority = 9` for task `uart0_interrupt` in `example/common.rs` this looks l ``` -The error message incorrectly points to the starting point of the macro, but at least the -value subtracted (in this case 9) will suggest which task causes the error. +The error message incorrectly points to the starting point of the macro, but at least the value subtracted (in this case 9) will suggest which task causes the error. diff --git a/book/en/src/by-example/app_task.md b/book/en/src/by-example/app_task.md index d83f1ff..e0c67ad 100644 --- a/book/en/src/by-example/app_task.md +++ b/book/en/src/by-example/app_task.md @@ -1,21 +1,18 @@ + + # Defining tasks with `#[task]` Tasks, defined with `#[task]`, are the main mechanism of getting work done in RTIC. Tasks can -* Be spawned (now or in the future, also by themselves) -* Receive messages (passing messages between tasks) -* Be prioritized, allowing preemptive multitasking +* Be spawned (now or in the future) +* Receive messages (message passing) +* Prioritized allowing preemptive multitasking * Optionally bind to a hardware interrupt -RTIC makes a distinction between “software tasks” and “hardware tasks”. - -*Hardware tasks* are tasks that are bound to a specific interrupt vector in the MCU while software tasks are not. - -This means that if a hardware task is bound to, lets say, a UART RX interrupt, the task will be run every -time that interrupt triggers, usually when a character is received. +RTIC makes a distinction between “software tasks” and “hardware tasks”. Hardware tasks are tasks that are bound to a specific interrupt vector in the MCU while software tasks are not. -*Software tasks* are explicitly spawned in a task, either immediately or using the Monotonic timer mechanism. +This means that if a hardware task is bound to an UART RX interrupt the task will run every time this interrupt triggers, usually when a character is received. In the coming pages we will explore both tasks and the different options available. diff --git a/book/en/src/by-example/channel.md b/book/en/src/by-example/channel.md new file mode 100644 index 0000000..99bfedd --- /dev/null +++ b/book/en/src/by-example/channel.md @@ -0,0 +1,112 @@ +# Communication over channels. + +Channels can be used to communicate data between running *software* tasks. The channel is essentially a wait queue, allowing tasks with multiple producers and a single receiver. A channel is constructed in the `init` task and backed by statically allocated memory. Send and receive endpoints are distributed to *software* tasks: + +```rust +... 
const CAPACITY: usize = 5;
#[init]
fn init(_: init::Context) -> (Shared, Local) {
    let (s, r) = make_channel!(u32, CAPACITY);
    receiver::spawn(r).unwrap();
    sender1::spawn(s.clone()).unwrap();
    sender2::spawn(s.clone()).unwrap();
    ...
```

In this case the channel holds data of `u32` type with a capacity of 5 elements.

## Sending data

The `send` method posts a message on the channel as shown below:

```rust
#[task]
async fn sender1(_c: sender1::Context, mut sender: Sender<'static, u32, CAPACITY>) {
    hprintln!("Sender 1 sending: 1");
    sender.send(1).await.unwrap();
}
```

## Receiving data

The receiver can `await` incoming messages:

```rust
#[task]
async fn receiver(_c: receiver::Context, mut receiver: Receiver<'static, u32, CAPACITY>) {
    while let Ok(val) = receiver.recv().await {
        hprintln!("Receiver got: {}", val);
        ...
    }
}
```

For a complete example:

``` rust
{{#include ../../../../rtic/examples/async-channel.rs}}
```

``` console
$ cargo run --target thumbv7m-none-eabi --example async-channel --features test-critical-section
{{#include ../../../../rtic/ci/expected/async-channel.run}}
```

The sender endpoint can also be awaited. In case the channel capacity has not been reached, `await`ing the sender can progress immediately, while if the capacity is reached, the sender is blocked until there is free space in the queue. In this way data is never lost.

In the below example the `CAPACITY` has been reduced to 1, forcing sender tasks to wait until the data in the channel has been received.

``` rust
{{#include ../../../../rtic/examples/async-channel-done.rs}}
```

Looking at the output, we find that `Sender 2` will wait until the data sent by `Sender 1` has been received.

> **NOTICE** *Software* tasks at the same priority are executed asynchronously to each other, thus **NO** strict order can be assumed. (The presented order here applies only to the current implementation, and may change between RTIC framework releases.)

``` console
$ cargo run --target thumbv7m-none-eabi --example async-channel-done --features test-critical-section
{{#include ../../../../rtic/ci/expected/async-channel-done.run}}
```

## Error handling

In case all senders have been dropped, an `await` on an empty receiver channel results in an error. This allows graceful implementation of different types of shutdown operations.

``` rust
{{#include ../../../../rtic/examples/async-channel-no-sender.rs}}
```

``` console
$ cargo run --target thumbv7m-none-eabi --example async-channel-no-sender --features test-critical-section
{{#include ../../../../rtic/ci/expected/async-channel-no-sender.run}}
```

Similarly, an `await` on a send channel results in an error in case the receiver has been dropped. This allows graceful implementation of application level error handling.

The resulting error returns the data back to the sender, allowing the sender to take appropriate action (e.g., storing the data to later retry sending it).

``` rust
{{#include ../../../../rtic/examples/async-channel-no-receiver.rs}}
```

``` console
$ cargo run --target thumbv7m-none-eabi --example async-channel-no-receiver --features test-critical-section
{{#include ../../../../rtic/ci/expected/async-channel-no-receiver.run}}
```

## Try API

In some cases you may wish the sender to proceed even if the channel is full. To that end, a `try_send` API is provided.
+ +``` rust +{{#include ../../../../rtic/examples/async-channel-try.rs}} +``` + +``` console +$ cargo run --target thumbv7m-none-eabi --example async-channel-try --features test-critical-section +{{#include ../../../../rtic/ci/expected/async-channel-try.run}} +``` \ No newline at end of file diff --git a/book/en/src/by-example/delay.md b/book/en/src/by-example/delay.md new file mode 100644 index 0000000..d35d9da --- /dev/null +++ b/book/en/src/by-example/delay.md @@ -0,0 +1,116 @@ +# Tasks with delay + +A convenient way to express *miniminal* timing requirements is by means of delaying progression. + +This can be achieved by instantiating a monotonic timer: + +```rust +... +rtic_monotonics::make_systick_timer_queue!(TIMER); + +#[init] +fn init(cx: init::Context) -> (Shared, Local) { + let systick = Systick::start(cx.core.SYST, 12_000_000); + TIMER.initialize(systick); + ... +``` + +A *software* task can `await` the delay to expire: + +```rust +#[task] +async fn foo(_cx: foo::Context) { + ... + TIMER.delay(100.millis()).await; + ... +``` + +Technically, the timer queue is implemented as a list based priority queue, where list-nodes are statically allocated as part of the underlying task `Future`. Thus, the timer queue is infallible at run-time (its size and allocation is determined at compile time). + +For a complete example: + +``` rust +{{#include ../../../../rtic/examples/async-delay.rs}} +``` + +``` console +$ cargo run --target thumbv7m-none-eabi --example async-delay --features test-critical-section +{{#include ../../../../rtic/ci/expected/async-delay.run}} +``` + +## Timeout + +Rust `Futures` (underlying Rust `async`/`await`) are composable. This makes it possible to `select` in between `Futures` that have completed. + +A common use case is transactions with associated timeout. In the examples shown below, we introduce a fake HAL device which performs some transaction. We have modelled the time it takes based on the input parameter (`n`) as `350ms + n * 100ms)`. + +Using the `select_biased` macro from the `futures` crate it may look like this: + +```rust +// Call hal with short relative timeout using `select_biased` +select_biased! { + v = hal_get(&TIMER, 1).fuse() => hprintln!("hal returned {}", v), + _ = TIMER.delay(200.millis()).fuse() => hprintln!("timeout", ), // this will finish first +} +``` + +Assuming the `hal_get` will take 450ms to finish, a short timeout of 200ms will expire. + +```rust +// Call hal with long relative timeout using `select_biased` +select_biased! { + v = hal_get(&TIMER, 1).fuse() => hprintln!("hal returned {}", v), // hal finish first + _ = TIMER.delay(1000.millis()).fuse() => hprintln!("timeout", ), +} +``` + +By extending the timeout to 1000ms, the `hal_get` will finish first. + +Using `select_biased` any number of futures can be combined, so its very powerful. However, as the timeout pattern is frequently used, it is directly supported by the RTIC [rtc-monotonics] and [rtic-time] crates. The second example from above using `timeout_after`: + +```rust +// Call hal with long relative timeout using monotonic `timeout_after` +match TIMER.timeout_after(1000.millis(), hal_get(&TIMER, 1)).await { + Ok(v) => hprintln!("hal returned {}", v), + _ => hprintln!("timeout"), +} +``` + +In cases you want exact control over time without drift. For this purpose we can use exact points in time using `Instance`, and spans of time using `Duration`. Operations on the `Instance` and `Duration` types are given by the [fugit] crate. 
+ +[fugit]: https://crates.io/crates/fugit + +```rust +// get the current time instance +let mut instant = TIMER.now(); + +// do this 3 times +for n in 0..3 { + // exact point in time without drift + instant += 1000.millis(); + TIMER.delay_until(instant).await; + + // exact point it time for timeout + let timeout = instant + 500.millis(); + hprintln!("now is {:?}, timeout at {:?}", TIMER.now(), timeout); + + match TIMER.timeout_at(timeout, hal_get(&TIMER, n)).await { + Ok(v) => hprintln!("hal returned {} at time {:?}", v, TIMER.now()), + _ => hprintln!("timeout"), + } +} +``` + +`instant = TIMER.now()` gives the baseline (i.e., the exact current point in time). We want to call `hal_get` after 1000ms relative to this exact point in time. This can be accomplished by `TIMER.delay_until(instant).await;`. We define the absolute point in time for the `timeout`, and call `TIMER.timeout_at(timeout, hal_get(&TIMER, n)).await`. For the first loop iteration `n == 0`, and the `hal_get` will take 350ms (and finishes before the timeout). For the second iteration `n == 1`, and `hal_get` will take 450ms (and again succeeds to finish before the timeout). For the third iteration `n == 2` (`hal_get` will take 5500ms to finish). In this case we will run into a timeout. + + +The complete example: + +``` rust +{{#include ../../../../rtic/examples/async-timeout.rs}} +``` + +``` console +$ cargo run --target thumbv7m-none-eabi --example async-timeout --features test-critical-section +{{#include ../../../../rtic/ci/expected/async-timeout.run}} +``` diff --git a/book/en/src/by-example/hardware_tasks.md b/book/en/src/by-example/hardware_tasks.md index 2d405d3..e3e51ac 100644 --- a/book/en/src/by-example/hardware_tasks.md +++ b/book/en/src/by-example/hardware_tasks.md @@ -1,39 +1,27 @@ # Hardware tasks -At its core RTIC is using a hardware interrupt controller ([ARM NVIC on cortex-m][NVIC]) -to schedule and start execution of tasks. All tasks except `pre-init`, `#[init]` and `#[idle]` -run as interrupt handlers. +At its core RTIC is using the hardware interrupt controller ([ARM NVIC on cortex-m][NVIC]) to perform scheduling and executing tasks, and all (*hardware*) tasks except `#[init]` and `#[idle]` run as interrupt handlers. This also means that you can manually bind tasks to interrupt handlers. -Hardware tasks are explicitly bound to interrupt handlers. +To bind an interrupt use the `#[task]` attribute argument `binds = InterruptName`. This task becomes the interrupt handler for this hardware interrupt vector. -To bind a task to an interrupt, use the `#[task]` attribute argument `binds = InterruptName`. -This task then becomes the interrupt handler for this hardware interrupt vector. +All tasks bound to an explicit interrupt are *hardware tasks* since they start execution in reaction to a hardware event (interrupt). -All tasks bound to an explicit interrupt are called *hardware tasks* since they -start execution in reaction to a hardware event. +Specifying a non-existing interrupt name will cause a compilation error. The interrupt names are commonly defined by [PAC or HAL][pacorhal] crates. -Specifying a non-existing interrupt name will cause a compilation error. The interrupt names -are commonly defined by [PAC or HAL][pacorhal] crates. +Any available interrupt vector should work, but different hardware might have added special properties to select interrupt priority levels, such as the [nRF “softdevice”](https://github.com/rtic-rs/cortex-m-rtic/issues/434). -Any available interrupt vector should work. 
Specific devices may bind -specific interrupt priorities to specific interrupt vectors outside -user code control. See for example the -[nRF “softdevice”](https://github.com/rtic-rs/cortex-m-rtic/issues/434). - -Beware of using interrupt vectors that are used internally by hardware features; -RTIC is unaware of such hardware specific details. +Beware of re-purposing interrupt vectors used internally by hardware features, RTIC is unaware of such hardware specific details. [pacorhal]: https://docs.rust-embedded.org/book/start/registers.html [NVIC]: https://developer.arm.com/documentation/100166/0001/Nested-Vectored-Interrupt-Controller/NVIC-functional-description/NVIC-interrupts -The example below demonstrates the use of the `#[task(binds = InterruptName)]` attribute to declare a -hardware task bound to an interrupt handler. +The example below demonstrates the use of the `#[task(binds = InterruptName)]` attribute to declare a hardware task bound to an interrupt handler. In the example the interrupt triggering task execution is manually pended (`rtic::pend(Interrupt::UART0)`). However, in the typical case, interrupts are pended by the hardware peripheral. RTIC does not interfere with mechanisms for clearing peripheral interrupts, so any hardware specific implementation is completely up to the implementer. ``` rust -{{#include ../../../../examples/hardware.rs}} +{{#include ../../../../rtic/examples/hardware.rs}} ``` ``` console $ cargo run --target thumbv7m-none-eabi --example hardware -{{#include ../../../../ci/expected/hardware.run}} +{{#include ../../../../rtic/ci/expected/hardware.run}} ``` diff --git a/book/en/src/by-example/resources.md b/book/en/src/by-example/resources.md index 30089d3..ea67b26 100644 --- a/book/en/src/by-example/resources.md +++ b/book/en/src/by-example/resources.md @@ -1,176 +1,138 @@ # Resource usage -The RTIC framework manages shared and task local resources allowing persistent data -storage and safe accesses without the use of `unsafe` code. +The RTIC framework manages shared and task local resources allowing persistent data storage and safe accesses without the use of `unsafe` code. -RTIC resources are visible only to functions declared within the `#[app]` module and the framework -gives the user complete control (on a per-task basis) over resource accessibility. +RTIC resources are visible only to functions declared within the `#[app]` module and the framework gives the user complete control (on a per-task basis) over resource accessibility. -Declaration of system-wide resources is done by annotating **two** `struct`s within the `#[app]` module -with the attribute `#[local]` and `#[shared]`. -Each field in these structures corresponds to a different resource (identified by field name). -The difference between these two sets of resources will be covered below. +Declaration of system-wide resources is done by annotating **two** `struct`s within the `#[app]` module with the attribute `#[local]` and `#[shared]`. Each field in these structures corresponds to a different resource (identified by field name). The difference between these two sets of resources will be covered below. -Each task must declare the resources it intends to access in its corresponding metadata attribute -using the `local` and `shared` arguments. Each argument takes a list of resource identifiers. -The listed resources are made available to the context under the `local` and `shared` fields of the -`Context` structure. 
+Each task must declare the resources it intends to access in its corresponding metadata attribute using the `local` and `shared` arguments. Each argument takes a list of resource identifiers. The listed resources are made available to the context under the `local` and `shared` fields of the `Context` structure. -The `init` task returns the initial values for the system-wide (`#[shared]` and `#[local]`) -resources, and the set of initialized timers used by the application. The monotonic timers will be -further discussed in [Monotonic & `spawn_{at/after}`](./monotonic.md). +The `init` task returns the initial values for the system-wide (`#[shared]` and `#[local]`) resources. + + ## `#[local]` resources -`#[local]` resources are locally accessible to a specific task, meaning that only that task can -access the resource and does so without locks or critical sections. This allows for the resources, -commonly drivers or large objects, to be initialized in `#[init]` and then be passed to a specific -task. +`#[local]` resources accessible only to a single task. This task is given unique access to the resource without the use of locks or critical sections. -Thus, a task `#[local]` resource can only be accessed by one singular task. -Attempting to assign the same `#[local]` resource to more than one task is a compile-time error. +This allows for the resources, commonly drivers or large objects, to be initialized in `#[init]` and then be passed to a specific task. (Thus, a task `#[local]` resource can only be accessed by one single task.) Attempting to assign the same `#[local]` resource to more than one task is a compile-time error. -Types of `#[local]` resources must implement a [`Send`] trait as they are being sent from `init` -to a target task, crossing a thread boundary. +Types of `#[local]` resources must implement [`Send`] trait as they are being sent from `init` to the target task and thus crossing the *thread* boundary. [`Send`]: https://doc.rust-lang.org/stable/core/marker/trait.Send.html -The example application shown below contains two tasks where each task has access to its own -`#[local]` resource; the `idle` task has its own `#[local]` as well. +The example application shown below contains three tasks `foo`, `bar` and `idle`, each having access to its own `#[local]` resource. ``` rust -{{#include ../../../../examples/locals.rs}} +{{#include ../../../../rtic/examples/locals.rs}} ``` Running the example: ``` console $ cargo run --target thumbv7m-none-eabi --example locals -{{#include ../../../../ci/expected/locals.run}} +{{#include ../../../../rtic/ci/expected/locals.run}} ``` -Local resources in `#[init]` and `#[idle]` have `'static` -lifetimes. This is safe since both tasks are not re-entrant. - ### Task local initialized resources -Local resources can also be specified directly in the resource claim like so: -`#[task(local = [my_var: TYPE = INITIAL_VALUE, ...])]`; this allows for creating locals which do no need to be -initialized in `#[init]`. +A special use-case of local resources are the ones specified directly in the task declaration, `#[task(local = [my_var: TYPE = INITIAL_VALUE, ...])]`. This allows for creating locals which do no need to be initialized in `#[init]`. Moreover, local resources in `#[init]` and `#[idle]` have `'static` lifetimes, this is safe since both are not re-entrant. -Types of `#[task(local = [..])]` resources have to be neither [`Send`] nor [`Sync`] as they -are not crossing any thread boundary. 
+Types of `#[task(local = [..])]` resources have to be neither [`Send`] nor [`Sync`] as they are not crossing any thread boundary. [`Sync`]: https://doc.rust-lang.org/stable/core/marker/trait.Sync.html In the example below the different uses and lifetimes are shown: ``` rust -{{#include ../../../../examples/declared_locals.rs}} +{{#include ../../../../rtic/examples/declared_locals.rs}} ``` - +You can run the application, but as the example is designed merely to showcase the lifetime properties there is no output (it suffices to build the application). + +``` console +$ cargo build --target thumbv7m-none-eabi --example declared_locals +``` + ## `#[shared]` resources and `lock` -Critical sections are required to access `#[shared]` resources in a data race-free manner and to -achieve this the `shared` field of the passed `Context` implements the [`Mutex`] trait for each -shared resource accessible to the task. This trait has only one method, [`lock`], which runs its -closure argument in a critical section. +Critical sections are required to access `#[shared]` resources in a data race-free manner and to achieve this the `shared` field of the passed `Context` implements the [`Mutex`] trait for each shared resource accessible to the task. This trait has only one method, [`lock`], which runs its closure argument in a critical section. [`Mutex`]: ../../../api/rtic/trait.Mutex.html [`lock`]: ../../../api/rtic/trait.Mutex.html#method.lock -The critical section created by the `lock` API is based on dynamic priorities: it temporarily -raises the dynamic priority of the context to a *ceiling* priority that prevents other tasks from -preempting the critical section. This synchronization protocol is known as the -[Immediate Ceiling Priority Protocol (ICPP)][icpp], and complies with -[Stack Resource Policy (SRP)][srp] based scheduling of RTIC. +The critical section created by the `lock` API is based on dynamic priorities: it temporarily raises the dynamic priority of the context to a *ceiling* priority that prevents other tasks from preempting the critical section. This synchronization protocol is known as the [Immediate Ceiling Priority Protocol (ICPP)][icpp], and complies with [Stack Resource Policy (SRP)][srp] based scheduling of RTIC. [icpp]: https://en.wikipedia.org/wiki/Priority_ceiling_protocol [srp]: https://en.wikipedia.org/wiki/Stack_Resource_Policy -In the example below we have three interrupt handlers with priorities ranging from one to three. -The two handlers with the lower priorities contend for a `shared` resource and need to succeed in locking the -resource in order to access its data. The highest priority handler, which does not access the `shared` -resource, is free to preempt a critical section created by the lowest priority handler. +In the example below we have three interrupt handlers with priorities ranging from one to three. The two handlers with the lower priorities contend for the `shared` resource and need to lock the resource for accessing the data. The highest priority handler, which do not access the `shared` resource, is free to preempt the critical section created by the lowest priority handler. ``` rust -{{#include ../../../../examples/lock.rs}} +{{#include ../../../../rtic/examples/lock.rs}} ``` ``` console $ cargo run --target thumbv7m-none-eabi --example lock -{{#include ../../../../ci/expected/lock.run}} +{{#include ../../../../rtic/ci/expected/lock.run}} ``` Types of `#[shared]` resources have to be [`Send`]. 
## Multi-lock -As an extension to `lock`, and to reduce rightward drift, locks can be taken as tuples. The -following examples show this in use: +As an extension to `lock`, and to reduce rightward drift, locks can be taken as tuples. The following examples show this in use: ``` rust -{{#include ../../../../examples/multilock.rs}} +{{#include ../../../../rtic/examples/multilock.rs}} ``` ``` console $ cargo run --target thumbv7m-none-eabi --example multilock -{{#include ../../../../ci/expected/multilock.run}} +{{#include ../../../../rtic/ci/expected/multilock.run}} ``` ## Only shared (`&-`) access -By default, the framework assumes that all tasks require exclusive access (`&mut-`) to resources, -but it is possible to specify that a task only requires shared access (`&-`) to a resource using the -`&resource_name` syntax in the `shared` list. +By default, the framework assumes that all tasks require exclusive mutable access (`&mut-`) to resources, but it is possible to specify that a task only requires shared access (`&-`) to a resource using the `&resource_name` syntax in the `shared` list. -The advantage of specifying shared access (`&-`) to a resource is that no locks are required to -access the resource even if the resource is contended by more than one task running at different -priorities. The downside is that the task only gets a shared reference (`&-`) to the resource, -limiting the operations it can perform on it, but where a shared reference is enough this approach -reduces the number of required locks. In addition to simple immutable data, this shared access can -be useful where the resource type safely implements interior mutability, with appropriate locking -or atomic operations of its own. +The advantage of specifying shared access (`&-`) to a resource is that no locks are required to access the resource even if the resource is contended by more than one task running at different priorities. The downside is that the task only gets a shared reference (`&-`) to the resource, limiting the operations it can perform on it, but where a shared reference is enough this approach reduces the number of required locks. In addition to simple immutable data, this shared access can be useful where the resource type safely implements interior mutability, with appropriate locking or atomic operations of its own. -Note that in this release of RTIC it is not possible to request both exclusive access (`&mut-`) -and shared access (`&-`) to the *same* resource from different tasks. Attempting to do so will -result in a compile error. +Note that in this release of RTIC it is not possible to request both exclusive access (`&mut-`) and shared access (`&-`) to the *same* resource from different tasks. Attempting to do so will result in a compile error. -In the example below a key (e.g. a cryptographic key) is loaded (or created) at runtime and then -used from two tasks that run at different priorities without any kind of lock. +In the example below a key (e.g. a cryptographic key) is loaded (or created) at runtime (returned by `init`) and then used from two tasks that run at different priorities without any kind of lock. 
``` rust -{{#include ../../../../examples/only-shared-access.rs}} +{{#include ../../../../rtic/examples/only-shared-access.rs}} ``` ``` console $ cargo run --target thumbv7m-none-eabi --example only-shared-access -{{#include ../../../../ci/expected/only-shared-access.run}} +{{#include ../../../../rtic/ci/expected/only-shared-access.run}} ``` -## Lock-free resource access of shared resources +## Lock-free access of shared resources -A critical section is *not* required to access a `#[shared]` resource that's only accessed by tasks -running at the *same* priority. In this case, you can opt out of the `lock` API by adding the -`#[lock_free]` field-level attribute to the resource declaration (see example below). Note that -this is merely a convenience to reduce needless resource locking code, because even if the +A critical section is *not* required to access a `#[shared]` resource that's only accessed by tasks running at the *same* priority. In this case, you can opt out of the `lock` API by adding the `#[lock_free]` field-level attribute to the resource declaration (see example below). + + + +To adhere to the Rust [aliasing] rule, a resource may be either accessed through multiple immutable references or a singe mutable reference (but not both at the same time). + +[aliasing]: https://doc.rust-lang.org/nomicon/aliasing.html -Also worth noting: using `#[lock_free]` on resources shared by -tasks running at different priorities will result in a *compile-time* error -- not using the `lock` -API would be a data race in that case. +Using `#[lock_free]` on resources shared by tasks running at different priorities will result in a *compile-time* error -- not using the `lock` API would violate the aforementioned alias rule. Similarly, for each priority there can be only a single *software* task accessing a shared resource (as an `async` task may yield execution to other *software* or *hardware* tasks running at the same priority). However, under this single-task restriction, we make the observation that the resource is in effect no longer `shared` but rather `local`. Thus, using a `#[lock_free]` shared resource will result in a *compile-time* error -- where applicable, use a `#[local]` resource instead. ``` rust -{{#include ../../../../examples/lock-free.rs}} +{{#include ../../../../rtic/examples/lock-free.rs}} ``` ``` console $ cargo run --target thumbv7m-none-eabi --example lock-free -{{#include ../../../../ci/expected/lock-free.run}} +{{#include ../../../../rtic/ci/expected/lock-free.run}} ``` diff --git a/book/en/src/by-example/software_tasks.md b/book/en/src/by-example/software_tasks.md index 8ee185b..2752707 100644 --- a/book/en/src/by-example/software_tasks.md +++ b/book/en/src/by-example/software_tasks.md @@ -1,47 +1,99 @@ # Software tasks & spawn -The RTIC concept of a software task shares a lot with that of [hardware tasks](./hardware_tasks.md) -with the core difference that a software task is not explicitly bound to a specific -interrupt vector, but rather bound to a “dispatcher” interrupt vector running -at the intended priority of the software task (see below). +The RTIC concept of a *software* task shares a lot with that of [hardware tasks](./hardware_tasks.md) with the core difference that a software task is not explicitly bound to a specific +interrupt vector, but rather to a “dispatcher” interrupt vector running at the same priority as the software task. -Thus, software tasks are tasks which are not *directly* bound to an interrupt vector. 
+Similarly to *hardware* tasks, the `#[task]` attribute used on a function declare it as a task. The absence of a `binds = InterruptName` argument to the attribute declares the function as a *software task*. -The `#[task]` attributes used on a function determine if it is -software tasks, specifically the absence of a `binds = InterruptName` -argument to the attribute definition. +The static method `task_name::spawn()` spawns (starts) a software task and given that there are no higher priority tasks running the task will start executing directly. -The static method `task_name::spawn()` spawns (schedules) a software -task by registering it with a specific dispatcher. If there are no -higher priority tasks available to the scheduler (which serves a set -of dispatchers), the task will start executing directly. +The *software* task itself is given as an `async` Rust function, which allows the user to optionally `await` future events. This allows to blend reactive programming (by means of *hardware* tasks) with sequential programming (by means of *software* tasks). -All software tasks at the same priority level share an interrupt handler bound to their dispatcher. -What differentiates software and hardware tasks is the usage of either a dispatcher or a bound interrupt vector. +Whereas, *hardware* tasks are assumed to run-to-completion (and return), *software* tasks may be started (`spawned`) once and run forever, with the side condition that any loop (execution path) is broken by at least one `await` (yielding operation). -The interrupt vectors used as dispatchers cannot be used by hardware tasks. +All *software* tasks at the same priority level shares an interrupt handler acting as an async executor dispatching the software tasks. -Availability of a set of “free” (not in use by hardware tasks) and usable interrupt vectors allows the framework -to dispatch software tasks via dedicated interrupt handlers. +This list of dispatchers, `dispatchers = [FreeInterrupt1, FreeInterrupt2, ...]` is an argument to the `#[app]` attribute, where you define the set of free and usable interrupts. -This set of dispatchers, `dispatchers = [FreeInterrupt1, FreeInterrupt2, ...]` is an -argument to the `#[app]` attribute. +Each interrupt vector acting as dispatcher gets assigned to one priority level meaning that the list of dispatchers need to cover all priority levels used by software tasks. -Each interrupt vector acting as dispatcher gets assigned to a unique priority level meaning that -the list of dispatchers needs to cover all priority levels used by software tasks. +Example: The `dispatchers =` argument needs to have at least 3 entries for an application using three different priorities for software tasks. -Example: The `dispatchers =` argument needs to have at least 3 entries for an application using -three different priorities for software tasks. - -The framework will give a compilation error if there are not enough dispatchers provided. +The framework will give a compilation error if there are not enough dispatchers provided, or if a clash occurs between the list of dispatchers and interrupts bound to *hardware* tasks. 
See the following example: ``` rust -{{#include ../../../../examples/spawn.rs}} +{{#include ../../../../rtic/examples/spawn.rs}} ``` ``` console $ cargo run --target thumbv7m-none-eabi --example spawn -{{#include ../../../../ci/expected/spawn.run}} +{{#include ../../../../rtic/ci/expected/spawn.run}} +``` +You may `spawn` a *software* task again, given that it has run-to-completion (returned). + +In the below example, we `spawn` the *software* task `foo` from the `idle` task. Since the default priority of the *software* task is 1 (higher than `idle`), the dispatcher will execute `foo` (preempting `idle`). Since `foo` runs-to-completion. It is ok to `spawn` the `foo` task again. + +Technically the async executor will `poll` the `foo` *future* which in this case leaves the *future* in a *completed* state. + +``` rust +{{#include ../../../../rtic/examples/spawn_loop.rs}} +``` + +``` console +$ cargo run --target thumbv7m-none-eabi --example spawn_loop +{{#include ../../../../rtic/ci/expected/spawn_loop.run}} +``` + +An attempt to `spawn` an already spawned task (running) task will result in an error. Notice, the that the error is reported before the `foo` task is actually run. This is since, the actual execution of the *software* task is handled by the dispatcher interrupt (`SSIO`), which is not enabled until we exit the `init` task. (Remember, `init` runs in a critical section, i.e. all interrupts being disabled.) + +Technically, a `spawn` to a *future* that is not in *completed* state is considered an error. + +``` rust +{{#include ../../../../rtic/examples/spawn_err.rs}} +``` + +``` console +$ cargo run --target thumbv7m-none-eabi --example spawn_err +{{#include ../../../../rtic/ci/expected/spawn_err.run}} +``` + +## Passing arguments +You can also pass arguments at spawn as follows. + +``` rust +{{#include ../../../../rtic/examples/spawn_arguments.rs}} ``` + +``` console +$ cargo run --target thumbv7m-none-eabi --example spawn_arguments +{{#include ../../../../rtic/ci/expected/spawn_arguments.run}} +``` + +## Priority zero tasks + +In RTIC tasks run preemptively to each other, with priority zero (0) the lowest priority. You can use priority zero tasks for background work, without any strict real-time requirements. + +Conceptually, one can see such tasks as running in the `main` thread of the application, thus the resources associated are not required the [Send] bound. + +[Send]: https://doc.rust-lang.org/nomicon/send-and-sync.html + + +``` rust +{{#include ../../../../rtic/examples/zero-prio-task.rs}} +``` + +``` console +$ cargo run --target thumbv7m-none-eabi --example zero-prio-task +{{#include ../../../../rtic/ci/expected/zero-prio-task.run}} +``` + +> **Notice**: *software* task at zero priority cannot co-exist with the [idle] task. The reason is that `idle` is running as a non-returning Rust function at priority zero. Thus there would be no way for an executor at priority zero to give control to *software* tasks at the same priority. + +--- + +Application side safety: Technically, the RTIC framework ensures that `poll` is never executed on any *software* task with *completed* future, thus adhering to the soundness rules of async Rust. 
+ + + diff --git a/book/en/src/by-example/starting_a_project.md b/book/en/src/by-example/starting_a_project.md index fe7be57..8638f90 100644 --- a/book/en/src/by-example/starting_a_project.md +++ b/book/en/src/by-example/starting_a_project.md @@ -10,6 +10,8 @@ If you are targeting ARMv6-M or ARMv8-M-base architecture, check out the section This will give you an RTIC application with support for RTT logging with [`defmt`] and stack overflow protection using [`flip-link`]. There is also a multitude of examples provided by the community: +For inspiration you may look at the below resources. For now they cover RTIC 1.0.x, but will be updated with RTIC 2.0.x examples over time. + - [`rtic-examples`] - Multiple projects - [https://github.com/kalkyl/f411-rtic](https://github.com/kalkyl/f411-rtic) - ... More to come diff --git a/book/en/src/preface.md b/book/en/src/preface.md index 6041dfe..3f47cb3 100644 --- a/book/en/src/preface.md +++ b/book/en/src/preface.md @@ -1,7 +1,7 @@
 RTIC

-Real-Time Interrupt-driven Concurrency
+The Embedded Rust RTOS

 A concurrency framework for building real-time systems
@@ -10,29 +10,160 @@ This book contains user level documentation for the Real-Time Interrupt-driven Concurrency (RTIC) framework. The API reference is available [here](../../api/). -Formerly known as Real-Time For the Masses. + -This is the documentation of v1.0.x of RTIC; for the documentation of version +This is the documentation of v2.0.x (pre-release) of RTIC 2. -* v0.5.x go [here](/0.5). -* v0.4.x go [here](/0.4). +## RTIC - The Past, current and Future + +This section gives a background to the RTIC model. Feel free to skip to section [RTIC the model](preface.md#rtic-the-model) for a TL;DR. + +The RTIC framework takes the outset from real-time systems research at Luleå University of Technology (LTU) Sweden. RTIC is inspired by the concurrency model of the [Timber] language, the [RTFM-SRP] based scheduler, the [RTFM-core] language and [Abstract Timer] implementation. For a full list of related research see [TODO]. + +[Timber]: https://timber-lang.org/ +[RTFM-SRP]: https://www.diva-portal.org/smash/get/diva2:1005680/FULLTEXT01.pdf +[RTFM-core]: https://ltu.diva-portal.org/smash/get/diva2:1013248/FULLTEXT01.pdf +[AbstractTimer]: https://ltu.diva-portal.org/smash/get/diva2:1013030/FULLTEXT01.pdf + +## Stack Resource Policy based Scheduling + +Stack Resource Policy (SRP) based concurrency and resource management is at heart of the RTIC framework. The [SRP] model itself extends on [Priority Inheritance Protocols], and provides a set of outstanding properties for single core scheduling. To name a few: + +- preemptive deadlock and race-free scheduling +- resource efficiency + - tasks execute on a single shared stack + - tasks run-to-completion with wait free access to shared resources +- predictable scheduling, with bounded priority inversion by a single (named) critical section +- theoretical underpinning amenable to static analysis (e.g., for task response times and overall schedulability) + +SRP comes with a set of system wide requirements: +- each task is associated a static priority, +- tasks execute on a single-core, +- tasks must be run-to-completion, and +- resources must be claimed/locked in LIFO order. + +[SRP]: https://link.springer.com/article/10.1007/BF00365393 +[Priority Inheritance Protocols]: https://ieeexplore.ieee.org/document/57058 + +## SRP analysis + +SRP based scheduling requires the set of static priority tasks and their access to shared resources to be known in order to compute a static *ceiling* (𝝅) for each resource. The static resource *ceiling* 𝝅(r) reflects the maximum static priority of any task that accesses the resource `r`. + +### Example + +Assume two tasks `A` (with priority `p(A) = 2`) and `B` (with priority `p(B) = 4`) both accessing the shared resource `R`. The static ceiling of `R` is 4 (computed from `𝝅(R) = max(p(A) = 2, p(B) = 4) = 4`). + +A graph representation of the example: + +```mermaid +graph LR + A["p(A) = 2"] --> R + B["p(A) = 4"] --> R + R["𝝅(R) = 4"] +``` + +## RTIC the hardware accelerated real-time scheduler + +SRP itself is compatible both to dynamic and static priority scheduling. For the implementation of RTIC we leverage on the underlying hardware for accelerated static priority scheduling. + +In the case of the `ARM Cortex-M` architecture, each interrupt vector entry `v[i]` is associated a function pointer (`v[i].fn`), and a static priority (`v[i].priority`), an enabled- (`v[i].enabled`) and a pending-bit (`v[i].pending`). + +An interrupt `i` is scheduled (run) by the hardware under the conditions: +1. 
is `pended` and `enabled` and has a priority higher than the (optional `BASEPRI`) register, and +1. has the highest priority among interrupts meeting 1. + +The first condition (1) can be seen as a filter allowing RTIC to take control over which tasks should be allowed to start (and which should be prevented from starting). + +The SRP model for single-core static scheduling on the other hand states that a task should be scheduled (run) under the conditions: +1. it is `requested` to run and has a static priority higher than the current system ceiling (𝜫) +1. it has the highest static priority among tasks meeting 1. + +The similarities are striking and it is not by chance/luck/coincidence. The hardware was cleverly designed with real-time scheduling in mind. + +In order to map the SRP scheduling onto the hardware we need to have a closer look on the system ceiling (𝜫). Under SRP 𝜫 is computed as the maximum priority ceiling of the currently held resources, and will thus change dynamically during the system operation. + +## Example + +Assume the task model above. Starting from an idle system, 𝜫 is 0 (no task is holding any resource). Assume that `A` is requested for execution; it will immediately be scheduled. Assume that `A` claims (locks) the resource `R`. During the claim (lock of `R`) any request `B` will be blocked from starting (by 𝜫 = `max(𝝅(R) = 4) = 4`, `p(B) = 4`, thus SRP scheduling condition 1 is not met). + +## Mapping + +The mapping of static priority SRP based scheduling to the Cortex M hardware is straightforward: -## Is RTIC an RTOS? +- each task `t` is mapped to an interrupt vector index `i` with a corresponding function `v[i].fn = t` and given the static priority `v[i].priority = p(t)`. +- the current system ceiling is mapped to the `BASEPRI` register or implemented through masking the interrupt enable bits accordingly. -A common question is whether RTIC is an RTOS or not, and depending on your background the -answer may vary. From RTIC's developers point of view; RTIC is a hardware accelerated -RTOS that utilizes the NVIC in Cortex-M MCUs to perform scheduling, rather than the more -classical software kernel. +## Example -Another common view from the community is that RTIC is a concurrency framework as there -is no software kernel and that it relies on external HALs. +For the running example, a snapshot of the ARM Cortex M [NVIC] may have the following configuration (after task `A` has been pended for execution.) ---- | Index | Fn | Priority | Enabled | Pended | | ----- | --- | -------- | ------- | ------ | | 0 | A | 2 | true | true | | 1 | B | 4 | true | false | + +[NVIC]: https://developer.arm.com/documentation/ddi0337/h/nested-vectored-interrupt-controller/about-the-nvic + +(As discussed later, the assignment of interrupt and exception vectors is up to the user.) + + +A claim (lock(r)) will change the current system ceiling (𝜫) and can be implemented as a *named* critical section: + - old_ceiling = 𝜫, 𝜫 = 𝝅(r) + - execute code within critical section + - 𝜫 = old_ceiling + +This amounts to a resource protection mechanism requiring only two machine instructions on entry and one on exit of the critical section for managing the `BASEPRI` register. For architectures lacking `BASEPRI`, we can implement the system ceiling through a set of machine instructions for disabling/enabling interrupts on entry/exit for the named critical section.
The number of machine instructions varies depending on the number of mask registers that need to be updated (a single machine operation can operate on up to 32 interrupts, so for the M0/M0+ architecture a single instruction suffices). RTIC will determine the ceiling values and masking constants at compile time, thus all operations are in Rust terms zero-cost. + +In this way RTIC fuses SRP based preemptive scheduling with a zero-cost hardware accelerated implementation, resulting in "best in class" guarantees and performance. + +Given that the approach is dead simple, how come SRP and hardware accelerated scheduling are not adopted by any other mainstream RTOS? + +The answer is simple: the commonly adopted threading model does not lend itself well to static analysis - there is no known way to extract the task/resource dependencies from the source code at compile time (thus ceilings cannot be efficiently computed and the LIFO resource locking requirement cannot be ensured). Thus SRP based scheduling is in the general case out of reach for any thread based RTOS. + +## RTIC into the Future + +Asynchronous programming in various forms is gaining increased popularity and language support. Rust natively provides an `async`/`await` API for cooperative multitasking and the compiler generates the necessary boilerplate for storing and retrieving execution contexts (i.e., managing the set of local variables that spans each `await`). + +The Rust standard library provides collections for dynamically allocated data-structures (useful to manage execution contexts at run-time). However, in the setting of resource constrained real-time systems, dynamic allocations are problematic (both regarding performance and reliability - Rust runs into a *panic* on an out-of-memory condition). Thus, static allocation is king! + +RTIC provides a mechanism for `async`/`await` that relies solely on static allocations. However, the implementation relies on the `#![feature(type_alias_impl_trait)]` (TAIT) feature, which is undergoing stabilization (thus RTIC 2.0.x currently requires a *nightly* toolchain). Technically, using TAIT, the compiler determines the size of each execution context allowing static allocation. + +From a modelling perspective `async/await` lifts the run-to-completion requirement of SRP, and each section of code between two yield points (`await`s) can be seen as an individual task. The compiler will reject any attempt to `await` while holding a resource (not doing so would break the strict LIFO requirement on resource usage under SRP). + +So with the technical stuff out of the way, what does `async/await` bring to the RTIC table? + +The answer is - improved ergonomics! Consider a task that performs a sequence of requests and awaits their results in order to progress. Without `async`/`await` the programmer would be forced to split the task into individual sub-tasks and maintain some sort of state encoding (and manually progress by selecting the next sub-task). Using `async/await` each yield point (`await`) essentially represents a state, and the progression mechanism is built automatically for you at compile time by means of `Futures`. + +Rust `async`/`await` support is still incomplete and/or under development (e.g., there is no stable way to express `async` closures, precluding use in iterator patterns). Nevertheless, Rust `async`/`await` is production ready and covers most common use cases.
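As a rough sketch of the ergonomics argument above (the task name and the `request_*`/`publish` helpers are made up for illustration and are not part of any RTIC or HAL API), a sequence of requests collapses into straight-line code, with the compiler generating the state machine that a manual sub-task split would otherwise require:

``` rust
#[task(priority = 1)]
async fn sequence(_cx: sequence::Context) {
    // Each `.await` is a yield point; the code between two yield points
    // corresponds to one "sub-task" in a hand-written state encoding.
    let raw = request_measurement().await;      // hypothetical async operation
    let value = request_calibration(raw).await; // hypothetical async operation
    publish(value);                             // hypothetical non-blocking call
}
```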
+ +An important property is that futures are composable, thus you can await either, all, or any combination of possible futures (allowing e.g., timeouts and/or asynchronous errors to be promptly handled). For more details and examples see Section [todo]. + +## RTIC the model + +An RTIC `app` is a declarative and executable system model for single-core applications, defining a set of (`local` and `shared`) resources operated on by a set of (`init`, `idle`, *hardware* and *software*) tasks. In short the `init` task runs before any other task returning a set of resources (`local` and `shared`). Tasks run preemptively based on their associated static priority, `idle` has the lowest priority (and can be used for background work, and/or to put the system to sleep until woken by some event). Hardware tasks are bound to underlying hardware interrupts, while software tasks are scheduled by asynchronous executors (one for each software task priority). + +At compile time the task/resource model is analyzed under SRP and executable code generated with the following outstanding properties: + +- guaranteed race-free resource access and deadlock-free execution on a single-shared stack (thanks to SRP) + - hardware task scheduling is performed directly by the hardware, and + - software task scheduling is performed by auto generated async executors tailored to the application. + +The RTIC API design ensures that both SRP requirements and Rust soundness rules are upheld at all times, thus the executable model is correct by construction. Overall, the generated code incurs no additional overhead in comparison to a hand-written implementation, thus in Rust terms RTIC offers a zero-cost abstraction to concurrency. + + + diff --git a/book/en/src/rtic_vs.md b/book/en/src/rtic_vs.md new file mode 100644 index 0000000..2f8c8d5 --- /dev/null +++ b/book/en/src/rtic_vs.md @@ -0,0 +1,31 @@ +# RTIC vs. the world + +RTIC aims to provide the lowest level of abstraction needed for developing robust and reliable embedded software. + +It provides a minimal set of required mechanisms for safe sharing of mutable resources among interrupts and asynchronously executing tasks. The scheduling primitives leverage the underlying hardware for unparalleled performance and predictability; in effect RTIC provides in Rust terms a zero-cost abstraction to concurrent real-time programming. + + + +## Comparison regarding safety and security + +Comparing RTIC to traditional a Real-Time Operating System (RTOS) is hard. Firstly, a traditional RTOS typically comes with no guarantees regarding system safety, even the most hardened kernels like the formally verified [seL4] kernel. Their claims to integrity, confidentiality, and availability regards only the kernel itself (under additional assumptions its configuration and environment). They even state: + +"An OS kernel, verified or not, does not automatically make a system secure. In fact, any system, no matter how secure, can be used in insecure ways." + +[seL4]: https://sel4.systems/ + +### Security by design + +In the world of information security we commonly find: + +- confidentiality, protecting the information from being exposed to an unauthorized party, +- integrity, referring to accuracy and completeness of data, and +- availability, referring to data being accessible to authorized users. + +Obviously, a traditional OS can guarantee neither confidentiality nor integrity, as both require the security critical code to be trusted.
Regarding availability, this typically boils down to the usage of system resources. Any OS that allows for dynamic allocation of resources relies on the application correctly handling allocations/de-allocations, as well as cases of allocation failures. + +Thus their claim is correct: security is completely out of the hands of the OS; the best we can hope for is that it does not add further vulnerabilities. + +RTIC on the other hand has your back. The declarative system wide model gives you a static set of tasks and resources, with precise control over what data is shared and between which parties. Moreover, Rust as a programming language comes with strong properties regarding integrity (compile time aliasing, mutability and lifetime guarantees, together with ensured data validity). + +Using RTIC these properties propagate to the system wide model, without interference from other running applications. The RTIC kernel is internally infallible without any need of dynamically allocated data. \ No newline at end of file -- cgit v1.2.3 From 63f3d784fe519d248c89b64006dbc13d83e07360 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Henrik=20Tj=C3=A4der?= Date: Wed, 1 Feb 2023 01:15:56 +0100 Subject: Revert accidental removal of editorial changes --- book/en/src/by-example/app_init.md | 18 +++++++++--------- book/en/src/by-example/app_priorities.md | 4 +++- book/en/src/by-example/app_task.md | 15 ++++++++++----- book/en/src/by-example/hardware_tasks.md | 14 ++++++++------ book/en/src/by-example/resources.md | 12 +++++++----- book/en/src/by-example/software_tasks.md | 7 ++----- 6 files changed, 39 insertions(+), 31 deletions(-) (limited to 'book/en/src') diff --git a/book/en/src/by-example/app_init.md b/book/en/src/by-example/app_init.md index 62fb55b..3767bd7 100644 --- a/book/en/src/by-example/app_init.md +++ b/book/en/src/by-example/app_init.md @@ -1,19 +1,19 @@ # App initialization and the `#[init]` task An RTIC application requires an `init` task setting up the system. The corresponding `init` function must have the -signature `fn(init::Context) -> (Shared, Local)`, where `Shared` and `Local` are the resource -structures defined by the user. - -The `init` task executes after system reset (after the optionally defined [pre-init] and internal RTIC -initialization). The `init` task runs *with interrupts disabled* and has exclusive access to Cortex-M (the -`bare_metal::CriticalSection` token is available as `cs`) while device specific peripherals are available through -the `core` and `device` fields of `init::Context`. -[pre-init]: https://docs.rs/cortex-m-rt/latest/cortex_m_rt/attr.pre_init.html +signature `fn(init::Context) -> (Shared, Local)`, where `Shared` and `Local` are resource structures defined by the user. + +The `init` task executes after system reset, [after an optionally defined `pre-init` code section][pre-init] and an always occurring internal RTIC initialization. [pre-init]: https://docs.rs/cortex-m-rt/latest/cortex_m_rt/attr.pre_init.html + +The `init` and optional `pre-init` tasks run *with interrupts disabled* and have exclusive access to Cortex-M (the `bare_metal::CriticalSection` token is available as `cs`). + +Device specific peripherals are available through the `core` and `device` fields of `init::Context`. + ## Example The example below shows the types of the `core`, `device` and `cs` fields, and showcases the use of a `local` variable with `'static` lifetime. Such variables can be delegated from the `init` task to other tasks of the RTIC application.
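Before the full example, a bare-bones outline of the shape of `init` (with empty user-defined `Shared` and `Local` structs and an illustrative `buf` local; a sketch only, not the example referenced below) may look like:

``` rust
#[shared]
struct Shared {}

#[local]
struct Local {}

#[init(local = [buf: [u8; 64] = [0; 64]])]
fn init(cx: init::Context) -> (Shared, Local) {
    // cx.core      : Cortex-M core peripherals
    // cx.device    : device (PAC) peripherals, available with `peripherals = true`
    // cx.cs        : the `bare_metal::CriticalSection` token
    // cx.local.buf : &'static mut [u8; 64], may be handed over to another task
    (Shared {}, Local {})
}
```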
-The `device` field is available when the `peripherals` argument is set to the default value `true`. +The `device` field is only available when the `peripherals` argument is set to the default value `true`. In the rare case you want to implement an ultra-slim application you can explicitly set `peripherals` to `false`. ``` rust diff --git a/book/en/src/by-example/app_priorities.md b/book/en/src/by-example/app_priorities.md index f03ebf7..9d27658 100644 --- a/book/en/src/by-example/app_priorities.md +++ b/book/en/src/by-example/app_priorities.md @@ -15,7 +15,7 @@ Omitting the `priority` argument the task priority defaults to `1`. The `idle` t The highest static priority task takes precedence when more than one task are ready to execute. The following scenario demonstrates task prioritization: -Spawning a higher priority task A during execution of a lower priority task B pends task A. Task A has higher priority thus preempting task B which gets suspended until task A completes execution. Thus, when task A completes task B resumes execution. +Spawning a higher priority task A during execution of a lower priority task B suspends task B. Task A has higher priority thus preempting task B which gets suspended until task A completes execution. Thus, when task A completes task B resumes execution. ```text Task Priority @@ -46,6 +46,8 @@ Note that the task `bar` does *not* preempt task `baz` because its priority is t One more note about priorities: choosing a priority higher than what the device supports will result in a compilation error. The error is cryptic due to limitations in the Rust language, if `priority = 9` for task `uart0_interrupt` in `example/common.rs` this looks like: +The error is cryptic due to limitations in the Rust language; if `priority = 9` for task `uart0_interrupt` in `example/common.rs` this looks like: + ```text error[E0080]: evaluation of constant value failed --> examples/common.rs:10:1 diff --git a/book/en/src/by-example/app_task.md b/book/en/src/by-example/app_task.md index e0c67ad..b2731f6 100644 --- a/book/en/src/by-example/app_task.md +++ b/book/en/src/by-example/app_task.md @@ -6,13 +6,18 @@ Tasks, defined with `#[task]`, are the main mechanism of getting work done in RT Tasks can -* Be spawned (now or in the future) -* Receive messages (message passing) -* Prioritized allowing preemptive multitasking +* Be spawned (now or in the future, also by themselves) +* Receive messages (passing messages between tasks) +* Be prioritized, allowing preemptive multitasking * Optionally bind to a hardware interrupt -RTIC makes a distinction between “software tasks” and “hardware tasks”. Hardware tasks are tasks that are bound to a specific interrupt vector in the MCU while software tasks are not. +RTIC makes a distinction between “software tasks” and “hardware tasks”. -This means that if a hardware task is bound to an UART RX interrupt the task will run every time this interrupt triggers, usually when a character is received. +*Hardware tasks* are tasks that are bound to a specific interrupt vector in the MCU while software tasks are not. + +This means that if a hardware task is bound to, let's say, a UART RX interrupt, the task will be run every +time that interrupt triggers, usually when a character is received. + +*Software tasks* are explicitly spawned in a task, either immediately or using the Monotonic timer mechanism. In the coming pages we will explore both tasks and the different options available.
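To make the distinction concrete, a minimal sketch of the two task flavours (the interrupt name, priorities and task names are chosen arbitrarily for illustration):

``` rust
// Hardware task: bound to an interrupt vector, runs when that interrupt fires.
#[task(binds = UART0, priority = 2)]
fn on_uart0(_cx: on_uart0::Context) {
    // clear the interrupt source, read the received character, etc.
}

// Software task: not bound to an interrupt, started explicitly via `spawn`.
#[task(priority = 1)]
async fn background(_cx: background::Context) {
    // deferred work goes here
}
```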
diff --git a/book/en/src/by-example/hardware_tasks.md b/book/en/src/by-example/hardware_tasks.md index e3e51ac..cb20a7c 100644 --- a/book/en/src/by-example/hardware_tasks.md +++ b/book/en/src/by-example/hardware_tasks.md @@ -1,21 +1,23 @@ # Hardware tasks -At its core RTIC is using the hardware interrupt controller ([ARM NVIC on cortex-m][NVIC]) to perform scheduling and executing tasks, and all (*hardware*) tasks except `#[init]` and `#[idle]` run as interrupt handlers. This also means that you can manually bind tasks to interrupt handlers. +At its core RTIC is using a hardware interrupt controller ([ARM NVIC on cortex-m][NVIC]) to schedule and start execution of tasks. All tasks except `pre-init`, `#[init]` and `#[idle]` run as interrupt handlers. -To bind an interrupt use the `#[task]` attribute argument `binds = InterruptName`. This task becomes the interrupt handler for this hardware interrupt vector. +Hardware tasks are explicitly bound to interrupt handlers. -All tasks bound to an explicit interrupt are *hardware tasks* since they start execution in reaction to a hardware event (interrupt). +To bind a task to an interrupt, use the `#[task]` attribute argument `binds = InterruptName`. This task then becomes the interrupt handler for this hardware interrupt vector. + +All tasks bound to an explicit interrupt are called *hardware tasks* since they start execution in reaction to a hardware event. Specifying a non-existing interrupt name will cause a compilation error. The interrupt names are commonly defined by [PAC or HAL][pacorhal] crates. -Any available interrupt vector should work, but different hardware might have added special properties to select interrupt priority levels, such as the [nRF “softdevice”](https://github.com/rtic-rs/cortex-m-rtic/issues/434). +Any available interrupt vector should work. Specific devices may bind specific interrupt priorities to specific interrupt vectors outside user code control. See for example the [nRF “softdevice”](https://github.com/rtic-rs/cortex-m-rtic/issues/434). -Beware of re-purposing interrupt vectors used internally by hardware features, RTIC is unaware of such hardware specific details. +Beware of using interrupt vectors that are used internally by hardware features; RTIC is unaware of such hardware specific details. [pacorhal]: https://docs.rust-embedded.org/book/start/registers.html [NVIC]: https://developer.arm.com/documentation/100166/0001/Nested-Vectored-Interrupt-Controller/NVIC-functional-description/NVIC-interrupts -The example below demonstrates the use of the `#[task(binds = InterruptName)]` attribute to declare a hardware task bound to an interrupt handler. In the example the interrupt triggering task execution is manually pended (`rtic::pend(Interrupt::UART0)`). However, in the typical case, interrupts are pended by the hardware peripheral. RTIC does not interfere with mechanisms for clearing peripheral interrupts, so any hardware specific implementation is completely up to the implementer. +The example below demonstrates the use of the `#[task(binds = InterruptName)]` attribute to declare a hardware task bound to an interrupt handler. ``` rust {{#include ../../../../rtic/examples/hardware.rs}} diff --git a/book/en/src/by-example/resources.md b/book/en/src/by-example/resources.md index ea67b26..2dd7cb7 100644 --- a/book/en/src/by-example/resources.md +++ b/book/en/src/by-example/resources.md @@ -15,11 +15,11 @@ further discussed in [Monotonic & `spawn_{at/after}`](./monotonic.md). 
--> ## `#[local]` resources -`#[local]` resources accessible only to a single task. This task is given unique access to the resource without the use of locks or critical sections. +`#[local]` resources are locally accessible to a specific task, meaning that only that task can access the resource and does so without locks or critical sections. This allows for the resources, commonly drivers or large objects, to be initialized in `#[init]` and then be passed to a specific task. -This allows for the resources, commonly drivers or large objects, to be initialized in `#[init]` and then be passed to a specific task. (Thus, a task `#[local]` resource can only be accessed by one single task.) Attempting to assign the same `#[local]` resource to more than one task is a compile-time error. +Thus, a task `#[local]` resource can only be accessed by one singular task. Attempting to assign the same `#[local]` resource to more than one task is a compile-time error. -Types of `#[local]` resources must implement [`Send`] trait as they are being sent from `init` to the target task and thus crossing the *thread* boundary. +Types of `#[local]` resources must implement the [`Send`] trait as they are being sent from `init` to a target task, crossing a thread boundary. [`Send`]: https://doc.rust-lang.org/stable/core/marker/trait.Send.html @@ -36,9 +36,11 @@ $ cargo run --target thumbv7m-none-eabi --example locals {{#include ../../../../rtic/ci/expected/locals.run}} ``` +Local resources in `#[init]` and `#[idle]` have `'static` lifetimes. This is safe since both tasks are not re-entrant. + ### Task local initialized resources -A special use-case of local resources are the ones specified directly in the task declaration, `#[task(local = [my_var: TYPE = INITIAL_VALUE, ...])]`. This allows for creating locals which do no need to be initialized in `#[init]`. Moreover, local resources in `#[init]` and `#[idle]` have `'static` lifetimes, this is safe since both are not re-entrant. +Local resources can also be specified directly in the resource claim like so: `#[task(local = [my_var: TYPE = INITIAL_VALUE, ...])]`; this allows for creating locals which do not need to be initialized in `#[init]`. Types of `#[task(local = [..])]` resources have to be neither [`Send`] nor [`Sync`] as they are not crossing any thread boundary. @@ -69,7 +71,7 @@ The critical section created by the `lock` API is based on dynamic priorities: i [icpp]: https://en.wikipedia.org/wiki/Priority_ceiling_protocol [srp]: https://en.wikipedia.org/wiki/Stack_Resource_Policy -In the example below we have three interrupt handlers with priorities ranging from one to three. The two handlers with the lower priorities contend for the `shared` resource and need to lock the resource for accessing the data. The highest priority handler, which do not access the `shared` resource, is free to preempt the critical section created by the lowest priority handler. +In the example below we have three interrupt handlers with priorities ranging from one to three. The two handlers with the lower priorities contend for a `shared` resource and need to succeed in locking the resource in order to access its data. The highest priority handler, which does not access the `shared` resource, is free to preempt a critical section created by the lowest priority handler.
``` rust {{#include ../../../../rtic/examples/lock.rs}} diff --git a/book/en/src/by-example/software_tasks.md b/book/en/src/by-example/software_tasks.md index 2752707..828c3fd 100644 --- a/book/en/src/by-example/software_tasks.md +++ b/book/en/src/by-example/software_tasks.md @@ -1,7 +1,7 @@ # Software tasks & spawn -The RTIC concept of a *software* task shares a lot with that of [hardware tasks](./hardware_tasks.md) with the core difference that a software task is not explicitly bound to a specific -interrupt vector, but rather to a “dispatcher” interrupt vector running at the same priority as the software task. +The RTIC concept of a software task shares a lot with that of [hardware tasks](./hardware_tasks.md) with the core difference that a software task is not explicitly bound to a specific +interrupt vector, but rather bound to a “dispatcher” interrupt vector running at the intended priority of the software task (see below). Similarly to *hardware* tasks, the `#[task]` attribute used on a function declare it as a task. The absence of a `binds = InterruptName` argument to the attribute declares the function as a *software task*. @@ -94,6 +94,3 @@ $ cargo run --target thumbv7m-none-eabi --example zero-prio-task --- Application side safety: Technically, the RTIC framework ensures that `poll` is never executed on any *software* task with *completed* future, thus adhering to the soundness rules of async Rust. - - - -- cgit v1.2.3 From 3886f4e964c67d2aa0ce2ae3be60293cbd5dfd79 Mon Sep 17 00:00:00 2001 From: Per Lindgren Date: Wed, 1 Feb 2023 11:49:11 +0100 Subject: Monotonic book update --- book/en/src/SUMMARY.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) (limited to 'book/en/src') diff --git a/book/en/src/SUMMARY.md b/book/en/src/SUMMARY.md index 407be6d..28c9862 100644 --- a/book/en/src/SUMMARY.md +++ b/book/en/src/SUMMARY.md @@ -10,7 +10,7 @@ - [The init task](./by-example/app_init.md) - [The idle task](./by-example/app_idle.md) - [Channel based communication](./by-example/channel.md) - - [Tasks with delay](./by-example/delay.md) + - [Delay and Timeout](./by-example/delay.md) - [Starting a new project](./by-example/starting_a_project.md) - [The minimal app](./by-example/app_minimal.md) - [Tips & Tricks](./by-example/tips.md) -- cgit v1.2.3 From 6dc46ce1c6eb54e453c7dfb46eda44a596648329 Mon Sep 17 00:00:00 2001 From: Per Lindgren Date: Wed, 1 Feb 2023 11:50:08 +0100 Subject: Monotonic book --- book/en/src/by-example/tips_monotonic_impl.md | 28 +++++++++++---------------- 1 file changed, 11 insertions(+), 17 deletions(-) (limited to 'book/en/src') diff --git a/book/en/src/by-example/tips_monotonic_impl.md b/book/en/src/by-example/tips_monotonic_impl.md index 7c3449b..57b0a01 100644 --- a/book/en/src/by-example/tips_monotonic_impl.md +++ b/book/en/src/by-example/tips_monotonic_impl.md @@ -1,35 +1,29 @@ # Implementing a `Monotonic` timer for scheduling -The framework is flexible because it can use any timer which has compare-match and optionally -supporting overflow interrupts for scheduling. -The single requirement to make a timer usable with RTIC is implementing the -[`rtic_monotonic::Monotonic`] trait. - -Implementing time counting that supports large time spans is generally **difficult**, in RTIC 0.5 -implementing time handling was a common problem. -Moreover, the relation between time and timers used for scheduling was difficult to understand. - -For RTIC 1.0 we instead assume the user has a time library, e.g. 
[`fugit`] or [`embedded_time`], -as the basis for all time-based operations when implementing `Monotonic`. -These libraries make it much easier to correctly implement the `Monotonic` trait, allowing the use of +The framework is flexible because it can use any timer which has compare-match and optionally supporting overflow interrupts for scheduling. The single requirement to make a timer usable with RTIC is implementing the [`rtic-time::Monotonic`] trait. + +For RTIC 1.0 and 2.0 we instead assume the user has a time library, e.g. [`fugit`] or [`embedded_time`], as the basis for all time-based operations when implementing `Monotonic`. These libraries make it much easier to correctly implement the `Monotonic` trait, allowing the use of almost any timer in the system for scheduling. -The trait documents the requirements for each method, -and for inspiration here is a list of `Monotonic` implementations: +The trait documents the requirements for each method, and for inspiration +there is a reference implementation based on the `SysTick` timer available on all ARM Cortex M MCUs. + +- [`Systick based`], runs at a fixed interrupt (tick) rate - with some overhead but simple and provides support for large time spans + +Here is a list of `Monotonic` implementations for RTIC 1.0: - [`STM32F411 series`], implemented for the 32-bit timers - [`Nordic nRF52 series Timer`], implemented for the 32-bit timers - [`Nordic nRF52 series RTC`], implemented for the RTCs -- [`Systick based`], runs at a fixed interrupt (tick) rate - with some overhead but simple and with support for large time spans - [`DWT and Systick based`], a more efficient (tickless) implementation - requires both `SysTick` and `DWT`, supports both high resolution and large time spans If you know of more implementations feel free to add them to this list. -[`rtic_monotonic::Monotonic`]: https://docs.rs/rtic-monotonic/ +[`rtic_time::Monotonic`]: https://docs.rs/rtic_time/ [`fugit`]: https://docs.rs/fugit/ [`embedded_time`]: https://docs.rs/embedded_time/ [`STM32F411 series`]: https://github.com/kalkyl/f411-rtic/blob/a696fce7d6d19fda2356c37642c4d53547982cca/src/mono.rs [`Nordic nRF52 series Timer`]: https://github.com/kalkyl/nrf-play/blob/47f4410d4e39374c18ff58dc17c25159085fb526/src/mono.rs [`Nordic nRF52 series RTC`]: https://gist.github.com/korken89/fe94a475726414dd1bce031c76adc3dd -[`Systick based`]: https://github.com/rtic-rs/systick-monotonic +[`Systick based`]: https://github.com/rtic-monotonics [`DWT and Systick based`]: https://github.com/rtic-rs/dwt-systick-monotonic -- cgit v1.2.3 From 14fdca130f8c3ab598b30cfb7e70f8712ea42fb8 Mon Sep 17 00:00:00 2001 From: Emil Fresk Date: Wed, 1 Feb 2023 19:34:25 +0100 Subject: Minor book fix --- book/en/src/preface.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) (limited to 'book/en/src') diff --git a/book/en/src/preface.md b/book/en/src/preface.md index 3f47cb3..c6638ab 100644 --- a/book/en/src/preface.md +++ b/book/en/src/preface.md @@ -1,7 +1,7 @@
RTIC
-

The Embedded Rust RTOS

+

The hardware accelerated Rust RTOS

A concurrency framework for building real-time systems

-- cgit v1.2.3 From 89632f9b22d33bef08b2f98554e263c8a1d7cfa0 Mon Sep 17 00:00:00 2001 From: Per Lindgren Date: Wed, 1 Feb 2023 19:46:58 +0100 Subject: book polish --- book/en/src/by-example/app_idle.md | 6 ++++++ book/en/src/by-example/app_init.md | 3 +++ book/en/src/by-example/app_minimal.md | 3 +++ book/en/src/by-example/channel.md | 24 +++++++++++++++++++----- book/en/src/by-example/delay.md | 12 ++++++++++-- book/en/src/by-example/hardware_tasks.md | 3 +++ book/en/src/by-example/resources.md | 15 +++++++++++++++ book/en/src/by-example/software_tasks.md | 15 +++++++++++++++ book/en/src/by-example/tips_destructureing.md | 9 +++++---- book/en/src/by-example/tips_indirection.md | 23 +++++++++-------------- 10 files changed, 88 insertions(+), 25 deletions(-) (limited to 'book/en/src') diff --git a/book/en/src/by-example/app_idle.md b/book/en/src/by-example/app_idle.md index 4856ee1..cbfd7ba 100644 --- a/book/en/src/by-example/app_idle.md +++ b/book/en/src/by-example/app_idle.md @@ -17,6 +17,9 @@ The example below shows that `idle` runs after `init`. ``` console $ cargo run --target thumbv7m-none-eabi --example idle +``` + +``` console {{#include ../../../../rtic/ci/expected/idle.run}} ``` @@ -41,6 +44,9 @@ The following example shows how to enable sleep by setting the ``` console $ cargo run --target thumbv7m-none-eabi --example idle-wfi +``` + +``` console {{#include ../../../../rtic/ci/expected/idle-wfi.run}} ``` diff --git a/book/en/src/by-example/app_init.md b/book/en/src/by-example/app_init.md index 3767bd7..fb37387 100644 --- a/book/en/src/by-example/app_init.md +++ b/book/en/src/by-example/app_init.md @@ -24,5 +24,8 @@ Running the example will print `init` to the console and then exit the QEMU proc ``` console $ cargo run --target thumbv7m-none-eabi --example init +``` + +``` console {{#include ../../../../rtic/ci/expected/init.run}} ``` diff --git a/book/en/src/by-example/app_minimal.md b/book/en/src/by-example/app_minimal.md index f241089..714f543 100644 --- a/book/en/src/by-example/app_minimal.md +++ b/book/en/src/by-example/app_minimal.md @@ -11,6 +11,9 @@ RTIC is designed with resource efficiency in mind. RTIC itself does not rely on For a minimal example you can expect something like: ``` console $ cargo size --example smallest --target thumbv7m-none-eabi --release +``` + +``` console Finished release [optimized] target(s) in 0.07s text data bss dec hex filename 924 0 0 924 39c smallest diff --git a/book/en/src/by-example/channel.md b/book/en/src/by-example/channel.md index 99bfedd..1f9510a 100644 --- a/book/en/src/by-example/channel.md +++ b/book/en/src/by-example/channel.md @@ -2,7 +2,7 @@ Channels can be used to communicate data between running *software* tasks. The channel is essentially a wait queue, allowing tasks with multiple producers and a single receiver. A channel is constructed in the `init` task and backed by statically allocated memory. Send and receive endpoints are distributed to *software* tasks: -```rust +``` rust ... 
const CAPACITY: usize = 5; #[init] @@ -20,7 +20,7 @@ In this case the channel holds data of `u32` type with a capacity of 5 elements The `send` method post a message on the channel as shown below: -```rust +``` rust #[task] async fn sender1(_c: sender1::Context, mut sender: Sender<'static, u32, CAPACITY>) { hprintln!("Sender 1 sending: 1"); @@ -32,7 +32,7 @@ async fn sender1(_c: sender1::Context, mut sender: Sender<'static, u32, CAPACITY The receiver can `await` incoming messages: -```rust +``` rust #[task] async fn receiver(_c: receiver::Context, mut receiver: Receiver<'static, u32, CAPACITY>) { while let Ok(val) = receiver.recv().await { @@ -42,6 +42,8 @@ async fn receiver(_c: receiver::Context, mut receiver: Receiver<'static, u32, CA } ``` +Channels are implemented using a small (global) *Critical Section* (CS) for protection against race-conditions. The user must provide an CS implementation. Compiling the examples given the `--features test-critical-section` gives one possible implementation. + For a complete example: ``` rust @@ -50,6 +52,9 @@ For a complete example: ``` console $ cargo run --target thumbv7m-none-eabi --example async-channel --features test-critical-section +``` + +``` console {{#include ../../../../rtic/ci/expected/async-channel.run}} ``` @@ -79,7 +84,10 @@ In case all senders have been dropped `await` on an empty receiver channel resul ``` ``` console -$ cargo run --target thumbv7m-none-eabi --example async-channel-no-sender --features test-critical-section +$ cargo run --target thumbv7m-none-eabi --example async-channel-no-sender --features test-critical-section +``` + +``` console {{#include ../../../../rtic/ci/expected/async-channel-no-sender.run}} ``` @@ -93,6 +101,9 @@ The resulting error returns the data back to the sender, allowing the sender to ``` console $ cargo run --target thumbv7m-none-eabi --example async-channel-no-receiver --features test-critical-section +``` + +``` console {{#include ../../../../rtic/ci/expected/async-channel-no-receiver.run}} ``` @@ -107,6 +118,9 @@ In cases you wish the sender to proceed even in case the channel is full. To tha ``` ``` console -$ cargo run --target thumbv7m-none-eabi --example async-channel-try --features test-critical-section +$ cargo run --target thumbv7m-none-eabi --example async-channel-try --features test-critical-section +``` + +``` console {{#include ../../../../rtic/ci/expected/async-channel-try.run}} ``` \ No newline at end of file diff --git a/book/en/src/by-example/delay.md b/book/en/src/by-example/delay.md index d35d9da..8d05d7c 100644 --- a/book/en/src/by-example/delay.md +++ b/book/en/src/by-example/delay.md @@ -4,7 +4,7 @@ A convenient way to express *miniminal* timing requirements is by means of delay This can be achieved by instantiating a monotonic timer: -```rust +``` rust ... rtic_monotonics::make_systick_timer_queue!(TIMER); @@ -17,7 +17,7 @@ fn init(cx: init::Context) -> (Shared, Local) { A *software* task can `await` the delay to expire: -```rust +``` rust #[task] async fn foo(_cx: foo::Context) { ... @@ -27,6 +27,8 @@ async fn foo(_cx: foo::Context) { Technically, the timer queue is implemented as a list based priority queue, where list-nodes are statically allocated as part of the underlying task `Future`. Thus, the timer queue is infallible at run-time (its size and allocation is determined at compile time). +Similarly the channels implementation, the timer-queue implementation relies on a global *Critical Section* (CS) for race protection. 
For the examples a CS implementation is provided by adding `--features test-critical-section` to the build options. + For a complete example: ``` rust @@ -35,6 +37,9 @@ For a complete example: ``` console $ cargo run --target thumbv7m-none-eabi --example async-delay --features test-critical-section +``` + +``` console {{#include ../../../../rtic/ci/expected/async-delay.run}} ``` @@ -112,5 +117,8 @@ The complete example: ``` console $ cargo run --target thumbv7m-none-eabi --example async-timeout --features test-critical-section +``` + +``` console {{#include ../../../../rtic/ci/expected/async-timeout.run}} ``` diff --git a/book/en/src/by-example/hardware_tasks.md b/book/en/src/by-example/hardware_tasks.md index cb20a7c..c902267 100644 --- a/book/en/src/by-example/hardware_tasks.md +++ b/book/en/src/by-example/hardware_tasks.md @@ -25,5 +25,8 @@ The example below demonstrates the use of the `#[task(binds = InterruptName)]` a ``` console $ cargo run --target thumbv7m-none-eabi --example hardware +``` + +``` console {{#include ../../../../rtic/ci/expected/hardware.run}} ``` diff --git a/book/en/src/by-example/resources.md b/book/en/src/by-example/resources.md index 2dd7cb7..0bf5d11 100644 --- a/book/en/src/by-example/resources.md +++ b/book/en/src/by-example/resources.md @@ -33,6 +33,9 @@ Running the example: ``` console $ cargo run --target thumbv7m-none-eabi --example locals +``` + +``` console {{#include ../../../../rtic/ci/expected/locals.run}} ``` @@ -79,6 +82,9 @@ In the example below we have three interrupt handlers with priorities ranging fr ``` console $ cargo run --target thumbv7m-none-eabi --example lock +``` + +``` console {{#include ../../../../rtic/ci/expected/lock.run}} ``` @@ -94,6 +100,9 @@ As an extension to `lock`, and to reduce rightward drift, locks can be taken as ``` console $ cargo run --target thumbv7m-none-eabi --example multilock +``` + +``` console {{#include ../../../../rtic/ci/expected/multilock.run}} ``` @@ -113,6 +122,9 @@ In the example below a key (e.g. a cryptographic key) is loaded (or created) at ``` console $ cargo run --target thumbv7m-none-eabi --example only-shared-access +``` + +``` console {{#include ../../../../rtic/ci/expected/only-shared-access.run}} ``` @@ -136,5 +148,8 @@ Using `#[lock_free]` on resources shared by tasks running at different prioritie ``` console $ cargo run --target thumbv7m-none-eabi --example lock-free +``` + +``` console {{#include ../../../../rtic/ci/expected/lock-free.run}} ``` diff --git a/book/en/src/by-example/software_tasks.md b/book/en/src/by-example/software_tasks.md index 828c3fd..0efc57b 100644 --- a/book/en/src/by-example/software_tasks.md +++ b/book/en/src/by-example/software_tasks.md @@ -29,6 +29,9 @@ See the following example: ``` console $ cargo run --target thumbv7m-none-eabi --example spawn +``` + +``` console {{#include ../../../../rtic/ci/expected/spawn.run}} ``` You may `spawn` a *software* task again, given that it has run-to-completion (returned). 
@@ -43,6 +46,9 @@ Technically the async executor will `poll` the `foo` *future* which in this case ``` console $ cargo run --target thumbv7m-none-eabi --example spawn_loop +``` + +``` console {{#include ../../../../rtic/ci/expected/spawn_loop.run}} ``` @@ -56,6 +62,9 @@ Technically, a `spawn` to a *future* that is not in *completed* state is conside ``` console $ cargo run --target thumbv7m-none-eabi --example spawn_err +``` + +``` console {{#include ../../../../rtic/ci/expected/spawn_err.run}} ``` @@ -68,6 +77,9 @@ You can also pass arguments at spawn as follows. ``` console $ cargo run --target thumbv7m-none-eabi --example spawn_arguments +``` + +``` console {{#include ../../../../rtic/ci/expected/spawn_arguments.run}} ``` @@ -86,6 +98,9 @@ Conceptually, one can see such tasks as running in the `main` thread of the appl ``` console $ cargo run --target thumbv7m-none-eabi --example zero-prio-task +``` + +``` console {{#include ../../../../rtic/ci/expected/zero-prio-task.run}} ``` diff --git a/book/en/src/by-example/tips_destructureing.md b/book/en/src/by-example/tips_destructureing.md index 4637b48..ab27987 100644 --- a/book/en/src/by-example/tips_destructureing.md +++ b/book/en/src/by-example/tips_destructureing.md @@ -1,14 +1,15 @@ # Resource de-structure-ing Destructuring task resources might help readability if a task takes multiple -resources. -Here are two examples on how to split up the resource struct: +resources. Here are two examples on how to split up the resource struct: ``` rust -{{#include ../../../../examples/destructure.rs}} +{{#include ../../../../rtic/examples/destructure.rs}} ``` ``` console $ cargo run --target thumbv7m-none-eabi --example destructure -{{#include ../../../../ci/expected/destructure.run}} +``` +``` console +{{#include ../../../../rtic/ci/expected/destructure.run}} ``` diff --git a/book/en/src/by-example/tips_indirection.md b/book/en/src/by-example/tips_indirection.md index 567a5e7..0de14a6 100644 --- a/book/en/src/by-example/tips_indirection.md +++ b/book/en/src/by-example/tips_indirection.md @@ -1,31 +1,26 @@ # Using indirection for faster message passing -Message passing always involves copying the payload from the sender into a -static variable and then from the static variable into the receiver. Thus -sending a large buffer, like a `[u8; 128]`, as a message involves two expensive +Message passing always involves copying the payload from the sender into a static variable and then from the static variable into the receiver. Thus sending a large buffer, like a `[u8; 128]`, as a message involves two expensive `memcpy`s. -Indirection can minimize message passing overhead: -instead of sending the buffer by value, one can send an owning pointer into the -buffer. +Indirection can minimize message passing overhead: instead of sending the buffer by value, one can send an owning pointer into the buffer. -One can use a global memory allocator to achieve indirection (`alloc::Box`, -`alloc::Rc`, etc.), which requires using the nightly channel as of Rust v1.37.0, -or one can use a statically allocated memory pool like [`heapless::Pool`]. +One can use a global memory allocator to achieve indirection (`alloc::Box`, `alloc::Rc`, etc.), which requires using the nightly channel as of Rust v1.37.0, or one can use a statically allocated memory pool like [`heapless::Pool`]. 
[`heapless::Pool`]: https://docs.rs/heapless/0.5.0/heapless/pool/index.html -As this example of approach goes completely outside of RTIC resource -model with shared and local the program would rely on the correctness -of the memory allocator, in this case `heapless::pool`. +As this example of approach goes completely outside of RTIC resource model with shared and local the program would rely on the correctness of the memory allocator, in this case `heapless::pool`. Here's an example where `heapless::Pool` is used to "box" buffers of 128 bytes. ``` rust -{{#include ../../../../examples/pool.rs}} +{{#include ../../../../rtic/examples/pool.rs}} ``` ``` console $ cargo run --target thumbv7m-none-eabi --example pool -{{#include ../../../../ci/expected/pool.run}} +``` + +``` console +{{#include ../../../../rtic/ci/expected/pool.run}} ``` -- cgit v1.2.3 From aa6baafa568b08a77a31c17c078a6166d16a2ee9 Mon Sep 17 00:00:00 2001 From: Per Lindgren Date: Wed, 1 Feb 2023 21:21:31 +0100 Subject: book remove ramfunc, remove migration --- book/en/src/SUMMARY.md | 6 ++--- book/en/src/by-example/tips_from_ram.md | 33 +++++++++++++------------ book/en/src/by-example/tips_static_lifetimes.md | 15 ++++++----- book/en/src/by-example/tips_view_code.md | 25 +++++++++---------- 4 files changed, 39 insertions(+), 40 deletions(-) (limited to 'book/en/src') diff --git a/book/en/src/SUMMARY.md b/book/en/src/SUMMARY.md index 28c9862..65c72c4 100644 --- a/book/en/src/SUMMARY.md +++ b/book/en/src/SUMMARY.md @@ -19,15 +19,15 @@ - [Avoid copies when message passing](./by-example/tips_indirection.md) - [`'static` super-powers](./by-example/tips_static_lifetimes.md) - [Inspecting generated code](./by-example/tips_view_code.md) - - [Running tasks from RAM](./by-example/tips_from_ram.md) + - [RTIC vs. the world](./rtic_vs.md) - [Awesome RTIC examples](./awesome_rtic.md) -- [Migration Guides](./migration.md) + diff --git a/book/en/src/by-example/tips_from_ram.md b/book/en/src/by-example/tips_from_ram.md index fc47803..f6b2173 100644 --- a/book/en/src/by-example/tips_from_ram.md +++ b/book/en/src/by-example/tips_from_ram.md @@ -1,33 +1,28 @@ # Running tasks from RAM -The main goal of moving the specification of RTIC applications to attributes in -RTIC v0.4.0 was to allow inter-operation with other attributes. For example, the -`link_section` attribute can be applied to tasks to place them in RAM; this can +The main goal of moving the specification of RTIC applications to attributes in RTIC v0.4.0 was to allow inter-operation with other attributes. For example, the `link_section` attribute can be applied to tasks to place them in RAM; this can improve performance in some cases. -> **IMPORTANT**: In general, the `link_section`, `export_name` and `no_mangle` -> attributes are powerful but also easy to misuse. Incorrectly using any of -> these attributes can cause undefined behavior; you should always prefer to use -> safe, higher level attributes around them like `cortex-m-rt`'s `interrupt` and -> `exception` attributes. +> **IMPORTANT**: In general, the `link_section`, `export_name` and `no_mangle` attributes are powerful but also easy to misuse. Incorrectly using any of these attributes can cause undefined behavior; you should always prefer to use safe, higher level attributes around them like `cortex-m-rt`'s `interrupt` and `exception` attributes. > -> In the particular case of RAM functions there's no -> safe abstraction for it in `cortex-m-rt` v0.6.5 but there's an [RFC] for -> adding a `ramfunc` attribute in a future release. 
+> In the particular case of RAM functions there's no safe abstraction for it in `cortex-m-rt` v0.6.5 but there's an [RFC] for adding a `ramfunc` attribute in a future release. [RFC]: https://github.com/rust-embedded/cortex-m-rt/pull/100 The example below shows how to place the higher priority task, `bar`, in RAM. ``` rust -{{#include ../../../../examples/ramfunc.rs}} +{{#include ../../../../rtic/examples/ramfunc.rs}} ``` Running this program produces the expected output. ``` console $ cargo run --target thumbv7m-none-eabi --example ramfunc -{{#include ../../../../ci/expected/ramfunc.run}} +``` + +``` console +{{#include ../../../../rtic/ci/expected/ramfunc.run}} ``` One can look at the output of `cargo-nm` to confirm that `bar` ended in RAM @@ -35,10 +30,16 @@ One can look at the output of `cargo-nm` to confirm that `bar` ended in RAM ``` console $ cargo nm --example ramfunc --release | grep ' foo::' -{{#include ../../../../ci/expected/ramfunc.run.grep.foo}} ``` ``` console -$ cargo nm --example ramfunc --release | grep ' bar::' -{{#include ../../../../ci/expected/ramfunc.run.grep.bar}} +{{#include ../../../../rtic/ci/expected/ramfunc.run.grep.foo}} +``` + +``` console +$ cargo nm --example ramfunc --target thumbv7m-none-eabi --release | grep '*bar::' +``` + +``` console +{{#include ../../../../rtic/ci/expected/ramfunc.run.grep.bar}} ``` diff --git a/book/en/src/by-example/tips_static_lifetimes.md b/book/en/src/by-example/tips_static_lifetimes.md index dadd9c9..0eaa59f 100644 --- a/book/en/src/by-example/tips_static_lifetimes.md +++ b/book/en/src/by-example/tips_static_lifetimes.md @@ -2,23 +2,22 @@ In `#[init]` and `#[idle]` `local` resources have `'static` lifetime. -Useful when pre-allocating and/or splitting resources between tasks, drivers -or some other object. -This comes in handy when drivers, such as USB drivers, need to allocate memory and -when using splittable data structures such as [`heapless::spsc::Queue`]. +Useful when pre-allocating and/or splitting resources between tasks, drivers or some other object. This comes in handy when drivers, such as USB drivers, need to allocate memory and when using splittable data structures such as [`heapless::spsc::Queue`]. -In the following example two different tasks share a [`heapless::spsc::Queue`] -for lock-free access to the shared queue. +In the following example two different tasks share a [`heapless::spsc::Queue`] for lock-free access to the shared queue. [`heapless::spsc::Queue`]: https://docs.rs/heapless/0.7.5/heapless/spsc/struct.Queue.html ``` rust -{{#include ../../../../examples/static.rs}} +{{#include ../../../../rtic/examples/static.rs}} ``` Running this program produces the expected output. ``` console $ cargo run --target thumbv7m-none-eabi --example static -{{#include ../../../../ci/expected/static.run}} +``` + +``` console +{{#include ../../../../rtic/ci/expected/static.run}} ``` diff --git a/book/en/src/by-example/tips_view_code.md b/book/en/src/by-example/tips_view_code.md index 736b7ac..b4a9066 100644 --- a/book/en/src/by-example/tips_view_code.md +++ b/book/en/src/by-example/tips_view_code.md @@ -1,21 +1,19 @@ # Inspecting generated code -`#[rtic::app]` is a procedural macro that produces support code. If for some -reason you need to inspect the code generated by this macro you have two -options: +`#[rtic::app]` is a procedural macro that produces support code. 
If for some reason you need to inspect the code generated by this macro you have two options: -You can inspect the file `rtic-expansion.rs` inside the `target` directory. This -file contains the expansion of the `#[rtic::app]` item (not your whole program!) -of the *last built* (via `cargo build` or `cargo check`) RTIC application. The -expanded code is not pretty printed by default, so you'll want to run `rustfmt` -on it before you read it. +You can inspect the file `rtic-expansion.rs` inside the `target` directory. This file contains the expansion of the `#[rtic::app]` item (not your whole program!) of the *last built* (via `cargo build` or `cargo check`) RTIC application. The expanded code is not pretty printed by default, so you'll want to run `rustfmt` on it before you read it. ``` console -$ cargo build --example foo +$ cargo build --example smallest --target thumbv7m-none-eabi +``` +``` console $ rustfmt target/rtic-expansion.rs +``` -tail target/rtic-expansion.rs +``` console +$ tail target/rtic-expansion.rs ``` ``` rust @@ -36,13 +34,14 @@ mod app { } ``` -Or, you can use the [`cargo-expand`] sub-command. This sub-command will expand -*all* the macros, including the `#[rtic::app]` attribute, and modules in your -crate and print the output to the console. +Or, you can use the [`cargo-expand`] sub-command. This sub-command will expand *all* the macros, including the `#[rtic::app]` attribute, and modules in your crate and print the output to the console. [`cargo-expand`]: https://crates.io/crates/cargo-expand ``` console # produces the same output as before +``` + +``` console cargo expand --example smallest | tail ``` -- cgit v1.2.3 From fc6343b65c79b287ba1884514698e59f87a3d47d Mon Sep 17 00:00:00 2001 From: perlindgren Date: Wed, 1 Feb 2023 22:37:42 +0100 Subject: Apply suggestions from code review MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Thanks for all suggestions, awesome! Co-authored-by: Henrik Tjäder --- book/en/src/by-example/channel.md | 8 ++++---- book/en/src/preface.md | 12 ++++++------ book/en/src/rtic_vs.md | 4 +++- 3 files changed, 13 insertions(+), 11 deletions(-) (limited to 'book/en/src') diff --git a/book/en/src/by-example/channel.md b/book/en/src/by-example/channel.md index 1f9510a..c020870 100644 --- a/book/en/src/by-example/channel.md +++ b/book/en/src/by-example/channel.md @@ -58,9 +58,9 @@ $ cargo run --target thumbv7m-none-eabi --example async-channel --features test- {{#include ../../../../rtic/ci/expected/async-channel.run}} ``` -Also sender endpoint can be awaited. In case there the channel capacity has not been reached, `await` the sender can progress immediately, while in the case the capacity is reached, the sender is blocked until there is free space in the queue. In this way data is never lost. +Also sender endpoint can be awaited. In case the channel capacity has not yet been reached, `await`-ing the sender can progress immediately, while in the case the capacity is reached, the sender is blocked until there is free space in the queue. In this way data is never lost. -In the below example the `CAPACITY` has been reduced to 1, forcing sender tasks to wait until the data in the channel has been received. +In the following example the `CAPACITY` has been reduced to 1, forcing sender tasks to wait until the data in the channel has been received. 
``` rust {{#include ../../../../rtic/examples/async-channel-done.rs}} @@ -77,7 +77,7 @@ $ cargo run --target thumbv7m-none-eabi --example async-channel-done --features ## Error handling -In case all senders have been dropped `await` on an empty receiver channel results in an error. This allows to gracefully implement different types of shutdown operations. +In case all senders have been dropped `await`-ing on an empty receiver channel results in an error. This allows to gracefully implement different types of shutdown operations. ``` rust {{#include ../../../../rtic/examples/async-channel-no-sender.rs}} @@ -91,7 +91,7 @@ $ cargo run --target thumbv7m-none-eabi --example async-channel-no-sender --feat {{#include ../../../../rtic/ci/expected/async-channel-no-sender.run}} ``` -Similarly, `await` on a send channel results in an error in case the receiver has been dropped. This allows to gracefully implement application level error handling. +Similarly, `await`-ing on a send channel results in an error in case the receiver has been dropped. This allows to gracefully implement application level error handling. The resulting error returns the data back to the sender, allowing the sender to take appropriate action (e.g., storing the data to later retry sending it). diff --git a/book/en/src/preface.md b/book/en/src/preface.md index c6638ab..6b859a2 100644 --- a/book/en/src/preface.md +++ b/book/en/src/preface.md @@ -16,7 +16,7 @@ This book contains user level documentation for the Real-Time Interrupt-driven C -This is the documentation of v2.0.x (pre-release) of RTIC 2. +This is the documentation for RTIC v2.x. ## RTIC - The Past, current and Future @@ -27,11 +27,11 @@ The RTIC framework takes the outset from real-time systems research at Luleå Un [Timber]: https://timber-lang.org/ [RTFM-SRP]: https://www.diva-portal.org/smash/get/diva2:1005680/FULLTEXT01.pdf [RTFM-core]: https://ltu.diva-portal.org/smash/get/diva2:1013248/FULLTEXT01.pdf -[AbstractTimer]: https://ltu.diva-portal.org/smash/get/diva2:1013030/FULLTEXT01.pdf +[Abstract Timer]: https://ltu.diva-portal.org/smash/get/diva2:1013030/FULLTEXT01.pdf ## Stack Resource Policy based Scheduling -Stack Resource Policy (SRP) based concurrency and resource management is at heart of the RTIC framework. The [SRP] model itself extends on [Priority Inheritance Protocols], and provides a set of outstanding properties for single core scheduling. To name a few: +[Stack Resource Policy (SRP)][SRP] based concurrency and resource management is at heart of the RTIC framework. The SRP model itself extends on [Priority Inheritance Protocols], and provides a set of outstanding properties for single core scheduling. To name a few: - preemptive deadlock and race-free scheduling - resource efficiency @@ -68,7 +68,7 @@ graph LR ## RTIC the hardware accelerated real-time scheduler -SRP itself is compatible both to dynamic and static priority scheduling. For the implementation of RTIC we leverage on the underlying hardware for accelerated static priority scheduling. +SRP itself is compatible with both dynamic and static priority scheduling. For the implementation of RTIC we leverage on the underlying hardware for accelerated static priority scheduling. In the case of the `ARM Cortex-M` architecture, each interrupt vector entry `v[i]` is associated a function pointer (`v[i].fn`), and a static priority (`v[i].priority`), an enabled- (`v[i].enabled`) and a pending-bit (`v[i].pending`). 
@@ -84,7 +84,7 @@ The SPR model for single-core static scheduling on the other hand states that a The similarities are striking and it is not by chance/luck/coincidence. The hardware was cleverly designed with real-time scheduling in mind. -In order to map the SRP scheduling onto the hardware we need to have a closer look on the system ceiling (𝜫). Under SRP 𝜫 is computed as the maximum priority ceiling of the currently held resources, and will thus change dynamically during the system operation. +In order to map the SRP scheduling onto the hardware we need to take a closer look at the system ceiling (𝜫). Under SRP 𝜫 is computed as the maximum priority ceiling of the currently held resources, and will thus change dynamically during the system operation. ## Example @@ -99,7 +99,7 @@ The mapping of static priority SRP based scheduling to the Cortex M hardware is ## Example -For the running example, a snapshot of the ARM Cortex M [NVIC] may have the following configuration (after task `A` has been pended for execution.) +For the running example, a snapshot of the ARM Cortex M [Nested Vectored Interrupt Controller (NVIC)][NVIC] may have the following configuration (after task `A` has been pended for execution.) | Index | Fn | Priority | Enabled | Pended | | ----- | --- | -------- | ------- | ------ | diff --git a/book/en/src/rtic_vs.md b/book/en/src/rtic_vs.md index 2f8c8d5..454b239 100644 --- a/book/en/src/rtic_vs.md +++ b/book/en/src/rtic_vs.md @@ -10,7 +10,9 @@ It provides a minimal set of required mechanisms for safe sharing of mutable res Comparing RTIC to traditional a Real-Time Operating System (RTOS) is hard. Firstly, a traditional RTOS typically comes with no guarantees regarding system safety, even the most hardened kernels like the formally verified [seL4] kernel. Their claims to integrity, confidentiality, and availability regards only the kernel itself (under additional assumptions its configuration and environment). They even state: -"An OS kernel, verified or not, does not automatically make a system secure. In fact, any system, no matter how secure, can be used in insecure ways." +"An OS kernel, verified or not, does not automatically make a system secure. In fact, any system, no matter how secure, can be used in insecure ways." - [seL4 FAQ][sel4faq] + +[sel4faq]: https://docs.sel4.systems/projects/sel4/frequently-asked-questions.html [seL4]: https://sel4.systems/ -- cgit v1.2.3 From 0f513e1e20304eaf876f46a6ea2b66c76b9c38aa Mon Sep 17 00:00:00 2001 From: Per Lindgren Date: Thu, 2 Feb 2023 22:05:36 +0100 Subject: book/example polish --- book/en/src/by-example/delay.md | 49 ++++++++++++++++++++++------------------- 1 file changed, 26 insertions(+), 23 deletions(-) (limited to 'book/en/src') diff --git a/book/en/src/by-example/delay.md b/book/en/src/by-example/delay.md index 8d05d7c..f286363 100644 --- a/book/en/src/by-example/delay.md +++ b/book/en/src/by-example/delay.md @@ -6,13 +6,14 @@ This can be achieved by instantiating a monotonic timer: ``` rust ... -rtic_monotonics::make_systick_timer_queue!(TIMER); +rtic_monotonics::make_systick_handler!(); #[init] -fn init(cx: init::Context) -> (Shared, Local) { - let systick = Systick::start(cx.core.SYST, 12_000_000); - TIMER.initialize(systick); - ... +fn init(cx: init::Context) -> (Shared, Local) { + hprintln!("init"); + + Systick::start(cx.core.SYST, 12_000_000); + ... 
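+    // NOTE (editorial comment): `Systick::start` consumes the core `SYST`
+    // peripheral and takes the system clock rate in Hz; 12 MHz is assumed to
+    // match the QEMU/LM3S6965 examples - use your MCU's actual core clock so
+    // that delays and timeouts have the correct duration.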
``` A *software* task can `await` the delay to expire: @@ -21,8 +22,10 @@ A *software* task can `await` the delay to expire: #[task] async fn foo(_cx: foo::Context) { ... - TIMER.delay(100.millis()).await; + Systick::delay(100.millis()).await; ... +} + ``` Technically, the timer queue is implemented as a list based priority queue, where list-nodes are statically allocated as part of the underlying task `Future`. Thus, the timer queue is infallible at run-time (its size and allocation is determined at compile time). @@ -51,21 +54,21 @@ A common use case is transactions with associated timeout. In the examples shown Using the `select_biased` macro from the `futures` crate it may look like this: -```rust +``` rust // Call hal with short relative timeout using `select_biased` select_biased! { - v = hal_get(&TIMER, 1).fuse() => hprintln!("hal returned {}", v), - _ = TIMER.delay(200.millis()).fuse() => hprintln!("timeout", ), // this will finish first + v = hal_get(1).fuse() => hprintln!("hal returned {}", v), + _ = Systick::delay(200.millis()).fuse() => hprintln!("timeout", ), // this will finish first } ``` Assuming the `hal_get` will take 450ms to finish, a short timeout of 200ms will expire. -```rust +``` rust // Call hal with long relative timeout using `select_biased` select_biased! { - v = hal_get(&TIMER, 1).fuse() => hprintln!("hal returned {}", v), // hal finish first - _ = TIMER.delay(1000.millis()).fuse() => hprintln!("timeout", ), + v = hal_get(1).fuse() => hprintln!("hal returned {}", v), // hal finish first + _ = Systick::delay(1000.millis()).fuse() => hprintln!("timeout", ), } ``` @@ -73,9 +76,9 @@ By extending the timeout to 1000ms, the `hal_get` will finish first. Using `select_biased` any number of futures can be combined, so its very powerful. However, as the timeout pattern is frequently used, it is directly supported by the RTIC [rtc-monotonics] and [rtic-time] crates. The second example from above using `timeout_after`: -```rust +``` rust // Call hal with long relative timeout using monotonic `timeout_after` -match TIMER.timeout_after(1000.millis(), hal_get(&TIMER, 1)).await { +match Systick::timeout_after(1000.millis(), hal_get(1)).await { Ok(v) => hprintln!("hal returned {}", v), _ => hprintln!("timeout"), } @@ -85,28 +88,28 @@ In cases you want exact control over time without drift. For this purpose we can [fugit]: https://crates.io/crates/fugit -```rust +``` rust // get the current time instance -let mut instant = TIMER.now(); +let mut instant = Systick::now(); // do this 3 times for n in 0..3 { - // exact point in time without drift + // absolute point in time without drift instant += 1000.millis(); - TIMER.delay_until(instant).await; + Systick::delay_until(instant).await; - // exact point it time for timeout + // absolute point it time for timeout let timeout = instant + 500.millis(); - hprintln!("now is {:?}, timeout at {:?}", TIMER.now(), timeout); + hprintln!("now is {:?}, timeout at {:?}", Systick::now(), timeout); - match TIMER.timeout_at(timeout, hal_get(&TIMER, n)).await { - Ok(v) => hprintln!("hal returned {} at time {:?}", v, TIMER.now()), + match Systick::timeout_at(timeout, hal_get(n)).await { + Ok(v) => hprintln!("hal returned {} at time {:?}", v, Systick::now()), _ => hprintln!("timeout"), } } ``` -`instant = TIMER.now()` gives the baseline (i.e., the exact current point in time). We want to call `hal_get` after 1000ms relative to this exact point in time. This can be accomplished by `TIMER.delay_until(instant).await;`. 
We define the absolute point in time for the `timeout`, and call `TIMER.timeout_at(timeout, hal_get(&TIMER, n)).await`. For the first loop iteration `n == 0`, and the `hal_get` will take 350ms (and finishes before the timeout). For the second iteration `n == 1`, and `hal_get` will take 450ms (and again succeeds to finish before the timeout). For the third iteration `n == 2` (`hal_get` will take 5500ms to finish). In this case we will run into a timeout. +`instant = Systick::now()` gives the baseline (i.e., the absolute current point in time). We want to call `hal_get` after 1000ms relative to this absolute point in time. This can be accomplished by `Systick::delay_until(instant).await;`. We define the absolute point in time for the `timeout`, and call `Systick::timeout_at(timeout, hal_get(n)).await`. For the first loop iteration `n == 0`, and the `hal_get` will take 350ms (and finishes before the timeout). For the second iteration `n == 1`, and `hal_get` will take 450ms (and again succeeds to finish before the timeout). For the third iteration `n == 2` (`hal_get` will take 5500ms to finish). In this case we will run into a timeout. The complete example: -- cgit v1.2.3 From 5fadc0704205fd9cda3b75eb5e2319496e98b48c Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Henrik=20Tj=C3=A4der?= Date: Fri, 3 Feb 2023 20:26:00 +0100 Subject: Update book/en/src/by-example/app.md --- book/en/src/by-example/app.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) (limited to 'book/en/src') diff --git a/book/en/src/by-example/app.md b/book/en/src/by-example/app.md index cef8288..8840bdb 100644 --- a/book/en/src/by-example/app.md +++ b/book/en/src/by-example/app.md @@ -12,7 +12,7 @@ The `app` attribute will expand into a suitable entry point and thus replaces th ## Structure and zero-cost concurrency -An RTIC `app` is an executable system model for since-core applications, declaring a set of `local` and `shared` resources operated on by a set of `init`, `idle`, *hardware* and *software* tasks. In short the `init` task runs before any other task returning the set of `local` and `shared` resources. Tasks run preemptively based on their associated static priority, `idle` has the lowest priority (and can be used for background work, and/or to put the system to sleep until woken by some event). Hardware tasks are bound to underlying hardware interrupts, while software tasks are scheduled by asynchronous executors (one for each software task priority). +An RTIC `app` is an executable system model for single-core applications, declaring a set of `local` and `shared` resources operated on by a set of `init`, `idle`, *hardware* and *software* tasks. In short the `init` task runs before any other task returning the set of `local` and `shared` resources. Tasks run preemptively based on their associated static priority, `idle` has the lowest priority (and can be used for background work, and/or to put the system to sleep until woken by some event). Hardware tasks are bound to underlying hardware interrupts, while software tasks are scheduled by asynchronous executors (one for each software task priority). 
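The paragraph above maps to the following minimal skeleton, given here as a sketch only (it assumes the `lm3s6965` PAC and a free `SSI0` vector for the software task dispatcher, and it omits the usual `no_std`/`no_main` boilerplate):

``` rust
#[rtic::app(device = lm3s6965, dispatchers = [SSI0])]
mod app {
    #[shared]
    struct Shared {}   // resources shared between tasks, accessed via `lock`

    #[local]
    struct Local {}    // resources owned by exactly one task

    #[init]
    fn init(_cx: init::Context) -> (Shared, Local) {
        // Runs first, before interrupts are enabled; hands out all resources.
        (Shared {}, Local {})
    }

    #[idle]
    fn idle(_cx: idle::Context) -> ! {
        // Lowest priority: background work and/or sleep until an event.
        loop {
            cortex_m::asm::wfi();
        }
    }

    // Hardware task: bound to an interrupt vector, scheduled by the hardware.
    #[task(binds = UART0, priority = 2)]
    fn on_uart0(_cx: on_uart0::Context) {}

    // Software task: an async fn run by a generated executor at its priority.
    #[task(priority = 1)]
    async fn background(_cx: background::Context) {}
}
```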
At compile time the task/resource model is analyzed under the Stack Resource Policy (SRP) and executable code generated with the following outstanding properties: -- cgit v1.2.3 From ace010f4e9a7cf1d8b90e9a05eb1b7ea583c2c81 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Henrik=20Tj=C3=A4der?= Date: Fri, 3 Feb 2023 22:25:23 +0100 Subject: Book: Touchup README and preface --- book/en/src/preface.md | 46 ++++++++++++++++++++-------------------------- 1 file changed, 20 insertions(+), 26 deletions(-) (limited to 'book/en/src') diff --git a/book/en/src/preface.md b/book/en/src/preface.md index 6b859a2..5f6856d 100644 --- a/book/en/src/preface.md +++ b/book/en/src/preface.md @@ -10,13 +10,21 @@ This book contains user level documentation for the Real-Time Interrupt-driven Concurrency (RTIC) framework. The API reference is available [here](../../api/). - +This is the documentation for RTIC v2.x. - +{{#include ../../../README.md:59}} - +Older releases: +[RTIC v1.x](/1.0) | [RTIC v0.5.x (unsupported)](/0.5) | [RTFM v0.4.x (unsupported)](/0.4) -This is the documentation for RTIC v2.x. +{{#include ../../../README.md:7:12}} + +## Is RTIC an RTOS? + +A common question is whether RTIC is an RTOS or not, and depending on your background the answer may vary. From RTIC's developers point of view; RTIC is a hardware accelerated RTOS that utilizes the hardware such as the NVIC on Cortex-M MCUs, CLIC on RISC-V etc. to perform scheduling, rather than the more classical software kernel. + +Another common view from the community is that RTIC is a concurrency framework as there +is no software kernel and that it relies on external HALs. ## RTIC - The Past, current and Future @@ -40,7 +48,7 @@ The RTIC framework takes the outset from real-time systems research at Luleå Un - predictable scheduling, with bounded priority inversion by a single (named) critical section - theoretical underpinning amenable to static analysis (e.g., for task response times and overall schedulability) -SRP comes with a set of system wide requirements: +SRP comes with a set of system-wide requirements: - each task is associated a static priority, - tasks execute on a single-core, - tasks must be run-to-completion, and @@ -122,21 +130,21 @@ In this way RTIC fuses SRP based preemptive scheduling with a zero-cost hardware Given that the approach is dead simple, how come SRP and hardware accelerated scheduling is not adopted by any other mainstream RTOS? -The answer is simple, the commonly adopted threading model does not lend itself well to static analysis - there is no known way to extract the task/resource dependencies from the source code at compile time (thus ceilings cannot be efficiently computed and the LIFO resource locking requirement cannot be ensured). Thus SRP based scheduling is in the general case out of reach for any thread based RTOS. +The answer is simple, the commonly adopted threading model does not lend itself well to static analysis - there is no known way to extract the task/resource dependencies from the source code at compile time (thus ceilings cannot be efficiently computed and the LIFO resource locking requirement cannot be ensured). Thus, SRP based scheduling is in the general case out of reach for any thread based RTOS. ## RTIC into the Future Asynchronous programming in various forms are getting increased popularity and language support. 
Rust natively provides an `async`/`await` API for cooperative multitasking and the compiler generates the necessary boilerplate for storing and retrieving execution contexts (i.e., managing the set of local variables that spans each `await`). -The Rust standard library provides collections for dynamically allocated data-structures (useful to manage execution contexts at run-time. However, in the setting of resource constrained real-time systems, dynamic allocations are problematic (both regarding performance and reliability - Rust runs into a *panic* on an out-of-memory condition). Thus, static allocation is king! +The Rust standard library provides collections for dynamically allocated data-structures which are useful to manage execution contexts at run-time. However, in the setting of resource constrained real-time systems, dynamic allocations are problematic (both regarding performance and reliability - Rust runs into a *panic* on an out-of-memory condition). Thus, static allocation is the preferable approach! -RTIC provides a mechanism for `async`/`await` that relies solely on static allocations. However, the implementation relies on the `#![feature(type_alias_impl_trait)]` (TAIT) which is undergoing stabilization (thus RTIC 2.0.x currently requires a *nightly* toolchain). Technically, using TAIT, the compiler determines the size of each execution context allowing static allocation. +RTIC provides a mechanism for `async`/`await` that relies solely on static allocations. However, the implementation relies on the `#![feature(type_alias_impl_trait)]` (TAIT) which is undergoing stabilization (thus RTIC v2.x currently requires a *nightly* toolchain). Technically, using TAIT, the compiler determines the size of each execution context allowing static allocation. From a modelling perspective `async/await` lifts the run-to-completion requirement of SRP, and each section of code between two yield points (`await`s) can be seen as an individual task. The compiler will reject any attempt to `await` while holding a resource (not doing so would break the strict LIFO requirement on resource usage under SRP). -So with the technical stuff out of the way, what does `async/await` bring to the RTIC table? +So with the technical stuff out of the way, what does `async/await` bring to the table? -The answer is - improved ergonomics! In cases you want a task to perform a sequence of requests (and await their results in order to progress). Without `async`/`await` the programmer would be forced to split the task into individual sub-tasks and maintain some sort of state encoding (and manually progress by selecting sub-task). Using `async/await` each yield point (`await`) essentially represents a state, and the progression mechanism is built automatically for you at compile time by means of `Futures`. +The answer is - improved ergonomics! A recurring use case is to have task perform a sequence of requests and then await their results in order to progress. Without `async`/`await` the programmer would be forced to split the task into individual sub-tasks and maintain some sort of state encoding (and manually progress by selecting sub-task). Using `async/await` each yield point (`await`) essentially represents a state, and the progression mechanism is built automatically for you at compile time by means of `Futures`. Rust `async`/`await` support is still incomplete and/or under development (e.g., there are no stable way to express `async` closures, precluding use in iterator patterns). 
Nevertheless, Rust `async`/`await` is production ready and covers most common use cases. @@ -144,7 +152,7 @@ An important property is that futures are composable, thus you can await either, ## RTIC the model -An RTIC `app` is a declarative and executable system model for single-core applications, defining a set of (`local` and `shared`) resources operated on by a set of (`init`, `idle`, *hardware* and *software*) tasks. In short the `init` task runs before any other task returning a set of resources (`local` and `shared`). Tasks run preemptively based on their associated static priority, `idle` has the lowest priority (and can be used for background work, and/or to put the system to sleep until woken by some event). Hardware tasks are bound to underlying hardware interrupts, while software tasks are scheduled by asynchronous executors (one for each software task priority). +An RTIC `app` is a declarative and executable system model for single-core applications, defining a set of (`local` and `shared`) resources operated on by a set of (`init`, `idle`, *hardware* and *software*) tasks. In short the `init` task runs before any other task returning a set of resources (`local` and `shared`). Tasks run preemptively based on their associated static priority, `idle` has the lowest priority (and can be used for background work, and/or to put the system to sleep until woken by some event). Hardware tasks are bound to underlying hardware interrupts, while software tasks are scheduled by asynchronous executors (one for each software task priority). At compile time the task/resource model is analyzed under SRP and executable code generated with the following outstanding properties: @@ -152,18 +160,4 @@ At compile time the task/resource model is analyzed under SRP and executable cod - hardware task scheduling is performed directly by the hardware, and - software task scheduling is performed by auto generated async executors tailored to the application. -The RTIC API design ensures that both SRP requirements and Rust soundness rules are upheld at all times, thus the executable model is correct by construction. Overall, the generated code infers no additional overhead in comparison to a hand-written implementation, thus in Rust terms RTIC offers a zero-cost abstraction to concurrency. - - - - +The RTIC API design ensures that both SRP requirements and Rust soundness rules are upheld at all times, thus the executable model is correct by construction. Overall, the generated code infers no additional overhead in comparison to a handwritten implementation, thus in Rust terms RTIC offers a zero-cost abstraction to concurrency. -- cgit v1.2.3 From d82f57772459d9bf12bb2c935e3ebc9b93368f51 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Henrik=20Tj=C3=A4der?= Date: Wed, 22 Feb 2023 19:03:51 +0100 Subject: Book: Fix links, proofread targets and starting_a_project --- book/en/src/by-example/starting_a_project.md | 4 ++-- book/en/src/internals/targets.md | 15 ++++++--------- 2 files changed, 8 insertions(+), 11 deletions(-) (limited to 'book/en/src') diff --git a/book/en/src/by-example/starting_a_project.md b/book/en/src/by-example/starting_a_project.md index 8638f90..86d7e71 100644 --- a/book/en/src/by-example/starting_a_project.md +++ b/book/en/src/by-example/starting_a_project.md @@ -3,14 +3,14 @@ A recommendation when starting a RTIC project from scratch is to follow RTIC's [`defmt-app-template`]. 
-If you are targeting ARMv6-M or ARMv8-M-base architecture, check out the section [Target Architecture](../internals/targets.md) for more information on hardware limitations to be aware of. +If you are targeting ARMv6-M or ARMv8-M-base architecture, check out the section [Target Architecture](../internals/targets.md) for more information on hardware limitations to be aware of. [`defmt-app-template`]: https://github.com/rtic-rs/defmt-app-template This will give you an RTIC application with support for RTT logging with [`defmt`] and stack overflow protection using [`flip-link`]. There is also a multitude of examples provided by the community: -For inspiration you may look at the below resources. For now they cover RTIC 1.0.x, but will be updated with RTIC 2.0.x examples over time. +For inspiration, you may look at the below resources. For now, they cover RTIC v1.x, but will be updated with RTIC v2.x examples over time. - [`rtic-examples`] - Multiple projects - [https://github.com/kalkyl/f411-rtic](https://github.com/kalkyl/f411-rtic) diff --git a/book/en/src/internals/targets.md b/book/en/src/internals/targets.md index 04fd592..efad150 100644 --- a/book/en/src/internals/targets.md +++ b/book/en/src/internals/targets.md @@ -1,7 +1,7 @@ # Target Architecture -While RTIC can currently target all Cortex-m devices there are some key architecure differences that -users should be aware of. Namely the absence of Base Priority Mask Register (`BASEPRI`) which lends +While RTIC can currently target all Cortex-m devices there are some key architecture differences that +users should be aware of. Namely, the absence of Base Priority Mask Register (`BASEPRI`) which lends itself exceptionally well to the hardware priority ceiling support used in RTIC, in the ARMv6-M and ARMv8-M-base architectures, which forces RTIC to use source masking instead. For each implementation of lock and a detailed commentary of pros and cons, see the implementation of @@ -29,7 +29,7 @@ Table 1 below shows a list of Cortex-m processors and which type of critical sec ## Priority Ceiling -This implementation is covered in depth by the [Critical Sections][critical_sections] page of this book. +This is covered by the [Resources][resources] page of this book. ## Source Masking @@ -55,17 +55,14 @@ with B. ``` At time *t1*, task B locks the shared resource by selectively disabling (using the NVIC) all other -tasks which have a priority equal to or less than any task which shares resouces with B. In effect -this creates a virtual priority ceiling, miroring the `BASEPRI` approach described in the -[Critical Sections][critical_Sections] page. Task A is one such task that shares resources with +tasks which have a priority equal to or less than any task which shares resources with B. In effect +this creates a virtual priority ceiling, mirroring the `BASEPRI` approach. Task A is one such task that shares resources with task B. At time *t2*, task A is either spawned by task B or becomes pending through an interrupt condition, but does not yet preempt task B even though its priority is greater. This is because the -NVIC is preventing it from starting due to task A being being disabled. At time *t3*, task B +NVIC is preventing it from starting due to task A being disabled. At time *t3*, task B releases the lock by re-enabling the tasks in the NVIC. Because task A was pending and has a higher priority than task B, it immediately preempts task B and is free to use the shared resource without risk of data race conditions. 
At time *t4*, task A completes and returns the execution context to B. Since source masking relies on use of the NVIC, core exception sources such as HardFault, SVCall, PendSV, and SysTick cannot share data with other tasks. - -[critical_sections]: https://github.com/rtic-rs/cortex-m-rtic/blob/master/book/en/src/internals/critical-sections.md -- cgit v1.2.3 From 30873042985851206a65a08102a9b09d1c99ee39 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Henrik=20Tj=C3=A4der?= Date: Thu, 23 Feb 2023 19:38:10 +0100 Subject: Book: Reintroduce WIP internals targets --- book/en/src/SUMMARY.md | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) (limited to 'book/en/src') diff --git a/book/en/src/SUMMARY.md b/book/en/src/SUMMARY.md index 65c72c4..587117c 100644 --- a/book/en/src/SUMMARY.md +++ b/book/en/src/SUMMARY.md @@ -26,8 +26,9 @@ + - [RTFM to RTIC](./migration/migration_rtic.md) --> +- [Under the hood](./internals.md) + - [Cortex-M architectures](./internals/targets.md) -- cgit v1.2.3 From 2f8c7d3083b515403a58acdcd4a3c5fcccfb27d1 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Henrik=20Tj=C3=A4der?= Date: Fri, 24 Feb 2023 22:28:02 +0100 Subject: Book: Fix two broken links --- book/en/src/by-example/app.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) (limited to 'book/en/src') diff --git a/book/en/src/by-example/app.md b/book/en/src/by-example/app.md index 8840bdb..0d977a1 100644 --- a/book/en/src/by-example/app.md +++ b/book/en/src/by-example/app.md @@ -6,9 +6,9 @@ All RTIC applications use the [`app`] attribute (`#[app(..)]`). This attribute o The `app` attribute will expand into a suitable entry point and thus replaces the use of the [`cortex_m_rt::entry`] attribute. -[`app`]: ../../../api/cortex_m_rtic_macros/attr.app.html +[`app`]: ../../../api/rtic_macros/attr.app.html [`svd2rust`]: https://crates.io/crates/svd2rust -[`cortex_m_rt::entry`]: ../../../api/cortex_m_rt_macros/attr.entry.html +[`cortex_m_rt::entry`]: https://docs.rs/cortex-m-rt-macros/latest/cortex_m_rt_macros/attr.entry.html ## Structure and zero-cost concurrency -- cgit v1.2.3 From f03aede2f5a926bdb26d052766492c32454a60dd Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Henrik=20Tj=C3=A4der?= Date: Thu, 2 Mar 2023 22:38:25 +0100 Subject: Fixes for repo rename to rtic --- book/en/src/by-example/hardware_tasks.md | 2 +- book/en/src/internals/targets.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) (limited to 'book/en/src') diff --git a/book/en/src/by-example/hardware_tasks.md b/book/en/src/by-example/hardware_tasks.md index c902267..75dd1a4 100644 --- a/book/en/src/by-example/hardware_tasks.md +++ b/book/en/src/by-example/hardware_tasks.md @@ -10,7 +10,7 @@ All tasks bound to an explicit interrupt are called *hardware tasks* since they Specifying a non-existing interrupt name will cause a compilation error. The interrupt names are commonly defined by [PAC or HAL][pacorhal] crates. -Any available interrupt vector should work. Specific devices may bind specific interrupt priorities to specific interrupt vectors outside user code control. See for example the [nRF “softdevice”](https://github.com/rtic-rs/cortex-m-rtic/issues/434). +Any available interrupt vector should work. Specific devices may bind specific interrupt priorities to specific interrupt vectors outside user code control. See for example the [nRF “softdevice”](https://github.com/rtic-rs/rtic/issues/434). Beware of using interrupt vectors that are used internally by hardware features; RTIC is unaware of such hardware specific details. 
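As a small illustration of the `binds` mechanism described above, here is a sketch assuming the `lm3s6965` PAC used by the QEMU examples; any interrupt vector your PAC exposes, and that the hardware does not claim for itself, works the same way.

``` rust
#[rtic::app(device = lm3s6965)]
mod app {
    use lm3s6965::Interrupt;

    #[shared]
    struct Shared {}

    #[local]
    struct Local {}

    #[init]
    fn init(_cx: init::Context) -> (Shared, Local) {
        // Software-pend the UART0 interrupt; the bound task below runs as
        // soon as `init` returns and interrupts are enabled.
        rtic::pend(Interrupt::UART0);
        (Shared {}, Local {})
    }

    // The function name is free-form; `binds = UART0` ties it to the vector.
    #[task(binds = UART0)]
    fn uart0_handler(_cx: uart0_handler::Context) {
        // Handle the interrupt and clear the peripheral's interrupt flag here.
    }
}
```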
diff --git a/book/en/src/internals/targets.md b/book/en/src/internals/targets.md index efad150..3562eef 100644 --- a/book/en/src/internals/targets.md +++ b/book/en/src/internals/targets.md @@ -7,7 +7,7 @@ ARMv8-M-base architectures, which forces RTIC to use source masking instead. For of lock and a detailed commentary of pros and cons, see the implementation of [lock in src/export.rs][src_export]. -[src_export]: https://github.com/rtic-rs/cortex-m-rtic/blob/master/src/export.rs +[src_export]: https://github.com/rtic-rs/rtic/blob/master/src/export.rs These differences influence how critical sections are realized, but functionality should be the same except that ARMv6-M/ARMv8-M-base cannot have tasks with shared resources bound to exception -- cgit v1.2.3 From 5dc9c7083ddf2481948c9f9a877bd36552074489 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Henrik=20Tj=C3=A4der?= Date: Sat, 4 Mar 2023 21:44:12 +0100 Subject: Book: Tidy up preface --- book/en/src/preface.md | 2 -- 1 file changed, 2 deletions(-) (limited to 'book/en/src') diff --git a/book/en/src/preface.md b/book/en/src/preface.md index 5f6856d..5cba633 100644 --- a/book/en/src/preface.md +++ b/book/en/src/preface.md @@ -12,8 +12,6 @@ This book contains user level documentation for the Real-Time Interrupt-driven C This is the documentation for RTIC v2.x. -{{#include ../../../README.md:59}} - Older releases: [RTIC v1.x](/1.0) | [RTIC v0.5.x (unsupported)](/0.5) | [RTFM v0.4.x (unsupported)](/0.4) -- cgit v1.2.3
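Finally, to make the source-masking scheme from the targets chapter above a little more tangible, the sketch below shows, in deliberately simplified form, what a masking-based lock amounts to. This is not RTIC's actual implementation (see the `src/export.rs` link above for that): the masked interrupt set is hard-coded instead of derived per resource at compile time, and details such as compiler fences and nested locks are ignored.

``` rust
use cortex_m::peripheral::NVIC;
use lm3s6965::Interrupt; // any PAC's interrupt enum works the same way

// Conceptual source-masking critical section: disable the interrupts of every
// task that shares the resource, touch the data, then re-enable them.
fn with_lock<R>(f: impl FnOnce() -> R) -> R {
    NVIC::mask(Interrupt::UART0); // contending tasks ...
    NVIC::mask(Interrupt::UART1); // ... can no longer preempt us
    let r = f();                  // access the shared data race-free
    // Only re-enable what the application had enabled in the first place.
    unsafe {
        NVIC::unmask(Interrupt::UART0);
        NVIC::unmask(Interrupt::UART1);
    }
    r
}
```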