From 1f51b10297e9cbb4797aa1ed8be6a2b84c9f2e07 Mon Sep 17 00:00:00 2001 From: Per Lindgren Date: Sat, 28 Jan 2023 21:57:43 +0100 Subject: Book: Major rework for RTIC v2 --- book/en/src/preface.md | 159 ++++++++++++++++++++++++++++++++++++++++++++----- 1 file changed, 145 insertions(+), 14 deletions(-) (limited to 'book/en/src/preface.md') diff --git a/book/en/src/preface.md b/book/en/src/preface.md index 6041dfe..3f47cb3 100644 --- a/book/en/src/preface.md +++ b/book/en/src/preface.md @@ -1,7 +1,7 @@
RTIC
-

Real-Time Interrupt-driven Concurrency

+

The Embedded Rust RTOS

A concurrency framework for building real-time systems

@@ -10,29 +10,160 @@ This book contains user level documentation for the Real-Time Interrupt-driven Concurrency (RTIC) framework. The API reference is available [here](../../api/). -Formerly known as Real-Time For the Masses. + -This is the documentation of v1.0.x of RTIC; for the documentation of version +This is the documentation of v2.0.x (pre-release) of RTIC 2. -* v0.5.x go [here](/0.5). -* v0.4.x go [here](/0.4). +## RTIC - The Past, current and Future + +This section gives a background to the RTIC model. Feel free to skip to section [RTIC the model](preface.md#rtic-the-model) for a TL;DR. + +The RTIC framework takes the outset from real-time systems research at Luleå University of Technology (LTU) Sweden. RTIC is inspired by the concurrency model of the [Timber] language, the [RTFM-SRP] based scheduler, the [RTFM-core] language and [Abstract Timer] implementation. For a full list of related research see [TODO]. + +[Timber]: https://timber-lang.org/ +[RTFM-SRP]: https://www.diva-portal.org/smash/get/diva2:1005680/FULLTEXT01.pdf +[RTFM-core]: https://ltu.diva-portal.org/smash/get/diva2:1013248/FULLTEXT01.pdf +[AbstractTimer]: https://ltu.diva-portal.org/smash/get/diva2:1013030/FULLTEXT01.pdf + +## Stack Resource Policy based Scheduling + +Stack Resource Policy (SRP) based concurrency and resource management is at heart of the RTIC framework. The [SRP] model itself extends on [Priority Inheritance Protocols], and provides a set of outstanding properties for single core scheduling. 
To name a few: + +- preemptive deadlock and race-free scheduling +- resource efficiency + - tasks execute on a single shared stack + - tasks run-to-completion with wait free access to shared resources +- predictable scheduling, with bounded priority inversion by a single (named) critical section +- theoretical underpinning amenable to static analysis (e.g., for task response times and overall schedulability) + +SRP comes with a set of system wide requirements: +- each task is associated a static priority, +- tasks execute on a single-core, +- tasks must be run-to-completion, and +- resources must be claimed/locked in LIFO order. + +[SRP]: https://link.springer.com/article/10.1007/BF00365393 +[Priority Inheritance Protocols]: https://ieeexplore.ieee.org/document/57058 + +## SRP analysis + +SRP based scheduling requires the set of static priority tasks and their access to shared resources to be known in order to compute a static *ceiling* (𝝅) for each resource. The static resource *ceiling* 𝝅(r) reflects the maximum static priority of any task that accesses the resource `r`. + +### Example + +Assume two tasks `A` (with priority `p(A) = 2`) and `B` (with priority `p(B) = 4`) both accessing the shared resource `R`. The static ceiling of `R` is 4 (computed from `𝝅(R) = max(p(A) = 2, p(B) = 4) = 4`). + +A graph representation of the example: + +```mermaid +graph LR + A["p(A) = 2"] --> R + B["p(B) = 4"] --> R + R["𝝅(R) = 4"] +``` + +## RTIC the hardware accelerated real-time scheduler + +SRP itself is compatible both to dynamic and static priority scheduling. For the implementation of RTIC we leverage on the underlying hardware for accelerated static priority scheduling. + +In the case of the `ARM Cortex-M` architecture, each interrupt vector entry `v[i]` is associated a function pointer (`v[i].fn`), and a static priority (`v[i].priority`), an enabled- (`v[i].enabled`) and a pending-bit (`v[i].pending`). 
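The static ceiling computation from the SRP analysis example above can be sketched as host-side Rust (an illustrative sketch only; RTIC performs this analysis at compile time, and the function name is hypothetical, not part of the RTIC API):

```rust
/// Static ceiling of a resource: the maximum static priority
/// among all tasks that access it (0 if no task accesses it).
fn ceiling(accessing_task_priorities: &[u8]) -> u8 {
    accessing_task_priorities.iter().copied().max().unwrap_or(0)
}

fn main() {
    // p(A) = 2 and p(B) = 4 both access R, so 𝝅(R) = max(2, 4) = 4.
    let pi_r = ceiling(&[2, 4]);
    assert_eq!(pi_r, 4);
    println!("ceiling of R = {pi_r}");
}
```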
+ +An interrupt `i` is scheduled (run) by the hardware under the conditions: +1. is `pended` and `enabled` and has a priority higher than the (optional `BASEPRI`) register, and +1. has the highest priority among interrupts meeting 1. + +The first condition (1) can be seen as a filter allowing RTIC to take control over which tasks should be allowed to start (and which should be prevented from starting). + +The SRP model for single-core static scheduling on the other hand states that a task should be scheduled (run) under the conditions: +1. it is `requested` to run and has a static priority higher than the current system ceiling (𝜫) +1. it has the highest static priority among tasks meeting 1. + +The similarities are striking and it is not by chance/luck/coincidence. The hardware was cleverly designed with real-time scheduling in mind. + +In order to map the SRP scheduling onto the hardware we need to have a closer look on the system ceiling (𝜫). Under SRP 𝜫 is computed as the maximum priority ceiling of the currently held resources, and will thus change dynamically during the system operation. + +## Example + +Assume the task model above. Starting from an idle system, 𝜫 is 0 (no task is holding any resource). Assume that `A` is requested for execution; it will immediately be scheduled. Assume that `A` claims (locks) the resource `R`. During the claim (lock of `R`) any request for `B` will be blocked from starting (by 𝜫 = `max(𝝅(R) = 4) = 4`, `p(B) = 4`, thus SRP scheduling condition 1 is not met). + +## Mapping + +The mapping of static priority SRP based scheduling to the Cortex M hardware is straightforward: -## Is RTIC an RTOS? +- each task `t` is mapped to an interrupt vector index `i` with a corresponding function `v[i].fn = t` and given the static priority `v[i].priority = p(t)`. +- the current system ceiling is mapped to the `BASEPRI` register or implemented through masking the interrupt enable bits accordingly. 
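The SRP scheduling condition and the blocking behavior from the example above can be sketched as a small host-side simulation (names are illustrative and not part of RTIC):

```rust
/// SRP dispatch test (condition 1): a requested task may start only
/// if its static priority exceeds the current system ceiling 𝜫.
fn may_start(task_priority: u8, system_ceiling: u8) -> bool {
    task_priority > system_ceiling
}

fn main() {
    // Idle system: 𝜫 = 0, so A with p(A) = 2 starts immediately.
    assert!(may_start(2, 0));
    // A claims R, raising 𝜫 to 𝝅(R) = 4: B with p(B) = 4 is blocked.
    assert!(!may_start(4, 4));
    // A releases R, 𝜫 drops back to 0, and B may now start.
    assert!(may_start(4, 0));
    println!("SRP scheduling conditions hold");
}
```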
-A common question is whether RTIC is an RTOS or not, and depending on your background the -answer may vary. From RTIC's developers point of view; RTIC is a hardware accelerated -RTOS that utilizes the NVIC in Cortex-M MCUs to perform scheduling, rather than the more -classical software kernel. +## Example -Another common view from the community is that RTIC is a concurrency framework as there -is no software kernel and that it relies on external HALs. +For the running example, a snapshot of the ARM Cortex M [NVIC] may have the following configuration (after task `A` has been pended for execution.) --- | Index | Fn | Priority | Enabled | Pended | | ----- | --- | -------- | ------- | ------ | | 0 | A | 2 | true | true | | 1 | B | 4 | true | false | + +[NVIC]: https://developer.arm.com/documentation/ddi0337/h/nested-vectored-interrupt-controller/about-the-nvic + +(As discussed later, the assignment of interrupt and exception vectors is up to the user.) + + +A claim (lock(r)) will change the current system ceiling (𝜫) and can be implemented as a *named* critical section: + - old_ceiling = 𝜫, 𝜫 = 𝝅(r) + - execute code within critical section + - 𝜫 = old_ceiling + +This amounts to a resource protection mechanism requiring only two machine instructions on entry and one on exit of the critical section for managing the `BASEPRI` register. For architectures lacking `BASEPRI`, we can implement the system ceiling through a set of machine instructions for disabling/enabling interrupts on entry/exit for the named critical section. The number of machine instructions varies depending on the number of mask registers that need to be updated (a single machine operation can operate on up to 32 interrupts, so for the M0/M0+ architecture a single instruction suffices). RTIC will determine the ceiling values and masking constants at compile time, thus all operations are in Rust terms zero-cost. 
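The named critical section above can be sketched on the host as a closure-based lock, with a `Cell` standing in for the `BASEPRI` register. This is a sketch of the mechanism only, not the actual RTIC implementation:

```rust
use std::cell::Cell;

/// Raise the system ceiling to (at least) the resource ceiling for
/// the duration of the closure, restoring the old value on exit.
/// Nesting such locks naturally yields the LIFO order SRP requires.
fn lock<R>(system_ceiling: &Cell<u8>, resource_ceiling: u8, cs: impl FnOnce() -> R) -> R {
    let old = system_ceiling.get();                 // old_ceiling = 𝜫
    system_ceiling.set(old.max(resource_ceiling));  // 𝜫 = max(𝜫, 𝝅(r))
    let result = cs();                              // critical section
    system_ceiling.set(old);                        // 𝜫 = old_ceiling
    result
}

fn main() {
    let ceiling = Cell::new(0);
    let value = lock(&ceiling, 4, || {
        assert_eq!(ceiling.get(), 4); // B (p(B) = 4) would be blocked here
        42
    });
    assert_eq!(value, 42);
    assert_eq!(ceiling.get(), 0); // ceiling restored on exit
}
```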
+ +In this way RTIC fuses SRP based preemptive scheduling with a zero-cost hardware accelerated implementation, resulting in "best in class" guarantees and performance. + +Given that the approach is dead simple, how come SRP and hardware accelerated scheduling is not adopted by any other mainstream RTOS? + +The answer is simple, the commonly adopted threading model does not lend itself well to static analysis - there is no known way to extract the task/resource dependencies from the source code at compile time (thus ceilings cannot be efficiently computed and the LIFO resource locking requirement cannot be ensured). Thus SRP based scheduling is in the general case out of reach for any thread based RTOS. + +## RTIC into the Future + +Asynchronous programming in various forms is gaining popularity and language support. Rust natively provides an `async`/`await` API for cooperative multitasking and the compiler generates the necessary boilerplate for storing and retrieving execution contexts (i.e., managing the set of local variables that spans each `await`). + +The Rust standard library provides collections for dynamically allocated data-structures (useful to manage execution contexts at run-time. However, in the setting of resource constrained real-time systems, dynamic allocations are problematic (both regarding performance and reliability - Rust runs into a *panic* on an out-of-memory condition). Thus, static allocation is king! + +RTIC provides a mechanism for `async`/`await` that relies solely on static allocations. However, the implementation relies on the `#![feature(type_alias_impl_trait)]` (TAIT) which is undergoing stabilization (thus RTIC 2.0.x currently requires a *nightly* toolchain). Technically, using TAIT, the compiler determines the size of each execution context allowing static allocation. 
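On a host toolchain the compile-time sizing of execution contexts can be observed directly with stable Rust (a sketch only; RTIC's actual static allocation relies on TAIT as described above, and the function name is illustrative):

```rust
// An async body; the compiler turns it into a state machine whose
// size is known exactly at compile time.
async fn sequence() {
    // awaits in the body would enlarge the stored context
}

fn main() {
    let fut = sequence();
    // The exact size of the execution context is statically known,
    // which is what makes fully static allocation possible.
    let size = std::mem::size_of_val(&fut);
    assert!(size < 1024);
    println!("execution context size: {size} bytes");
}
```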
+ +From a modelling perspective `async/await` lifts the run-to-completion requirement of SRP, and each section of code between two yield points (`await`s) can be seen as an individual task. The compiler will reject any attempt to `await` while holding a resource (not doing so would break the strict LIFO requirement on resource usage under SRP). + +So with the technical stuff out of the way, what does `async/await` bring to the RTIC table? + +The answer is - improved ergonomics! In cases you want a task to perform a sequence of requests (and await their results in order to progress). Without `async`/`await` the programmer would be forced to split the task into individual sub-tasks and maintain some sort of state encoding (and manually progress by selecting sub-task). Using `async/await` each yield point (`await`) essentially represents a state, and the progression mechanism is built automatically for you at compile time by means of `Futures`. + +Rust `async`/`await` support is still incomplete and/or under development (e.g., there are no stable way to express `async` closures, precluding use in iterator patterns). Nevertheless, Rust `async`/`await` is production ready and covers most common use cases. + +An important property is that futures are composable, thus you can await either, all, or any combination of possible futures (allowing e.g., timeouts and/or asynchronous errors to be promptly handled). For more details and examples see Section [todo]. + +## RTIC the model + +An RTIC `app` is a declarative and executable system model for single-core applications, defining a set of (`local` and `shared`) resources operated on by a set of (`init`, `idle`, *hardware* and *software*) tasks. In short the `init` task runs before any other task returning a set of resources (`local` and `shared`). 
Tasks run preemptively based on their associated static priority, `idle` has the lowest priority (and can be used for background work, and/or to put the system to sleep until woken by some event). Hardware tasks are bound to underlying hardware interrupts, while software tasks are scheduled by asynchronous executors (one for each software task priority). + +At compile time the task/resource model is analyzed under SRP and executable code generated with the following outstanding properties: + +- guaranteed race-free resource access and deadlock-free execution on a single-shared stack (thanks to SRP) + - hardware task scheduling is performed directly by the hardware, and + - software task scheduling is performed by auto generated async executors tailored to the application. + +The RTIC API design ensures that both SRP requirements and Rust soundness rules are upheld at all times, thus the executable model is correct by construction. Overall, the generated code infers no additional overhead in comparison to a hand-written implementation, thus in Rust terms RTIC offers a zero-cost abstraction to concurrency. + + + -- cgit v1.2.3 From 14fdca130f8c3ab598b30cfb7e70f8712ea42fb8 Mon Sep 17 00:00:00 2001 From: Emil Fresk Date: Wed, 1 Feb 2023 19:34:25 +0100 Subject: Minor book fix --- book/en/src/preface.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) (limited to 'book/en/src/preface.md') diff --git a/book/en/src/preface.md b/book/en/src/preface.md index 3f47cb3..c6638ab 100644 --- a/book/en/src/preface.md +++ b/book/en/src/preface.md @@ -1,7 +1,7 @@
RTIC
-

The Embedded Rust RTOS

+

The hardware accelerated Rust RTOS

A concurrency framework for building real-time systems

-- cgit v1.2.3 From fc6343b65c79b287ba1884514698e59f87a3d47d Mon Sep 17 00:00:00 2001 From: perlindgren Date: Wed, 1 Feb 2023 22:37:42 +0100 Subject: Apply suggestions from code review MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Thanks for all suggestions, awesome! Co-authored-by: Henrik Tjäder --- book/en/src/preface.md | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) (limited to 'book/en/src/preface.md') diff --git a/book/en/src/preface.md b/book/en/src/preface.md index c6638ab..6b859a2 100644 --- a/book/en/src/preface.md +++ b/book/en/src/preface.md @@ -16,7 +16,7 @@ This book contains user level documentation for the Real-Time Interrupt-driven C -This is the documentation of v2.0.x (pre-release) of RTIC 2. +This is the documentation for RTIC v2.x. ## RTIC - The Past, current and Future @@ -27,11 +27,11 @@ The RTIC framework takes the outset from real-time systems research at Luleå Un [Timber]: https://timber-lang.org/ [RTFM-SRP]: https://www.diva-portal.org/smash/get/diva2:1005680/FULLTEXT01.pdf [RTFM-core]: https://ltu.diva-portal.org/smash/get/diva2:1013248/FULLTEXT01.pdf -[AbstractTimer]: https://ltu.diva-portal.org/smash/get/diva2:1013030/FULLTEXT01.pdf +[Abstract Timer]: https://ltu.diva-portal.org/smash/get/diva2:1013030/FULLTEXT01.pdf ## Stack Resource Policy based Scheduling -Stack Resource Policy (SRP) based concurrency and resource management is at heart of the RTIC framework. The [SRP] model itself extends on [Priority Inheritance Protocols], and provides a set of outstanding properties for single core scheduling. To name a few: +[Stack Resource Policy (SRP)][SRP] based concurrency and resource management is at heart of the RTIC framework. The SRP model itself extends on [Priority Inheritance Protocols], and provides a set of outstanding properties for single core scheduling. 
To name a few: - preemptive deadlock and race-free scheduling - resource efficiency @@ -68,7 +68,7 @@ graph LR ## RTIC the hardware accelerated real-time scheduler -SRP itself is compatible both to dynamic and static priority scheduling. For the implementation of RTIC we leverage on the underlying hardware for accelerated static priority scheduling. +SRP itself is compatible with both dynamic and static priority scheduling. For the implementation of RTIC we leverage on the underlying hardware for accelerated static priority scheduling. In the case of the `ARM Cortex-M` architecture, each interrupt vector entry `v[i]` is associated a function pointer (`v[i].fn`), and a static priority (`v[i].priority`), an enabled- (`v[i].enabled`) and a pending-bit (`v[i].pending`). @@ -84,7 +84,7 @@ The SPR model for single-core static scheduling on the other hand states that a The similarities are striking and it is not by chance/luck/coincidence. The hardware was cleverly designed with real-time scheduling in mind. -In order to map the SRP scheduling onto the hardware we need to have a closer look on the system ceiling (𝜫). Under SRP 𝜫 is computed as the maximum priority ceiling of the currently held resources, and will thus change dynamically during the system operation. +In order to map the SRP scheduling onto the hardware we need to take a closer look at the system ceiling (𝜫). Under SRP 𝜫 is computed as the maximum priority ceiling of the currently held resources, and will thus change dynamically during the system operation. ## Example @@ -99,7 +99,7 @@ The mapping of static priority SRP based scheduling to the Cortex M hardware is ## Example -For the running example, a snapshot of the ARM Cortex M [NVIC] may have the following configuration (after task `A` has been pended for execution.) 
+For the running example, a snapshot of the ARM Cortex M [Nested Vectored Interrupt Controller (NVIC)][NVIC] may have the following configuration (after task `A` has been pended for execution.) | Index | Fn | Priority | Enabled | Pended | | ----- | --- | -------- | ------- | ------ | -- cgit v1.2.3 From ace010f4e9a7cf1d8b90e9a05eb1b7ea583c2c81 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Henrik=20Tj=C3=A4der?= Date: Fri, 3 Feb 2023 22:25:23 +0100 Subject: Book: Touchup README and preface --- book/en/src/preface.md | 46 ++++++++++++++++++++-------------------------- 1 file changed, 20 insertions(+), 26 deletions(-) (limited to 'book/en/src/preface.md') diff --git a/book/en/src/preface.md b/book/en/src/preface.md index 6b859a2..5f6856d 100644 --- a/book/en/src/preface.md +++ b/book/en/src/preface.md @@ -10,13 +10,21 @@ This book contains user level documentation for the Real-Time Interrupt-driven Concurrency (RTIC) framework. The API reference is available [here](../../api/). - +This is the documentation for RTIC v2.x. - +{{#include ../../../README.md:59}} - +Older releases: +[RTIC v1.x](/1.0) | [RTIC v0.5.x (unsupported)](/0.5) | [RTFM v0.4.x (unsupported)](/0.4) -This is the documentation for RTIC v2.x. +{{#include ../../../README.md:7:12}} + +## Is RTIC an RTOS? + +A common question is whether RTIC is an RTOS or not, and depending on your background the answer may vary. From RTIC's developers point of view; RTIC is a hardware accelerated RTOS that utilizes the hardware such as the NVIC on Cortex-M MCUs, CLIC on RISC-V etc. to perform scheduling, rather than the more classical software kernel. + +Another common view from the community is that RTIC is a concurrency framework as there +is no software kernel and that it relies on external HALs. 
## RTIC - The Past, current and Future @@ -40,7 +48,7 @@ The RTIC framework takes the outset from real-time systems research at LuleΓ₯ Un - predictable scheduling, with bounded priority inversion by a single (named) critical section - theoretical underpinning amenable to static analysis (e.g., for task response times and overall schedulability) -SRP comes with a set of system wide requirements: +SRP comes with a set of system-wide requirements: - each task is associated a static priority, - tasks execute on a single-core, - tasks must be run-to-completion, and @@ -122,21 +130,21 @@ In this way RTIC fuses SRP based preemptive scheduling with a zero-cost hardware Given that the approach is dead simple, how come SRP and hardware accelerated scheduling is not adopted by any other mainstream RTOS? -The answer is simple, the commonly adopted threading model does not lend itself well to static analysis - there is no known way to extract the task/resource dependencies from the source code at compile time (thus ceilings cannot be efficiently computed and the LIFO resource locking requirement cannot be ensured). Thus SRP based scheduling is in the general case out of reach for any thread based RTOS. +The answer is simple, the commonly adopted threading model does not lend itself well to static analysis - there is no known way to extract the task/resource dependencies from the source code at compile time (thus ceilings cannot be efficiently computed and the LIFO resource locking requirement cannot be ensured). Thus, SRP based scheduling is in the general case out of reach for any thread based RTOS. ## RTIC into the Future Asynchronous programming in various forms are getting increased popularity and language support. Rust natively provides an `async`/`await` API for cooperative multitasking and the compiler generates the necessary boilerplate for storing and retrieving execution contexts (i.e., managing the set of local variables that spans each `await`). 
-The Rust standard library provides collections for dynamically allocated data-structures (useful to manage execution contexts at run-time. However, in the setting of resource constrained real-time systems, dynamic allocations are problematic (both regarding performance and reliability - Rust runs into a *panic* on an out-of-memory condition). Thus, static allocation is king! +The Rust standard library provides collections for dynamically allocated data-structures which are useful to manage execution contexts at run-time. However, in the setting of resource constrained real-time systems, dynamic allocations are problematic (both regarding performance and reliability - Rust runs into a *panic* on an out-of-memory condition). Thus, static allocation is the preferable approach! -RTIC provides a mechanism for `async`/`await` that relies solely on static allocations. However, the implementation relies on the `#![feature(type_alias_impl_trait)]` (TAIT) which is undergoing stabilization (thus RTIC 2.0.x currently requires a *nightly* toolchain). Technically, using TAIT, the compiler determines the size of each execution context allowing static allocation. +RTIC provides a mechanism for `async`/`await` that relies solely on static allocations. However, the implementation relies on the `#![feature(type_alias_impl_trait)]` (TAIT) which is undergoing stabilization (thus RTIC v2.x currently requires a *nightly* toolchain). Technically, using TAIT, the compiler determines the size of each execution context allowing static allocation. From a modelling perspective `async/await` lifts the run-to-completion requirement of SRP, and each section of code between two yield points (`await`s) can be seen as an individual task. The compiler will reject any attempt to `await` while holding a resource (not doing so would break the strict LIFO requirement on resource usage under SRP). -So with the technical stuff out of the way, what does `async/await` bring to the RTIC table? 
+So with the technical stuff out of the way, what does `async/await` bring to the table? -The answer is - improved ergonomics! In cases you want a task to perform a sequence of requests (and await their results in order to progress). Without `async`/`await` the programmer would be forced to split the task into individual sub-tasks and maintain some sort of state encoding (and manually progress by selecting sub-task). Using `async/await` each yield point (`await`) essentially represents a state, and the progression mechanism is built automatically for you at compile time by means of `Futures`. +The answer is - improved ergonomics! A recurring use case is to have a task perform a sequence of requests and then await their results in order to progress. Without `async`/`await` the programmer would be forced to split the task into individual sub-tasks and maintain some sort of state encoding (and manually progress by selecting sub-task). Using `async/await` each yield point (`await`) essentially represents a state, and the progression mechanism is built automatically for you at compile time by means of `Futures`. Rust `async`/`await` support is still incomplete and/or under development (e.g., there is no stable way to express `async` closures, precluding use in iterator patterns). Nevertheless, Rust `async`/`await` is production ready and covers most common use cases. @@ -144,7 +152,7 @@ An important property is that futures are composable, thus you can await either, ## RTIC the model -An RTIC `app` is a declarative and executable system model for single-core applications, defining a set of (`local` and `shared`) resources operated on by a set of (`init`, `idle`, *hardware* and *software*) tasks. In short the `init` task runs before any other task returning a set of resources (`local` and `shared`). 
Tasks run preemptively based on their associated static priority, `idle` has the lowest priority (and can be used for background work, and/or to put the system to sleep until woken by some event). Hardware tasks are bound to underlying hardware interrupts, while software tasks are scheduled by asynchronous executors (one for each software task priority). +An RTIC `app` is a declarative and executable system model for single-core applications, defining a set of (`local` and `shared`) resources operated on by a set of (`init`, `idle`, *hardware* and *software*) tasks. In short the `init` task runs before any other task returning a set of resources (`local` and `shared`). Tasks run preemptively based on their associated static priority, `idle` has the lowest priority (and can be used for background work, and/or to put the system to sleep until woken by some event). Hardware tasks are bound to underlying hardware interrupts, while software tasks are scheduled by asynchronous executors (one for each software task priority). At compile time the task/resource model is analyzed under SRP and executable code generated with the following outstanding properties: @@ -152,18 +160,4 @@ At compile time the task/resource model is analyzed under SRP and executable cod - hardware task scheduling is performed directly by the hardware, and - software task scheduling is performed by auto generated async executors tailored to the application. -The RTIC API design ensures that both SRP requirements and Rust soundness rules are upheld at all times, thus the executable model is correct by construction. Overall, the generated code infers no additional overhead in comparison to a hand-written implementation, thus in Rust terms RTIC offers a zero-cost abstraction to concurrency. - - - - +The RTIC API design ensures that both SRP requirements and Rust soundness rules are upheld at all times, thus the executable model is correct by construction. 
Overall, the generated code incurs no additional overhead in comparison to a handwritten implementation, thus in Rust terms RTIC offers a zero-cost abstraction to concurrency. -- cgit v1.2.3 From 5dc9c7083ddf2481948c9f9a877bd36552074489 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Henrik=20Tj=C3=A4der?= Date: Sat, 4 Mar 2023 21:44:12 +0100 Subject: Book: Tidy up preface --- book/en/src/preface.md | 2 -- 1 file changed, 2 deletions(-) (limited to 'book/en/src/preface.md') diff --git a/book/en/src/preface.md b/book/en/src/preface.md index 5f6856d..5cba633 100644 --- a/book/en/src/preface.md +++ b/book/en/src/preface.md @@ -12,8 +12,6 @@ This book contains user level documentation for the Real-Time Interrupt-driven C This is the documentation for RTIC v2.x. -{{#include ../../../README.md:59}} - Older releases: [RTIC v1.x](/1.0) | [RTIC v0.5.x (unsupported)](/0.5) | [RTFM v0.4.x (unsupported)](/0.4) -- cgit v1.2.3