Steven Rostedt maintains the stable -rt branches – basically, the -rt patch gets applied to all the stable kernels, and to the Red Hat kernel as well.
It should really not be called Real-Time but rather Deterministic (a Deterministic Operating System – DOS!), because it is not faster: making the kernel deterministic adds overhead.
Do your homework: the hardware has to be deterministic as well.
Goal of PREEMPT_RT: a 100% preemptible kernel: remove interrupt-disabled sections and other ways of disabling preemption.
- No preemption: get as much work done with as little scheduling as possible; only schedule when someone explicitly calls schedule().
- Voluntary preemption: might_sleep() is a debugging aid for finding code that sleeps where it must not; voluntary preemption reuses these annotations as explicit preemption points, to make the ‘no preemption’ latencies less bad.
- Preemptible kernel: preempt anywhere except inside spin_lock sections – you need to get the locking right for SMP anyway, so the same locking can be reused to allow preemption in the single-processor case.
- Basic RT preemption is just a debugging aid for PREEMPT_RT.
- PREEMPT_RT_FULL implements spin_locks as mutexes and runs interrupt handlers as threads (so they are scheduled), which means an interrupt handler may call schedule().
Disabling preemption effectively makes you the highest-priority task in the system – similar in effect to the BKL (Big Kernel Lock).
Priority inversion: to avoid unbounded priority inversion, locks need priority inheritance. For inheritance to work, every lock needs a single owner (otherwise the inheritance chain becomes too complex to track).
Some spin_locks cannot be converted to mutexes, e.g. the locks inside the scheduler itself; these stay spinning locks and are called raw_spin_locks. The raw_spin_lock type is already in mainline, even though the distinction is meaningless in a non-RT kernel.
Threaded interrupts: all interrupt handlers behave like softirqs, i.e. they run in a thread. Some interrupts have to stay hard, though, e.g. the timer interrupt. The preferred approach is a per-device interrupt thread via request_threaded_irq(), which takes two handlers: the hard-irq handler (which should mask the interrupt) and the threaded handler (which actually handles it and re-enables the interrupt). In mainline, booting with threadirqs on the kernel command line forces all interrupts to be threaded (handled by a big switch in the irq thread).
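A per-device threaded handler is registered roughly like this (kernel-module sketch, not standalone code; the foo_* device and helpers are hypothetical):

```c
/* Hard handler: runs in hard-irq context. Just silence the device
 * and hand off to the irq thread. */
static irqreturn_t foo_hardirq(int irq, void *dev)
{
	struct foo_device *foo = dev;

	foo_mask_irq(foo);		/* disable the device interrupt */
	return IRQ_WAKE_THREAD;		/* defer to the irq thread */
}

/* Threaded handler: runs in a schedulable kernel thread, so it may
 * sleep, take mutexes, etc. */
static irqreturn_t foo_thread_fn(int irq, void *dev)
{
	struct foo_device *foo = dev;

	foo_handle_data(foo);		/* the actual work */
	foo_unmask_irq(foo);		/* re-enable the interrupt */
	return IRQ_HANDLED;
}

/* In the driver's probe function: */
ret = request_threaded_irq(irq, foo_hardirq, foo_thread_fn,
			   0, "foo", foo);
```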
Don’t use local_irq_disable() or preempt_disable(); use local_lock(_irq(save)) instead. local_lock_xxx is just a preempt_disable() on a non-RT kernel; on RT it is implemented as spin_lock_irqsave. local_lock_xxx has low latency, documents what it is protecting, does not disable interrupts on RT, and works on SMP. (Note: local_lock_xxx is not yet mainlined.) get_cpu() does a preempt_disable(); on RT, replace it with get_cpu_light(), which only pins the current thread to the current CPU without disabling preemption.
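Protecting per-CPU data with a named local lock instead of a bare preempt_disable() looks roughly like this (kernel sketch; since local_lock was out of tree at the time, the exact spelling of the API may differ, and the foo_* names are hypothetical):

```c
/* Per-CPU data bundled with the lock that protects it. On !RT the
 * lock compiles down to preempt disabling; on RT it becomes a
 * per-CPU sleeping lock, so the section stays preemptible. */
struct foo_percpu {
	local_lock_t	lock;
	int		counter;
};

static DEFINE_PER_CPU(struct foo_percpu, foo_pcpu) = {
	.lock = INIT_LOCAL_LOCK(lock),
};

static void foo_bump(void)
{
	/* Unlike preempt_disable(), the named lock documents exactly
	 * which data is being protected, and lockdep can track it. */
	local_lock(&foo_pcpu.lock);
	this_cpu_inc(foo_pcpu.counter);
	local_unlock(&foo_pcpu.lock);
}
```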
rwlocks are problematic because the writer has to wait for all readers, and new readers can keep arriving while the writer waits. On RT the rwlock is at least FIFO-fair, but that fairness creates deadlocks: previously readers never blocked each other, so nested read-locking with lock-order inversion went unnoticed, whereas with a writer queued in between the same pattern deadlocks. lockdep doesn’t (yet) detect this scenario. rwlocks should be replaced by RCU where possible.
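The rwlock-to-RCU conversion for read-mostly data looks roughly like this (kernel sketch; the foo_* config structure is hypothetical): readers never block, and the writer publishes a new copy and frees the old one after a grace period.

```c
struct foo_cfg {
	int threshold;
};

static struct foo_cfg __rcu *active_cfg;
static DEFINE_SPINLOCK(cfg_lock);	/* serializes writers only */

/* Reader: cheap, never blocks, never blocks the writer. */
static int foo_read_threshold(void)
{
	struct foo_cfg *cfg;
	int t = 0;

	rcu_read_lock();
	cfg = rcu_dereference(active_cfg);
	if (cfg)
		t = cfg->threshold;
	rcu_read_unlock();
	return t;
}

/* Writer: copy-update-publish, then wait for readers to drain. */
static void foo_set_threshold(int t)
{
	struct foo_cfg *new, *old;

	new = kmalloc(sizeof(*new), GFP_KERNEL);
	new->threshold = t;

	spin_lock(&cfg_lock);
	old = rcu_dereference_protected(active_cfg,
					lockdep_is_held(&cfg_lock));
	rcu_assign_pointer(active_cfg, new);
	spin_unlock(&cfg_lock);

	synchronize_rcu();	/* wait until no reader can see `old` */
	kfree(old);
}
```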
Thomas Gleixner thinks that PREEMPT_RT makes Xenomai redundant.