Concurrency in the Kernel
- One way in which kernel programming differs greatly from conventional application programming is the issue of concurrency. Most applications, with the notable exception of multithreaded applications, run sequentially from beginning to end, without any need to worry about what else might be happening to change their environment.
- Kernel code does not run in such a simple world, and even the simplest kernel modules must be written with the idea that many things can be happening at once.
- There are a few sources of concurrency in kernel programming:
- Naturally, Linux systems run multiple processes, more than one of which can be trying to use your driver at the same time.
- Most devices are capable of interrupting the processor; interrupt handlers run asynchronously and can be invoked at the same time that your driver is trying to do something else.
- Several software abstractions (such as kernel timers) run asynchronously as well.
- Moreover, of course, Linux can run on symmetric multiprocessor (SMP) systems, with the result that your driver could be executing concurrently on more than one CPU.
- Finally, as of the 2.6 kernel, kernel code has been made preemptible; this change means that even uniprocessor systems have many of the same concurrency issues as multiprocessor systems.
- As a result, Linux kernel code, including driver code, must be reentrant: it must be capable of running in more than one context at the same time. Data structures must be carefully designed to keep multiple threads of execution separate, and the code must take care to access shared data in ways that prevent corruption of the data (a minimal sketch follows this list).
- Writing code that handles concurrency and avoids race conditions requires thought and can be tricky.
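As a rough illustration of the last two points, here is a minimal sketch of how a driver's open() method might protect a piece of shared data against concurrent callers. It is not taken from any real driver: the names demo_lock, demo_open_count, demo_open, and demo_fops are invented for this example. A mutex is assumed to be acceptable because open() runs in process context, where sleeping is allowed; data shared with an interrupt handler would instead need a spinlock taken with spin_lock_irqsave().

```c
#include <linux/module.h>
#include <linux/fs.h>
#include <linux/mutex.h>

/*
 * Hypothetical shared state; every identifier here is made up for
 * illustration and is not part of any real driver.
 */
static DEFINE_MUTEX(demo_lock);   /* protects demo_open_count */
static int demo_open_count;       /* shared by all openers of the device */

/* open() may be entered by several processes (or CPUs) at the same time. */
static int demo_open(struct inode *inode, struct file *filp)
{
	/* Process context may sleep, so a mutex is an appropriate lock here. */
	if (mutex_lock_interruptible(&demo_lock))
		return -ERESTARTSYS;  /* interrupted by a signal */

	demo_open_count++;            /* the critical section: touch shared data */

	mutex_unlock(&demo_lock);
	return 0;
}

static const struct file_operations demo_fops = {
	.owner = THIS_MODULE,
	.open  = demo_open,
};

MODULE_LICENSE("GPL");
```

Without the lock, two processes opening the device at once could both read the old value of demo_open_count and write back the same incremented value, losing one update; the mutex serializes the critical section so that cannot happen.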