- In shared memory multiprocessor architectures, such as SMPs, threads can be used to implement parallelism.
- In the threads model of parallel programming, a single process can have multiple concurrent execution paths.
- The simplest analogy for threads is a single program that includes a number of subroutines:
- a.out (main program) loads and acquires all of the necessary system and user resources to run.
- The main program performs some serial work, and then creates a number of tasks (threads) that the OS can schedule and run concurrently.
- Each thread has local data, but also shares all of the resources of the main program.
Figure 3: Thread shared memory model.
- This saves the overhead associated with replicating a program's resources for each thread.
- Each thread also benefits from a global memory view because it shares the memory space of the program.
- Any thread can execute any subroutine at the same time as other threads.
- Threads communicate with each other through global memory (updating address locations).
- Changes made by one thread to shared system resources (such as closing a file) will be seen by all other threads.
- This requires synchronization constructs to ensure that no two threads update the same global address at the same time.
Figure 4: Threads Unsafe! Pointers having the same value point to the same data.
- Threads can come and go, but the main program remains present to provide the necessary shared resources until the application has completed.
- From a programming perspective, threads implementations commonly comprise:
- A library of subroutines that are called from within parallel source code
- A set of compiler directives embedded in either serial or parallel source code
- In both cases, the programmer is responsible for determining all parallelism.
Cem Ozdogan
2010-11-21