1. Three possible levels of concurrency in programs:
• instruction level (executing two or more machine instructions simultaneously),
• statement level (executing two or more high-level language statements simultaneously),
• unit level (executing two or more subprogram units simultaneously)
2. In an SIMD computer, each processor has its own local memory. One processor controls the operation of the other processors. Because all of the processors, except the controller, execute the same instruction at the same time, no synchronization is required in the software.
5. Unit-level concurrency is best supported by MIMD computers.
7. Vector processors have groups of registers that store the operands of a vector operation, in which
the same instruction is executed on the whole group of operands simultaneously.
7. Physical concurrency is when several program units from the same program literally execute simultaneously on multiple processors.
Logical concurrency is when the program is designed as if multiple processors provided actual concurrency, when in fact the actual execution of the program units is taking place in interleaved fashion on a single processor.
8. A scheduler manages the sharing of processors among the tasks. If there were never any interruptions and tasks all had the same priority, the scheduler could simply give each task a time slice, such as 0.1 second, and when a task’s turn came, the scheduler could let it execute on a processor for that amount of time.
16. A task descriptor is a data structure that stores all of the relevant information about the execution state of a task.
18. The purpose of a task-ready queue is to store the tasks that are ready to run, so the scheduler can choose the next one when a processor becomes available.
21. A binary semaphore is a semaphore that requires only a binary-valued counter (0 or 1).
A counting semaphore is a synchronization object whose counter can take an arbitrarily large number of values, allowing it to guard a pool of resources.
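Both kinds can be illustrated with java.util.concurrent.Semaphore, which behaves as a binary semaphore when constructed with one permit and as a counting semaphore otherwise; a minimal sketch (class and field names are mine):

```java
import java.util.concurrent.Semaphore;

public class SemaphoreDemo {
    // Binary semaphore: one permit, so at most one thread in the critical section.
    static final Semaphore binary = new Semaphore(1);
    // Counting semaphore: e.g. three permits guarding a pool of three resources.
    static final Semaphore counting = new Semaphore(3);
    static int shared = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread[] workers = new Thread[4];
        for (int i = 0; i < workers.length; i++) {
            workers[i] = new Thread(() -> {
                try {
                    binary.acquire();   // wait (P) operation
                    shared++;           // protected critical section
                    binary.release();   // release (V) operation
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            workers[i].start();
        }
        for (Thread w : workers) w.join();
        System.out.println(shared); // 4: every increment was mutually exclusive
    }
}
```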
30. The Ada terminate clause, when selected, means that the task is finished with its job but is not yet terminated. Task termination is discussed later in this section.
34. The sleep method in Java blocks the thread for at least the specified amount of time.
35. The yield method in Java is a request from the running thread to voluntarily surrender the processor.
36. The join method in Java is used to force a thread to delay its execution until the run method of another thread has completed its execution.
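The three methods above can be seen together in a small sketch (class and variable names are mine): the worker sleeps briefly, and join guarantees the main thread sees its result.

```java
public class JoinDemo {
    static int result = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            try {
                Thread.sleep(100);   // sleep: blocks this thread for ~100 ms
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            result = 42;
        });
        worker.start();
        worker.join();               // main thread waits until worker's run completes
        System.out.println(result);  // guaranteed 42: join establishes happens-before
    }
}
```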
55. Concurrent ML is an extension to ML that includes a form of threads and a form of synchronous message passing to support concurrency.
56. The spawn primitive of CML takes a function as its parameter and creates a thread that executes that function.
57. The subprograms BeginInvoke and EndInvoke in F# are used to call methods asynchronously.
60. What is the type of an F# heap-allocated mutable variable?
A mutable heap-allocated variable is of type ref.
63. The FORALL statement of High-Performance Fortran specifies a sequence of assignment statements that may be executed concurrently.
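The statement itself is Fortran, but the underlying idea — independent assignments that may run concurrently — can be loosely sketched in Java with a parallel stream (an analogue, not HPF; names are mine):

```java
import java.util.Arrays;
import java.util.stream.IntStream;

public class ForallSketch {
    static double[] a = new double[8];
    static double[] b = {1, 2, 3, 4, 5, 6, 7, 8};

    public static void main(String[] args) {
        // Roughly FORALL (i = 1:8) a(i) = 2.0 * b(i): each assignment is
        // independent of the others, so the iterations may run concurrently.
        IntStream.range(0, 8).parallel().forEach(i -> a[i] = 2.0 * b[i]);
        System.out.println(Arrays.toString(a));
    }
}
```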
1. Explain clearly why a race condition can create problems for a system.
A race condition can create problems for a system because two or more tasks are racing to use a shared resource, and the behavior of the program depends on which task arrives first (and wins the race). The outcome therefore varies from run to run and can leave the shared resource in an inconsistent state.
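The lost-update form of a race can be reproduced directly in Java; a minimal sketch (class and field names are mine) in which two threads increment a shared counter, with java.util.concurrent.atomic.AtomicInteger shown as one fix:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class RaceDemo {
    static int unsafeCount = 0;                           // racy shared counter
    static AtomicInteger safeCount = new AtomicInteger(); // atomic alternative

    public static void main(String[] args) throws InterruptedException {
        Runnable body = () -> {
            for (int i = 0; i < 100_000; i++) {
                unsafeCount++;               // read-modify-write: not atomic
                safeCount.incrementAndGet(); // atomic: no lost updates
            }
        };
        Thread t1 = new Thread(body), t2 = new Thread(body);
        t1.start(); t2.start();
        t1.join(); t2.join();
        // safeCount is always 200000; unsafeCount is often less, because
        // the result depends on how the two threads' updates interleave.
        System.out.println(unsafeCount + " vs " + safeCount.get());
    }
}
```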
2. The different ways to handle deadlock:
– Ignoring deadlock
– Detection and recovery
– Dynamic avoidance
– Prevention
3. Busy waiting is a method whereby a task waits for a given event by continuously checking for that event to occur. What is the main problem with this approach?
Busy waiting, or spinning, is a technique in which a process repeatedly checks whether a condition has become true, such as whether keyboard input or a lock is available. The main problem is that the checking loop consumes processor cycles that could otherwise do useful work: the waiting task accomplishes nothing yet remains runnable, and on a single processor it can even delay the very task that would make the condition true. Spinning was also once used to generate arbitrary time delays on systems that lacked a way to wait for a specific length of time, but because processor speeds vary greatly from computer to computer (especially on processors that adjust their speed dynamically), such delays are unreliable. Finally, a busy wait may loop forever if the awaited event never occurs.
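As a contrast to spinning, Java's built-in wait/notifyAll mechanism blocks the waiting thread instead of burning cycles; a minimal sketch (class and field names are mine):

```java
public class BusyWaitDemo {
    static volatile boolean ready = false;
    static final Object lock = new Object();
    static int value = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread producer = new Thread(() -> {
            value = 99;
            synchronized (lock) {
                ready = true;
                lock.notifyAll();        // wake any blocked waiters
            }
        });
        producer.start();
        // Busy waiting would be: while (!ready) { /* spin, wasting CPU */ }
        // Blocking wait instead: the thread sleeps until notified.
        synchronized (lock) {
            while (!ready) lock.wait(); // releases the lock while waiting
        }
        System.out.println(value);       // 99: set before ready was published
    }
}
```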