Operating System Concepts, Seventh Edition: Solutions (with programming code)
The time required to cycle through all the processes is therefore 10 * 1.1 + 10.1 = 21.1 milliseconds (each I/O-bound task executes for 1 millisecond and then incurs a 0.1-millisecond context switch, whereas the CPU-bound task executes for 10 milliseconds before incurring a context switch). The CPU utilization is therefore 20/21.1 * 100 ≈ 94.8%.
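The arithmetic can be checked with a short calculation; the 1 ms and 10 ms bursts and the 0.1 ms context-switch cost are taken from the exercise as restated above.

    #include <stdio.h>

    int main(void) {
        const double context_switch = 0.1; /* ms, switch overhead from the exercise          */
        const double io_burst = 1.0;       /* ms each I/O-bound task runs before blocking    */
        const double cpu_burst = 10.0;     /* ms the CPU-bound task runs before its switch   */
        const int io_tasks = 10;

        /* One full cycle: ten (1 ms + 0.1 ms) slots plus one (10 ms + 0.1 ms) slot. */
        double cycle  = io_tasks * (io_burst + context_switch) + (cpu_burst + context_switch);
        double useful = io_tasks * io_burst + cpu_burst;

        printf("cycle = %.1f ms, CPU utilization = %.1f%%\n", cycle, 100.0 * useful / cycle);
        return 0;
    }

(It prints a cycle time of 21.1 ms and a utilization of about 94.8%.)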
5.8 Consider a system implementing multilevel queue scheduling. What strategy can a computer user employ to maximize the amount of CPU time allocated to the user’s process?
Answer: The program could maximize the CPU time allocated to it by not fully utilizing its time quanta. It could use a large fraction of its assigned quantum but relinquish the CPU before the end of the quantum, thereby increasing the priority associated with the process.
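A sketch of this tactic on a POSIX-like system is shown below. The 10 ms quantum and the 80% split are illustrative assumptions, not values from the exercise, and sched_yield()/clock_gettime() merely stand in for whatever mechanism the system provides for voluntarily giving up the CPU.

    #include <sched.h>
    #include <time.h>

    /* Burn CPU for roughly 'usec' microseconds of wall-clock time. */
    static void work_for(long usec) {
        struct timespec start, now;
        clock_gettime(CLOCK_MONOTONIC, &start);
        do {
            clock_gettime(CLOCK_MONOTONIC, &now);
        } while ((now.tv_sec - start.tv_sec) * 1000000L +
                 (now.tv_nsec - start.tv_nsec) / 1000L < usec);
    }

    int main(void) {
        const long assumed_quantum_usec = 10000;      /* assumption: a ~10 ms quantum  */
        for (int i = 0; i < 1000; i++) {
            work_for(assumed_quantum_usec * 8 / 10);  /* use most of the quantum...    */
            sched_yield();                            /* ...but give up the CPU early  */
        }
        return 0;
    }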
5.9 Consider a preemptive priority scheduling algorithm based on dynamically changing priorities. Larger priority numbers imply higher priority. When a process is waiting for the CPU (in the ready queue, but not running), its priority changes at a rate α; when it is running, its priority changes at a rate β. All processes are given a priority of 0 when they enter the ready queue. The parameters α and β can be set to give many different scheduling algorithms. a. What is the algorithm that results from β > α > 0? b. What is the algorithm that results from α < β < 0?
Answer: a. FCFS. With β > α > 0 every priority grows over time, but the running process grows fastest, so it is never preempted, and among the waiting processes the one that arrived earliest has accumulated the highest priority. b. LIFO. With α < β < 0 every priority decays, so a newly arrived process enters at priority 0, higher than every process already in the system; it preempts the running process and is served first.
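These two cases can be checked with a small discrete-time simulation. The arrival times, the 5-unit service times, and the concrete rate values below are illustrative assumptions; only the sign relationships between α and β come from the exercise.

    #include <stdio.h>

    #define N 4

    /* Toy simulation of exercise 5.9: priorities start at 0, change at rate
     * alpha while waiting and beta while running, and the highest-priority
     * ready process always runs (preemptive). Process i arrives at time i
     * and needs 5 time units of CPU.
     */
    static void simulate(double alpha, double beta, const char *label) {
        double prio[N] = {0}, remaining[N];
        int arrival[N], done = 0;
        for (int i = 0; i < N; i++) { remaining[i] = 5; arrival[i] = i; }

        printf("%s: completion order:", label);
        for (int t = 0; done < N; t++) {
            /* choose the highest-priority process that has arrived and is unfinished */
            int run = -1;
            for (int i = 0; i < N; i++)
                if (arrival[i] <= t && remaining[i] > 0 &&
                    (run == -1 || prio[i] > prio[run]))
                    run = i;
            /* advance the clock by one time unit */
            for (int i = 0; i < N; i++) {
                if (arrival[i] > t || remaining[i] <= 0)
                    continue;
                if (i == run) { prio[i] += beta;  remaining[i] -= 1; }
                else          { prio[i] += alpha; }
            }
            if (run >= 0 && remaining[run] <= 0) {
                printf(" P%d", run);
                done++;
            }
        }
        printf("\n");
    }

    int main(void) {
        simulate(1.0, 2.0,   "beta > alpha > 0");   /* expect P0 P1 P2 P3 (FCFS) */
        simulate(-2.0, -1.0, "alpha < beta < 0");   /* expect P3 P2 P1 P0 (LIFO) */
        return 0;
    }

With β > α > 0 it prints the processes in arrival order (FCFS); with α < β < 0 it prints them in reverse arrival order (LIFO).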
5.10 Explain the differences in the degree to which the following scheduling algorithms discriminate in favor of short processes: a. FCFS b. RR
c. Multilevel feedback queues
Answer:
a. FCFS — discriminates against short jobs since any short jobs arriving after long jobs will have a longer waiting time.
b. RR — treats all jobs equally (giving them equal bursts of CPU time), so short jobs will be able to leave the system faster since they will finish first.
c. Multilevel feedback queues — work similarly to the RR algorithm; they discriminate favorably toward short jobs.
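The difference is easy to see numerically. The sketch below computes the waiting time of a short job that arrives together with two long jobs, under FCFS and under RR; the job lengths and the 5 ms quantum are made-up values for illustration.

    #include <stdio.h>
    #include <string.h>

    #define N 3
    /* One short job sandwiched between long ones; all arrive at t = 0. */
    static const double burst[N] = {50.0, 2.0, 50.0};

    static void fcfs(double wait[N]) {
        double t = 0;
        for (int i = 0; i < N; i++) { wait[i] = t; t += burst[i]; }
    }

    static void rr(double wait[N], double quantum) {
        double remaining[N], finish[N], t = 0;
        int left = N;
        memcpy(remaining, burst, sizeof remaining);
        while (left > 0) {
            for (int i = 0; i < N; i++) {
                if (remaining[i] <= 0) continue;
                double slice = remaining[i] < quantum ? remaining[i] : quantum;
                t += slice;
                remaining[i] -= slice;
                if (remaining[i] <= 0) { finish[i] = t; left--; }
            }
        }
        for (int i = 0; i < N; i++)
            wait[i] = finish[i] - burst[i];   /* waiting = turnaround - burst */
    }

    int main(void) {
        double w1[N], w2[N];
        fcfs(w1);
        rr(w2, 5.0);
        printf("short job (2 ms burst): FCFS wait = %.0f ms, RR wait = %.0f ms\n",
               w1[1], w2[1]);
        return 0;
    }

It prints a waiting time of 50 ms for the short job under FCFS versus 5 ms under RR; a multilevel feedback queue would behave like the RR case for this job.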
5.11 Using the Windows XP scheduling algorithm, what is the numeric priority of a thread for the following scenarios?
a. A thread in the REALTIME PRIORITY CLASS with a relative priority of HIGHEST. b. A thread in the NORMAL PRIORITY CLASS with a relative priority of NORMAL. c. A thread in the HIGH PRIORITY CLASS with a relative priority of ABOVE NORMAL.
Answer: a. 26 b. 8 c. 14
5.12 Consider the scheduling algorithm in the Solaris operating system for time-sharing threads:
a. What is the time quantum (in milliseconds) for a thread with priority 10? With priority 55? b. Assume a thread with priority 35 has used its entire time quantum without blocking. What new priority will the scheduler assign this thread? c. Assume a thread with priority 35 blocks for I/O before its time quantum has expired. What new priority will the scheduler assign this thread?
Answer: a. 160 and 40 b. 25 (a thread at priority 35 that uses its entire quantum is moved to the lower priority given in the table's time-quantum-expired column) c. 54 (a thread that blocks before its quantum expires is boosted to the priority given in the return-from-sleep column)
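The mechanism can be illustrated with a lookup over the handful of dispatch-table rows this exercise touches. The numbers below are only those quoted in the answer above plus the 80 ms quantum for priority 35 from the book's figure; treat this fragment as an assumption rather than a reproduction of the full table.

    #include <stdio.h>

    /* Fragment of the Solaris time-sharing dispatch table
     * (-1 marks columns not needed for this exercise). */
    struct ts_entry {
        int priority;    /* current priority                             */
        int quantum;     /* time quantum, in milliseconds                */
        int tq_expired;  /* new priority when the whole quantum is used  */
        int ret_sleep;   /* new priority when the thread blocks (sleeps) */
    };

    static const struct ts_entry table[] = {
        { 10, 160, -1, -1 },
        { 35,  80, 25, 54 },
        { 55,  40, -1, -1 },
    };

    static const struct ts_entry *lookup(int priority) {
        for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
            if (table[i].priority == priority)
                return &table[i];
        return 0;
    }

    int main(void) {
        printf("priority 10 quantum: %d ms\n", lookup(10)->quantum);
        printf("priority 55 quantum: %d ms\n", lookup(55)->quantum);
        printf("priority 35 after full quantum: %d\n", lookup(35)->tq_expired);
        printf("priority 35 after blocking:     %d\n", lookup(35)->ret_sleep);
        return 0;
    }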
5.13 The traditional UNIX scheduler enforces an inverse relationship between priority numbers and priorities: The higher the number, the lower the priority. The scheduler recalculates process priorities once per second using the following function:
Priority = (Recent CPU usage / 2) + Base
where base = 60 and recent CPU usage refers to a value indicating how often a process has used the CPU since priorities were last recalculated. Assume that recent CPU usage for process P1 is 40, process P2 is 18, and process P3 is 10. What will be the new priorities for these three processes when priorities are recalculated? Based on this information, does the traditional UNIX scheduler raise or lower the relative priority of a CPU-bound process?
Answer: The priorities assigned to the processes are 80, 69, and 65 respectively. The scheduler lowers the relative priority of CPU-bound processes.
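The recalculation is simple enough to verify directly:

    #include <stdio.h>

    /* priority = (recent CPU usage / 2) + base, with base = 60 as stated above. */
    static int recalc(int recent_cpu) {
        const int base = 60;
        return recent_cpu / 2 + base;
    }

    int main(void) {
        const int usage[] = {40, 18, 10};   /* P1, P2, P3 from the exercise */
        for (int i = 0; i < 3; i++)
            printf("P%d: new priority = %d\n", i + 1, recalc(usage[i]));
        /* prints 80, 69, 65: the heaviest CPU user ends up with the largest
           number, i.e. the lowest relative priority */
        return 0;
    }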
Chapter 6
6.1 The first known correct software solution to the critical-section problem
for two processes was developed by Dekker. The two processes, P0 and P1, share the following variables:

boolean flag[2]; /* initially false */
int turn;

The structure of process Pi (i == 0 or 1) is shown in Figure 6.25; the other process is Pj (j == 1 or 0). Prove that the algorithm satisfies all three requirements for the critical-section problem.
Answer: This algorithm satisfies the three requirements of the critical-section problem.
(1) Mutual exclusion is ensured through the use of the flag and turn variables. If both processes set their flag to true, only one will succeed, namely the process whose turn it is. The waiting process can only enter its critical section when the other process updates the value of turn.
(2) Progress is provided, again through the flag and turn variables. This algorithm does not provide strict alternation. Rather, if a process wishes to enter its critical section, it can set its flag variable to true and enter its critical section. It sets turn to the value of the other process only upon exiting its critical section. If this process wishes to enter its critical section again, before the other process does, it repeats the process of entering its critical section and setting turn to the other process upon exiting.
(3) Bounded waiting is preserved through the use of the turn variable. Assume two processes wish to enter their respective critical sections. They both set their flag to true; however, only the process whose turn it is can proceed, while the other process waits.
If bounded waiting were not preserved, it would be possible for the waiting process to wait indefinitely while the first process repeatedly entered, and exited, its critical section. However, Dekker's algorithm has a process set the value of turn to the other process upon exiting, thereby ensuring that the other process will enter its critical section next.
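Figure 6.25 is not reproduced in this extract; for reference, the following is a sketch of the standard Dekker construction that the argument above relies on, written for process Pi (with Pj the other process).

    /* Shared: boolean flag[2] = {false, false}; int turn; */
    do {
        flag[i] = true;
        while (flag[j]) {
            if (turn == j) {
                flag[i] = false;     /* step aside while it is Pj's turn   */
                while (turn == j)
                    ;                /* busy wait until turn changes       */
                flag[i] = true;      /* then re-assert interest            */
            }
        }

            /* critical section */

        turn = j;                    /* hand the turn to the other process */
        flag[i] = false;

            /* remainder section */
    } while (true);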
6.2 The first known correct software solution to the critical-section problem for n processes with a lower bound on waiting of n - 1 turns was presented by Eisenberg and McGuire. The processes share the following variables:
enum pstate {idle, want_in, in_cs};
pstate flag[n];
int turn;
All the elements of flag are initially idle; the initial value of turn is immaterial (between 0 and n-1). The structure of process Pi is shown in Figure 6.26. Prove that the algorithm satisfies all three requirements for the critical-section problem.
Answer: This algorithm satisfies the three conditions. Before we show that the three conditions are satisfied, we give a brief explanation of what the algorithm does to ensure mutual exclusion. When a process i requires access to its critical section, it first sets its flag variable to want_in to indicate its desire. It then performs
the following steps: (1) It ensures that all processes whose index lies between turn and i are idle. (2) If so, it updates its flag to in_cs and checks whether there is already some other process that has updated its flag to in_cs. (3) If not, and if it is this process's turn to enter the critical section or if the process indicated by the turn variable is idle, it enters the critical section. Given the above description, we can reason about how the algorithm satisfies the requirements in the following manner:
• Mutual exclusion is ensured: Notice that a process enters the critical section only if the following requirement is satisfied: no other process has its flag variable set to in_cs. Since the process sets its own flag variable to in_cs before checking the status of other processes, we are guaranteed that no two processes will enter the critical section simultaneously.
• Progress requirement is satisfied: Consider the situation where multiple processes simultaneously set their flag variables to in_cs and then check whether any other process has its flag variable set to in_cs. When this happens, all processes realize that there are competing processes, enter the next iteration of the outer while (1) loop, and reset their flag variables to want_in. Now the only process that will set its flag variable to in_cs is the process whose index is closest to turn. It is, however, possible that new processes whose index values are even closer to turn might decide to enter the critical section at this point and might therefore be able to simultaneously set their flags to in_cs. These processes would then realize there are competing processes and might restart the process of entering the critical section. However, at each iteration, the index values of processes that set their flag variables to in_cs become closer to turn, and eventually we reach the following condition: only one process (say k) sets its flag to in_cs, and no other process whose index lies between turn and k has set its flag to in_cs. This process then gets to enter the critical section.
• Bounded-waiting requirement is met: The bounded-waiting requirement is satisfied by the fact that when a process k desires to enter the critical section, its flag is no longer set to idle. Therefore, any process whose index does not lie between turn and k cannot enter the critical section. In the meantime, all processes whose index falls between turn and k and that desire to enter the critical section will indeed enter the critical section (due to the fact that the system always makes progress), and the turn value monotonically becomes closer to k. Eventually, either turn becomes k or there are no processes whose index values lie between turn and k, and therefore process k gets to enter the critical section.
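Figure 6.26 is likewise not reproduced here; the sketch below shows the standard Eisenberg and McGuire construction that the three arguments above refer to, written for process Pi. It is presented from memory of the usual formulation, so treat it as a reference sketch rather than a verbatim copy of the figure.

    do {
        while (true) {
            flag[i] = want_in;
            j = turn;
            while (j != i) {          /* wait until everyone from turn up to i is idle */
                if (flag[j] != idle)
                    j = turn;         /* someone ahead of us is active: rescan         */
                else
                    j = (j + 1) % n;
            }
            flag[i] = in_cs;
            j = 0;                    /* make sure no one else has reached in_cs       */
            while ((j < n) && (j == i || flag[j] != in_cs))
                j++;
            if ((j >= n) && (turn == i || flag[turn] == idle))
                break;                /* it is safe to enter                           */
        }
        turn = i;

            /* critical section */

        j = (turn + 1) % n;           /* pass the turn to the next non-idle process    */
        while (flag[j] == idle)
            j = (j + 1) % n;
        turn = j;
        flag[i] = idle;

            /* remainder section */
    } while (true);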
6.3 What is the meaning of the term busy waiting? What other kinds of waiting are there in an operating system? Can busy waiting be avoided altogether? Explain your answer.
Answer: Busy waiting means that a process is waiting for a condition to be satisfied in a tight loop without relinquishing the processor. Alternatively, a process could wait by relinquishing the processor and blocking on a condition, to be awakened at some appropriate time in the future. Busy waiting can be avoided, but doing so incurs the overhead associated with putting a process to sleep and having to wake it up when the appropriate program state is reached.
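The contrast can be made concrete with POSIX threads: the first helper below spins on a shared flag (busy waiting), while the second blocks on a condition variable and is woken when the condition becomes true. The flag names and the one-second producer delay are illustrative.

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <unistd.h>

    static volatile bool ready_spin = false;              /* for the busy-wait demo only */

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
    static bool ready = false;

    /* Busy waiting: the caller keeps the CPU and spins until the flag flips. */
    static void wait_busy(void) {
        while (!ready_spin)
            ;                                             /* burns CPU, relinquishes nothing */
    }

    /* Blocking wait: the caller sleeps on a condition variable and is woken later. */
    static void wait_blocking(void) {
        pthread_mutex_lock(&lock);
        while (!ready)
            pthread_cond_wait(&cond, &lock);              /* gives up the CPU while waiting */
        pthread_mutex_unlock(&lock);
    }

    static void *producer(void *arg) {
        (void)arg;
        sleep(1);                                         /* pretend to do some work */
        pthread_mutex_lock(&lock);
        ready = true;
        pthread_cond_signal(&cond);
        pthread_mutex_unlock(&lock);
        ready_spin = true;
        return NULL;
    }

    int main(void) {
        pthread_t t;
        pthread_create(&t, NULL, producer, NULL);
        wait_blocking();                                  /* sleeps rather than spins        */
        wait_busy();                                      /* returns almost immediately here */
        pthread_join(t, NULL);
        puts("condition observed");
        return 0;
    }

(Compile with -pthread.)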