1. What is thrashing? Why does it occur? Explain the two approaches to prevent it from happening.
Answer: A process is thrashing when it is busy swapping pages in and out: it quickly faults again and again, replacing pages that it must immediately bring back in. This high paging activity is called thrashing. Thrashing occurs when a process does not have the number of frames it needs to support the pages in active use. The process then quickly page-faults and must replace some page; since all its pages are in active use, it must replace a page that will be needed again right away.
The working-set model is based on the assumption of locality. The model uses a parameter, Δ, to define the working-set window: the idea is to examine the most recent Δ page references. The set of pages in the most recent Δ page references is the working set. If a page is in active use, it will be in the working set; if it is no longer being used, it will drop from the working set Δ time units after its last reference. Thus, the working set is an approximation of the program's locality.
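As a rough illustration of the working-set computation, the sketch below collects the distinct pages among the most recent Δ references; the window size and the reference string are assumed example values, not part of the answer.

# Illustrative working-set computation: the pages referenced within the
# last DELTA references form the working set. DELTA and the reference
# string below are assumed example values.
DELTA = 10
references = [1, 2, 1, 3, 4, 2, 1, 5, 1, 2, 3, 4, 5, 6]

def working_set(refs, delta):
    """Distinct pages among the most recent `delta` references."""
    return set(refs[-delta:])

print(working_set(references, DELTA))   # {1, 2, 3, 4, 5, 6}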
Page-fault frequency (PFF) takes a more direct approach. The specific problem is how to prevent thrashing, and a thrashing process has a high page-fault rate, so we want to control the page-fault rate. When the rate is too high, we know that the process needs more frames; conversely, if the page-fault rate is too low, the process may have too many frames. We can establish upper and lower bounds on the desired page-fault rate. If the actual page-fault rate exceeds the upper limit, we allocate the process another frame; if the page-fault rate falls below the lower limit, we remove a frame from the process. Thus, we can directly measure and control the page-fault rate to prevent thrashing.
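A minimal sketch of PFF-style frame control follows; the bound values and the Process type are illustrative assumptions, not part of the textbook scheme.

from dataclasses import dataclass

# Sketch of PFF-based frame control. UPPER_BOUND, LOWER_BOUND and the
# Process type are illustrative assumptions.
UPPER_BOUND = 0.10   # fault rate above which the process needs more frames
LOWER_BOUND = 0.01   # fault rate below which a frame can be reclaimed

@dataclass
class Process:
    frames: int

def adjust_frames(process, fault_rate, free_frames):
    """Grow or shrink the process's frame allocation based on its fault rate."""
    if fault_rate > UPPER_BOUND and free_frames > 0:
        process.frames += 1      # too many faults: allocate another frame
    elif fault_rate < LOWER_BOUND and process.frames > 1:
        process.frames -= 1      # very few faults: reclaim a frame

p = Process(frames=8)
adjust_frames(p, fault_rate=0.25, free_frames=3)   # rate above the upper bound
print(p.frames)                                    # 9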
2. What is a memory-mapped file? Explain the advantage of the approach.
Answer: Consider a sequential read of a file on disk using the standard system calls open(), read(), and write(). Each file access requires a system call and a disk access. Alternatively, we can use virtual memory techniques to treat file I/O as routine memory accesses. This approach, known as memory-mapping a file, allows a part of the virtual address space to be logically associated with the file.
The advantage of this approach is that I/O-intensive operations can be much faster, since file content does not need to be copied between kernel space and user space. In some cases, performance can nearly double.
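As a rough illustration, Python's mmap module can map a file and let it be read with ordinary indexing instead of read() calls; the filename below is an assumed example.

import mmap

# Map an existing file into the address space and read it as memory.
# "data.bin" is an assumed example file (must exist and be non-empty).
with open("data.bin", "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        first_bytes = mm[:16]    # bytes come from the mapping, not read()
        print(first_bytes)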
3. With an average page-fault service time of 4 milliseconds and a
memory access time of 200 nanoseconds, what is the effective memory access time
in nanoseconds for a page-fault rate of 0.0001%?
Answer:
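Assuming the standard effective-access-time formula, EAT = (1 - p) × memory access time + p × page-fault service time, with p = 0.0001% = 0.000001 and 4 ms = 4,000,000 ns:

EAT = (1 - 0.000001) × 200 ns + 0.000001 × 4,000,000 ns
    = 199.9998 ns + 4 ns
    ≈ 204 ns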
4. Consider a demand-paging system with the following
time-measured utilizations:
CPU utilization: 20%
Paging disk: 97.7%
Other I/O devices: 5%
For each of the following, indicate whether it will (or is likely to) improve CPU utilization. Explain your answers.
a. Install a faster CPU.
b. Install a bigger paging disk.
c. Increase the degree of multiprogramming.
d. Decrease the degree of multiprogramming.
e. Install more main memory.
f. Install a faster hard disk or multiple controllers with
multiple hard disks.
h. Increase the page size.
Answer:
The system is spending most of its time paging, indicating over-allocation of memory. If the level of multiprogramming is reduced, resident processes will page-fault less frequently and CPU utilization will improve. Another way to improve performance is to add more physical memory or install a faster paging disk.
a. Install a faster CPU: No. This will likely have no effect; the limiting factor is the memory available to each program, not CPU speed.
b. Install a bigger paging disk: No. This should have no effect.
c. Increase the degree of multiprogramming: No. This typically decreases CPU utilization, because less memory is available to each program and the chance of page faults increases.
d. Decrease the degree of multiprogramming: Yes. This typically increases CPU utilization by keeping more of each program's working set in memory, thereby reducing the number of page faults.
e. Install more main memory: Yes, likely to improve CPU utilization, as more pages can remain resident and do not require paging to or from the disks.
f. Install a faster hard disk, or multiple controllers with multiple hard disks: Yes, also an improvement; as the disk bottleneck is removed by faster response and more throughput to the disks, the CPU will get data more quickly.
h. Increase the page size: It depends. Increasing the page size will result in fewer page faults if data is being accessed sequentially. If data access is more or less random, more paging activity could ensue, because fewer pages can be kept in memory and more data is transferred per page fault. So this change is as likely to decrease utilization as it is to increase it.
5. Answer to the textbook question “10.7. It is sometimes said …”
6. Answer to the textbook question “10.11. Suppose that a disk …”
2,069, 1,212, 2,296, 2,800, 544, 1,618, 356, 1,523, 4,965, 3,681
Answer:
a. FCFS - 2150, 2069, 1212, 2296, 2800, 544, 1618, 356, 1523, 4965, 3681.
The total distance is 13011.
b. SSTF - 2150, 2069, 2296, 2800, 3681, 4965, 1618, 1523, 1212, 544, 356. The
total distance is 7586.
c. SCAN - 2150, 2296, 2800, 3681, 4965, 4999, 2069, 1618, 1523, 1212, 544, 356. Total distance is 7492.
d. LOOK - 2150, 2296, 2800, 3681, 4965, 2069, 1618, 1523, 1212, 544, 356
Total distance is 7424.
e. C-SCAN - 2150, 2296, 2800, 3681, 4965, 4999, 0, 356, 544, 1212, 1523, 1618, 2069.
The total distance is 9917.
f. C-LOOK - 2150, 2296, 2800, 3681, 4965, 356, 544, 1212, 1523, 1618, 2069.
Total distance is 9137.
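The FCFS and SSTF totals above can be double-checked with a short sketch; the helper functions are illustrative, while the head position and queue come from the exercise.

# Check of total head movement for FCFS and SSTF (head at cylinder 2150).
HEAD = 2150
QUEUE = [2069, 1212, 2296, 2800, 544, 1618, 356, 1523, 4965, 3681]

def fcfs_distance(head, queue):
    """Serve requests in arrival order, summing seek distances."""
    total = 0
    for cyl in queue:
        total += abs(cyl - head)
        head = cyl
    return total

def sstf_distance(head, queue):
    """Always serve the pending request closest to the current head."""
    pending, total = list(queue), 0
    while pending:
        nxt = min(pending, key=lambda cyl: abs(cyl - head))
        total += abs(nxt - head)
        head = nxt
        pending.remove(nxt)
    return total

print(fcfs_distance(HEAD, QUEUE))   # 13011
print(sstf_distance(HEAD, QUEUE))   # 7586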
7. Answer to the textbook question “11.12. Provide examples of …”
Answer:
Sequential: Applications that access files sequentially include word processors, music players, video players, and web servers.
Random: Applications that access files randomly include databases, video editors, and audio editors.