
Question

7. Assume that the following processes are to be executed on a uniprocessor system.

Textbook: Use Modern Operating Systems (3rd or 4th edition) based on the questions above. Please provide the detailed description. (Don't copy from Internet directly)

Based on their arrival time and CPU burst, calculate the average turnaround time and response time of these processes under the following scheduling policies:
a. FCFS
b. Round Robin (quantum = 4 and 6)
c. Shortest Job First (preemptive and non-preemptive)

Process        P1   P2   P3   P4
Arrival Time    0    0    2    3
CPU burst      12    3    3    4

Compare the performance of the above policies.

8. What are the additional requirements of multiprocessor scheduling compared with uniprocessor scheduling? What are the possible issues?

9. List different ways to ensure mutual exclusion.

10. What are the advantages and disadvantages of the busy-waiting and sleep-and-wake approaches for mutual exclusion?

11. Why is a semaphore needed? What are the commonalities and differences between a semaphore and a mutex?

12. In the dining philosophers problem (Page 166, Figure 2-46), explain what the function test(i) does (its two roles) and how it works.

Explanation / Answer

7. (Taking the arrival times as P1 = 0, P2 = 0, P3 = 2, P4 = 3 and the CPU bursts as 12, 3, 3, 4, which is what the waiting-time formulas below imply.)

a. FCFS (run in arrival order P1, P2, P3, P4):
   P1 runs 0-12, P2 runs 12-15, P3 runs 15-18, P4 runs 18-22.
   Waiting times: P1 = 0, P2 = 12, P3 = 15 - 2 = 13, P4 = 18 - 3 = 15.
   Average turnaround time = (12 + 15 + 16 + 19)/4 = 15.5
   Average response time = (0 + 12 + 13 + 15)/4 = 10

b. Round Robin:
   For time quantum = 4:
   P1 runs 0-4, P2 runs 4-7, P3 runs 7-10, P4 runs 10-14, P1 runs 14-22.
   (P2, P3, and P4 arrive during P1's first quantum, so they are queued ahead of the preempted P1.)
   Waiting times: P1 = 14 - 4 = 10, P2 = 4 - 0 = 4, P3 = 7 - 2 = 5, P4 = 10 - 3 = 7.
   Average turnaround time = (22 + 7 + 8 + 11)/4 = 12
   Average response time = (0 + 4 + 5 + 7)/4 = 4

   For time quantum = 6:
   P1 runs 0-6, P2 runs 6-9, P3 runs 9-12, P4 runs 12-16, P1 runs 16-22.
   Waiting times: P1 = 16 - 6 = 10, P2 = 6 - 0 = 6, P3 = 9 - 2 = 7, P4 = 12 - 3 = 9.
   Average turnaround time = (22 + 9 + 10 + 13)/4 = 13.5
   Average response time = (0 + 6 + 7 + 9)/4 = 5.5

c. Shortest Job First:
   Non-preemptive: at t = 0 only P1 and P2 are ready, so P2 (burst 3) runs 0-3; at t = 3 the ready jobs are P1 (12), P3 (3), and P4 (4), so the order is P2, P3, P4, P1: P3 runs 3-6, P4 runs 6-10, P1 runs 10-22.
   Average turnaround time = (3 + 4 + 7 + 22)/4 = 9
   Average response time = (0 + 1 + 3 + 10)/4 = 3.5
   Preemptive (shortest remaining time first): no arriving job ever has a shorter burst than the running job's remaining time (when P3 arrives at t = 2, P2 has only 1 unit left), so the schedule and both averages are identical to the non-preemptive case.

Comparison: SJF gives the best average turnaround and response times, at the cost of needing burst lengths in advance and possibly starving long jobs. FCFS performs worst here because the long job P1 arrives first and delays everything behind it (the convoy effect). Round Robin falls in between, and the smaller quantum (4) gives better turnaround and response times than the larger one (6) for this workload.
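These hand calculations can be cross-checked mechanically. Below is a minimal FCFS simulator in Java; the arrival times (0, 0, 2, 3) are an assumption inferred from the waiting-time formulas in the answer, since the question's table is incomplete, and the class name is illustrative.

```java
public class Fcfs {
    // Simulate FCFS for jobs already sorted by arrival time.
    // Returns {average turnaround time, average response time}.
    static double[] simulate(int[] arrival, int[] burst) {
        int time = 0;
        double totalTurnaround = 0, totalResponse = 0;
        for (int i = 0; i < burst.length; i++) {
            if (time < arrival[i]) time = arrival[i]; // CPU idles until the job arrives
            totalResponse += time - arrival[i];       // response = first start - arrival
            time += burst[i];                         // job runs to completion
            totalTurnaround += time - arrival[i];     // turnaround = completion - arrival
        }
        return new double[]{totalTurnaround / burst.length, totalResponse / burst.length};
    }

    public static void main(String[] args) {
        // Assumed arrivals for P1..P4; bursts are from the question.
        double[] r = simulate(new int[]{0, 0, 2, 3}, new int[]{12, 3, 3, 4});
        System.out.println("avg turnaround = " + r[0]); // 15.5
        System.out.println("avg response   = " + r[1]); // 10.0
    }
}
```

The same loop structure extends to Round Robin or SJF by changing which job is dispatched next.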
8.On a uniprocessor, scheduling is one dimensional. The only question that must be answered (repeatedly) is: ''Which process should be run next?'' On a multiprocessor, scheduling is two dimensional. The scheduler has to decide which process to run and which CPU to run it on. This extra dimension greatly complicates scheduling on multiprocessors.

Another complicating factor is that in some systems all the processes are unrelated, whereas in others they come in groups. An example of the former situation is a timesharing system in which independent users start up independent processes. The processes are unrelated, and each one can be scheduled without regard to the other ones.
If multiple CPUs are available, load sharing becomes possible; however, the scheduling problem becomes correspondingly more complex. Many possibilities have been tried, and as we saw with single-processor CPU scheduling, there is no one best solution. Here, we discuss several concerns in multiprocessor scheduling. We concentrate on systems in which the processors are identical (homogeneous) in terms of their functionality; we can then use any available processor to run any process in the queue. (Note, however, that even with homogeneous multiprocessors, there are sometimes limitations on scheduling. Consider a system with an I/O device attached to a private bus of one processor. Processes that wish to use that device must be scheduled to run on that processor.)
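As a toy illustration of the extra "which CPU?" dimension (not any particular OS's algorithm), here is a load-sharing sketch that always assigns a new process to the least-loaded of several identical CPUs; all names are illustrative.

```java
public class LoadSharing {
    // load[c] = number of processes currently queued on CPU c (illustrative model).
    static int pickLeastLoaded(int[] load) {
        int best = 0;
        for (int c = 1; c < load.length; c++)
            if (load[c] < load[best]) best = c;
        return best;
    }

    // Assign n new processes across the given number of CPUs; returns final loads.
    static int[] assign(int cpus, int n) {
        int[] load = new int[cpus];
        for (int p = 0; p < n; p++) {
            int cpu = pickLeastLoaded(load); // the extra "which CPU?" decision
            load[cpu]++;
        }
        return load;
    }

    public static void main(String[] args) {
        int[] load = assign(2, 5);
        System.out.println(load[0] + " " + load[1]); // 3 2
    }
}
```

Real schedulers complicate this with processor affinity, per-CPU queues, and migration costs, but the extra decision dimension is the same.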

9. Mutual exclusion means that at least one resource is held in a nonsharable mode; that is, only one process at a time can use the resource, and if another process requests it, the requester must be delayed until the resource has been released. For example, a printer cannot be simultaneously shared by several processes. Sharable resources, in contrast, do not require mutually exclusive access and thus cannot be involved in a deadlock; read-only files are a good example, since several processes can be granted simultaneous access to the same file and never need to wait. In general, however, we cannot prevent deadlocks by denying the mutual-exclusion condition, because some resources are intrinsically nonsharable.

Different ways to ensure mutual exclusion include:
- Disabling interrupts while inside the critical section (works only in the kernel, on a uniprocessor).
- Lock variables manipulated with an atomic test-and-set (TSL) or exchange instruction (busy waiting).
- Strict alternation using a turn variable.
- Peterson's algorithm (a pure software solution for two processes).
- Semaphores and mutexes (sleep-and-wakeup based).
- Monitors.
- Message passing.
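Several of these mechanisms reduce to an atomic read-modify-write on a lock variable. A minimal sketch of a test-and-set style spin lock in Java, using AtomicBoolean.compareAndSet as the atomic instruction (class and field names are illustrative):

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    void lock() {
        // Busy-wait: spin until compareAndSet atomically flips false -> true.
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait(); // hint to the CPU; still burns cycles
        }
    }

    void unlock() {
        locked.set(false);
    }

    static int counter = 0;

    public static void main(String[] args) throws InterruptedException {
        SpinLock lock = new SpinLock();
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                lock.lock();   // entry section
                counter++;     // critical section
                lock.unlock(); // exit section
            }
        };
        Thread a = new Thread(work), b = new Thread(work);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(counter); // 200000 -- no lost updates
    }
}
```

Without the lock, the two threads would lose updates to counter; with it, every increment is mutually exclusive.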

10. Busy waiting means a process simply spins (does nothing but repeatedly test its entry condition) while it waits to enter its critical section. Its advantage is simplicity and low latency: no system call or context switch is needed, so it can be efficient when waits are very short, especially on a multiprocessor. Its disadvantages are that it wastes CPU cycles for the entire wait, and it can interact badly with priorities: if a low-priority process is inside its critical section and a high-priority process starts busy waiting, the low-priority process may never be scheduled again, so neither makes progress (the priority-inversion problem). On a single CPU, busy waiting is almost always wasteful, so blocking makes more sense: a single call to the OS puts the waiter to sleep, and another call moves it back to Ready once the critical section is free. Blocking solutions must be implemented carefully, however: a shared counter can be corrupted by a race, and a wakeup sent to a process that has not yet gone to sleep can be lost, leaving the process asleep forever. Conversely, while busy waiting wastes cycles on a single processor, the context-switch overhead of blocking may waste cycles on a multiprocessor.
As we have seen, busy waiting can be wasteful. Processes waiting to enter their critical sections waste processor time checking to see if they can proceed. A better solution to the mutual exclusion problem, which can be implemented with the addition of some new primitives, would be to block processes when they are denied access to their critical sections. Two primitives, Sleep and Wakeup, are often used to implement blocking in mutual exclusion.

Essentially, when a process is not permitted to access its critical section, it uses a system call known as Sleep, which causes that process to block. The process will not be scheduled to run again, until another process uses the Wakeup system call. In most cases, Wakeup is called by a process when it leaves its critical section if any other processes have blocked.
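A minimal Java sketch of the sleep-and-wakeup idea, using the intrinsic wait()/notify() calls in place of the Sleep and Wakeup primitives described above (class and field names are illustrative):

```java
public class SleepWakeup {
    private boolean busy = false;

    synchronized void enter() throws InterruptedException {
        while (busy) wait(); // "Sleep": block instead of spinning
        busy = true;
    }

    synchronized void leave() {
        busy = false;
        notify();            // "Wakeup": unblock one waiting process
    }

    static int shared = 0;

    public static void main(String[] args) throws InterruptedException {
        SleepWakeup region = new SleepWakeup();
        Runnable work = () -> {
            for (int i = 0; i < 50_000; i++) {
                try {
                    region.enter();
                    shared++;        // critical section
                    region.leave();
                } catch (InterruptedException e) { return; }
            }
        };
        Thread a = new Thread(work), b = new Thread(work);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(shared); // 100000
    }
}
```

The while loop around wait() matters: it rechecks the condition after every wakeup, which guards against spurious wakeups and against the lost-wakeup race mentioned above.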

11. In operating system terminology, mutexes and semaphores are kernel resources that provide synchronization services (also called synchronization primitives).
The producer-consumer problem:

Note that the following is a generalized explanation; practical details vary with the implementation.

Consider the standard producer-consumer problem. Assume we have a buffer 4096 bytes long. A producer thread collects data and writes it to the buffer; a consumer thread processes the collected data from the buffer. The objective is that the two threads should not operate on the buffer at the same time.

Using Mutex:

A mutex provides mutual exclusion: either the producer or the consumer can hold the key (the mutex) and proceed with its work. While the producer is filling the buffer, the consumer must wait, and vice versa.

At any point of time, only one thread can work with the entire buffer. The concept can be generalized using semaphore.

Using Semaphore:

A semaphore is a generalized mutex. Instead of a single buffer, we can split the 4 KB buffer into four 1 KB buffers (identical resources) and associate a semaphore with them. The consumer and producer can then work on different buffers at the same time.

Binary semaphores and mutexes are often confused. We may have come across the claim that a mutex is a binary semaphore, but it is not: the purposes of a mutex and a semaphore are different. Perhaps the similarity in their implementation is why a mutex is sometimes referred to as a binary semaphore.

Strictly speaking, a mutex is a locking mechanism used to synchronize access to a resource. Only one task (a thread or process, depending on the OS abstraction) can acquire the mutex at a time. This means there is ownership associated with a mutex: only the owner can release the lock.

A semaphore is a signaling mechanism (an "I am done, you can carry on" kind of signal). For example, if you are listening to songs on your mobile phone (consider it one task) and your friend calls you, an interrupt is triggered, upon which an interrupt service routine (ISR) signals the call-processing task to wake up.
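The contrast can be sketched with java.util.concurrent, assuming the 4 KB buffer split into four 1 KB sub-buffers from the example above (the class name is illustrative):

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.locks.ReentrantLock;

public class MutexVsSemaphore {
    // Mutex: one lock guards the whole 4 KB buffer; only the owning thread may unlock.
    static final ReentrantLock mutex = new ReentrantLock();
    // Semaphore: four permits, one per 1 KB sub-buffer; up to four holders at once.
    static final Semaphore slots = new Semaphore(4);

    public static void main(String[] args) throws InterruptedException {
        mutex.lock();
        try {
            // work on the entire buffer -- any second locker would block here
        } finally {
            mutex.unlock(); // must be the same thread that locked (ownership)
        }

        slots.acquire(2); // e.g. producer and consumer each claim a sub-buffer
        System.out.println(slots.availablePermits()); // 2
        slots.release(2); // any thread may release permits (signaling, no ownership)
        System.out.println(slots.availablePermits()); // 4
    }
}
```

The ownership difference is visible in the API: ReentrantLock.unlock throws if the caller does not hold the lock, while Semaphore.release can legitimately be called by a thread that never acquired.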


12.
while (true) {
    chopStick[i].acquire();           // get left chopstick
    chopStick[(i + 1) % 5].acquire(); // get right chopstick
    eating();
    chopStick[i].release();           // return left chopstick
    chopStick[(i + 1) % 5].release(); // return right chopstick
    thinking();
}
Figure: The structure of philosopher i.

One simple solution is to represent each chopstick with a semaphore. A philosopher tries to grab a chopstick by executing an acquire() operation on that semaphore; she releases a chopstick by executing the release() operation on the appropriate semaphore. Thus, the shared data are

Semaphore chopStick[] = new Semaphore[5];
for (int i = 0; i < 5; i++)
    chopStick[i] = new Semaphore(1);

where all the elements of chopStick are initialized to 1. The structure of philosopher i is shown in the figure above. Although this solution guarantees that no two neighboring philosophers eat simultaneously, it must nevertheless be rejected because it can create a deadlock: suppose all five philosophers become hungry simultaneously and each grabs her left chopstick. All the elements of chopStick are now 0, and when each philosopher tries to grab her right chopstick, she is delayed forever.

The deadlock-free solution of Figure 2-46 avoids this by tracking each philosopher's state (THINKING, HUNGRY, or EATING) and giving each philosopher her own semaphore, initialized to 0. The function test(i) checks whether philosopher i is HUNGRY and neither of her neighbors is EATING; if so, it sets state[i] to EATING and does up() on philosopher i's semaphore. It plays two roles: (1) called from take_forks(i) by philosopher i herself, it lets her start eating immediately when both chopsticks are free (otherwise she stays HUNGRY and blocks on the down() of her own semaphore); (2) called from put_forks(i) on each of the two neighbors, it wakes up a hungry neighbor who is now able to eat: the up() inside test() is exactly the wakeup that releases a neighbor blocked in take_forks().
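Figure 2-46 in the book is written in C; the following Java sketch mirrors the logic of its test() function (the names state[], HUNGRY, EATING, and the per-philosopher semaphores s[i], initialized to 0, follow the figure; the class name is illustrative):

```java
import java.util.concurrent.Semaphore;

public class DiningTest {
    static final int N = 5;
    static final int THINKING = 0, HUNGRY = 1, EATING = 2;
    static int[] state = new int[N];            // all start THINKING
    static Semaphore[] s = new Semaphore[N];    // one per philosopher, initially 0
    static { for (int i = 0; i < N; i++) s[i] = new Semaphore(0); }

    // Role 1: called from take_forks(i) so philosopher i starts eating at once
    // if neither neighbor is eating (otherwise she stays HUNGRY and blocks).
    // Role 2: called from put_forks on the two neighbors, so a hungry neighbor
    // blocked on s[neighbor] is woken up: the release() is the "wakeup".
    static void test(int i) {
        int left = (i + N - 1) % N, right = (i + 1) % N;
        if (state[i] == HUNGRY && state[left] != EATING && state[right] != EATING) {
            state[i] = EATING;
            s[i].release(); // up(&s[i]): lets philosopher i proceed past acquire()
        }
    }

    public static void main(String[] args) {
        state[0] = HUNGRY;
        test(0);                                // role 1: philosopher 0 may eat now
        System.out.println(state[0] == EATING); // true
        state[1] = HUNGRY;
        test(1);                                // neighbor 0 is eating: stays HUNGRY
        System.out.println(state[1] == HUNGRY); // true
    }
}
```

Because a philosopher becomes EATING only when neither neighbor is eating, and blocked philosophers are re-tested by their neighbors in put_forks, the deadlock of the naive one-semaphore-per-chopstick solution cannot occur.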
