
Question

Objectives
The purpose of this programming project is to gain some experience in the design of a few OS components by simulation. These components include CPU management and scheduling, process management, system queues, and system-statistics gathering and reporting.
Project Summary
Students work in groups of two or three to write a simulation program in Java for testing the performance of their designs of a few CPU scheduling algorithms for a simple computer with a limited HW/SW system, then report their findings and conclusions and suggest improvements.
System Description
1. The Available Hardware
a. A single CPU.
b. One I/O Device.
c. Unlimited amount of Main and Secondary storage.
d. System Timer and all necessary support.
2. The User Processes
All processes created in this system fall under one of the following types:
Type-1: Consists of:
10 CPU bursts of the following lengths: (1,2,1,1,1,3,1,2,2,1) Time Units,
9 I/O bursts of the following lengths: (6,4,10,3,5,3,2,10,6) Time Units.
Type-2: Consists of:
15 CPU bursts of 50 Time Units each, and
14 I/O bursts of 150 Time Units each.
Type-3: Consists of:
12 CPU bursts of 1000 Time Units each, and
11 I/O bursts of 5 Time Units each.
Type-4: Consists of:
A repeated pattern of (CPU, I/O1, Think, I/O2), where:
Each CPU burst takes 3 Time Units,
Each I/O1 burst takes 3 Time Units,
Each I/O2 burst takes 10 Time Units,
Think time takes 60 Time Units.
Any one of the types above can be created at any time. Type-4 has a maximum of N instances; no new processes of this type can be created after this limit is reached. Processes of Types 1 to 3 terminate after executing their last CPU burst, while processes of Type-4 never terminate; they cycle through their pattern forever. Assume that a process in its think period stays out of the Ready Queue, say in a special list.
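In Java (the project language), the four burst patterns might be encoded as constant arrays. The class and field names below are illustrative design choices, not part of the spec; in particular, Type-4's 60-unit think time is modeled outside the burst arrays, since a thinking process leaves the ready queue.

```java
import java.util.Arrays;

// One possible encoding of the four process types' burst patterns.
class BurstPattern {
    final int[] cpuBursts;  // CPU burst lengths, in Time Units
    final int[] ioBursts;   // I/O burst lengths, in Time Units
    final boolean cyclic;   // Type-4 repeats its pattern forever

    BurstPattern(int[] cpu, int[] io, boolean cyclic) {
        this.cpuBursts = cpu;
        this.ioBursts = io;
        this.cyclic = cyclic;
    }

    static int[] repeat(int value, int count) {
        int[] a = new int[count];
        Arrays.fill(a, value);
        return a;
    }

    static final BurstPattern TYPE1 = new BurstPattern(
            new int[] {1, 2, 1, 1, 1, 3, 1, 2, 2, 1},
            new int[] {6, 4, 10, 3, 5, 3, 2, 10, 6}, false);
    static final BurstPattern TYPE2 =
            new BurstPattern(repeat(50, 15), repeat(150, 14), false);
    static final BurstPattern TYPE3 =
            new BurstPattern(repeat(1000, 12), repeat(5, 11), false);
    // Type-4 cycles through (CPU 3, I/O1 3, Think 60, I/O2 10); the think
    // time is handled separately, outside the ready queue.
    static final BurstPattern TYPE4 =
            new BurstPattern(new int[] {3}, new int[] {3, 10}, true);
}
```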
Dr. A. M. Al-Qasimi EE463 Term Project – Spring 2018 Page 2 of 6
3. The Operating System Components
The OS supports multi-programming and priority-based, pre-emptive CPU-scheduling. Each of the following OS components shall be written by the students participating in this project, according to the specifications given for each component below:
a) A modularly designed generic CPU Scheduler that supports a number of short-term scheduling algorithms, including FCFS, SJF, RR, MLFQ (Multi-Level Feedback Queue of n levels) and, for extra credit, lottery (more about this is given below). Assume that the particular scheduling algorithm to be used in any simulation run is given as an input parameter on the command line.
Other parameters required for a particular scheduler depend on your design, such as the number of levels of the MLFQ, time quanta, etc. These are part of your design decisions, and they shall be saved in a suitable data structure for fast access during execution. In particular, when designing RR, MLFQ, and lottery, choose parameters and data structures that satisfy your goal, which must be to achieve the best system performance possible.
b) A Supervisor, whose job is to control all system operations, including the timer functions, interrupt handling, preemption decisions, etc.
c) A Dispatcher, whose job is to do the context switching and assign the CPU to the process selected by the CPU scheduler.
d) A Creator, responsible for creating new processes of a given type. This is done by creating their PCBs and putting them in the ready queue.
e) A Terminator, responsible for terminating finished processes by removing their PCBs.
f) An I/O Monitor, responsible for monitoring the processes completing their I/O. It generates an I/O-completion interrupt to the CPU each time a process finishes its I/O burst. For simplicity of simulation, it also takes the process's PCB to the ready queue.
g) The Job Generator, whose purpose is to generate new jobs and select their types randomly, then call the Creator to create their PCBs and enter them into the system. Job types are selected using an integer random-number generator, uniformly distributed over the range [1,4]. Java's java.util.Random (e.g., nextInt(4) + 1) can be used.
Jobs are assumed to arrive randomly according to a Poisson distribution with an expected value, v, in the range [0,1], provided to the program at run time. Use the following Poisson generator at each time step to get the number of new jobs to be created at that step, then use the uniform random-number generator to get each job's type.
h) The Poisson Generator: Given an expected value, v, it generates a random integer to be used in the simulation, such that the generated numbers follow a Poisson distribution with that expected value. It is assumed that you have already seeded the random-number generator.
static final java.util.Random rng = new java.util.Random(); // seed as desired

static int poisson(double v) {
    double em = Math.exp(-v);    // e^(-v)
    double x = rng.nextDouble(); // uniform in [0, 1)
    int n = 0;
    while (x > em) {
        n++;
        x *= rng.nextDouble();
    }
    return n;
}
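Putting the generator together with the uniform type selector gives a per-time-step job-generation loop. The stand-alone sketch below illustrates this; the class name, fixed seed, and printout format are assumptions of mine, not part of the spec.

```java
import java.util.Random;

public class JobGen {
    static final Random rng = new Random(12345L); // seeded once, as the spec assumes

    // Knuth-style Poisson sampler, as given in the assignment, in plain Java.
    static int poisson(double v) {
        double em = Math.exp(-v);    // e^(-v)
        double x = rng.nextDouble(); // uniform in [0, 1)
        int n = 0;
        while (x > em) {
            n++;
            x *= rng.nextDouble();
        }
        return n;
    }

    public static void main(String[] args) {
        double v = 0.5; // expected arrivals per time step
        int steps = 10; // demo only; a real run uses S steps
        for (int t = 0; t < steps; t++) {
            int arrivals = poisson(v);
            for (int j = 0; j < arrivals; j++) {
                int type = rng.nextInt(4) + 1; // uniform over [1, 4]
                System.out.println("t=" + t + ": create job of Type-" + type);
            }
        }
    }
}
```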
i) A Statistics-Collecting Module, used to keep record of important information about the system and report the following statistics about the simulation run:
1- The total number of time-units used.
2- The total number of jobs created.
3- The total number of each job-type created.
4- The total number of each job-type terminated.
5- The Maximum, and Average queue-lengths for each of the queues in the system.
6- The Minimum, Maximum, and Average response-times for jobs of Type-4 only.
7- The Minimum, Maximum, and Average turnaround-times for each job type other than Type-4.
8- The Minimum, Maximum, and Average turnaround-times for all jobs other than those of Type-4.
9- The total system throughput for jobs of Type-1 to Type-3.
10- The Minimum, Maximum, and Average of CPU-overhead-time.
11- The Percentage of CPU-idle-time.
12- The Percentage of CPU-utilization.
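Most of the statistics above are minimum/maximum/average aggregates, which suggests one small reusable accumulator in the statistics module. A sketch, with a class name of my own choosing:

```java
// Minimal min/max/average accumulator for the statistics-collecting module.
class Stat {
    private long count = 0;
    private double min = Double.POSITIVE_INFINITY;
    private double max = Double.NEGATIVE_INFINITY;
    private double sum = 0.0;

    void add(double x) {
        count++;
        sum += x;
        if (x < min) min = x;
        if (x > max) max = x;
    }

    double min() { return min; }
    double max() { return max; }
    double avg() { return count == 0 ? 0.0 : sum / count; }
}
```

One such accumulator per queue (for lengths) and per job type (for turnaround or response times) covers items 5 through 10.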
4. Assumptions
a) The CPU context-switching time takes 0.1 Time Units; i.e., every 10 context switches take 1 Time Unit.
b) The SVC-start-I/O-interrupt takes 2 Time Units.
c) I/O-completion-interrupt takes 3 Time Units.
d) Job-scheduling overhead takes 1 Time Unit.
e) All other supervisor activities take 1 Time Unit per call.
f) The time-quantum of RR is left for each group to decide.
g) The MLFQ design parameters as given in class are left for each group to decide.
h) When the MLFQ is used, all processes are initially entered into the first level queue.
i) The response-time is defined for Type-4 jobs only, as the total time period spent from the end of I/O2 operation to the start of I/O1 operation. This includes the CPU-burst time, CPU-overhead and all queue delays during this period.
What to do
Write a Java program to simulate the above system. The input to the program should be through command-line parameters as follows:
1. The total number of time steps for the run, S, integer > 100; default = 100.
2. The ready queue type, integer, 1:FCFS, 2:SJF, 3:RR, 4:MLFQ, 5:lottery; default = 1.
3. The minimum quantum size, Q, to use as a basis of RR and MLFQ, integer > 0; default = 1.
4. The maximum number of Type-4 jobs, N, integer [0 .. 100]; default = 5.
5. The expected number of new jobs arriving per time unit, v, double [0 .. 1]; default = 0.5.
6. If implemented, the minimum number of tickets, t, integer > 0; default = 5.
7. If implemented, the maximum number of tickets, T, integer > t; default = 100.
8. If implemented, the speed of giving/taking tickets, c, a multiple of Q, integer > 0; default = 0.
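The parameter list above could be read with positional parsing and the stated defaults. A minimal sketch; the class and field names are my own:

```java
// Positional command-line parameters with the spec's defaults.
// Positions beyond args.length simply keep their default values.
public class Params {
    static int    S = 100;   // total time steps for the run
    static int    qType = 1; // 1:FCFS 2:SJF 3:RR 4:MLFQ 5:lottery
    static int    Q = 1;     // minimum quantum size
    static int    N = 5;     // maximum number of Type-4 jobs
    static double v = 0.5;   // expected new jobs per time unit
    static int    t = 5;     // minimum lottery tickets (if implemented)
    static int    T = 100;   // maximum lottery tickets (if implemented)
    static int    c = 0;     // ticket give/take speed (if implemented)

    static void parse(String[] args) {
        if (args.length > 0) S = Integer.parseInt(args[0]);
        if (args.length > 1) qType = Integer.parseInt(args[1]);
        if (args.length > 2) Q = Integer.parseInt(args[2]);
        if (args.length > 3) N = Integer.parseInt(args[3]);
        if (args.length > 4) v = Double.parseDouble(args[4]);
        if (args.length > 5) t = Integer.parseInt(args[5]);
        if (args.length > 6) T = Integer.parseInt(args[6]);
        if (args.length > 7) c = Integer.parseInt(args[7]);
    }
}
```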
Choose appropriate values for Q and v, let N = 20 and S = 100,000, and run your program 5 times using the same values of Q, v, N, and S, but each time with a different queue type. Also, if implemented, choose suitable values for the lottery parameters t, T, and c. Show the contents of all system queues for the first 20 time steps only. After each run, your output should also show all your input values, the queue type used, and all the statistics reported by the statistics-collecting module.
Discuss the statistics produced by the five runs with respect to the effect of the scheduling algorithm, the quantum time, the job types and their numbers, the expected value, and so on.

Explanation / Answer

CPU-I/O Burst Cycle

All processes alternate between two states in a continuing cycle:

a CPU burst of performing calculations, and

an I/O burst, waiting for data transfer into or out of the system.

CPU Scheduler

Whenever the CPU becomes idle, it is the job of the CPU scheduler (a.k.a. the short-term scheduler) to select another process from the ready queue to run next.

The storage structure for the ready queue and the algorithm used to select the next process are not necessarily a FIFO queue. There are several alternatives to choose from, as well as numerous adjustable parameters for each algorithm, which is the basic subject of this entire topic.
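As a concrete illustration using standard Java collections, a FIFO deque can back an FCFS ready queue, while a priority queue keyed on the predicted next burst can back SJF. The `Pcb` fields here are illustrative placeholders:

```java
import java.util.ArrayDeque;
import java.util.PriorityQueue;
import java.util.Queue;

// A minimal PCB placeholder; real PCBs carry much more state.
class Pcb {
    final int pid;
    final int predictedBurst; // predicted next CPU burst, in Time Units

    Pcb(int pid, int predictedBurst) {
        this.pid = pid;
        this.predictedBurst = predictedBurst;
    }
}

// Two possible ready-queue structures behind one Queue interface:
// FIFO order for FCFS, shortest-predicted-burst-first order for SJF.
class ReadyQueues {
    final Queue<Pcb> fcfs = new ArrayDeque<>();
    final Queue<Pcb> sjf = new PriorityQueue<>(
            (Pcb a, Pcb b) -> Integer.compare(a.predictedBurst, b.predictedBurst));
}
```

Keeping both behind the `Queue` interface lets the scheduler swap policies without changing the dispatch code.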

Preemptive Scheduling

CPU scheduling decisions take place under one of four conditions:

1. When a process switches from the running state to the waiting state, such as for an I/O request or an invocation of the wait() system call.

2. When a process switches from the running state to the ready state, for example in response to an interrupt.

3. When a process switches from the waiting state to the ready state, say at completion of I/O or a return from wait().

4. When a process terminates.

For conditions 1 and 4 there is no choice: a new process must be selected.

For conditions 2 and 3 there is a choice: either continue running the current process, or select a different one.

If scheduling takes place only under conditions 1 and 4, the system is said to be non-preemptive, or cooperative. Under these conditions, once a process starts running, it keeps running until it either voluntarily blocks or finishes. Otherwise the system is said to be preemptive.

Windows used non-preemptive scheduling up through Windows 3.x, and began using preemptive scheduling with Windows 95. Macs used non-preemptive scheduling prior to OS X, and preemptive scheduling since then. Note that preemptive scheduling is only possible on hardware that supports a timer interrupt.

Note that preemptive scheduling can cause problems when two processes share data, because one process may get interrupted in the middle of updating shared data structures. The study of process synchronization examines this issue in greater detail.

Preemption can also be a problem if the kernel is busy executing a system call (e.g. updating critical kernel data structures) when the preemption occurs. Most modern UNIXes deal with this problem by making the process wait until the system call has either completed or blocked before allowing the preemption. Unfortunately this solution is problematic for real-time systems, as real-time response can no longer be guaranteed.

Some critical sections of code protect themselves from concurrency problems by disabling interrupts before entering the critical section and re-enabling interrupts on exiting it. Needless to say, this should only be done in rare situations, and only on very short pieces of code that will finish quickly (usually just a few machine instructions).

Dispatcher

The dispatcher is the module that gives control of the CPU to the process selected by the scheduler. This function involves:

Switching context.

Switching to user mode.

Jumping to the proper location in the newly loaded program.

The dispatcher needs to be as fast as possible, as it runs on every context switch. The time consumed by the dispatcher is known as dispatch latency.

Scheduling Criteria

There are several different criteria to consider when trying to select the "best" scheduling algorithm for a particular situation and environment, including:

CPU utilization - Ideally the CPU would be busy 100% of the time, wasting 0 CPU cycles. On a real system, CPU utilization should range from about 40% (lightly loaded) to 90% (heavily loaded).

Throughput - The number of processes completed per unit time. This may range from 10 per second to 1 per hour, depending on the specific processes.

Turnaround time - The time required for a particular process to complete, from submission time to completion (wall-clock time).

Waiting time - How much time processes spend in the ready queue waiting for their turn on the CPU.

(Load average - The average number of processes sitting in the ready queue waiting for the CPU. Reported in 1-minute, 5-minute, and 15-minute averages by "uptime" and "who".)

Response time - The time taken in an interactive program from the issuance of a command to the beginning of a response to that command.

In general one wants to optimize the average value of a criterion: maximize CPU utilization and throughput, and minimize all the others. Sometimes, however, one wants something different, such as minimizing the maximum response time.

Sometimes it is more desirable to minimize the variance of a criterion than its actual value, i.e. users are more accepting of a consistent, predictable system than of an inconsistent one, even if it is a little slower.

Scheduling Algorithms

The following subsections explain several common scheduling strategies, looking at only a single CPU burst each for a small number of processes. Obviously real systems have to deal with many more processes simultaneously executing their CPU-I/O burst cycles.

First-Come First-Serve Scheduling, FCFS

FCFS is very simple: just a FIFO queue, like customers waiting in line at the bank, the post office, or a copying machine.

Unfortunately, FCFS can yield long average waiting times, particularly if the first process to arrive takes a long time. For example, consider three processes with CPU burst times of 24, 3, and 3 ms.

If P1 (the 24 ms burst) runs first, the average waiting time for the three processes is (0 + 24 + 27) / 3 = 17.0 ms.

If instead the two short processes run first, the same three processes have an average waiting time of (0 + 3 + 6) / 3 = 3.0 ms. The total run time for the three bursts is the same, but in the second case two of the three finish much sooner, and the remaining process is delayed only a short amount.
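These two averages can be reproduced in a few lines. Under FCFS, each process waits for the sum of the bursts scheduled ahead of it:

```java
// FCFS average waiting time for a batch of bursts that all arrive at t = 0:
// each process waits for the total length of the bursts ahead of it.
public class FcfsDemo {
    static double avgWait(int[] bursts) {
        int total = 0, elapsed = 0;
        for (int b : bursts) {
            total += elapsed; // this process waited for everything before it
            elapsed += b;
        }
        return (double) total / bursts.length;
    }

    public static void main(String[] args) {
        System.out.println(avgWait(new int[] {24, 3, 3})); // long job first: 17.0
        System.out.println(avgWait(new int[] {3, 3, 24})); // short jobs first: 3.0
    }
}
```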

FCFS can also bog down a busy dynamic system in another way, known as the convoy effect. When one CPU-intensive process holds the CPU, a number of I/O-intensive processes can get backed up behind it, leaving the I/O devices idle. When the CPU hog finally relinquishes the CPU, the I/O-bound processes pass through the CPU quickly, leaving the CPU idle while everyone queues up for I/O, and then the cycle repeats when the CPU-intensive process returns to the ready queue.

Shortest-Job-First Scheduling, SJF

The idea behind the SJF algorithm is to pick the quickest, smallest job that needs to be done, get it out of the way first, and then pick the next smallest, quickest job to do next.

(Technically this algorithm picks a process based on the next shortest CPU burst, not the overall process time.)

For example, consider four processes with CPU burst times of 6, 8, 7, and 3 ms, all arriving at the same time.

With SJF ordering, the average waiting time is (0 + 3 + 9 + 16) / 4 = 7.0 ms, as opposed to 10.25 ms for FCFS on the same processes.
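A quick check, assuming the standard textbook burst times of 6, 8, 7, and 3 ms (they reproduce both averages quoted above): SJF on a batch that arrives together is simply FCFS applied to the bursts in ascending order.

```java
import java.util.Arrays;

// SJF on a single batch = FCFS applied to the bursts sorted ascending.
public class SjfDemo {
    static double avgWait(int[] bursts) {
        int total = 0, elapsed = 0;
        for (int b : bursts) {
            total += elapsed; // wait equals the sum of the bursts ahead
            elapsed += b;
        }
        return (double) total / bursts.length;
    }

    public static void main(String[] args) {
        int[] bursts = {6, 8, 7, 3};       // assumed arrival order for FCFS
        System.out.println(avgWait(bursts));   // FCFS order: 10.25
        int[] sorted = bursts.clone();
        Arrays.sort(sorted);                   // SJF picks shortest first
        System.out.println(avgWait(sorted));   // SJF order: 7.0
    }
}
```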

SJF can be proven optimal with respect to average waiting time, but it suffers from one important problem: how do you know how long the next CPU burst will be?

For long-term batch jobs this can be done using the limits that users set for their jobs when they submit them. This encourages users to set low limits, but risks their having to re-submit the job if they set the limit too low. However, that does not work for short-term CPU scheduling on an interactive system.

Another option is to statistically measure the run-time characteristics of jobs, particularly if the same tasks are run repeatedly and predictably. But again, that really isn't a viable option for short-term CPU scheduling in the real world.

A more practical approach is to predict the length of the next burst based on some historical measurement of recent burst times for this process. One simple, fast, and relatively accurate method is the exponential average, which can be defined as follows. (The book uses tau and t for its variables, but those are hard to distinguish from one another and don't work well in HTML.)

estimate[i + 1] = alpha * burst[i] + (1.0 - alpha) * estimate[i]

In this scheme the previous estimate contains the history of all earlier measurements, and alpha serves as a weighting factor for the relative importance of recent data versus past history. If alpha is 1.0, then past history is ignored, and we assume the next burst will be the same length as the last burst. If alpha is 0.0, then all measured burst times are ignored, and we simply assume a constant burst time.
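The exponential average can be sketched as a small predictor class (the names are my own). For instance, with alpha = 0.5 and an initial guess of 10, observed bursts of 6 and then 4 give estimates of 8.0 and then 6.0:

```java
// Exponential averaging of CPU burst lengths:
//   estimate[i+1] = alpha * burst[i] + (1 - alpha) * estimate[i]
public class BurstPredictor {
    private double estimate;    // current prediction for the next burst
    private final double alpha; // weight of the most recent measurement

    BurstPredictor(double alpha, double initialGuess) {
        this.alpha = alpha;
        this.estimate = initialGuess;
    }

    // Prediction for the next CPU burst of this process.
    double next() {
        return estimate;
    }

    // Fold one measured burst into the running estimate.
    void record(double actualBurst) {
        estimate = alpha * actualBurst + (1.0 - alpha) * estimate;
    }
}
```

A scheduler would keep one predictor per process, calling record() after each completed burst and next() when ordering the ready queue.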

SJF can be either preemptive or non-preemptive. Preemption occurs when a new process arrives in the ready queue with a predicted burst time shorter than the time remaining in the process whose burst is currently on the CPU.