


Question

#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

int main()
{

   int i, j, n;
   printf("Example of schedule clause\n");
   printf("Give an upper bound on the number of iterations: ");
   scanf_s("%d", &n);
   printf("n = %d\n", n);

   /* The outer loop is parallelized with the loop construct. The workload in the inner */
   /* loop depends on the value of the outer loop iteration variable i. Therefore, the  */
   /* workload is not balanced, and a static schedule is probably not the best choice.  */

#pragma omp parallel for default(none) schedule(runtime) private(i,j) shared(n) num_threads(4)
   for (i = 0; i < n; i++)
   {
       printf("Iteration %d executed by thread %d\n", i, omp_get_thread_num());
       for (j = 0; j < i; j++)
           system("sleep 1");
   } /* -- End of parallel for -- */

   return(0);
}

I am supposed to explore the impact of kind and chunk_size on how chunks are assigned to threads. I am also supposed to investigate the schedules when kind is static, dynamic, and runtime for n = 200 iterations and num_threads(5).

Now, I did this, but I am also supposed to graph each schedule, and I don't understand how you would graph them. Any help on doing this is much appreciated.

The criteria are as follows:

give the iteration number along the x-axis (1, . . . , 200) and the thread identifier along the y-axis (0, . . . , 4). This graph should indicate which thread was assigned to which loop iteration, graphically illustrating the differences between the kinds of schedules. Explain the results of your graphs.

(Hint: To make schedule.cc run faster, put the threads to sleep for less time.)

Explanation / Answer

Parallel random access iterators:

#include <vector>

void iterator_example()
{
    std::vector<int> vec(23);
    std::vector<int>::iterator it;

    /* Random-access iterator loops can be parallelized (OpenMP 3.0 and later). */
    /* With default(none), the iterator must be listed explicitly as private.   */
#pragma omp parallel for default(none) shared(vec) private(it)
    for (it = vec.begin(); it < vec.end(); it++) {
        // do work with *it
    }
}


Memory model:

#include <stdio.h>
#include <omp.h>

int main()
{
    int x;

    x = 2;
#pragma omp parallel num_threads(2) shared(x)
    {
        if (omp_get_thread_num() == 0) {
            x = 5;
        } else {
            /* Print 1: the following read of x has a race with the write above */
            printf("1: Thread# %d: x = %d\n", omp_get_thread_num(), x);
        }

#pragma omp barrier

        if (omp_get_thread_num() == 0) {
            /* Print 2: after the barrier, the write to x is visible */
            printf("2: Thread# %d: x = %d\n", omp_get_thread_num(), x);
        } else {
            /* Print 3 */
            printf("3: Thread# %d: x = %d\n", omp_get_thread_num(), x);
        }
    }
    return 0;
}

For simplicity, we assume that we have a loop of 16 iterations, which has been parallelized by OpenMP, and that we are about to execute that loop using 2 threads.

In default scheduling (schedule(static) with no chunk size), the iteration space is split into one contiguous block per thread: thread 0 executes iterations 1 to 8, and thread 1 executes iterations 9 to 16.

In static scheduling, using a "chunksize" of 4, the chunks are dealt out round-robin: thread 0 executes iterations 1 to 4 and 9 to 12, while thread 1 executes iterations 5 to 8 and 13 to 16.

In dynamic scheduling, using a "chunksize" of 3, the first two chunks (iterations 1 to 3 and 4 to 6) go to the two threads. The next chunk is iterations 7 to 9, and will be assigned to whichever thread finishes its current work first, and so on until all work is completed.

Usage:

In the BASH shell, the program could be run with 2 threads using the commands:
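The original answer omitted the commands themselves; a typical invocation (assuming gcc and a source file named schedule.c, both illustrative names) would look like this. Because the code uses schedule(runtime), the schedule kind and chunk size are chosen at run time through the OMP_SCHEDULE environment variable:

```shell
# compile with OpenMP support (the -fopenmp flag is gcc/clang specific)
gcc -fopenmp -o schedule schedule.c

# request 2 threads and select the schedule at run time
export OMP_NUM_THREADS=2
export OMP_SCHEDULE="dynamic,3"
./schedule
```

Changing OMP_SCHEDULE (for example to "static,4" or "guided,2") lets you rerun the same binary under each schedule without recompiling.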