Academic Integrity: tutoring, explanations, and feedback — we don’t complete graded work or submit on a student’s behalf.

Consider parallel execution of an MPI-coded C program in SPMD (single program and multiple data streams) mode

ID: 3773173 • Letter: C

Question

Consider parallel execution of an MPI-coded C program in SPMD (single program and multiple data streams) mode on a server cluster consisting of n identical Linux servers. SPMD mode means the same MPI program is running simultaneously on all servers but over different data sets of identical workloads. Assume that 25 percent of the program execution is attributed to the execution of MPI commands. For simplicity, assume that all MPI commands take the same amount of execution time. Answer the following questions using Amdahl’s law:
a. Given that the total execution time of the MPI program on a four-server cluster is T minutes, what is the speedup factor of executing the same MPI program on a 256-server cluster, compared with using the four-server cluster? Assume that the program execution is deadlock-free and ignore all other runtime execution overheads in the calculation.









b. Suppose that all MPI commands are now enhanced by a factor of 2 by using active messages executed by message handlers at the user space. The enhancement reduces the execution time of all MPI commands by half. What is the speedup of the 256-server cluster installed with this MPI enhancement, compared with the old 256-server cluster without the MPI enhancement?

Explanation / Answer

a. Answer:

By Amdahl's law, only the computation benefits from adding servers; the MPI command time behaves as the sequential portion and stays fixed. On the four-server cluster the total time T splits into 0.75T of computation and 0.25T of MPI commands. Moving from 4 to 256 servers speeds the computation up by a factor of 256/4 = 64, so

T_256 = 0.25T + 0.75T/64 ≈ 0.2617T

and the speedup over the four-server cluster is

Speedup = T / T_256 = 1 / (0.25 + 0.75/64) ≈ 3.82

As a side illustration of how SPMD execution is realized in code, the fragment below shows a sequential C program and its SPMD-transformed version, in which each thread gets a private copy of the globals x and a, indexed by its thread ID.

/* Sequential version */
#include <stdio.h>
#include <stdlib.h>   /* malloc */

int x;
double a[4];

int g(int i) { ... }

int main(int argc, char **argv)
{
    x = 3;
    a[2] = 7.0;
    ...
    return g(x);
}

/* SPMD-transformed version: globals privatized per thread.
   pccc_tid and pccc_num_thr_local are supplied by the SPMD runtime. */
int *x;
double (*a)[4];

void pccc_allocate_globals()
{
    x = malloc(sizeof(int) * pccc_num_thr_local);
    a = malloc(sizeof(double[4]) * pccc_num_thr_local);
}

int g(int pccc_tid, int i)
{
    ...
}

int pccc_main(int pccc_tid, int argc, char **argv)
{
    x[pccc_tid] = 3;
    a[pccc_tid][2] = 7.0;
    ...
    return g(pccc_tid, x[pccc_tid]);
}

b. Answer:

The active-message enhancement halves the time of every MPI command, so the MPI portion drops from 0.25T to 0.125T while the compute portion is unchanged. The enhanced 256-server execution time is

T'_256 = 0.125T + 0.75T/64 ≈ 0.1367T

and the speedup over the unenhanced 256-server cluster is

Speedup = T_256 / T'_256 = 0.2617T / 0.1367T ≈ 1.91

As a side example of MPI programming, the fragment below shows a library caching a private duplicate of a communicator via an attribute key (so library traffic never collides with user traffic), followed by a minimal send/receive between ranks 0 and 1.

#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <mpi.h>

static int lib_key;

void lib_init()
{
    MPI_Keyval_create(MPI_NULL_COPY_FN, MPI_NULL_DELETE_FN, &lib_key, NULL);
}

void lib_call(MPI_Comm comm, ...)
{
    int flag;
    MPI_Comm *private_comm;

    MPI_Attr_get(comm, lib_key, &private_comm, &flag);
    if (!flag) {
        /* First call on this communicator: create and cache a private duplicate. */
        private_comm = (MPI_Comm *)malloc(sizeof(MPI_Comm));
        MPI_Comm_dup(comm, private_comm);
        MPI_Attr_put(comm, lib_key, (void *)private_comm);
    }
}

char msg[20];
int myrank, tag = 99;
MPI_Status status;
...
MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
if (myrank == 0) {
    strcpy(msg, "Hello there");
    MPI_Send(msg, strlen(msg) + 1, MPI_CHAR, 1, tag, MPI_COMM_WORLD);
} else {
    MPI_Recv(msg, 20, MPI_CHAR, 0, tag, MPI_COMM_WORLD, &status);
}

    
