Question
Due November 26, 2015
Introduction:
For this project, students are to use MS Visual Studio to write and run a simple multi-process console program using MPI in Visual C++. The MPI libraries can be downloaded and added to a Visual C++ project through the project property pages. Many MPI libraries are available online, such as MS HPC Pack and MPICH. Installation and configuration steps can also be found on the internet, usually on the library providers' websites; as computer science master's students, this should not be an issue.
The goal of this project is for students to learn the basic implementation of multi-process programming in parallel computing with MPI, and to apply the knowledge learned in the course to decompose a relatively complex computation problem into smaller parallel problems.
Students are required to design their own decomposition logic for the given problem, such as the number of processes created, and how and where to pass results from one process to another.
Problem requirement:
Write a parallel program with MPI that supports the following computation.
1). It generates five processes P0, P1, P2, P3, and P4.
2). The main process gets a number n from the keyboard, then initializes MPI.
3) Process Pi (i=0,1,2,3) uses n to call the following two functions.
a) Function prime(int n) finds the smallest prime number q such that q = 8m + (2i+1) > n for some integer m. Note: a prime number p is an integer greater than 1 that is not a product of two integers less than p.
b) Function twin(int n) finds the least twin pair (q, q+2) such that q = 8m + (2i+1) > n for some integer m. A pair (q, q+2) is a twin if both q and q+2 are prime numbers. (A minimal sketch of these two helpers appears after this list.)
4) P4 gets all four results from the other four processes and returns the least prime number and twin.
For example, if n=10, P0 returns 11 and (11,13), P1 returns 13 and (17, 19), P2 returns 17 and (17, 19), and P3 returns 19 and (29,31). Finally, P4 returns 11 and (11,13).
All the five processes share the same program.
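For reference, below is a minimal sequential sketch of the two helper functions under a literal reading of the formula q = 8m + (2i+1). The names is_prime, prime_for, and twin_for and the small test main are illustrative only; the actual decomposition, function signatures, and MPI wiring are left to each student.

#include <stdio.h>

// returns 1 if p is prime, i.e. p > 1 and not a product of two integers less than p
int is_prime(int p) {
    if (p < 2) return 0;
    for (int d = 2; d * d <= p; d++)
        if (p % d == 0) return 0;
    return 1;
}

// smallest prime q of the form q = 8m + (2i+1) with q > n
int prime_for(int n, int i) {
    for (int m = 0; ; m++) {
        int q = 8 * m + (2 * i + 1);
        if (q > n && is_prime(q)) return q;
    }
}

// least q of the form q = 8m + (2i+1) with q > n such that q and q+2 are both prime
int twin_for(int n, int i) {
    for (int m = 0; ; m++) {
        int q = 8 * m + (2 * i + 1);
        if (q > n && is_prime(q) && is_prime(q + 2)) return q;
    }
}

// sequential sanity check for the example input n = 10
int main() {
    int n = 10;
    for (int i = 0; i < 4; i++) {
        int q = twin_for(n, i);
        printf("i=%d: prime=%d twin=(%d,%d)\n", i, prime_for(n, i), q, q + 2);
    }
    return 0;
}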
Search the internet for the twin prime conjecture and report its recent status.
What to turn in:
Your complete C++ code for this project, with a summary, in a Word document; please make the code readable.
Screenshots of test runs
Due Date: Thurs. Nov.26, 2015.
Submit your solution, which contains source code, summary, and test result, through the Blackboard of this class.
Example instructions for installing MPI (taken from the Microsoft website: https://msdn.microsoft.com/library/ee441265(v=vs.100).aspx )
Step 1: Download the MS HPC Pack 2012 to your computer from this URL: http://www.microsoft.com/en-us/download/details.aspx?id=36045
Step 2: Install the HPC package to:
C:\Program Files\Microsoft HPC Pack 2012\Inc
C:\Program Files\Microsoft HPC Pack 2012\Lib\amd64
C:\Program Files\Microsoft HPC Pack 2012\Lib\i386
Step 3: Right-click on your Visual Studio C++ project and select Properties.
Step 4: Expand Configuration Properties, then select VC++ Directories. In Include Directories, copy the path C:\Program Files\Microsoft HPC Pack 2012\Inc; to the beginning of the line. In Library Directories, copy the path C:\Program Files\Microsoft HPC Pack 2012\Lib\i386; to the front of the line. It should look like this:
$(VCInstallDir)bin;$(WindowsSdkDir)bin\NETFX 4.0 Tools;$(W
C:\Program Files\Microsoft HPC Pack 2012\Inc;$(VCInstall
$(VCInstallDir)atlmfc\lib;$(VCInstallDir)lib
C:\Program Files\Microsoft HPC Pack 2012\Lib\i386;$(VC
$(VCInstallDir)atlmfc\src\mfc;$(VCInstallDir)atlmfc\src\mfcm;
$(VCInstallDir)include;$(VCInstallDir)atlmfc\include;$(Window
Step 5: Under Linker, select Input.
In Additional Dependencies, place the cursor at the beginning of the list that appears in the text box, and then type the following:
msmpi.lib;
It should look like this:
msmpi.lib;kernel32.lib;user32.lib;gdi32.lib;winspool.lib;col
Step 6: If you are using the code sample with OpenMP:
In Configuration Properties, expand C/C++, and then select Language.
In OpenMP Support, select Yes (/openmp) to enable compiler support for OpenMP. Click OK to close the property pages.
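For reference, a minimal sketch of the kind of loop the /openmp switch enables (this hypothetical example is not part of the assignment; the project itself only needs MPI):

#include <stdio.h>

int main() {
    double sum = 0.0;
    // with /openmp enabled, the iterations are split across threads and
    // the reduction clause combines each thread's partial sum;
    // without /openmp the pragma is simply ignored
    #pragma omp parallel for reduction(+:sum)
    for (int i = 1; i <= 1000000; i++) {
        sum += 1.0 / i;
    }
    printf("harmonic sum = %f\n", sum);
    return 0;
}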
Step 7: Configure MPI cluster debugging for Visual Studio.
In Solution Explorer, right-click your project (Parallel PI in the Microsoft sample), and then click Properties. This opens the Property Pages dialog box.
Expand Configuration Properties, and then select Debugging.
Under Debugger to launch, select MPI Cluster Debugger.
In Run Environment, select Edit Hpc Node from the drop-down list. This opens the Node Selector dialog box.
In the Head Node drop-down list, select localhost.
In Number of processes, select 4 (this can be any number, but it is better to match your local processor count to reduce overhead; note that this assignment requires 5 processes).
Click OK to save changes and close the Node Selector dialog box.
That's it; your Visual Studio should now be able to run MPI on your local computer. Please read through the MPI provider's website for more information.
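You can also launch the compiled program outside the debugger. Assuming MS-MPI's mpiexec is on your PATH (the executable name below is just a placeholder for your own project), something like this starts the five processes the assignment needs:

mpiexec -n 5 MyMpiProject.exe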
Below is a simple MPI sample showing message passing (more can be found in the textbook and online):
#include "stdafx.h"
#include "mpi.h"
#include "stdio.h"
#include "stdlib.h"
//int _tmain(int argc, _TCHAR* argv[])
//add function
int add(int a, int b) {
return a+b;
}
//main method
int main(int argc, char* argv[])
{
//cout << "Hello World ";
int nTasks, rank;
int result;
MPI_Status status;
MPI_Init(&argc,&argv);
MPI_Comm_size(MPI_COMM_WORLD,&nTasks);
MPI_Comm_rank(MPI_COMM_WORLD,&rank);
//int *buffer= (int *)malloc(4*sizeof(int));
//check for number of tasks and rank
printf ("Number of threads = %d, My rank = %d ", nTasks, rank);
//process 0 receives message from other processes and add them;
if(rank == 0){
result =0;
int *received = 0;
int *buffer= (int *)malloc(4*sizeof(int));
MPI_Recv(buffer, 1, MPI_INT, 1, 1, MPI_COMM_WORLD,&status);
MPI_Recv(buffer+1, 1, MPI_INT, 2, 1, MPI_COMM_WORLD,&status);
MPI_Recv(buffer+2, 1, MPI_INT, 3, 1, MPI_COMM_WORLD,&status);
MPI_Recv(buffer+3, 1, MPI_INT, 4, 1, MPI_COMM_WORLD,&status);
for (int i=0; i<4; i++){
result += buffer[i];
printf("+%d", buffer[i]);
}
printf ("rank = %d, My result = %d ", rank,result );
}
//even processes will execute this block
if(rank%2 ==0 && rank != 0){
int result = add(12,13);
//sending result message to process 0
MPI_Send(&result, 1, MPI_INT, 0, 1, MPI_COMM_WORLD);
printf ("from process %d,result %d",rank,result );
}
//odd processes will execute this block
if(rank%2 ==1){
int result = add(12,11);
//sending result message to process 0
MPI_Send(&result, 1, MPI_INT, 0, 1, MPI_COMM_WORLD);
printf ("from process %d,result %d",rank,result );
}
MPI_Finalize();
system("PAUSE");
return 0;
}
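For the assignment itself the message flow is essentially the reverse of this sample: ranks 0-3 each compute their own result from n and send it to rank 4, which keeps the minimum. A minimal, runnable sketch of that gather pattern follows; it assumes exactly 5 ranks and uses a placeholder value instead of the real prime/twin computation (in a default Visual Studio project you may also need #include "stdafx.h").

#include "mpi.h"
#include <stdio.h>
#include <limits.h>

int main(int argc, char* argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank < 4) {
        // placeholder result; the real program would send prime(n) and twin(n) for this rank
        int value = 100 + rank;
        MPI_Send(&value, 1, MPI_INT, 4, 0, MPI_COMM_WORLD);
    } else if (rank == 4) {
        int best = INT_MAX;
        for (int src = 0; src < 4; src++) {
            int value;
            MPI_Recv(&value, 1, MPI_INT, src, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            if (value < best) best = value;   // keep the least result received so far
        }
        printf("least value gathered by P4 = %d\n", best);
    }
    MPI_Finalize();
    return 0;
}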
Could anyone please help me with this program?
Explanation / Answer
Here is a parallel vector-sum program in Fortran that illustrates the same master/worker message-passing pattern with MPI calls: the root process distributes work, and the other processes send partial results back.
      program sumvector_mpi
c This program sums all rows in a vector using MPI parallelism.
c The root process acts as a master and sends a portion of the
c vector to each child process. Master and child processes then
c all calculate a partial sum of the portion of the vector assigned
c to them, and the child processes send their partial sums to
c the master, who calculates a grand total.
      include 'mpif.h'
      integer max_rows, send_data_tag, return_data_tag
      parameter (max_rows = 10000000)
      parameter (send_data_tag = 2001, return_data_tag = 2002)
      integer my_id, root_process, ierr, status(MPI_STATUS_SIZE)
      integer num_procs, an_id, num_rows_to_receive
      integer avg_rows_per_process, num_rows, num_rows_to_send
      integer start_row, end_row, num_rows_received, sender, i
      real vector(max_rows), vector2(max_rows), partial_sum, sum
c Let process 0 be the root process.
      root_process = 0
c Now replicate this process to create parallel processes.
c From this point on, every process executes a separate copy
c of this program.
      call MPI_INIT(ierr)
c Find out MY process ID, and how many processes were started.
      call MPI_COMM_RANK(MPI_COMM_WORLD, my_id, ierr)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, num_procs, ierr)
      if (my_id .eq. root_process) then
c I must be the root process, so I will query the user
c to determine how many numbers to sum.
         print *, "please enter the number of numbers to sum:"
         read *, num_rows
         if (num_rows .gt. max_rows) stop "Too many numbers."
         avg_rows_per_process = num_rows / num_procs
c initialize a vector,
         do i = 1, num_rows
            vector(i) = float(i)
         end do
c distribute a portion of the vector to each child process,
         do an_id = 1, num_procs - 1
            start_row = (an_id * avg_rows_per_process) + 1
            end_row = start_row + avg_rows_per_process - 1
            if (an_id .eq. (num_procs - 1)) end_row = num_rows
            num_rows_to_send = end_row - start_row + 1
            call MPI_SEND(num_rows_to_send, 1, MPI_INTEGER,
     &           an_id, send_data_tag, MPI_COMM_WORLD, ierr)
            call MPI_SEND(vector(start_row), num_rows_to_send, MPI_REAL,
     &           an_id, send_data_tag, MPI_COMM_WORLD, ierr)
         end do
c and calculate the sum of the values in the segment assigned
c to the root process,
         sum = 0.0
         do i = 1, avg_rows_per_process
            sum = sum + vector(i)
         end do
         print *, "sum ", sum, " calculated by root process."
c and, finally, I collect the partial sums from the child processes,
c print them, add them to the grand total, and print it.
         do an_id = 1, num_procs - 1
            call MPI_RECV(partial_sum, 1, MPI_REAL, MPI_ANY_SOURCE,
     &           MPI_ANY_TAG, MPI_COMM_WORLD, status, ierr)
            sender = status(MPI_SOURCE)
            print *, "partial sum ", partial_sum,
     &           " returned from process ", sender
            sum = sum + partial_sum
         end do
         print *, "The grand total is: ", sum
      else
c I must be a child process, so I must receive my vector segment,
c storing it in a "local" vector, vector2.
         call MPI_RECV(num_rows_to_receive, 1, MPI_INTEGER,
     &        root_process, MPI_ANY_TAG, MPI_COMM_WORLD, status, ierr)
         call MPI_RECV(vector2, num_rows_to_receive, MPI_REAL,
     &        root_process, MPI_ANY_TAG, MPI_COMM_WORLD, status, ierr)
         num_rows_received = num_rows_to_receive
c Calculate the sum of my portion of the vector,
         partial_sum = 0.0
         do i = 1, num_rows_received
            partial_sum = partial_sum + vector2(i)
         end do
c and, finally, send my partial sum to the root process.
         call MPI_SEND(partial_sum, 1, MPI_REAL, root_process,
     &        return_data_tag, MPI_COMM_WORLD, ierr)
      endif
c Stop this process.
      call MPI_FINALIZE(ierr)
      stop
      end
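To try this Fortran example, assuming your MPI library provides a Fortran compiler wrapper and mpiexec (the wrapper name varies; mpif90 is common with MPICH and Open MPI):

mpif90 sumvector_mpi.f -o sumvector_mpi
mpiexec -n 4 ./sumvector_mpi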