Question
In class we discussed several communication models, including message passing, shared memory, and remote procedure call. This question will test your understanding of the issues related to these communication models. Explain the difference between message passing and a remote procedure call (in its most basic form) in terms of the following characteristics: synchronization, parallelism, reliability, and performance. (In answering, you might indicate, where appropriate, what extra work is needed by the programmer using such a mechanism to achieve the characteristic.) You are developing a distributed database that will operate across a high-latency satellite link. You are considering RPC and message passing approaches for communicating data between parts of the system. Which approach would you choose and why? Why would you rule out the other approach?

Explanation / Answer
Message passing in terms of Synchronization:
In a message-passing model, the sending and receiving processes must coordinate with each other so that messages sent are eventually received and messages received have actually been sent; in effect, they synchronize access to the shared channel. In synchronous message passing, both sender and receiver block and the channel provides a direct link between the two processes. A process sending a message delays until the other process is ready to receive it, so the exchange of a message represents a synchronization point between the two processes. Communication and synchronization are therefore tightly coupled.
A good example of synchronous message passing is the telephone system, where the caller places a call and waits for the callee to answer; the caller is blocked until the callee answers. Note that initially the callee is blocked too; for example, it does not periodically pick up its phone to see if there is someone trying to talk to it. We can also model buffered message passing, in which the channel has capacity. Here, the sender delays if the channel is full; thus, if the callee is busy, the caller has to wait until the callee is off the phone before being able to place the call successfully.
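To make the blocking behaviour concrete, here is a minimal sketch of buffered message passing using Python's standard multiprocessing module; the capacity-1 channel and the producer/consumer names are illustrative assumptions, not part of any particular system.

```python
# Sketch: buffered (blocking) message passing over a capacity-1 channel.
from multiprocessing import Process, Queue

def producer(chan):
    for i in range(3):
        # put() blocks while the channel is full, so the sender waits for the
        # receiver to drain the channel -- an explicit synchronization point.
        chan.put(f"message {i}")
    chan.put(None)  # sentinel: tell the receiver we are done

def consumer(chan):
    while True:
        msg = chan.get()   # get() blocks until a message has actually been sent
        if msg is None:
            break
        print("received:", msg)

if __name__ == "__main__":
    channel = Queue(maxsize=1)  # capacity-1 channel keeps sender and receiver in near lock-step
    p = Process(target=producer, args=(channel,))
    c = Process(target=consumer, args=(channel,))
    p.start(); c.start()
    p.join(); c.join()
```

With an unbounded queue the same code becomes asynchronous: the producer never waits for the receiver at all.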
Remote procedure call in terms of synchronization:
Remote procedure calls are a common method of communication in distributed programming because of their easy-to-understand, procedure-style semantics. Interaction between processes using an RPC is synchronized as follows: after invoking a remote procedure, the client waits until the server has executed the procedure and sent back the results, which are treated as the return values of the call. As far as the programmer is concerned, invoking a remote procedure is no different from executing a regular procedure call. From the programmer's point of view, the RPC primitive simplifies and encapsulates interprocess interaction, especially for client-server applications; the underlying RPC communication package takes care of message passing, parameter encoding (marshalling), and result decoding.
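As a small illustration of these call semantics, the sketch below uses Python's standard-library xmlrpc package; the procedure name add, the port number, and the localhost address are illustrative assumptions.

```python
# Sketch: a blocking RPC round trip with Python's standard xmlrpc package.
import threading
import time
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

def add(a, b):          # the "remote" procedure (hypothetical example)
    return a + b

server = SimpleXMLRPCServer(("localhost", 8000), logRequests=False)
server.register_function(add, "add")
threading.Thread(target=server.serve_forever, daemon=True).start()
time.sleep(0.2)         # crude wait for the server thread to start (sketch only)

proxy = ServerProxy("http://localhost:8000/")
# To the caller this looks like an ordinary procedure call: the client blocks
# until the server has executed add() and sent back the result; marshalling of
# arguments and decoding of the return value are handled by the RPC library.
print(proxy.add(2, 3))  # -> 5
server.shutdown()
```

Note how the caller sees only a function call; the message exchange underneath is hidden by the stub.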
Parallelism:
Message passing:
In a message-passing model, parallel processes exchange data through passing messages to one another.
These communications can be asynchronous, where a message can be sent before the receiver is ready, or synchronous, where the receiver must be ready. The Communicating Sequential Processes formalisation of message passing uses synchronous communication channels to connect processes, and led to important languages such as Occam, Limbo and Go. In contrast, the actor model uses asynchronous message passing and has been employed in the design of languages such as D, Scala and SALSA.
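A small sketch of the asynchronous style, written in Python rather than the languages named above, might look like the following; the worker/collector roles are invented for illustration.

```python
# Sketch: parallel workers exchanging data purely through asynchronous messages.
from multiprocessing import Process, Queue

def worker(wid, results):
    partial = sum(range(wid * 1000, (wid + 1) * 1000))  # some local computation
    results.put((wid, partial))   # asynchronous send: no waiting for the receiver

def collector(results, n):
    total = 0
    for _ in range(n):
        wid, partial = results.get()   # blocks until some worker has sent
        total += partial
    print("total:", total)

if __name__ == "__main__":
    q = Queue()                        # unbounded channel -> asynchronous sends
    workers = [Process(target=worker, args=(i, q)) for i in range(4)]
    coll = Process(target=collector, args=(q, len(workers)))
    for p in workers:
        p.start()
    coll.start()
    for p in workers:
        p.join()
    coll.join()
```

The four workers run in parallel and never wait on one another; only the collector synchronizes, and only when it actually needs a result.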
Remote Procedure call:
As communication in LAN-based networks of workstations gets faster, such systems are becoming viable environments for running parallel applications. Even though there is still an order-of-magnitude difference in latency and transmission rate compared with dedicated parallel machines, a network of workstations has two noteworthy advantages. First, it provides the opportunity to have a parallel machine at virtually no extra cost: workstations are ubiquitous and most of them are underutilized most of the time. Second, the virtual parallel machine can be constructed to take advantage of special resources locally available on some of the network hosts, for example graphics processors or vector processors.
Reliability:
Message passing:
Different applications require different degrees of reliability. The sender of a multicast message can specify the number of receivers from which a response message is expected. In one-to-many communication, the degree of reliability is normally expressed in the following forms:

0-reliable: no response is expected by the sender from any of the receivers.
1-reliable: the sender expects a response from any one of the receivers.
m-out-of-n-reliable: the multicast group consists of n receivers and the sender expects a response from m (1 < m < n) of them (a small simulation of this case is sketched below).
All-reliable: the sender expects a response message from all the receivers of the multicast group.
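The m-out-of-n form can be sketched in a few lines of Python by simulating the receivers with threads; every name, delay, and count below is an illustrative assumption.

```python
# Sketch: m-out-of-n reliability -- wait for only the first m of n acknowledgements.
import queue
import random
import threading
import time

def receiver(rid, acks):
    time.sleep(random.uniform(0.01, 0.2))  # variable network / processing delay
    acks.put(rid)                          # acknowledge the multicast

def multicast_and_wait(n, m):
    acks = queue.Queue()
    for rid in range(n):
        threading.Thread(target=receiver, args=(rid, acks), daemon=True).start()
    # The sender returns as soon as m of the n receivers have responded;
    # the remaining acknowledgements are simply ignored.
    return [acks.get() for _ in range(m)]

if __name__ == "__main__":
    print("first responders:", multicast_and_wait(n=5, m=3))
```

Setting m = 0, m = 1, or m = n recovers the other three reliability forms listed above.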
Remote Procedure Call:
In its basic form, RPC implements no reliability of its own; reliability is left to the application. RPC does not rely on a specific transport protocol and can run on any operating system. Fields for client and server identification and authorization are provided.
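Because reliability is left to the application, programmers typically layer timeouts and retries on top of the call. A minimal sketch is shown below; call_remote stands in for any RPC stub and is purely hypothetical, as are the retry counts.

```python
# Sketch: application-level reliability (timeout/retry) wrapped around an RPC stub.
import socket
import time

def reliable_call(call_remote, *args, retries=3, delay=1.0):
    """Retry a (hypothetical) RPC stub a few times before giving up."""
    for attempt in range(retries):
        try:
            return call_remote(*args)
        except (ConnectionError, socket.timeout):
            if attempt == retries - 1:
                raise            # out of retries: surface the failure to the caller
            time.sleep(delay)    # back off before trying again
```

Note that blind retries give at-least-once rather than exactly-once semantics, so the remote procedure must be idempotent or the protocol must filter duplicates.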
Performance:
Message Passing:
The MPI interface is meant to provide essential virtual topology, synchronization, and communication functionality between a set of processes in a language-independent way, with language-specific syntax plus a few language-specific features. MPI programs always work with processes, but programmers commonly refer to the processes as processors. Typically, for maximum performance, each CPU will be assigned just a single process. This assignment happens at runtime through the agent that starts the MPI program, normally called mpirun or mpiexec.
MPI library functions include, but are not limited to, point-to-point rendezvous-type send/receive operations, choosing between a Cartesian or graph-like logical process topology, exchanging data between process pairs, combining partial results of computations, synchronizing nodes, and obtaining network-related information such as the number of processes in the computing session, the identity of the processor a process is mapped to, and the neighboring processes accessible in a logical topology. Point-to-point operations come in synchronous, asynchronous, buffered, and ready forms, to allow both relatively stronger and weaker semantics for the synchronization aspects of a rendezvous send. Many outstanding operations are possible in asynchronous mode in most implementations.
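A minimal point-to-point example, assuming the widely used mpi4py Python binding is installed and the script is launched with something like `mpiexec -n 2 python script.py`:

```python
# Sketch: blocking point-to-point send/receive with mpi4py.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # identity of this process within the session
size = comm.Get_size()   # number of processes started by mpiexec

if rank == 0:
    data = {"payload": list(range(10))}
    comm.send(data, dest=1, tag=11)      # blocking send to process 1
elif rank == 1:
    data = comm.recv(source=0, tag=11)   # blocks until the message arrives
    print("rank 1 received:", data)
```

The same pair of calls has non-blocking counterparts (isend/irecv) when overlapping communication with computation matters for performance.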
Remote Procedure Call:
RPC is one of the standard ways of creating distributed client-server applications. Sun RPC (ONC RPC) is an old yet still popular implementation of RPC on UNIX-based systems. However, the Sun RPC implementation can suffer from poor performance even on high-speed hardware, because every call pays for parameter marshalling/unmarshalling, transport overhead, and a full round trip; various optimization techniques can be applied to enhance its performance.
1b)
Message passing is the better approach for developing a distributed database that will operate across a high-latency satellite link, because:
The biggest advantage of message passing is that it is easier to build massively parallel and widely distributed systems around it, and message-passing programming models tend to be more tolerant of higher communication latencies.
Whether shared memory or message passing is faster depends on the problem being solved, the quality of the implementations, and the system(s) it is running on. For example, on a single server, it will probably be easier and higher performance to use a shared memory programming environment. Across a distributed cluster, it will probably be faster to use a message passing library.
As a general rule, RPC provides a higher level of abstraction than some other means of interprocess communication, which perhaps makes it easier to use than lower-level primitives. For this abstraction you pay a penalty in performance due to marshalling/unmarshalling, and you may have to deal with added configuration complexity for simple scenarios. More importantly for this question, a basic RPC blocks the caller for a full round trip on every call, so over a high-latency satellite link each request would stall the client for the entire link delay; this is why RPC is ruled out in favour of message passing, which lets requests be sent asynchronously and pipelined.
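A back-of-the-envelope sketch of that latency argument, with purely illustrative numbers (a geostationary-link round trip of roughly 0.6 s and 100 operations):

```python
# Sketch: sequential blocking RPCs vs. pipelined asynchronous messages
# over a high-latency link. All numbers are assumptions, not measurements.
RTT = 0.6     # seconds per round trip over the satellite link (assumed)
N_OPS = 100   # number of database operations to ship

# Basic RPC: each call blocks for a full round trip before the next one starts.
rpc_time = N_OPS * RTT

# Asynchronous message passing: requests are streamed without waiting, so
# (ignoring bandwidth and processing) roughly one round trip covers the batch.
msg_time = RTT

print(f"sequential RPC : ~{rpc_time:.1f} s")   # ~60.0 s
print(f"pipelined msgs : ~{msg_time:.1f} s")   # ~0.6 s
```

The exact figures do not matter; the point is that the blocking-call structure of basic RPC multiplies the link latency by the number of calls, while message passing need not.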