
Question

Introduction to Data Communications

1. Define Local Area Network (LAN), metropolitan area network (MAN), wide area network (WAN), and backbone networks (BN).

2. List the seven OSI network model layers. Briefly explain each layer.

3. List the five layers in the Internet network model. Explain each layer.

4. Explain how Internet standards are developed.

5. Explain who develops Internet standards.

6. List important data communications standards-making bodies including the Internet Society, the IETF, IEEE, ANSI, and the ITU.

Application Layer

1. Compare and contrast host and client-server networks.

2. What is middleware? Why is it important?

3. Compare and contrast two-tier, three-tier, and n-tier client-server architectures.

4. How does a thin client differ from a thick client?

5. For what is HTTP/HTML used?

6. Explain how a Web browser, the Internet, and a Web server work together to send a page from the server to a user.

7. Describe how mail user agents and mail transfer agents work together to transfer mail messages.

8. What is a virtual server?

9. What is cloud computing?

Explanation / Answer

Answer 1-

Local Area Network (LAN) - A Local Area Network (LAN) is a network restricted to a small physical area, e.g. a local office, school, or house. Almost all current LANs, whether wired or wireless, are based on Ethernet. Data transfer speeds on a LAN are higher than on a WAN or MAN, ranging from 10 Mbps (classic Ethernet) to 1 Gbps (Gigabit Ethernet) and beyond.

Wide Area Network (WAN)

A Wide Area Network is a computer network that covers a relatively large geographical area such as a state, province, or country. It provides a solution for companies or organizations operating from distant geographical locations that want to communicate with each other to share and manage central data, or for general communication.

Metropolitan Area Network (MAN)

A Metropolitan Area Network (MAN) is a network that connects two or more computers, communicating devices, or networks in a single network whose geographic area is larger than that covered by even a large Local Area Network but smaller than the region covered by a Wide Area Network. MANs are mostly built for cities or towns to provide high-speed data connections and are usually owned by a single large organization.

A backbone network, or network backbone, is the part of a computer network infrastructure that interconnects various pieces of a network, providing a path for the exchange of information between different LANs or subnetworks.

Answer 2 - The Open Systems Interconnection model, better known as the OSI model, is a network map that was originally developed as a universal standard for creating networks. But instead of serving as a model with agreed-upon protocols used worldwide, the OSI model has become a teaching tool that shows how the different tasks within a network should be handled in order to promote error-free data transmission. Its seven layers are:

1) Physical layer - transmits raw bits over the physical transmission medium (cables, radio).
2) Data link layer - organizes bits into frames and handles error detection on a single link.
3) Network layer - addresses packets and routes them between networks.
4) Transport layer - provides end-to-end delivery between hosts, reliable or best-effort.
5) Session layer - establishes, manages, and terminates sessions between applications.
6) Presentation layer - handles data formatting, encryption, and compression.
7) Application layer - provides network services directly to user applications.

Answer 3 - Computers on the Internet are connected by various networks. The complexity of networking is addressed by dividing the Internet into many layers. The International Organization for Standardization (ISO) developed a 7-layer network model (Application, Presentation, Session, Transport, Network, Data Link, and Physical layers) long before the Internet had gained popularity. The 7-layer model has been revised into a 5-layer TCP/IP-based Internet model (Application, Transport, Internet, Network Access, and Physical layers).

Application Layer - The application layer defines the network applications or services the Internet can support, for example HTTP for the Web, SMTP for email, and FTP for file transfer.

Transport Layer - This layer concerns how data can be reliably transferred over the network. The Transmission Control Protocol (TCP) provides reliable delivery, while UDP (User Datagram Protocol) is used when speed of data transmission is more important than reliability.

Internet Layer - This layer, built on the Internet Protocol (IP), handles the addressing and routing of packets across the network.

Network Access Layer - This is the part of the system concerned with how a host communicates with its local network, whether that is Ethernet or Token Ring.

Physical Layer - This is the physical connection, whether through a Network Interface Card (NIC) or a modem, to the local network.
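The five layers above can be illustrated with a toy encapsulation sketch in Python. This is not a real protocol stack; the addresses, port number, and field names are invented purely to show how each layer wraps the payload handed down by the layer above it.

```python
# Toy illustration of layering: each layer adds its own header around the
# payload from the layer above, mirroring how an application message is
# carried inside a TCP segment, an IP packet, and an Ethernet frame.
def encapsulate(message):
    segment = {"tcp_dst_port": 80, "payload": message}           # Transport layer
    packet = {"ip_dst": "203.0.113.10", "payload": segment}      # Internet layer
    frame = {"mac_dst": "aa:bb:cc:dd:ee:ff", "payload": packet}  # Network access layer
    return frame                                                 # handed to the physical layer

def decapsulate(frame):
    # The receiving stack strips each header in reverse order.
    return frame["payload"]["payload"]["payload"]

frame = encapsulate("GET / HTTP/1.1")
```

Decapsulating the frame on the receiving side yields the original application message unchanged.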

Answer 4 - Becoming a standard is a two-step process within the IETF, covering Proposed Standards and Internet Standards. If an RFC is part of a proposal that is on the Standards Track, then at the first stage the standard is proposed, and organizations subsequently decide whether to implement this Proposed Standard. After the criteria in RFC 6410 are met (two separate implementations, widespread use, no unresolved errata, etc.), the RFC can advance to Internet Standard.

Answer 5 - A number of organisations are involved in developing standards for the Internet, such as the Institute of Electrical and Electronics Engineers (IEEE), the World Wide Web Consortium (W3C), and the International Telecommunication Union (ITU). The foremost of these is the Internet Engineering Task Force (IETF), an activity of the Internet Society: a self-organised group of people who contribute to the engineering and evolution of Internet technologies. It is the principal body engaged in developing new Internet standard specifications.

Anyone can participate in the IETF, and standards coming out of this body are freely available to the public. The standards process, in a nutshell, is this: one writes an Internet-Draft and sends it to the IETF for review. The work can directly become a standard, but more often than not it is developed further by an IETF working group. After a time, the working group decides whether or not to approve the draft. The broader community is then given a final opportunity to comment. The Internet Engineering Steering Group makes a final judgment as to whether there is at least rough consensus to publish the edited document as an Internet standard. A key factor that sways opinion within the IETF is whether or not there is running code.

ANSWER 6 - There are a large number of organizations creating standards. These organizations usually specialize in the types of standards they work on. For example, the T1 committee of ANSI works on protocols primarily used by phone companies for medium- and long-distance communications, while the IEEE 802 committee works primarily on Local Area Network (LAN) protocols used for communication over shorter distances.

Some of the better-known standards organizations are:

ANSI – The American National Standards Institute

ETSI – European Telecommunications Standards Institute

T1 – Committee T1

ADSL – The ADSL Forum

ATM – The ATM Forum

ITU – The International Telecommunication Union

IETF – The Internet Engineering Task Force

IEEE – Institute of Electrical and Electronic Engineers

TIA – Telecommunications Industry Association

----------------------------------------NEXT SECTION OF ANSWERS--------------------------------------

ANSWER 1 -

There is a significant difference between host-based and client/server networks. In a host-based network, a central host (typically a mainframe) performs virtually all of the work: it runs the application logic, stores the data, and sends results to terminals that do little more than display output and capture keystrokes. In a client/server network, the work is shared between dedicated servers and clients. Through client workstations, users can access files, which are generally stored on the server, and the server determines which users can access which files on the network.

Host-based networks are comparatively simple to manage and secure because everything resides in one place, but the host is both a performance bottleneck and a single point of failure, and adding capacity usually means upgrading the entire host. Client/server networks, on the other hand, can grow incrementally to almost any size; some support millions of users and offer elaborate security measures. As you can imagine, large client/server networks can also become very expensive.

ANSWER 2 - Middleware is the software that connects software components or enterprise applications. Middleware is the software layer that lies between the operating system and the applications on each side of a distributed computer network. Typically, it supports complex, distributed business software applications.

Because of middleware's role in the sharing of information, its importance to the EAI solution is growing ever more evident. Although middleware was originally a tool for moving information between systems within a single enterprise, we now look to middleware products to move information between multiple enterprises. This new demand presents vendors with a significant challenge, since middleware products were conceived and built for intra-enterprise integration.
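The idea of middleware as the connecting layer can be sketched in a few lines of Python. This is a hypothetical sketch, not any real middleware product: the broker class, service name, and payload are all invented for illustration.

```python
# Hypothetical sketch of middleware as a message broker: a client asks
# for a service by name and never needs to know where, or how, that
# service actually runs.
class Middleware:
    def __init__(self):
        self.services = {}

    def register(self, name, handler):
        # A server-side component announces itself to the middleware.
        self.services[name] = handler

    def request(self, name, payload):
        # The middleware locates the component and forwards the message.
        return self.services[name](payload)

mw = Middleware()
mw.register("inventory", lambda item: {"item": item, "in_stock": True})
result = mw.request("inventory", "widget")
```

The client code above never references the component that answers the request; swapping the handler for one running on another machine would not change the client at all, which is the decoupling middleware provides.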

ANSWER 3 -

Two-Tier Architecture:

The two-tier architecture is a direct client-server arrangement: communication takes place directly between client and server, with no intermediary between them. Because of this tight coupling, a two-tiered application will run faster.

Two-tier architecture is divided into two parts:

1) Client Application (Client Tier)
2) Database (Data Tier)

On the client application side, code is written to save data in the SQL Server database. The client sends a request to the server, and the server processes the request and sends back the data. The main problem with two-tier architecture is scalability: as the number of clients grows, the single server can become a bottleneck when handling many simultaneous requests.
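A minimal two-tier sketch in Python follows, assuming an in-memory SQLite database as a stand-in for a remote SQL server. The table and item names are invented; the point is simply that all application logic lives in the client, which talks directly to the data tier.

```python
import sqlite3

# Two-tier sketch: the client tier talks directly to the data tier.
# An in-memory SQLite database stands in for a remote SQL server.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT)")

def place_order(item):
    # Client-tier logic: every client writes straight to the database.
    conn.execute("INSERT INTO orders (item) VALUES (?)", (item,))
    conn.commit()

place_order("router")
rows = conn.execute("SELECT item FROM orders").fetchall()
```

Note that validation and business rules would also live in `place_order` on the client, which is exactly what three-tier designs later pull out into a separate layer.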

Three-Tier Architecture:

A three-tier architecture typically comprises a presentation tier, a business or data access tier, and a data tier. The three layers are as follows:

1) Client layer - This layer handles presentation: data is displayed to the user and input is taken from the user.
2) Business layer - All business logic is written in this layer: validation of data, calculations, data insertion, etc. It acts as an interface between the client layer and the data access layer.

3) Data layer - The actual database comes into the picture in this layer. The data access layer contains methods to connect to the database and to insert, update, delete, and retrieve data based on the input.

An n-tier architecture is a client-server architecture in which presentation, application processing, and data management functions are physically separated. N-tier application architecture provides a model by which developers can create flexible and reusable applications. By segregating an application into tiers, developers gain the option of modifying or adding a specific layer instead of reworking the entire application.
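The three-tier split can be sketched in Python as three classes where each tier talks only to the tier directly below it. All class and method names here are invented for illustration; in a real system the tiers would typically run on separate machines.

```python
# Sketch of a three-tier split: presentation -> business -> data.
class DataLayer:
    """Data tier: stores and retrieves rows."""
    def __init__(self):
        self._rows = []
    def insert(self, row):
        self._rows.append(row)
    def all(self):
        return list(self._rows)

class BusinessLayer:
    """Business tier: validation and other logic live here."""
    def __init__(self, data):
        self.data = data
    def add_user(self, name):
        if not name:
            raise ValueError("name required")   # validation rule
        self.data.insert({"name": name})
    def list_users(self):
        return [row["name"] for row in self.data.all()]

class PresentationLayer:
    """Client tier: formatting and display only."""
    def __init__(self, logic):
        self.logic = logic
    def show_users(self):
        return ", ".join(self.logic.list_users())

app = PresentationLayer(BusinessLayer(DataLayer()))
app.logic.add_user("Ana")
app.logic.add_user("Ben")
```

Because each tier exposes only a small interface, any one of them can be replaced (e.g. swapping the list-backed data tier for a real database) without reworking the others, which is the flexibility the paragraph above describes.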

ANSWER 4 -

A thin client is designed to be especially small so that the bulk of the data processing occurs on the server. Although the term thin client often refers to software, it is increasingly used for the computers, such as network computers and Net PCs, that are designed to serve as the clients in client/server architectures. A hardware thin client is a network computer without a hard disk drive; it acts as a simple terminal for the server and requires constant communication with the server.

In contrast, a thick client (also called a fat client) performs the bulk of the processing in client/server applications. With thick clients there is no need for continuous server communication, as the client mainly sends archival storage information to the server. As with thin clients, the term often refers to software but is also used to describe the networked computer itself. If your applications require multimedia components or are bandwidth intensive, you will also want to consider thick clients. One of the biggest advantages of thick clients is that some operating systems and software are unable to run on thin clients; thick clients can handle these because they have their own resources.

ANSWER 5 - HTML (Hypertext Markup Language) is the language used to describe the content and structure of Web pages, and the Hypertext Transfer Protocol (HTTP) enables the communication that delivers those pages between clients and servers. HTTP works as a request-response protocol between a client and server. A web browser may be the client, and an application on a computer that hosts a web site may be the server. Example: a client (browser) submits an HTTP request to the server; the server then returns a response to the client. The response contains status information about the request and may also contain the requested content.

Two commonly used methods for a request-response between a client and server are: GET and POST.
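The request-response exchange can be sketched at the text level in Python. This is a toy sketch: the host name and path are placeholders, no network connection is made, and only the status line of the response is parsed.

```python
# Toy sketch of an HTTP/1.1 exchange as plain text.
def build_get(host, path):
    # A GET request: request line, headers, then a blank line.
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Connection: close\r\n"
            f"\r\n")

def parse_status(response):
    # The first line of a response looks like "HTTP/1.1 200 OK".
    version, code, reason = response.split("\r\n")[0].split(" ", 2)
    return int(code), reason

request = build_get("www.example.com", "/index.html")
status = parse_status("HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n")
```

A POST request differs mainly in the method name and in carrying a message body after the blank line, which is why POST is used to submit form data while GET simply retrieves a resource.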

ANSWER 6 - Here is the process by which a Web browser, the Internet, and a Web server work together to send a page from the server to a user. The user types a URL (or clicks a link) in the browser; the browser uses DNS to translate the server's name into an IP address, opens a connection to the Web server across the Internet, and sends an HTTP request for the page. The server locates the requested page and returns it in an HTTP response, and the browser renders the HTML for the user, issuing further requests for any images or other objects the page references.

HTTP is also called "stateless" because the server closes the communication with the client after the client has received everything in the response stream. Therefore the server cannot know whether the client is still connected, nor whether it is coming back later. Many servers do provide a session object, using cookies or similar mechanisms, to track whether the same client sends the next request and, if so, to allow more "intelligent" server responses such as seeking, transactions, and logins.
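Session tracking over a stateless protocol can be sketched in Python as follows. This is an invented illustration: the session store is a plain dictionary, and the cookie is just a random id issued on the first request.

```python
import uuid

# Sketch: because the protocol is stateless, the server issues a cookie
# on a client's first request and uses it to recognize the client later.
sessions = {}

def handle_request(cookie=None):
    if cookie not in sessions:
        cookie = str(uuid.uuid4())       # new client: issue a session id
        sessions[cookie] = {"visits": 0}
    sessions[cookie]["visits"] += 1
    return cookie, sessions[cookie]["visits"]

cookie, first = handle_request()         # first request carries no cookie
_, second = handle_request(cookie)       # the same client returns
```

Each request is still independent at the protocol level; only the cookie the client sends back lets the server stitch the requests into one "session".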

ANSWER 7 - The Simple Mail Transfer Protocol (SMTP) is the most commonly used email standard, simply because it is the email standard used on the Internet.

• Email works similarly to how the Web works, but it is a bit more complex.

•SMTP email is usually implemented as a two-tier thick client-server application, but not always.

EXAMPLE

User agent –Example:MS Outlook

Mail transfer agent –mail server software

SMTP packet – includes information such as the sender’s address and the destination address

1. The user creates the email message using an email client, which formats the message into an SMTP packet that includes information such as the sender’s address and the destination address.

2. The user agent then sends the SMTP packet to a mail server that runs a special application layer software package called a mail transfer agent, which is more commonly called mail server software.

3. This email server reads the SMTP packet to find the destination address and then sends the packet on its way through the network, often over the Internet, from mail server to mail server, until it reaches the mail server specified in the destination address.

4. The mail transfer agent on the destination server then stores the message in the receiver’s mailbox on that server. The message sits in the mailbox assigned to the user who is to receive the message until he or she checks for new mail.
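Step 1 of the flow above can be sketched with Python's standard email library. The addresses and subject are invented, and no mail is actually sent; the sketch only shows the message a user agent would hand to the mail transfer agent.

```python
from email.message import EmailMessage

# Sketch of step 1: the user agent formats the message with sender and
# destination addresses before handing it to a mail transfer agent.
msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.org"
msg["Subject"] = "Meeting"
msg.set_content("See you at 10.")

# A mail transfer agent reads the destination domain to decide which
# mail server to relay the message to next (steps 2 and 3 above).
destination_domain = msg["To"].split("@")[1]
```

In a real deployment the relay decision uses a DNS lookup of the destination domain's mail servers rather than the simple string split shown here.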

ANSWER 8 - On the Internet, a virtual server is a server (computer and various server programs) at someone else's location that is shared by multiple Web site owners so that each owner can use and administer it as though they had complete control of the server.

A virtual server, usually a Web server, shares computer resources with other virtual servers. In this context, "virtual" simply means that it is not a dedicated server; that is, the entire computer is not dedicated to running the server software. Virtual Web servers are a very popular way of providing low-cost web hosting services. Instead of requiring a separate computer for each server, dozens of virtual servers can co-reside on the same computer. In most cases performance is not affected, and each web site behaves as if it were being served by a dedicated server. However, if too many virtual servers reside on the same computer, or if one virtual server starts hogging resources, Web pages will be delivered more slowly.
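One common way virtual Web servers co-reside on a machine is name-based virtual hosting, sketched below in Python. The site names and pages are invented; the point is that a single server process picks the right site by inspecting the HTTP Host header.

```python
# Sketch of name-based virtual hosting: one server process serves
# several owners' sites by looking at the request's Host header.
sites = {
    "alpha.example.com": "<h1>Alpha's site</h1>",
    "beta.example.com": "<h1>Beta's site</h1>",
}

def serve(host_header):
    # Each site behaves as if it had a dedicated server of its own.
    return sites.get(host_header, "404 Not Found")

page = serve("alpha.example.com")
```

Each owner only ever sees requests for their own host name, which is what makes the shared machine feel like a dedicated server from the outside.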

ANSWER 9 - Cloud computing enables companies to consume compute resources as a utility -- just like electricity -- rather than having to build and maintain computing infrastructures in-house. Cloud computing promises several attractive benefits for businesses and end users. Three of the main benefits of cloud computing include:

• Self-service provisioning: End users can spin up computing resources for almost any type of workload on-demand.
• Elasticity: Companies can scale up as computing needs increase and then scale down again as demands decrease.
• Pay per use: Computing resources are measured at a granular level, allowing users to pay only for the resources and workloads they use.
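The pay-per-use point can be made concrete with a small metering sketch in Python. The hourly rate is invented purely for illustration; real providers meter many resources (compute, storage, traffic) at finer granularity.

```python
# Sketch of pay-per-use billing: cost tracks measured usage, so scaling
# down immediately lowers the bill. The rate below is an assumed figure.
RATE_PER_INSTANCE_HOUR = 0.05   # dollars per instance-hour (assumed)

def bill(instance_hours):
    # instance_hours: hours each instance ran during the billing period
    return round(sum(instance_hours) * RATE_PER_INSTANCE_HOUR, 2)

steady_cost = bill([24, 24])      # two instances running all day
burst_cost = bill([24, 24, 6])    # plus a short-lived burst instance
```

The burst instance adds only its six metered hours to the bill, which is the elasticity-plus-pay-per-use combination described above: capacity appears on demand and stops costing money the moment it is released.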
