Academic Integrity: tutoring, explanations, and feedback — we don’t complete graded work or submit on a student’s behalf.



Question


1. How do database servers guarantee durability and consistency? How can that same principle be applied to clustering in order to have high-availability databases?

2. Compare and contrast multi-master with master-slave replication in database clusters. List advantages and disadvantages for each in the context of high availability and disaster recovery.

3. Describe three issues with physical host independence when building failover clusters.

4. What is “split-brain” syndrome with failover clusters? How can that problem be resolved?

5. What makes writing an application specifically for high-availability clustering difficult?

Explanation / Answer

1.

Durability guarantees that the database keeps track of pending changes in such a way that the server can recover from an abnormal termination: any transaction committed to the database will not be lost. Durability is typically ensured through database backups and transaction logs, which allow committed transactions to be restored despite subsequent software or hardware failures. Committed data is saved by the system so that, even after a failure and system restart, the data is available in its correct state.

Consistency states that only valid data will be written to the database. If a transaction is executed that violates the database's consistency rules, the entire transaction is rolled back and the database is restored to a state consistent with those rules. Conversely, if a transaction executes successfully, it takes the database from one state that is consistent with the rules to another state that is also consistent with them. In short, a transaction either creates a new and valid state of the data or, if any failure occurs, returns all data to its state before the transaction started.
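The transaction-log mechanism described above can be illustrated with a toy write-ahead log. This is only a minimal sketch of the principle, not any real database's implementation; the class and file names are hypothetical:

```python
import json
import os
import tempfile

class MiniWAL:
    """Toy write-ahead log: a change is flushed to the log *before* it is
    applied, so committed transactions survive a crash (durability)."""

    def __init__(self, path):
        self.path = path
        self.data = {}      # in-memory table
        self._replay()      # recover committed transactions on startup

    def _replay(self):
        if not os.path.exists(self.path):
            return
        with open(self.path) as f:
            for line in f:
                record = json.loads(line)
                if record["op"] == "commit":
                    self.data.update(record["writes"])

    def commit(self, writes):
        # Append and fsync the log record before applying the change:
        # if we crash after this point, replay restores the transaction.
        with open(self.path, "a") as f:
            f.write(json.dumps({"op": "commit", "writes": writes}) + "\n")
            f.flush()
            os.fsync(f.fileno())
        self.data.update(writes)

# Usage: a committed write survives a simulated restart.
path = os.path.join(tempfile.gettempdir(), "demo_wal.log")
if os.path.exists(path):
    os.remove(path)
db = MiniWAL(path)
db.commit({"balance": 100})
db2 = MiniWAL(path)   # "restart": the log is replayed on startup
```

The same log-shipping idea underlies clustering for high availability: the stream of committed log records can be replayed on a standby node as well as on the local disk.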

2.

Multi-master replication is a method of database replication which allows data to be stored by a group of computers, and updated by any member of the group. All members are responsive to client data queries. The multi-master replication system is responsible for propagating the data modifications made by each member to the rest of the group, and resolving any conflicts that might arise between concurrent changes made by different members.

Multi-master replication can be contrasted with master-slave replication, in which a single member of the group is designated as the "master" for a given piece of data and is the only node allowed to modify that data item. Other members wishing to modify the data item must first contact the master node. Allowing only a single master makes it easier to achieve consistency among the members of the group, but is less flexible than multi-master replication.

Multi-master replication examples are usually set up with only two servers, but any number of servers can participate in a circular set.
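The master-slave scheme described above can be sketched in a few lines of Python. This is an illustrative simplification with hypothetical class names (real systems propagate changes asynchronously via a replication log rather than by direct synchronous calls):

```python
class MasterNode:
    """Only the master accepts writes; it propagates each change to
    its replicas, so no write conflicts can arise."""
    def __init__(self):
        self.data = {}
        self.replicas = []

    def attach(self, replica):
        self.replicas.append(replica)

    def write(self, key, value):
        self.data[key] = value
        for r in self.replicas:   # synchronous propagation (simplification)
            r.apply(key, value)

class ReplicaNode:
    """Replicas serve read queries and can be promoted to master if the
    master fails -- the basis of master-slave high availability."""
    def __init__(self):
        self.data = {}

    def apply(self, key, value):
        self.data[key] = value

    def read(self, key):
        return self.data.get(key)

# Usage: a write on the master becomes readable on the replica.
master = MasterNode()
replica = ReplicaNode()
master.attach(replica)
master.write("user:1", "alice")
```

In a multi-master setup, every node would expose `write`, and the system would additionally need conflict-resolution logic (for example, last-writer-wins) to reconcile concurrent changes -- the extra complexity the paragraph above refers to.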

Advantages and disadvantages of each, in the context of high availability and disaster recovery:

1- High availability: A highly available cloud application implements strategies to absorb outages of the dependencies it relies on, such as the managed services offered by the cloud platform. Despite possible failures of the cloud platform's capabilities, this approach permits the application to continue to exhibit its expected functional and non-functional characteristics.

When you implement the application, consider the probability of a capability outage, and consider the impact an outage will have on the application from the business perspective, before diving deep into implementation strategies. Without due consideration of the business impact and the probability of hitting the risk condition, the implementation can be expensive and potentially unnecessary.

There are a few key characteristics of highly available cloud services: availability, scalability, and fault tolerance. Although these characteristics are interrelated, it is important to understand each one and how it contributes to the overall availability of the solution.

2- Disaster recovery: A cloud deployment might cease to function due to a systemic outage of its dependent services or the underlying infrastructure. Under such conditions, a business continuity plan triggers the disaster recovery (DR) process. This process typically involves both operations personnel and automated procedures to reactivate the application at a functioning datacenter, which requires transferring application users, data, and services to the new datacenter and relies on backup media or ongoing replication.

Consider the previous analogy that compared high availability to the ability to recover from a flat tire through the use of a spare. In contrast, disaster recovery involves the steps taken after a car crash where the car is no longer operational. In that case, the best solution is to find an efficient way to change cars by calling a travel service or a friend. In this scenario, there is likely going to be a longer delay in getting back on the road. There is also more complexity in repairing and returning to the original vehicle. In the same way, disaster recovery to another datacenter is a complex task that typically involves some downtime and potential loss of data.

3.

A feature of failover clusters called Cluster Shared Volumes is specifically designed to enhance the availability and manageability of virtual machines. Cluster Shared Volumes are volumes in a failover cluster that multiple nodes can read from and write to at the same time. This feature enables multiple nodes to concurrently access a single shared volume.

On a failover cluster that uses Cluster Shared Volumes, multiple clustered virtual machines that are distributed across multiple cluster nodes can all access their Virtual Hard Disk files at the same time, even if the VHD files are on a single disk in the storage. This means that the clustered virtual machines can fail over independently of one another, even if they use only a single LUN. When Cluster Shared Volumes is not enabled, a single disk can only be accessed by a single node at a time. This means that clustered virtual machines can only fail over independently if each virtual machine has its own LUN, which makes the management of LUNs and clustered virtual machines more difficult.
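The difference described above -- VMs failing over together per LUN versus independently with Cluster Shared Volumes -- can be made concrete with a small sketch. This is not any actual clustering API; the function and VM names are hypothetical, and the model is deliberately simplified:

```python
def failover_units(vm_luns, cluster_shared_volumes=False):
    """Group VMs into units that must fail over together.

    Without shared volumes, a LUN is owned by one node at a time, so
    every VM whose VHD lives on the same LUN must move as one unit.
    With Cluster Shared Volumes, all nodes can access each LUN, so
    every VM is its own independent failover unit."""
    if cluster_shared_volumes:
        return [[vm] for vm, _ in vm_luns]
    groups = {}
    for vm, lun in vm_luns:
        groups.setdefault(lun, []).append(vm)
    return list(groups.values())

# Usage: three VMs on two LUNs.
vms = [("vm1", "LUN0"), ("vm2", "LUN0"), ("vm3", "LUN1")]
coupled = failover_units(vms)                              # vm1+vm2 move together
independent = failover_units(vms, cluster_shared_volumes=True)  # each VM alone
```

The sketch shows why, without Cluster Shared Volumes, per-VM independence forces one LUN per VM and complicates storage management.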


4.

Split-brain syndrome, in a clustering context, is a state in which a cluster of nodes gets divided into smaller clusters of equal numbers of nodes, each of which believes it is the only active cluster. Believing the other partitions are dead, each may simultaneously access the same application data or disks, which can lead to data corruption. A split-brain situation arises during cluster reformation: when one or more nodes fail, the cluster reforms itself with the available nodes. During this reformation, instead of forming a single cluster, multiple fragments with equal numbers of nodes may form. Each fragment assumes that it is the only active cluster -- and that the others are dead -- and starts accessing the data or disk. Since more than one cluster is accessing the disk, the data gets corrupted.

All high-availability clusters are vulnerable to split-brain syndrome and should use some mechanism, such as quorum voting or fencing, to avoid it. Clustering tools such as Pacemaker, HP ServiceGuard, CMAN, and Linux-HA generally include such mechanisms.
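The quorum idea can be shown in a few lines. This is a minimal sketch of majority voting, not the implementation used by any particular tool; the function name is hypothetical:

```python
def has_quorum(partition_size, cluster_size):
    """A partition may stay active only if it holds a strict majority
    of the cluster's votes. With an even split, neither side reaches a
    majority, so neither touches the shared disk and corruption is
    avoided (at the cost of availability until the split heals)."""
    return partition_size > cluster_size // 2

# A 4-node cluster split 2/2: neither partition has quorum, so both
# fragments stand down instead of writing to shared storage.
even_split_active = has_quorum(2, 4)

# A 5-node cluster split 3/2: only the 3-node majority stays active.
majority_active = has_quorum(3, 5)
minority_active = has_quorum(2, 5)
```

This is also why clusters are often built with an odd number of nodes, or with a tie-breaking witness disk: an even split that leaves both sides without quorum can be avoided entirely.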

5.

High availability clusters are often used for load balancing, backup and failover purposes. To properly configure a high-availability cluster, the hosts in the cluster must all have access to the same shared storage. This allows virtual machines on a given host to fail over to another host without any downtime in the event of a failure.

High availability clusters can range from two nodes to dozens of nodes, but storage administrators must be wary of the number of VMs and hosts they add to an HA cluster because too many can complicate load balancing.

Hosts in a virtual server cluster must have access to the same shared storage, and they must have identical network configurations. Domain Name System (DNS) naming is important too: all hosts must be able to resolve the other hosts' DNS names, and if DNS is not set up correctly, you won't be able to configure HA settings at all. HA cluster configuration is also critical; server clustering depends on three settings: host failures allowed, admission control, and VM options.
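The interaction between "host failures allowed" and admission control can be sketched numerically. This is a simplified model of the core idea only -- real implementations use slot sizes or percentage-based reservation policies, and the function name is hypothetical:

```python
def admission_check(host_capacities, vm_demands, host_failures_allowed):
    """Simplified HA admission control: only admit the workload if the
    cluster can still run every VM after the allowed number of host
    failures. To be conservative, assume the *largest* hosts fail, so
    only the smallest hosts' capacity counts as guaranteed."""
    surviving = sorted(host_capacities)[: len(host_capacities) - host_failures_allowed]
    return sum(vm_demands) <= sum(surviving)

# Usage: three 32 GB hosts running VMs that need 50 GB in total.
hosts = [32, 32, 32]
vms = [10, 20, 20]
ok_one_failure = admission_check(hosts, vms, host_failures_allowed=1)   # 50 <= 64
ok_two_failures = admission_check(hosts, vms, host_failures_allowed=2)  # 50 <= 32 fails
```

This also illustrates the warning above about oversizing clusters: every additional tolerated host failure reserves capacity that running VMs cannot use, which is one way too many VMs and hosts complicate load balancing.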