Always On Failover Cluster Instances in SQL Server leverage Windows Server Failover Clustering (WSFC) to provide local high availability, through redundancy, at the server-instance level. With a failover cluster, users experience minimal service disruption: whenever one of the cluster nodes fails, another node takes over through a process called failover. To get this smooth switching of nodes, you first need a general-purpose SQL failover cluster successfully installed and deployed.
This blog will guide you through a successful deployment of the failover cluster in three parts. Follow the steps below to install and configure a general-purpose, two-node SQL failover cluster.
Part – 1: Environment setup
Step 1: Prerequisites
- For this scenario, we will need three Windows Server machines (two cluster nodes and one storage server).
- On all three machines, install a supported OS: Windows Server 2012, 2012 R2, 2016, or 2019.
- Add all the VMs to the domain.
Note: The OS should be fully updated. Also, the system date and time of the domain controller and the nodes should be the same and synced.
- Add three NICs to each node server and two NICs to the storage (iSCSI) server.
- Below are the three machines we will use in our scenario, with three subnets and networks:
| Specifications | Machine 1 | Machine 2 | Machine 3 |
| --- | --- | --- | --- |
| Role | iSCSI shared storage | Failover cluster Node 1 | Failover cluster Node 2 |
| Domain network IP (server IP) | 10.0.102.132 | 10.0.102.130 | 10.0.102.131 |
| Storage network IP | 192.168.0.132 | 192.168.0.130 | 192.168.0.131 |
| Cluster network (heartbeat) IP | Not required | 18.104.22.168 | 22.214.171.124 |
- Create one service user (e.g., [email protected]) in AD with the right to create computer objects and read permissions (in AD, go to User > Security > Advanced > Add user and assign the permissions).
- Add that user to the local Administrators group on all servers, and verify that the Domain Admins group is present in the local Administrators group (it gets added automatically).
Account for administering the cluster: If you are creating a cluster or adding servers to it for the first time, log in to the domain with an account that has administrator rights and permissions on all servers in the cluster. The account does not need to be a domain admin account; it may also be a domain user account that is in the local Administrators group on each clustered server. However, if the account is not a domain admin account, the account (or a group of which the account is a member) must be given the Create Computer Objects and Read All Properties permissions in the organizational unit (OU) in which the cluster will reside.
Step 2: Adding the iSCSI Target Server role on Machine 1 (SRV-ST.contoso.com)
- Open iSCSI and select the option to add new volumes (virtual disks).
- Create two virtual disks:
a) 1 GB (for the quorum drive)
b) 10 GB (for shared storage)
- Add the storage-network IPs of both nodes (SRV-N1 and SRV-N2), i.e., 192.168.0.130 and 192.168.0.131, as allowed initiators on both virtual disks you have created.
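The target-server configuration above can also be scripted. Here is a hedged PowerShell sketch: the VHDX paths and the target name `SQLClusterTarget` are illustrative assumptions, and the iSCSI Target Server role must already be installed.

```powershell
# Run on the storage server (SRV-ST). Paths and target name are examples.
# Create the two virtual disks (1 GB for quorum, 10 GB for shared storage).
New-IscsiVirtualDisk -Path 'C:\iSCSIVirtualDisks\Quorum.vhdx'  -SizeBytes 1GB
New-IscsiVirtualDisk -Path 'C:\iSCSIVirtualDisks\Storage.vhdx' -SizeBytes 10GB

# Create a target that allows both nodes' storage-network IPs as initiators.
New-IscsiServerTarget -TargetName 'SQLClusterTarget' `
    -InitiatorIds 'IPAddress:192.168.0.130','IPAddress:192.168.0.131'

# Map both virtual disks to the target.
Add-IscsiVirtualDiskTargetMapping -TargetName 'SQLClusterTarget' -Path 'C:\iSCSIVirtualDisks\Quorum.vhdx'
Add-IscsiVirtualDiskTargetMapping -TargetName 'SQLClusterTarget' -Path 'C:\iSCSIVirtualDisks\Storage.vhdx'
```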
Step 3: After creating the iSCSI disks on the storage server, move on to Node 1
- On Machine 2, i.e., Node 1 (SRV-N1), go to Local Server > Tools > iSCSI Initiator. On the Discovery tab, click Discover Portal and add the IP address of the storage server, 192.168.0.132. Click Advanced and set Local adapter to "Microsoft iSCSI Initiator" and Initiator IP to the node's own storage-network IP (192.168.0.130), then apply.
- Now go to the Targets tab and bring both targets from the Inactive to the Connected state. Click the target > Connect > Advanced; set Local adapter to "Microsoft iSCSI Initiator", Initiator IP to the node's storage-network IP, and Target portal IP to the storage server's address in the 192.168.0.x range, then click OK.
- Repeat all the steps of step 3 on Node 2 (Machine 3, the SRV-N2 server).
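The initiator-side steps can likewise be sketched in PowerShell, run on each node in turn (the IPs below match the table in Part 1 for SRV-N1; swap in 192.168.0.131 on SRV-N2):

```powershell
# Make sure the iSCSI initiator service is running and starts automatically.
Start-Service MSiSCSI
Set-Service  MSiSCSI -StartupType Automatic

# Point the initiator at the storage server's storage-network IP.
New-IscsiTargetPortal -TargetPortalAddress 192.168.0.132

# Connect every discovered target over the storage network; -IsPersistent
# makes the connection survive reboots.
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true `
    -InitiatorPortalAddress 192.168.0.130 -TargetPortalAddress 192.168.0.132
```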
Part – 2: Configuration of the failover cluster
Step 1: Assigning the storage to Node 1
- On Machine 2, i.e., Node 1 (SRV-N1), open Disk Management. Right-click each of the two new disks and click Online.
- Right-click one of the disks, click Initialize Disk, then select both disks and choose the GPT partition style.
- Format the 1 GB disk (Disk 1) and assign it drive letter Q for the quorum.
Do not perform the above steps on Node 2 (SRV-N2). The disks must remain in the offline state on Node 2.
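As a sketch, the same disk preparation can be done from PowerShell on Node 1. The disk numbers 1 and 2 below are assumptions; check the output of `Get-Disk` on your system before running anything.

```powershell
# Run on SRV-N1 only. First verify which numbers the two new iSCSI disks got.
Get-Disk

# Bring both disks online, clear the read-only flag, and initialize as GPT
# (assuming the iSCSI disks are numbers 1 and 2).
1, 2 | ForEach-Object {
    Set-Disk -Number $_ -IsOffline $false
    Set-Disk -Number $_ -IsReadOnly $false
    Initialize-Disk -Number $_ -PartitionStyle GPT
}

# Format the 1 GB disk as Q: for the quorum.
New-Partition -DiskNumber 1 -UseMaximumSize -DriveLetter Q |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel 'Quorum'
```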
Step 2: Adding Failover Cluster role
- Add the features below on both nodes, Machine 2 and Machine 3:
a. .NET Framework 3.5 (includes 3.0)
b. Failover Clustering
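Both features can also be installed from an elevated PowerShell prompt on each node, for example:

```powershell
# Run on SRV-N1 and SRV-N2. NET-Framework-Core is the .NET Framework 3.5
# feature (on some builds it needs -Source pointing at installation media).
Install-WindowsFeature -Name NET-Framework-Core
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools
```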
Step 3: Configuring Failover Cluster
- Go to Local Server > Tools > Failover Cluster Manager. We must perform the steps below on only one of our nodes, i.e., SRV-N1.
- On the right-hand side, click Validate Configuration > Next > select the servers for the cluster (both the Node 1 and Node 2 servers) > run all tests. If all tests pass, move on to the next step and create the cluster.
- Click Create Cluster > add both server nodes > specify the cluster name and a free IP address from the domain IP range (this will create a DNS entry on the server with the cluster name and IP, e.g., [email protected]) > add all available storage to the cluster.
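Validation and cluster creation can equally be scripted. This is a sketch: the cluster name `SQLCLU` and the static address 10.0.102.135 are placeholders for the name and free domain-range IP you choose.

```powershell
# Run once, from either node. Validate both nodes first.
Test-Cluster -Node SRV-N1, SRV-N2

# Create the cluster with a name and a free IP from the domain range.
New-Cluster -Name SQLCLU -Node SRV-N1, SRV-N2 -StaticAddress 10.0.102.135

# Add all eligible shared disks to the cluster.
Get-ClusterAvailableDisk | Add-ClusterDisk
```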
Step 4: Assign the Disks on Node 1
- Right-click the cluster > More Actions > Configure Cluster Quorum Settings > select the 1 GB disk as the quorum disk witness.
- Go to Disks, right-click the 10 GB storage disk (not the quorum disk), and add it to Cluster Shared Volumes (this volume is used for the databases and SQL files).
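In PowerShell the same two assignments look like this; the "Cluster Disk N" names are assumptions, so confirm which name maps to which size first:

```powershell
# Inspect the clustered disks to see their names and sizes.
Get-ClusterResource | Where-Object ResourceType -eq 'Physical Disk'

# Use the 1 GB disk as the quorum disk witness.
Set-ClusterQuorum -DiskWitness 'Cluster Disk 1'

# Turn the 10 GB disk into a Cluster Shared Volume for the databases.
Add-ClusterSharedVolume -Name 'Cluster Disk 2'
```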
Part – 3: Installation & configuration of the SQL Server failover cluster
Step 1: Install the SQL Server failover cluster on Node 1, i.e., SRV-N1
- Run the SQL Server setup > New SQL Server failover cluster installation.
- Select the SQL Server features you want to install, such as the Database Engine service. In Instance Configuration, give the SQL cluster instance a network name, e.g., [email protected]
- Then specify the shared storage > specify the IP address > specify the service users and their credentials > Next, and finish the setup.
Step 2: Run the SQL Server setup on Node 2, i.e., SRV-N2
- Select the option "Add node to a SQL Server failover cluster" > keep everything at the defaults and click Next. Verify the instance IP and SQL cluster name that were provided in step 1, then provide the password for the service user.
- Open Failover Cluster Manager on either node > go to Roles > double-click the SQL Server role > select both nodes (servers) as owners and set the priority to High.
We are now done with the setup. To test it, shut down one of the nodes and check whether the SQL instance keeps working. Follow these steps: open Failover Cluster Manager on either node and see which node is currently in use, e.g., Node 1. Then open Failover Cluster Manager on Node 2 and shut down Node 1. In the status, you will see that the role has switched from Node 1 to Node 2.
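You can also drive the failover test from PowerShell instead of shutting a node down. The role name below is an assumption for a default instance; list the real name with `Get-ClusterGroup` first.

```powershell
# See which node currently owns the SQL Server role.
Get-ClusterGroup

# Move the SQL Server role to the other node and watch the owner change.
Move-ClusterGroup -Name 'SQL Server (MSSQLSERVER)' -Node SRV-N2
Get-ClusterGroup -Name 'SQL Server (MSSQLSERVER)' | Select-Object Name, OwnerNode, State
```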
In case you face any issues while setting up this environment, viewing the status updates, or any other such concerns, contact us anytime. We are here to make technology work!