{"id":13,"date":"2012-03-03T09:39:26","date_gmt":"2012-03-03T08:39:26","guid":{"rendered":"http:\/\/www.pmsapp.org\/?page_id=13"},"modified":"2012-03-24T05:55:03","modified_gmt":"2012-03-24T04:55:03","slug":"quick-guide","status":"publish","type":"page","link":"https:\/\/mail-e.dk\/pmsapp.org\/?page_id=13","title":{"rendered":"Quick Guide"},"content":{"rendered":"<h3>Index<\/h3>\n<ul>\n<li><a href=\"#intro\">Introduction<\/a><\/li>\n<li><a href=\"#firewall\">Disable firewall<\/a><\/li>\n<li><a href=\"#selinux\">Disable SELinux<\/a><\/li>\n<li><a href=\"#resolv\">\/etc\/resolv.conf<\/a><\/li>\n<li><a href=\"#network\">\/etc\/sysconfig\/network<\/a><\/li>\n<li><a href=\"#eth0\">\/etc\/sysconfig\/network-scripts\/ifcfg-eth0<\/a><\/li>\n<li><a href=\"#eth1\">\/etc\/sysconfig\/network-scripts\/ifcfg-eth1<\/a><\/li>\n<li><a href=\"#eth2\">\/etc\/sysconfig\/network-scripts\/ifcfg-eth2<\/a><\/li>\n<li><a href=\"#hosts\">\/etc\/hosts<\/a><\/li>\n<li><a href=\"#software\">Installing software<\/a><\/li>\n<li><a href=\"#targetconf\">iSCSI target configuration<\/a><\/li>\n<li><a href=\"#initconf\">iSCSI initiator configuration<\/a><\/li>\n<li><a href=\"#raidconf\">RAID configuration<\/a><\/li>\n<li><a href=\"#restarttgt\">Restart iSCSI target<\/a><\/li>\n<li><a href=\"#filesystem\">Filesystem<\/a><\/li>\n<li><a href=\"#clustsyssvc\">System cluster services<\/a><\/li>\n<li><a href=\"#fencedevice\">Fence device<\/a><\/li>\n<li><a href=\"#swraidscript\">Software RAID script<\/a><\/li>\n<li><a href=\"#finalclustconf\">Final cluster configuration<\/a><\/li>\n<\/ul>\n<h3 id=\"intro\">Introduction<\/h3>\n<p>This is a cheat sheet that can be used to quickly set up a set of appliances. 
Note that it is accurate only for a 3-node cluster.<br \/>\nIf in doubt, take a look at the complete\u00a0<a title=\"Install guide\" href=\"http:\/\/www.pmsapp.org\/pmsapp-install-guide\/\">pmsApp<\/a>\u00a0guide.<\/p>\n<p>The first part of this cheat sheet must be run on all nodes in the cluster.<br \/>\n<span style=\"color: red;\"><strong>Be sure to adjust the names and IP addresses to match your environment.<\/strong><\/span><br \/>\nPlease leave a comment.<\/p>\n<h3 id=\"firewall\">Disable firewall<\/h3>\n<pre>chkconfig iptables off\r\nchkconfig ip6tables off<\/pre>\n<h3 id=\"selinux\">Disable SELinux<\/h3>\n<pre>echo \"SELINUX=disabled\"      &gt; \/etc\/selinux\/config;\\\r\necho \"SELINUXTYPE=targeted\" &gt;&gt; \/etc\/selinux\/config<\/pre>\n<h3 id=\"resolv\">\/etc\/resolv.conf<\/h3>\n<pre>echo \"domain pmsapp.org\"          &gt; \/etc\/resolv.conf;\\\r\necho \"search pmsapp.org\"         &gt;&gt; \/etc\/resolv.conf;\\\r\necho \"nameserver 192.168.0.11\"   &gt;&gt; \/etc\/resolv.conf;\\\r\necho \"nameserver 208.67.222.222\" &gt;&gt; \/etc\/resolv.conf;\\\r\necho \"nameserver 208.67.220.220\" &gt;&gt; \/etc\/resolv.conf<\/pre>\n<h3 id=\"network\">\/etc\/sysconfig\/network<\/h3>\n<pre>echo \"NETWORKING=yes\"               &gt; \/etc\/sysconfig\/network;\\\r\necho \"HOSTNAME=pmsapp1.pmsapp.org\" &gt;&gt; \/etc\/sysconfig\/network<\/pre>\n<h3 id=\"eth0\">\/etc\/sysconfig\/network-scripts\/ifcfg-eth0<\/h3>\n<table border=\"0\">\n<tbody>\n<tr>\n<td>\n<pre>echo \"# First NIC, used for NFS and management traffic\"   &gt; \/etc\/sysconfig\/network-scripts\/ifcfg-eth0;\\\r\necho \"# should be reachable by the virtualization hosts\" &gt;&gt; \/etc\/sysconfig\/network-scripts\/ifcfg-eth0;\\\r\necho \"DEVICE=eth0\"                                       &gt;&gt; \/etc\/sysconfig\/network-scripts\/ifcfg-eth0;\\\r\necho \"BOOTPROTO=static\"                                  &gt;&gt; \/etc\/sysconfig\/network-scripts\/ifcfg-eth0;\\\r\necho \"ONBOOT=yes\"                                        
&gt;&gt; \/etc\/sysconfig\/network-scripts\/ifcfg-eth0;\\\r\necho \"TYPE=Ethernet\"                                     &gt;&gt; \/etc\/sysconfig\/network-scripts\/ifcfg-eth0;\\\r\necho \"IPADDR=192.168.0.21\"                               &gt;&gt; \/etc\/sysconfig\/network-scripts\/ifcfg-eth0;\\\r\necho \"NETMASK=255.255.255.0\"                             &gt;&gt; \/etc\/sysconfig\/network-scripts\/ifcfg-eth0;\\\r\necho \"BROADCAST=192.168.0.255\"                           &gt;&gt; \/etc\/sysconfig\/network-scripts\/ifcfg-eth0;\\\r\necho \"GATEWAY=192.168.0.1\"                               &gt;&gt; \/etc\/sysconfig\/network-scripts\/ifcfg-eth0;\\\r\necho \"DNS1=192.168.0.11\"                                 &gt;&gt; \/etc\/sysconfig\/network-scripts\/ifcfg-eth0;\\\r\necho \"DNS2=208.67.222.222\"                               &gt;&gt; \/etc\/sysconfig\/network-scripts\/ifcfg-eth0;\\\r\necho \"DNS3=208.67.220.220\"                               &gt;&gt; \/etc\/sysconfig\/network-scripts\/ifcfg-eth0;\\\r\necho \"IPV6INIT=no\"                                       &gt;&gt; \/etc\/sysconfig\/network-scripts\/ifcfg-eth0;\\\r\necho \"USERCTL=no\"                                        &gt;&gt; \/etc\/sysconfig\/network-scripts\/ifcfg-eth0<\/pre>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"eth1\">\/etc\/sysconfig\/network-scripts\/ifcfg-eth1<\/h3>\n<table border=\"0\">\n<tbody>\n<tr>\n<td>\n<pre>echo \"# 2nd NIC, used for iSCSI traffic.\"               &gt; \/etc\/sysconfig\/network-scripts\/ifcfg-eth1;\\\r\necho \"# No DNS or routes are necessary, but all nodes\" &gt;&gt; \/etc\/sysconfig\/network-scripts\/ifcfg-eth1;\\\r\necho \"# should be able to communicate on the subnet\"   &gt;&gt; \/etc\/sysconfig\/network-scripts\/ifcfg-eth1;\\\r\necho \"DEVICE=eth1\"                                     &gt;&gt; \/etc\/sysconfig\/network-scripts\/ifcfg-eth1;\\\r\necho \"BOOTPROTO=static\"                                &gt;&gt; 
\/etc\/sysconfig\/network-scripts\/ifcfg-eth1;\\\r\necho \"ONBOOT=yes\"                                      &gt;&gt; \/etc\/sysconfig\/network-scripts\/ifcfg-eth1;\\\r\necho \"TYPE=Ethernet\"                                   &gt;&gt; \/etc\/sysconfig\/network-scripts\/ifcfg-eth1;\\\r\necho \"IPADDR=172.16.0.21\"                              &gt;&gt; \/etc\/sysconfig\/network-scripts\/ifcfg-eth1;\\\r\necho \"NETMASK=255.255.255.0\"                           &gt;&gt; \/etc\/sysconfig\/network-scripts\/ifcfg-eth1;\\\r\necho \"BROADCAST=172.16.0.255\"                          &gt;&gt; \/etc\/sysconfig\/network-scripts\/ifcfg-eth1;\\\r\necho \"IPV6INIT=no\"                                     &gt;&gt; \/etc\/sysconfig\/network-scripts\/ifcfg-eth1;\\\r\necho \"USERCTL=no\"                                      &gt;&gt; \/etc\/sysconfig\/network-scripts\/ifcfg-eth1<\/pre>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"eth2\">\/etc\/sysconfig\/network-scripts\/ifcfg-eth2<\/h3>\n<table border=\"0\">\n<tbody>\n<tr>\n<td>\n<pre>echo \"# 3rd NIC, used for heartbeat and cluster traffic.\" &gt; \/etc\/sysconfig\/network-scripts\/ifcfg-eth2;\\\r\necho \"# No DNS or routes are necessary, but all nodes\"   &gt;&gt; \/etc\/sysconfig\/network-scripts\/ifcfg-eth2;\\\r\necho \"# should be able to communicate on the subnet\"     &gt;&gt; \/etc\/sysconfig\/network-scripts\/ifcfg-eth2;\\\r\necho \"DEVICE=eth2\"                                       &gt;&gt; \/etc\/sysconfig\/network-scripts\/ifcfg-eth2;\\\r\necho \"BOOTPROTO=static\"                                  &gt;&gt; \/etc\/sysconfig\/network-scripts\/ifcfg-eth2;\\\r\necho \"ONBOOT=yes\"                                        &gt;&gt; \/etc\/sysconfig\/network-scripts\/ifcfg-eth2;\\\r\necho \"TYPE=Ethernet\"                                     &gt;&gt; \/etc\/sysconfig\/network-scripts\/ifcfg-eth2;\\\r\necho \"IPADDR=10.0.0.21\"                                  &gt;&gt; \/etc\/sysconfig\/network-scripts\/ifcfg-eth2;\\\r\necho 
\"NETMASK=255.255.255.0\"                             &gt;&gt; \/etc\/sysconfig\/network-scripts\/ifcfg-eth2;\\\r\necho \"BROADCAST=10.0.0.255\"                              &gt;&gt; \/etc\/sysconfig\/network-scripts\/ifcfg-eth2;\\\r\necho \"IPV6INIT=no\"                                       &gt;&gt; \/etc\/sysconfig\/network-scripts\/ifcfg-eth2;\\\r\necho \"USERCTL=no\"                                        &gt;&gt; \/etc\/sysconfig\/network-scripts\/ifcfg-eth2<\/pre>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"hosts\">\/etc\/hosts<\/h3>\n<pre>echo \"127.0.0.1 localhost localhost.localdomain\" &gt; \/etc\/hosts;\\\r\necho \"10.0.0.21 pmsapp1 pmsapp1.pmsapp.org\"     &gt;&gt; \/etc\/hosts;\\\r\necho \"10.0.0.22 pmsapp2 pmsapp2.pmsapp.org\"     &gt;&gt; \/etc\/hosts;\\\r\necho \"10.0.0.23 pmsapp3 pmsapp3.pmsapp.org\"     &gt;&gt; \/etc\/hosts<\/pre>\n<h3 id=\"software\">Installing software<\/h3>\n<table border=\"0\">\n<tbody>\n<tr>\n<td>\n<pre>yum -y upgrade\r\nyum -y groupinstall \"High Availability\"\r\nyum -y install scsi-target-utils iscsi-initiator-utils nfs-utils mdadm\r\nchkconfig iscsi off\r\nchkconfig nfs off\r\nchkconfig tgtd off<\/pre>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"targetconf\">iSCSI target configuration<\/h3>\n<table border=\"0\">\n<tbody>\n<tr>\n<td>\n<pre>echo \"&lt;target iqn.2012-03.org.pmsapp:pmsapp1.disk&gt;\" &gt; \/etc\/tgt\/targets.conf;\\\r\necho \"    backing-store \/dev\/vdb\"                  &gt;&gt; \/etc\/tgt\/targets.conf;\\\r\necho \"&lt;\/target&gt;\"                                   &gt;&gt; \/etc\/tgt\/targets.conf\r\n\r\nshutdown -h now<\/pre>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p><center>Add the second hard drive and restart the nodes<\/center><\/p>\n<pre>service tgtd start<\/pre>\n<p>&nbsp;<\/p>\n<h1><span style=\"color: red;\">&#8212; The above must be done on all nodes &#8212;<\/span><\/h1>\n<p>&nbsp;<\/p>\n<h3 id=\"initconf\">iSCSI initiator configuration<\/h3>\n<table 
border=\"0\">\n<tbody>\n<tr>\n<td>\n<pre>echo \"node.conn[0].timeo.login_timeout = 2\"     &gt;&gt; \/etc\/iscsi\/iscsid.conf;\\\r\necho \"node.session.initial_login_retry_max = 1\" &gt;&gt; \/etc\/iscsi\/iscsid.conf\r\nscp \/etc\/iscsi\/iscsid.conf pmsapp2:\/etc\/iscsi\r\nscp \/etc\/iscsi\/iscsid.conf pmsapp3:\/etc\/iscsi\r\niscsiadm -m discovery -t st -p 172.16.0.21\r\niscsiadm -m discovery -t st -p 172.16.0.22\r\niscsiadm -m discovery -t st -p 172.16.0.23\r\niscsiadm -m node -T iqn.2012-03.org.pmsapp:pmsapp1.disk -p 172.16.0.21 --login\r\niscsiadm -m node -T iqn.2012-03.org.pmsapp:pmsapp2.disk -p 172.16.0.22 --login\r\niscsiadm -m node -T iqn.2012-03.org.pmsapp:pmsapp3.disk -p 172.16.0.23 --login\r\n\r\nfdisk -l 2&gt;\/dev\/null | grep Disk | grep bytes<\/pre>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"raidconf\">RAID configuration<\/h3>\n<table>\n<tbody>\n<tr>\n<td>\n<pre>mdadm --create \/dev\/md0 --bitmap=internal --level=5 --raid-devices=3 \/dev\/sda \/dev\/sdb \/dev\/sdc\r\n\r\nwhile [ $? 
-eq 0 ]; do cat \/proc\/mdstat; sleep 1; grep finish \/proc\/mdstat &amp;&gt;\/dev\/null; done\r\n\r\nmdadm --examine --scan &gt; \/etc\/mdadm.conf\r\ncat \/etc\/mdadm.conf\r\n\r\nscp \/etc\/mdadm.conf pmsapp2:\/etc\r\nscp \/etc\/mdadm.conf pmsapp3:\/etc\r\n\r\nmdadm --stop \/dev\/md0\r\nservice iscsi stop\r\n\r\n<span style=\"color: red;\"># ON NODE 2<\/span>\r\niscsiadm -m discovery -t st -p 172.16.0.21\r\niscsiadm -m discovery -t st -p 172.16.0.22\r\niscsiadm -m discovery -t st -p 172.16.0.23\r\niscsiadm -m node -T iqn.2012-03.org.pmsapp:pmsapp1.disk -p 172.16.0.21 --login\r\niscsiadm -m node -T iqn.2012-03.org.pmsapp:pmsapp2.disk -p 172.16.0.22 --login\r\niscsiadm -m node -T iqn.2012-03.org.pmsapp:pmsapp3.disk -p 172.16.0.23 --login\r\ncat \/proc\/mdstat\r\nmdadm --stop \/dev\/md0\r\nservice iscsi stop\r\n\r\n<span style=\"color: red;\"># ON NODE 3<\/span>\r\niscsiadm -m discovery -t st -p 172.16.0.21\r\niscsiadm -m discovery -t st -p 172.16.0.22\r\niscsiadm -m discovery -t st -p 172.16.0.23\r\niscsiadm -m node -T iqn.2012-03.org.pmsapp:pmsapp1.disk -p 172.16.0.21 --login\r\niscsiadm -m node -T iqn.2012-03.org.pmsapp:pmsapp2.disk -p 172.16.0.22 --login\r\niscsiadm -m node -T iqn.2012-03.org.pmsapp:pmsapp3.disk -p 172.16.0.23 --login\r\ncat \/proc\/mdstat\r\nmdadm --stop \/dev\/md0\r\nservice iscsi stop<\/pre>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"restarttgt\">Restart iSCSI target<\/h3>\n<table border=\"0\">\n<tbody>\n<tr>\n<td>\n<pre>echo -e \"\\x23\\x21\/bin\/bash\"                  &gt; \/etc\/init.d\/iscsitarget;\\\r\necho \"case \\\"\\$1\\\" in\"                      &gt;&gt; \/etc\/init.d\/iscsitarget;\\\r\necho \"    start)\"                           &gt;&gt; \/etc\/init.d\/iscsitarget;\\\r\necho \"        mdadm --stop \/dev\/md0\"        &gt;&gt; \/etc\/init.d\/iscsitarget;\\\r\necho \"        \/etc\/init.d\/tgtd start\"       &gt;&gt; \/etc\/init.d\/iscsitarget;\\\r\necho \"        ;;\"                           &gt;&gt; 
\/etc\/init.d\/iscsitarget;\\\r\necho \"    *)\"                               &gt;&gt; \/etc\/init.d\/iscsitarget;\\\r\necho \"        echo $\\\"Usage: \\$0 {start}\\\"\" &gt;&gt; \/etc\/init.d\/iscsitarget;\\\r\necho \"        exit 2\"                       &gt;&gt; \/etc\/init.d\/iscsitarget;\\\r\necho \"esac\"                                 &gt;&gt; \/etc\/init.d\/iscsitarget;\\\r\necho \"exit \\$?\"                             &gt;&gt; \/etc\/init.d\/iscsitarget\r\n\r\nchmod +x \/etc\/init.d\/iscsitarget\r\nscp \/etc\/init.d\/iscsitarget pmsapp2:\/etc\/init.d\r\nscp \/etc\/init.d\/iscsitarget pmsapp3:\/etc\/init.d\r\nln -s \/etc\/init.d\/iscsitarget \/etc\/rc3.d\/S16iscsitarget\r\nssh pmsapp2 ln -s \/etc\/init.d\/iscsitarget \/etc\/rc3.d\/S16iscsitarget\r\nssh pmsapp3 ln -s \/etc\/init.d\/iscsitarget \/etc\/rc3.d\/S16iscsitarget<\/pre>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"filesystem\">Filesystem<\/h3>\n<pre>service iscsi start\r\ncat \/proc\/mdstat\r\nmkfs.ext4 \/dev\/md0\r\n\r\nmdadm --stop \/dev\/md0\r\nservice iscsi stop\r\n\r\nmkdir \/sharedstorage\r\nssh pmsapp2 mkdir \/sharedstorage\r\nssh pmsapp3 mkdir \/sharedstorage<\/pre>\n<h3 id=\"clustsyssvc\">System cluster services<\/h3>\n<table border=\"0\">\n<tbody>\n<tr>\n<td>\n<pre>chkconfig cman on; chkconfig rgmanager on; chkconfig modclusterd on; chkconfig ricci on; passwd ricci\r\nssh pmsapp2 \"chkconfig cman on; chkconfig rgmanager on; chkconfig modclusterd on; chkconfig ricci on; passwd ricci\"\r\nssh pmsapp3 \"chkconfig cman on; chkconfig rgmanager on; chkconfig modclusterd on; chkconfig ricci on; passwd ricci\"<\/pre>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"fencedevice\">Fence device<\/h3>\n<pre>echo -e \"\\x23\\x21\/bin\/bash\" &gt; \/usr\/sbin\/fence_disable\r\nchmod +x \/usr\/sbin\/fence_disable\r\nscp \/usr\/sbin\/fence_disable pmsapp2:\/usr\/sbin\r\nscp \/usr\/sbin\/fence_disable pmsapp3:\/usr\/sbin<\/pre>\n<h3 id=\"swraidscript\">Software RAID script<\/h3>\n<table 
border=\"0\">\n<tbody>\n<tr>\n<td>\n<pre>echo -e \"\\x23\\x21\/bin\/bash\"                                                                     &gt; \/etc\/init.d\/swraid;\\\r\necho \"case \\\"\\$1\\\" in\"                                                                         &gt;&gt; \/etc\/init.d\/swraid;\\\r\necho \"    start)\"                                                                              &gt;&gt; \/etc\/init.d\/swraid;\\\r\necho \"        # Start the iscsi client; it will log in to the already configured targets.\"     &gt;&gt; \/etc\/init.d\/swraid;\\\r\necho \"        service iscsi start\"                                                             &gt;&gt; \/etc\/init.d\/swraid;\\\r\necho                                                                                           &gt;&gt; \/etc\/init.d\/swraid;\\\r\necho \"        # Wait for 1 second to allow the software RAID driver to discover the RAID set.\" &gt;&gt; \/etc\/init.d\/swraid;\\\r\necho \"        sleep 1\"                                                                         &gt;&gt; \/etc\/init.d\/swraid;\\\r\necho                                                                                           &gt;&gt; \/etc\/init.d\/swraid;\\\r\necho \"        # Search \/proc\/mdstat for \\\": active\\\"\"                                          &gt;&gt; \/etc\/init.d\/swraid;\\\r\necho \"        grep \\\": active\\\" \/proc\/mdstat &amp;&gt;\/dev\/null\"                                      &gt;&gt; \/etc\/init.d\/swraid;\\\r\necho                                                                                           &gt;&gt; \/etc\/init.d\/swraid;\\\r\necho \"        # If \\\": active\\\" is not found, it means that the array is not started.\"         &gt;&gt; \/etc\/init.d\/swraid;\\\r\necho \"        if [ \\$? 
-ne 0 ]; then\"                                                          &gt;&gt; \/etc\/init.d\/swraid;\\\r\necho \"            # Try to start the array\"                                                    &gt;&gt; \/etc\/init.d\/swraid;\\\r\necho \"            mdadm --run \/dev\/md0\"                                                        &gt;&gt; \/etc\/init.d\/swraid;\\\r\necho \"        fi\"                                                                              &gt;&gt; \/etc\/init.d\/swraid;\\\r\necho                                                                                           &gt;&gt; \/etc\/init.d\/swraid;\\\r\necho \"        # Check \/proc\/mdstat again to see if the array is running.\"                      &gt;&gt; \/etc\/init.d\/swraid;\\\r\necho \"        grep \\\": active\\\" \/proc\/mdstat &amp;&gt;\/dev\/null\"                                      &gt;&gt; \/etc\/init.d\/swraid;\\\r\necho                                                                                           &gt;&gt; \/etc\/init.d\/swraid;\\\r\necho \"        # If \\\": active\\\" is not found, it means that the array is not started\"          &gt;&gt; \/etc\/init.d\/swraid;\\\r\necho \"        if [ \\$? 
-ne 0 ]; then\"                                                          &gt;&gt; \/etc\/init.d\/swraid;\\\r\necho \"            # Report that the array is not started and return a non-zero exit code\"     &gt;&gt; \/etc\/init.d\/swraid;\\\r\necho \"            echo \\\"Array \/dev\/md0 is not started\\\"\"                                      &gt;&gt; \/etc\/init.d\/swraid;\\\r\necho \"            exit 1\"                                                                      &gt;&gt; \/etc\/init.d\/swraid;\\\r\necho \"        fi\"                                                                              &gt;&gt; \/etc\/init.d\/swraid;\\\r\necho \"        # Report that the array is started and return exit code 0.\"                      &gt;&gt; \/etc\/init.d\/swraid;\\\r\necho \"        echo \\\"Array \/dev\/md0 is started\\\"\"                                              &gt;&gt; \/etc\/init.d\/swraid;\\\r\necho \"        exit 0\"                                                                          &gt;&gt; \/etc\/init.d\/swraid;\\\r\necho \"        ;;\"                                                                              &gt;&gt; \/etc\/init.d\/swraid;\\\r\necho \"    stop)\"                                                                               &gt;&gt; \/etc\/init.d\/swraid;\\\r\necho \"        # Stop the RAID array\"                                                           &gt;&gt; \/etc\/init.d\/swraid;\\\r\necho \"        mdadm --stop \/dev\/md0\"                                                           &gt;&gt; \/etc\/init.d\/swraid;\\\r\necho \"        # Stop the iscsi service and exit without error\"                                 &gt;&gt; \/etc\/init.d\/swraid;\\\r\necho \"        service iscsi stop\"                                                              &gt;&gt; \/etc\/init.d\/swraid;\\\r\necho \"        exit 0\"                                                                          &gt;&gt; 
\/etc\/init.d\/swraid;\\\r\necho \"        ;;\"                                                                              &gt;&gt; \/etc\/init.d\/swraid;\\\r\necho \"    status)\"                                                                             &gt;&gt; \/etc\/init.d\/swraid;\\\r\necho \"        # Check \/proc\/mdstat for \\\": active\\\"\"                                           &gt;&gt; \/etc\/init.d\/swraid;\\\r\necho \"        grep \\\": active\\\" \/proc\/mdstat &amp;&gt;\/dev\/null\"                                      &gt;&gt; \/etc\/init.d\/swraid;\\\r\necho \"        # If \\\": active\\\" is not found, the array is not running\"                        &gt;&gt; \/etc\/init.d\/swraid;\\\r\necho \"        if [ \\$? -ne 0 ]; then\"                                                          &gt;&gt; \/etc\/init.d\/swraid;\\\r\necho \"            # Tell that the array is not running and exit with a non-zero exit code.\"    &gt;&gt; \/etc\/init.d\/swraid;\\\r\necho \"            echo \\\"Array \/dev\/md0 is not running\\\"\"                                      &gt;&gt; \/etc\/init.d\/swraid;\\\r\necho \"            exit 1\"                                                                      &gt;&gt; \/etc\/init.d\/swraid;\\\r\necho \"        fi\"                                                                              &gt;&gt; \/etc\/init.d\/swraid;\\\r\necho \"        # Tell that the array is running and exit with 0 as exit code.\"                  &gt;&gt; \/etc\/init.d\/swraid;\\\r\necho \"        echo \\\"Array \/dev\/md0 is running\\\"\"                                              &gt;&gt; \/etc\/init.d\/swraid;\\\r\necho \"        exit 0\"                                                                          &gt;&gt; \/etc\/init.d\/swraid;\\\r\necho \"        ;;\"                                                                              &gt;&gt; \/etc\/init.d\/swraid;\\\r\necho \"    *)\"                                           
                                       &gt;&gt; \/etc\/init.d\/swraid;\\\r\necho \"        # Tell how to use the script and exit with a non-zero exit code.\"                &gt;&gt; \/etc\/init.d\/swraid;\\\r\necho \"        echo $\\\"Usage: \\$0 { start | stop | status }\\\"\"                                  &gt;&gt; \/etc\/init.d\/swraid;\\\r\necho \"        exit 2\"                                                                          &gt;&gt; \/etc\/init.d\/swraid;\\\r\necho \"        ;;\"                                                                              &gt;&gt; \/etc\/init.d\/swraid;\\\r\necho \"esac\"                                                                                    &gt;&gt; \/etc\/init.d\/swraid\r\n\r\nchmod +x \/etc\/init.d\/swraid\r\nscp \/etc\/init.d\/swraid pmsapp2:\/etc\/init.d\r\nscp \/etc\/init.d\/swraid pmsapp3:\/etc\/init.d<\/pre>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3 id=\"finalclustconf\">Final cluster configuration<\/h3>\n<table border=\"0\">\n<tbody>\n<tr>\n<td>\n<pre>echo \"&lt;?xml version=\\\"1.0\\\"?&gt;\"                                                                                           &gt; \/etc\/cluster\/cluster.conf;\\\r\necho \"&lt;cluster config_version=\\\"2\\\" name=\\\"pmsappcluster\\\"&gt;\"                                                            &gt;&gt; \/etc\/cluster\/cluster.conf;\\\r\necho \"    &lt;clusternodes&gt;\"                                                                                               &gt;&gt; \/etc\/cluster\/cluster.conf;\\\r\necho \"        &lt;clusternode name=\\\"pmsapp1\\\" nodeid=\\\"1\\\"&gt;\"                                                              &gt;&gt; \/etc\/cluster\/cluster.conf;\\\r\necho \"            &lt;fence&gt;\"                                                                                              &gt;&gt; \/etc\/cluster\/cluster.conf;\\\r\necho \"                &lt;method name=\\\"fence_off\\\"&gt;\"                     
                                                 &gt;&gt; \/etc\/cluster\/cluster.conf;\\\r\necho \"                    &lt;device name=\\\"no_fence\\\"\/&gt;\"                                                                  &gt;&gt; \/etc\/cluster\/cluster.conf;\\\r\necho \"                &lt;\/method&gt;\"                                                                                        &gt;&gt; \/etc\/cluster\/cluster.conf;\\\r\necho \"            &lt;\/fence&gt;\"                                                                                             &gt;&gt; \/etc\/cluster\/cluster.conf;\\\r\necho \"        &lt;\/clusternode&gt;\"                                                                                           &gt;&gt; \/etc\/cluster\/cluster.conf;\\\r\necho \"        &lt;clusternode name=\\\"pmsapp2\\\" nodeid=\\\"2\\\"&gt;\"                                                              &gt;&gt; \/etc\/cluster\/cluster.conf;\\\r\necho \"            &lt;fence&gt;\"                                                                                              &gt;&gt; \/etc\/cluster\/cluster.conf;\\\r\necho \"                &lt;method name=\\\"fence_off\\\"&gt;\"                                                                      &gt;&gt; \/etc\/cluster\/cluster.conf;\\\r\necho \"                    &lt;device name=\\\"no_fence\\\"\/&gt;\"                                                                  &gt;&gt; \/etc\/cluster\/cluster.conf;\\\r\necho \"                &lt;\/method&gt;\"                                                                                        &gt;&gt; \/etc\/cluster\/cluster.conf;\\\r\necho \"            &lt;\/fence&gt;\"                                                                                             &gt;&gt; \/etc\/cluster\/cluster.conf;\\\r\necho \"        &lt;\/clusternode&gt;\"                                                                                           &gt;&gt; 
\/etc\/cluster\/cluster.conf;\\\r\necho \"        &lt;clusternode name=\\\"pmsapp3\\\" nodeid=\\\"3\\\"&gt;\"                                                              &gt;&gt; \/etc\/cluster\/cluster.conf;\\\r\necho \"            &lt;fence&gt;\"                                                                                              &gt;&gt; \/etc\/cluster\/cluster.conf;\\\r\necho \"                &lt;method name=\\\"fence_off\\\"&gt;\"                                                                      &gt;&gt; \/etc\/cluster\/cluster.conf;\\\r\necho \"                    &lt;device name=\\\"no_fence\\\"\/&gt;\"                                                                  &gt;&gt; \/etc\/cluster\/cluster.conf;\\\r\necho \"                &lt;\/method&gt;\"                                                                                        &gt;&gt; \/etc\/cluster\/cluster.conf;\\\r\necho \"            &lt;\/fence&gt;\"                                                                                             &gt;&gt; \/etc\/cluster\/cluster.conf;\\\r\necho \"        &lt;\/clusternode&gt;\"                                                                                           &gt;&gt; \/etc\/cluster\/cluster.conf;\\\r\necho \"    &lt;\/clusternodes&gt;\"                                                                                              &gt;&gt; \/etc\/cluster\/cluster.conf;\\\r\necho \"    &lt;fencedevices&gt;\"                                                                                               &gt;&gt; \/etc\/cluster\/cluster.conf;\\\r\necho \"        &lt;fencedevice agent=\\\"fence_disable\\\" name=\\\"no_fence\\\"\/&gt;\"                                                 &gt;&gt; \/etc\/cluster\/cluster.conf;\\\r\necho \"    &lt;\/fencedevices&gt;\"                                                                                              &gt;&gt; \/etc\/cluster\/cluster.conf;\\\r\necho \"    &lt;rm&gt;\"                   
                                                                                      &gt;&gt; \/etc\/cluster\/cluster.conf;\\\r\necho \"        &lt;service autostart=\\\"1\\\" exclusive=\\\"0\\\" name=\\\"clusvc\\\" recovery=\\\"relocate\\\"&gt;\"                          &gt;&gt; \/etc\/cluster\/cluster.conf;\\\r\necho \"            &lt;script file=\\\"\/etc\/init.d\/swraid\\\" name=\\\"swraid\\\"&gt;\"                                                 &gt;&gt; \/etc\/cluster\/cluster.conf;\\\r\necho \"                &lt;fs device=\\\"\/dev\/md0\\\" fstype=\\\"ext4\\\" mountpoint=\\\"\/sharedstorage\\\" name=\\\"sharedvol\\\"&gt;\"        &gt;&gt; \/etc\/cluster\/cluster.conf;\\\r\necho \"                    &lt;nfsexport name=\\\"sharednfs\\\"&gt;\"                                                               &gt;&gt; \/etc\/cluster\/cluster.conf;\\\r\necho \"                        &lt;nfsclient name=\\\"nfsclients\\\" options=\\\"rw,no_root_squash\\\" target=\\\"192.168.0.0\/24\\\"\/&gt;\" &gt;&gt; \/etc\/cluster\/cluster.conf;\\\r\necho \"                    &lt;\/nfsexport&gt;\"                                                                                 &gt;&gt; \/etc\/cluster\/cluster.conf;\\\r\necho \"                &lt;\/fs&gt;\"                                                                                            &gt;&gt; \/etc\/cluster\/cluster.conf;\\\r\necho \"            &lt;\/script&gt;\"                                                                                            &gt;&gt; \/etc\/cluster\/cluster.conf;\\\r\necho \"            &lt;ip address=\\\"192.168.0.20\\\" monitor_link=\\\"on\\\"\/&gt;\"                                                   &gt;&gt; \/etc\/cluster\/cluster.conf;\\\r\necho \"        &lt;\/service&gt;\"                                                                                               &gt;&gt; \/etc\/cluster\/cluster.conf;\\\r\necho \"    &lt;\/rm&gt;\"                                                
                                                         &gt;&gt; \/etc\/cluster\/cluster.conf;\\\r\necho \"&lt;\/cluster&gt;\"                                                                                                       &gt;&gt; \/etc\/cluster\/cluster.conf\r\n\r\nccs_config_validate\r\n\r\nscp \/etc\/cluster\/cluster.conf pmsapp2:\/etc\/cluster\r\nscp \/etc\/cluster\/cluster.conf pmsapp3:\/etc\/cluster\r\n\r\nssh pmsapp2 reboot\r\nssh pmsapp3 reboot\r\nreboot\r\n\r\nclustat\r\n\r\nshowmount -e 192.168.0.20<\/pre>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>","protected":false},"excerpt":{"rendered":"<p>Index Introduction Disable firewall Disable SElinux \/etc\/resolv.conf \/etc\/sysconfig\/network \/etc\/sysconfig\/network-scripts\/ifcfg-eth0 \/etc\/sysconfig\/network-scripts\/ifcfg-eth1 \/etc\/sysconfig\/network-scripts\/ifcfg-eth2 \/etc\/hosts Installing 
software iSCSI target configuration iSCSI initiator configuration RAID configuration Restart iSCSI target Filesystem System cluster services Fence device Software RAID script Final cluster configuration Introduction This is &hellip; <a href=\"https:\/\/mail-e.dk\/pmsapp.org\/?page_id=13\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>","protected":false},"author":1,"featured_media":0,"parent":0,"menu_order":2,"comment_status":"open","ping_status":"open","template":"","meta":{"spay_email":""},"jetpack_shortlink":"https:\/\/wp.me\/P2gkts-d","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/mail-e.dk\/pmsapp.org\/index.php?rest_route=\/wp\/v2\/pages\/13"}],"collection":[{"href":"https:\/\/mail-e.dk\/pmsapp.org\/index.php?rest_route=\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/mail-e.dk\/pmsapp.org\/index.php?rest_route=\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/mail-e.dk\/pmsapp.org\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/mail-e.dk\/pmsapp.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=13"}],"version-history":[{"count":5,"href":"https:\/\/mail-e.dk\/pmsapp.org\/index.php?rest_route=\/wp\/v2\/pages\/13\/revisions"}],"predecessor-version":[{"id":39,"href":"https:\/\/mail-e.dk\/pmsapp.org\/index.php?rest_route=\/wp\/v2\/pages\/13\/revisions\/39"}],"wp:attachment":[{"href":"https:\/\/mail-e.dk\/pmsapp.org\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=13"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}