This article is intended for administrators wishing to create a 2x node + witness High Availability cluster utilizing Proxmox Virtual Environment (https://www.proxmox.com/en/proxmox-virtual-environment/overview) and StorMagic SvSAN (https://stormagic.com/svsan/).
Guide Summary
This multipart guide will walk through the process to deploy 2x hyperconverged Proxmox VE nodes utilizing SvSAN virtualized block storage and a lightweight witness node, such as a Raspberry Pi.
Provision a qdevice/SvSAN witness
Enable root SSH on the qdevice/SvSAN witness machine
In this example we're using a Raspberry Pi 4
Note: Root SSH login must be enabled on the qdevice, and the qdevice root password must match the Proxmox root password
Change the root password with the below:
sudo passwd root
and enable root SSH login via:
vi /etc/ssh/sshd_config
uncommenting the "PermitRootLogin" line and changing its value to "yes"
root@raspberrypi:~# sudo nano /etc/ssh/sshd_config
root@raspberrypi:~# passwd root
New password:
Retype new password:
passwd: password updated successfully
root@raspberrypi:~# sudo systemctl restart ssh
root@raspberrypi:~#
Or utilizing the below:
sudo sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
sudo systemctl restart ssh
Install the qdevice
sudo apt install corosync-qnetd
e.g.
root@raspberrypi:~# sudo apt install corosync-qnetd
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following package was automatically installed and is no longer required: libfuse2
Use 'sudo apt autoremove' to remove it.
The following additional packages will be installed: libnss3-tools
The following NEW packages will be installed: corosync-qnetd libnss3-tools
0 upgraded, 2 newly installed, 0 to remove and 76 not upgraded.
Need to get 940 kB of archives. After this operation, 4,280 kB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://raspbian.raspberrypi.org/raspbian bullseye/main armhf libnss3-tools armhf 2:3.61-1+deb11u3 [882 kB]
Get:2 http://raspbian.mirror.uk.sargasso.net/raspbian bullseye/main armhf corosync-qnetd armhf 3.0.1-1 [57.9 kB]
Fetched 940 kB in 5s (197 kB/s)
Selecting previously unselected package libnss3-tools.
(Reading database ... 106411 files and directories currently installed.)
Preparing to unpack .../libnss3-tools_2%3a3.61-1+deb11u3_armhf.deb ...
Unpacking libnss3-tools (2:3.61-1+deb11u3) ...
Selecting previously unselected package corosync-qnetd.
Preparing to unpack .../corosync-qnetd_3.0.1-1_armhf.deb ...
Unpacking corosync-qnetd (3.0.1-1) ...
Setting up libnss3-tools (2:3.61-1+deb11u3) ...
Setting up corosync-qnetd (3.0.1-1) ...
Creating /etc/corosync/qnetd/nssdb
Creating new key and cert db
password file contains no data
Creating new noise file /etc/corosync/qnetd/nssdb/noise.txt
Creating new CA
Generating key. This may take a few moments...
Is this a CA certificate [y/N]?
Enter the path length constraint, enter to skip [<0 for unlimited path]: >
Is this a critical extension [y/N]?
Generating key. This may take a few moments...
Notice: Trust flag u is set automatically if the private key is present.
QNetd CA certificate is exported as /etc/corosync/qnetd/nssdb/qnetd-cacert.crt
Created symlink /etc/systemd/system/multi-user.target.wants/corosync-qnetd.service → /lib/systemd/system/corosync-qnetd.service.
/usr/sbin/policy-rc.d returned 101, not running 'start corosync-qnetd.service'
Processing triggers for systemd (245.4-4ubuntu3.20) ...
Processing triggers for man-db (2.9.4-2) ...
Processing triggers for libc-bin (2.31-0ubuntu9.9) ...
root@raspberrypi:~#
Install the SvSAN Witness service
Install the SvSAN Witness Service to the qdevice
Upload the deb package (an amd64 build in the example below; use the arm build for a Raspberry Pi) to the qdevice using WinSCP or similar
root@mc-proxmox-qdevice:~# chmod +x /root/stormagic-witness_6.3.2011_amd64.deb
root@mc-proxmox-qdevice:~# apt install ./stormagic-witness_6.3.2011_amd64.deb
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Note, selecting 'stormagic-witness' instead of './stormagic-witness_6.3.2011_amd64.deb'
The following additional packages will be installed:
dialog
The following NEW packages will be installed:
dialog stormagic-witness
0 upgraded, 2 newly installed, 0 to remove and 96 not upgraded.
Need to get 303 kB/14.6 MB of archives.
After this operation, 40.5 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://gb.archive.ubuntu.com/ubuntu jammy/universe amd64 dialog amd64 1.3-20211214-1 [303 kB]
Get:2 /root/stormagic-witness_6.3.2011_amd64.deb stormagic-witness amd64 6.3.2011 [14.3 MB]
Fetched 303 kB in 0s (688 kB/s)
Selecting previously unselected package dialog.
(Reading database ... 109524 files and directories currently installed.)
Preparing to unpack .../dialog_1.3-20211214-1_amd64.deb ...
Unpacking dialog (1.3-20211214-1) ...
Selecting previously unselected package stormagic-witness.
Preparing to unpack .../stormagic-witness_6.3.2011_amd64.deb ...
Cannot install package. ONLY Ubuntu '20.04' is supported. Detected this version as '22.04' in /etc/os-release
dpkg: error processing archive /root/stormagic-witness_6.3.2011_amd64.deb (--unpack):
new stormagic-witness package pre-installation script subprocess returned error exit status 1
Errors were encountered while processing:
/root/stormagic-witness_6.3.2011_amd64.deb
needrestart is being skipped since dpkg has failed
N: Download is performed unsandboxed as root as file '/root/stormagic-witness_6.3.2011_amd64.deb' couldn't be accessed by user '_apt'. - pkgAcquire::Run (13: Permission denied)
E: Sub-process /usr/bin/dpkg returned an error code (1)
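The install fails because the witness pre-installation script version-checks /etc/os-release. As the transcript below shows, editing /etc/os-release so it reports the supported version lets the package install. A minimal sketch of that edit, assuming the check reads the VERSION_ID field; it is demonstrated here on a scratch copy, so on the qdevice apply the same sed to /etc/os-release with sudo and restore your backup once the install completes:

```shell
# Demo on a scratch copy (assumption: the pre-install check reads VERSION_ID).
printf 'ID=ubuntu\nVERSION_ID="22.04"\n' > os-release.demo
# Temporarily report the supported version; -i.bak keeps the original file.
sed -i.bak 's/^VERSION_ID=.*/VERSION_ID="20.04"/' os-release.demo
cat os-release.demo
```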
root@mc-proxmox-qdevice:~# vi /etc/os-release
root@mc-proxmox-qdevice:~# apt install ./stormagic-witness_6.3.2011_amd64.deb
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Note, selecting 'stormagic-witness' instead of './stormagic-witness_6.3.2011_amd64.deb'
The following NEW packages will be installed:
stormagic-witness
0 upgraded, 1 newly installed, 0 to remove and 96 not upgraded.
1 not fully installed or removed.
Need to get 0 B/14.3 MB of archives.
After this operation, 39.2 MB of additional disk space will be used.
Get:1 /root/stormagic-witness_6.3.2011_amd64.deb stormagic-witness amd64 6.3.2011 [14.3 MB]
(Reading database ... 109680 files and directories currently installed.)
Preparing to unpack .../stormagic-witness_6.3.2011_amd64.deb ...
Unpacking stormagic-witness (6.3.2011) ...
Setting up dialog (1.3-20211214-1) ...
Setting up stormagic-witness (6.3.2011) ...
Processing triggers for man-db (2.10.2-1) ...
Scanning processes...
Scanning candidates...
Scanning linux images...
Restarting services...
/etc/needrestart/restart.d/dbus.service
systemctl restart networkd-dispatcher.service systemd-logind.service unattended-upgrades.service
No containers need to be restarted.
No user sessions are running outdated binaries.
No VM guests are running outdated hypervisor (qemu) binaries on this host.
N: Download is performed unsandboxed as root as file '/root/stormagic-witness_6.3.2011_amd64.deb' couldn't be accessed by user '_apt'. - pkgAcquire::Run (13: Permission denied)
root@mc-proxmox-qdevice:~# ls /opt/StorMagic/
dirs Witness/
root@mc-proxmox-qdevice:~# ls /opt/StorMagic/Witness/
bin/ etc/ lib/ README.txt scratch/
root@mc-proxmox-qdevice:~# ls /opt/StorMagic/Witness/bin/
accept-eula.sh configure.sh exlog install-Witness.sh smclusterd smc_state smdisco smdiscod
root@mc-proxmox-qdevice:~# cd /opt/StorMagic/Witness/bin/
root@mc-proxmox-qdevice:/opt/StorMagic/Witness/bin# ./configure.sh
Deploy the Witness service via the curses GUI
Running configure.sh sets the SvSAN witness daemon to be managed by systemd, installs it via a curses GUI, and opens firewall ports if needed.
Confirm the service is active:
systemctl list-units --type=service --state=active
Set the Time Configuration
Ensure the time is correct on every Proxmox node and on the qdevice
root@mc-proxmox-qdevice:/opt/StorMagic/Witness/bin# sudo apt-get install ntp
root@mc-proxmox-qdevice:/opt/StorMagic/Witness/bin# sudo ufw allow 123/udp
root@mc-proxmox-qdevice:/opt/StorMagic/Witness/bin# sudo systemctl restart ntp
root@mc-proxmox-qdevice:/opt/StorMagic/Witness/bin# sudo systemctl status ntp
root@mc-proxmox-qdevice:/opt/StorMagic/Witness/bin# ntpq -p
root@mc-proxmox-qdevice:/opt/StorMagic/Witness/bin# date
Tue 10 Oct 10:29:18 UTC 2023
Create the Cluster
Note: This must be done before the VSA, or any VM, is created, otherwise you'll receive the below:
detected the following error(s):
* this host already contains virtual guests
TASK ERROR: Check if node may join a cluster failed!
This is related to the VM IDs: if 2x VMs or VSAs are provisioned first, they'll each default to VM ID 100. To join both nodes to a cluster these would need to be unique, which Proxmox doesn't resolve automatically.
The join is still blocked even if the VM IDs differ.
This can be worked around via the below:
Back up and remove the VM conf files from /etc/pve/nodes/proxmox-1/qemu-server
Then add them back in after the cluster join.
This is similar to the "remove from inventory" and "register VM" functionality within VMware.
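A sketch of that workaround in shell. The node name proxmox-1 and backup path are examples from this guide; it is demonstrated here on scratch directories so nothing real is touched, so on an actual node point CONF_DIR at /etc/pve/nodes/<your-node>/qemu-server:

```shell
# Scratch-directory demo of the hide-then-restore workaround.
# On a real node: CONF_DIR=/etc/pve/nodes/<your-node>/qemu-server
CONF_DIR=./demo-qemu-server
BACKUP_DIR=./demo-vm-conf-backup
mkdir -p "$CONF_DIR" "$BACKUP_DIR"
touch "$CONF_DIR/100.conf"               # stand-in for an existing guest config
mv "$CONF_DIR"/*.conf "$BACKUP_DIR"/     # hide guests so the join check passes
# ... perform the cluster join here ...
mv "$BACKUP_DIR"/*.conf "$CONF_DIR"/     # re-register the guests afterwards
```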
Proxmox Join Cluster Failed - This host already contains virtual guests!
Define a cluster name, and select the network to utilize, in our case a teamed vmbr0:
Confirm the task completes OK:
Select Join Information and copy the info to node2:
Add the corosync qdevice to the Proxmox nodes
Cluster Manager - Proxmox VE - Corosync External Vote Support section
Install the corosync-qdevice package on all nodes e.g. proxmox-1 and proxmox-2
apt install corosync-qdevice
Add the qdevice to the cluster
Note: All nodes in the cluster will need to be configured and online to do this
root@proxmox-1:~# pvecm qdevice setup 10.10.130.15
/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host '10.10.130.15 (10.10.130.15)' can't be established.
ED25519 key fingerprint is SHA256:OJpUokHj2W45i4bcZ1PWxxUbh1WCknelbrteaGq507A.
This key is not known by any other names.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@10.10.130.15's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'root@10.10.130.15'"
and check to make sure that only the key(s) you wanted were added.
INFO: initializing qnetd server
Certificate database (/etc/corosync/qnetd/nssdb) already exists. Delete it to initialize new db
INFO: copying CA cert and initializing on all nodes
node 'proxmox-1': Creating /etc/corosync/qdevice/net/nssdb
password file contains no data
node 'proxmox-1': Creating new key and cert db
node 'proxmox-1': Creating new noise file /etc/corosync/qdevice/net/nssdb/noise.txt
node 'proxmox-1': Importing CA
node 'proxmox-2': Creating /etc/corosync/qdevice/net/nssdb
password file contains no data
node 'proxmox-2': Creating new key and cert db
node 'proxmox-2': Creating new noise file /etc/corosync/qdevice/net/nssdb/noise.txt
node 'proxmox-2': Importing CA
INFO: generating cert request
Creating new certificate request
Generating key. This may take a few moments...
Certificate request stored in /etc/corosync/qdevice/net/nssdb/qdevice-net-node.crq
INFO: copying exported cert request to qnetd server
INFO: sign and export cluster cert
Signing cluster certificate
Certificate stored in /etc/corosync/qnetd/nssdb/cluster-proxmox-cluster.crt
INFO: copy exported CRT
INFO: import certificate
Importing signed cluster certificate
Notice: Trust flag u is set automatically if the private key is present.
pk12util: PKCS12 EXPORT SUCCESSFUL
Certificate stored in /etc/corosync/qdevice/net/nssdb/qdevice-net-node.p12
INFO: copy and import pk12 cert to all nodes
node 'proxmox-1': Importing cluster certificate and key
node 'proxmox-1': pk12util: PKCS12 IMPORT SUCCESSFUL
node 'proxmox-2': Importing cluster certificate and key
node 'proxmox-2': pk12util: PKCS12 IMPORT SUCCESSFUL
INFO: add QDevice to cluster configuration
INFO: start and enable corosync qdevice daemon on node 'proxmox-1'...
Synchronizing state of corosync-qdevice.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable corosync-qdevice
Created symlink /etc/systemd/system/multi-user.target.wants/corosync-qdevice.service → /lib/systemd/system/corosync-qdevice.service.
INFO: start and enable corosync qdevice daemon on node 'proxmox-2'...
Synchronizing state of corosync-qdevice.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable corosync-qdevice
Created symlink /etc/systemd/system/multi-user.target.wants/corosync-qdevice.service → /lib/systemd/system/corosync-qdevice.service.
Reloading corosync.conf...
Done
root@proxmox-1:~#
If you see an error:
command 'ssh -o 'BatchMode=yes' -lroot 10.10.196.71 corosync-qdevice-net-certutil -m -c /etc/pve/qdevice-net-node.p12' failed: exit code 255
SSH between the nodes first as root ('ssh root@10.10.196.71'), accepting the host key, then re-attempt the qdevice add operation.
Check the status of the cluster
pvecm status
Example output below
root@proxmox-1:~# pvecm status
Cluster information
-------------------
Name: proxmox-cluster
Config Version: 3
Transport: knet
Secure auth: on
Quorum information
------------------
Date: Tue Oct 10 11:05:56 2023
Quorum provider: corosync_votequorum
Nodes: 2
Node ID: 0x00000001
Ring ID: 1.9
Quorate: Yes
Votequorum information
----------------------
Expected votes: 3
Highest expected: 3
Total votes: 3
Quorum: 2
Flags: Quorate Qdevice
Membership information
----------------------
Nodeid Votes Qdevice Name
0x00000001 1 A,V,NMW 10.10.194.3 (local)
0x00000002 1 A,V,NMW 10.10.194.4
0x00000000 1 Qdevice
root@proxmox-1:~#
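The votequorum numbers above follow from the standard majority rule: two 1-vote nodes plus the qdevice's vote give 3 expected votes, and quorum is a strict majority of those, so either node (or the qdevice) can fail without the cluster losing quorum. As a quick sanity check:

```shell
# Majority quorum for this cluster: 2 node votes + 1 qdevice vote.
votes=3
quorum=$(( votes / 2 + 1 ))
echo "expected=$votes quorum=$quorum"   # matches "Quorum: 2" in the output above
```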