This article is intended for administrators wishing to create a 2x node + witness High Availability cluster utilizing Proxmox Virtual Environment (https://www.proxmox.com/en/proxmox-virtual-environment/overview) and StorMagic SvSAN (https://stormagic.com/svsan/).
Information
Guide Summary
This multipart guide will walk through the process to deploy 2x hyperconverged Proxmox VE nodes utilizing SvSAN virtualized block storage and a lightweight witness node, such as a Raspberry Pi.
Prepare the hosts for SvSAN
In this section we'll prepare the hosts by:
1. Clearing the local file system from the disk we'll utilize as our SAN disk
2. Enabling the storing of VMs on the local storage
3. Expanding the local storage to consume the full boot disk
4. Updating the hosts
Clear the SAN disk
Note: Repeat the below steps on both hyperconverged nodes.
Delete the LVM on the “SAN disk” - the equivalent of deleting a local VMFS on the disk to be used by SvSAN - i.e. the storage we’ll use for the SAN is currently consumed as “Local Storage”
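If preferred, the equivalent can be done from the CLI. A minimal sketch, assuming the SAN disk is /dev/sdb and the volume group on it is named "san" (both names are examples only - confirm your own with lsblk and vgs before removing anything):
lsblk                 # identify which disk will be handed to SvSAN
vgs                   # list LVM volume groups and confirm which one sits on that disk
vgremove san          # remove the example volume group and its logical volumes (destroys any data on them)
wipefs -a /dev/sdb    # clear any remaining file system or LVM signatures from the disk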
Next enable storing of VMs on the boot “Local Disk”
Enable VM storage on the local datastore, leaving the SAN storage unformatted to be handed to the StorMagic VSA later
Note: Repeat the below steps on both hyperconverged nodes.
Under Content, select “Disk image” in addition to the other selected items
Such that it now looks like:
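Alternatively, the same change can be made from the CLI with pvesm. A minimal sketch, assuming the default storage ID "local" and that ISO images, container templates and backups are already enabled (adjust the content list to match what is currently selected on your node):
cat /etc/pve/storage.cfg                              # check which content types are currently enabled on "local"
pvesm set local --content images,iso,vztmpl,backup    # add "images" (Disk image) alongside the existing types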
Expand the local File system
Note: Repeat the below steps on both hyperconverged nodes.
Expand the local boot root file system to make room for the VSA VM:
The below example removes the default local-lvm thin pool (pve/data) and grows /dev/mapper/pve-root from ~70G to ~231G. Note that removing pve/data destroys any VM disks already stored on local-lvm. The commands used are:
df -h                                  # check the current root file system size
lvremove /dev/pve/data                 # remove the default local-lvm thin pool (destroys any data on it)
lvresize -l +100%FREE /dev/pve/root    # grow the root logical volume into the freed space
resize2fs /dev/mapper/pve-root         # grow the ext4 file system to fill the resized volume
root@demo-proxmox01:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 12G 0 12G 0% /dev
tmpfs 2.4G 904K 2.4G 1% /run
/dev/mapper/pve-root 69G 2.5G 63G 4% /
tmpfs 12G 46M 12G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
/dev/fuse 128M 16K 128M 1% /etc/pve
tmpfs 2.4G 0 2.4G 0% /run/user/0
root@demo-proxmox01:~#
root@demo-proxmox01:~# lvremove /dev/pve/data
Do you really want to remove active logical volume pve/data? [y/n]: y
Logical volume "data" successfully removed.
root@demo-proxmox01:~#
root@demo-proxmox01:~# lvresize -l +100%FREE /dev/pve/root
Size of logical volume pve/root changed from <69.75 GiB (17855 extents) to <231.00 GiB (59135 extents).
Logical volume pve/root successfully resized.
root@demo-proxmox01:~#
root@demo-proxmox01:~# resize2fs /dev/mapper/pve-root
resize2fs 1.46.5 (30-Dec-2021)
Filesystem at /dev/mapper/pve-root is mounted on /; on-line resizing required
old_desc_blocks = 5, new_desc_blocks = 13
The filesystem on /dev/mapper/pve-root is now 26944512 (4k) blocks long.
root@demo-proxmox01:~#
This increase will be reflected in the GUI, as shown below:
Set the update repositories and update the hosts
Note: Repeat the below steps on both hyperconverged nodes.
This guide utilizes the public No-Subscription repository to get set up. If utilizing these technologies in production, it is recommended to purchase support and utilize the Enterprise repository.
Change the repository to the public/unsubscribed.
The below message will be displayed:
Add the No-Subscription Repository
If the No-Subscription repo isn’t added, the upgrade will fail similar to the below:
Disable the Enterprise repo, along with the Ceph enterprise repos, to avoid confusion.
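The repository changes can also be made by editing the APT source files directly. A minimal sketch, assuming Proxmox VE 8 on Debian 12 "bookworm" (adjust the release codename, and the Ceph release name, to match your installation):
# /etc/apt/sources.list.d/pve-enterprise.list - comment out the Enterprise repo
# deb https://enterprise.proxmox.com/debian/pve bookworm pve-enterprise

# /etc/apt/sources.list.d/ceph.list - comment out the Ceph enterprise repo if present
# deb https://enterprise.proxmox.com/debian/ceph-quincy bookworm enterprise

# /etc/apt/sources.list - add the No-Subscription repo
deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription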
Go to Updates and Refresh - this will perform an apt update - i.e. check what updates the repo has available
Again the below message will be displayed:
The below will be displayed, showing the apt update output:
Go to Updates and Upgrade - this will perform an apt upgrade - i.e. pull and apply the available updates
A list of available updates will be displayed
A console box will display per the below to proceed with the upgrade
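The same refresh and upgrade can be performed from the node's shell; note that the Proxmox documentation recommends a dist-upgrade rather than a plain upgrade so that new dependencies are installed correctly. A minimal sketch:
apt update              # refresh the package lists from the configured repositories
apt dist-upgrade -y     # download and apply all available updates, including new dependencies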
Set up the Host Networking
Note: Repeat the below steps on both hyperconverged nodes, setting IP addresses appropriately
vmbr0 will be created by default on a single physical NIC.
The below example leverages 2x NICs in a bond for the management and VM traffic, alongside 2x bridges on 2x direct attach NICs for the storage traffic:
Logical diagram
Example IP Schema
HOST 1 Purpose | IP address | HOST 2 Purpose | IP address
---|---|---|---
Host 1 management connection (Hostname: host1.example.com) | 10.1.100.11/24 | Host 2 management connection (Hostname: host2.example.com) | 10.1.100.12/24
Host 1 storage connection 1 | 192.168.1.1/24 | Host 2 storage connection 1 | 192.168.1.2/24
Host 1 storage connection 2 | 192.168.2.1/24 | Host 2 storage connection 2 | 192.168.2.2/24
VSA 1 management connection (Hostname: VSAhost1.example.com) | 10.1.100.13/24 | VSA 2 management connection (Hostname: VSAhost2.example.com) | 10.1.100.14/24
VSA 1 iSCSI and mirror connection 1 | 192.168.1.11/24 | VSA 2 iSCSI and mirror connection 1 | 192.168.1.12/24
VSA 1 iSCSI and mirror connection 2 | 192.168.2.11/24 | VSA 2 iSCSI and mirror connection 2 | 192.168.2.12/24
Default gateway | 10.1.100.254/24 | Default gateway | 10.1.100.254/24
DNS name server (primary) | 10.1.100.2/24 | DNS name server (primary) | 10.1.100.2/24
DNS name server (secondary) | 10.1.100.3/24 | DNS name server (secondary) | 10.1.100.3/24
It is not possible to rename vmbrX bridges in Proxmox; however, it is recommended to utilize a Comment to describe each bridge's purpose.
Note: The Network Device names will vary depending on the hardware and network driver utilized, e.g. ens192 and ens224 when nested on VMware, but eno8303, eno8403, eno12399np0 and eno12409np1 on real hardware such as Dell R650s.
It is also possible to view and edit the network setup by modifying the below file:
/etc/network/interfaces
https://pve.proxmox.com/pve-docs/pve-admin-guide.html#sysadmin_network_configuration
Note: This is just one example configuration and may be modified to suit your use case
In the below nested example we have 4x physical NICs, with the below being the default configuration; comments have been added to indicate which physical NIC is connected to which network:
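For reference, the below is a hedged sketch of what that default /etc/network/interfaces might look like on host 1 before any changes. ens192 and ens224 are the management/VM NICs from the note above; ens161 and ens256 are placeholder names for the two direct attach storage NICs, and the management IP follows the example schema - substitute the names and addresses shown on your own hardware:

auto lo
iface lo inet loopback

iface ens192 inet manual
#Management/VM NIC 1

iface ens224 inet manual
#Management/VM NIC 2

iface ens161 inet manual
#Storage NIC 1 (direct attach)

iface ens256 inet manual
#Storage NIC 2 (direct attach)

auto vmbr0
iface vmbr0 inet static
        address 10.1.100.11/24
        gateway 10.1.100.254
        bridge-ports ens192
        bridge-stp off
        bridge-fd 0
#Default bridge created by the installer on a single physical NIC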
Now we'll bond the management NICs to provide some redundancy, by creating a new bond of NICs ens192 and ens224
First we remove ens192 from vmbr0
Select to create a Linux Bond
Leave the IP details blank, assign a load balancing mode based on the switch setup, and define the slave NICs
Then we add Bond0 to vmbr0, in place of ens192
Define a static IP address onto the bridge, based on your network
We can then apply this configuration and ensure we can still communicate with the Proxmox node
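At this point the management section of /etc/network/interfaces should look similar to the below sketch (active-backup is assumed here as the bond mode - use whichever mode matches your switch configuration):

auto bond0
iface bond0 inet manual
        bond-slaves ens192 ens224
        bond-miimon 100
        bond-mode active-backup
#Management/VM bond

auto vmbr0
iface vmbr0 inet static
        address 10.1.100.11/24
        gateway 10.1.100.254
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
#Management and VM traffic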
Next we can create our storage bridges - one bridge per physical storage NIC (2x bridges in total), each on a different IP subnet due to the back-to-back/DAC cabling.
Define a static IP address for the storage bridge, in this example on the storage 1 network, as we have 2x direct attach cables.
Define a static IP address for the storage bridge, in this example on the storage 2 network.
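For reference, a hedged sketch of the two storage bridges in /etc/network/interfaces on host 1, using the example IP schema and the placeholder storage NIC names from the earlier sketch:

auto vmbr1
iface vmbr1 inet static
        address 192.168.1.1/24
        bridge-ports ens161
        bridge-stp off
        bridge-fd 0
#Storage 1 network (direct attach)

auto vmbr2
iface vmbr2 inet static
        address 192.168.2.1/24
        bridge-ports ens256
        bridge-stp off
        bridge-fd 0
#Storage 2 network (direct attach)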
Apply the config once more:
And validate everything can communicate/ping:
root@mc-proxmox-1:~# ping 10.10.130.52
PING 10.10.130.52 (10.10.130.52) 56(84) bytes of data.
64 bytes from 10.10.130.52: icmp_seq=1 ttl=64 time=1.37 ms
64 bytes from 10.10.130.52: icmp_seq=2 ttl=64 time=0.809 ms
^C
--- 10.10.130.52 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 0.809/1.087/1.366/0.278 ms
root@mc-proxmox-1:~# ping 192.167.130.52
PING 192.167.130.52 (192.167.130.52) 56(84) bytes of data.
64 bytes from 192.167.130.52: icmp_seq=1 ttl=64 time=1.37 ms
64 bytes from 192.167.130.52: icmp_seq=2 ttl=64 time=0.809 ms
^C
--- 192.167.130.52 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 0.809/1.087/1.366/0.278 ms
root@mc-proxmox-1:~# ping 192.168.130.52
PING 192.168.130.52 (192.168.130.52) 56(84) bytes of data.
64 bytes from 192.168.130.52: icmp_seq=1 ttl=64 time=1.37 ms
64 bytes from 192.168.130.52: icmp_seq=2 ttl=64 time=0.809 ms
^C
--- 192.168.130.52 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 0.809/1.087/1.366/0.278 ms
Upload the StorMagic VSA Installer ISO
Note: Repeat the below steps on both hyperconverged nodes.
Note: Please contact StorMagic support (support@stormagic.com) for the link to download the ISO
Select the Local storage volume, then ISO Images, and select Upload.
Click Select File to open the file browser.
Upload the StorMagic VSA ISO.
Confirm this completes successfully.
This process can also be completed via WinSCP or a similar tool, uploading to the below location:
/var/lib/vz/template/iso/StorMagic_SvSAN_ISO_6.3.P2.51850.iso
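For example, a minimal sketch using scp from a machine that already holds the ISO (host1.example.com is the example management hostname from the IP schema above):
scp StorMagic_SvSAN_ISO_6.3.P2.51850.iso root@host1.example.com:/var/lib/vz/template/iso/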
See Also