This article is intended for administrators wishing to create a 2x node + witness High Availability cluster utilizing Proxmox Virtual Environment (https://www.proxmox.com/en/proxmox-virtual-environment/overview) and StorMagic SvSAN (https://stormagic.com/svsan/).
Information
Guide Summary
This multipart guide walks through the process of deploying 2x hyperconverged Proxmox VE nodes utilizing SvSAN virtualized block storage and a lightweight witness node, such as a Raspberry Pi.
Ensure the Proxmox node software iSCSI IQNs are Unique
Note: Repeat the below steps on both hyperconverged nodes.
There have been issues in the past (particularly when working nested) with IQNs being identical.
Confirm they are different; you may also wish to rename them to something more friendly.
nano /etc/iscsi/initiatorname.iscsi
In our systems we have the below:
host1
root@proxmox-1:~# cat /etc/iscsi/initiatorname.iscsi
## DO NOT EDIT OR REMOVE THIS FILE!
## If you remove this file, the iSCSI daemon will not start.
## If you change the InitiatorName, existing access control lists
## may reject this initiator. The InitiatorName must be unique
## for each iSCSI initiator. Do NOT duplicate iSCSI InitiatorNames.
InitiatorName=iqn.1993-08.org.debian:01:48aa658c4691
host2
root@proxmox-2:~# cat /etc/iscsi/initiatorname.iscsi
## DO NOT EDIT OR REMOVE THIS FILE!
## If you remove this file, the iSCSI daemon will not start.
## If you change the InitiatorName, existing access control lists
## may reject this initiator. The InitiatorName must be unique
## for each iSCSI initiator. Do NOT duplicate iSCSI InitiatorNames.
InitiatorName=iqn.1993-08.org.debian:01:bea18b518bc6
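If the two values do match (for example on nodes cloned from the same image), a new random IQN can be generated with the iscsi-iname utility from the open-iscsi package; a minimal sketch, assuming the default generated prefix is acceptable:
# Back up the current file, then write a freshly generated random IQN
cp /etc/iscsi/initiatorname.iscsi /etc/iscsi/initiatorname.iscsi.bak
echo "InitiatorName=$(iscsi-iname)" > /etc/iscsi/initiatorname.iscsi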
Should they be edited, restart the iSCSI daemon via systemctl:
root@proxmox-1:~# systemctl restart iscsid.service
These IQNs can be added to the VSA manually via the WebGUI, or they will be picked up automatically later during a rescan; either way they then need to be added to the target ACL.
Create the Mirrored Target
https://stormagic.com/doc/svsan/6-3-P2/en/Content/datastore-create-manually.htm
Add and Configure MPIO
Note: Repeat the below steps on both hyperconverged nodes.
https://pve.proxmox.com/wiki/ISCSI_Multipath
root@proxmox-1:~# apt-get install multipath-tools
Edit /etc/iscsi/iscsid.conf
root@proxmox-1:~# nano /etc/iscsi/iscsid.conf
Overwrite the default iscsid config file with the below:
#
# Open-iSCSI default configuration.
# Could be located at /etc/iscsi/iscsid.conf or ~/.iscsid.conf
#
# Note: To set any of these values for a specific node/session run
# the iscsiadm --mode node --op command for the value. See the README
# and man page for iscsiadm for details on the --op command.
#
######################
# iscsid daemon config
######################
# If you want iscsid to start the first time an iscsi tool
# needs to access it, instead of starting it when the init
# scripts run, set the iscsid startup command here. This
# should normally only need to be done by distro package
# maintainers.
#
# Default for Fedora and RHEL. (uncomment to activate).
# iscsid.startup = /etc/rc.d/init.d/iscsid force-start
iscsid.startup = /bin/systemctl start iscsid.socket
#
# Default for upstream open-iscsi scripts (uncomment to activate).
# iscsid.startup = /sbin/iscsid
# Check for active mounts on devices reachable through a session
# and refuse to logout if there are any. Defaults to "No".
# iscsid.safe_logout = Yes
#############################
# NIC/HBA and driver settings
#############################
# open-iscsi can create a session and bind it to a NIC/HBA.
# To set this up see the example iface config file.
#*****************
# Startup settings
#*****************
# To request that the iscsi initd scripts startup a session set to "automatic".
# node.startup = automatic
#
# To manually startup the session set to "manual". The default is manual.
node.startup = manual
# For "automatic" startup nodes, setting this to "Yes" will try logins on each
# available iface until one succeeds, and then stop. The default "No" will try
# logins on all available ifaces simultaneously.
node.leading_login = No
# *************
# CHAP Settings
# *************
# To enable CHAP authentication set node.session.auth.authmethod
# to CHAP. The default is None.
#node.session.auth.authmethod = CHAP
# To set a CHAP username and password for initiator
# authentication by the target(s), uncomment the following lines:
#node.session.auth.username = username
#node.session.auth.password = password
# To set a CHAP username and password for target(s)
# authentication by the initiator, uncomment the following lines:
#node.session.auth.username_in = username_in
#node.session.auth.password_in = password_in
# To enable CHAP authentication for a discovery session to the target
# set discovery.sendtargets.auth.authmethod to CHAP. The default is None.
#discovery.sendtargets.auth.authmethod = CHAP
# To set a discovery session CHAP username and password for the initiator
# authentication by the target(s), uncomment the following lines:
#discovery.sendtargets.auth.username = username
#discovery.sendtargets.auth.password = password
# To set a discovery session CHAP username and password for target(s)
# authentication by the initiator, uncomment the following lines:
#discovery.sendtargets.auth.username_in = username_in
#discovery.sendtargets.auth.password_in = password_in
# ********
# Timeouts
# ********
#
# See the iSCSI README's Advanced Configuration section for tips
# on setting timeouts when using multipath or doing root over iSCSI.
#
# To specify the length of time to wait for session re-establishment
# before failing SCSI commands back to the application when running
# the Linux SCSI Layer error handler, edit the line.
# The value is in seconds and the default is 120 seconds.
# Special values:
# - If the value is 0, IO will be failed immediately.
# - If the value is less than 0, IO will remain queued until the session
# is logged back in, or until the user runs the logout command.
node.session.timeo.replacement_timeout = 120
# To specify the time to wait for login to complete, edit the line.
# The value is in seconds and the default is 15 seconds.
node.conn[0].timeo.login_timeout = 15
# To specify the time to wait for logout to complete, edit the line.
# The value is in seconds and the default is 15 seconds.
node.conn[0].timeo.logout_timeout = 15
# Time interval to wait for on connection before sending a ping.
node.conn[0].timeo.noop_out_interval = 5
# To specify the time to wait for a Nop-out response before failing
# the connection, edit this line. Failing the connection will
# cause IO to be failed back to the SCSI layer. If using dm-multipath
# this will cause the IO to be failed to the multipath layer.
node.conn[0].timeo.noop_out_timeout = 5
# To specify the time to wait for abort response before
# failing the operation and trying a logical unit reset edit the line.
# The value is in seconds and the default is 15 seconds.
node.session.err_timeo.abort_timeout = 15
# To specify the time to wait for a logical unit response
# before failing the operation and trying session re-establishment
# edit the line.
# The value is in seconds and the default is 30 seconds.
node.session.err_timeo.lu_reset_timeout = 30
# To specify the time to wait for a target response
# before failing the operation and trying session re-establishment
# edit the line.
# The value is in seconds and the default is 30 seconds.
node.session.err_timeo.tgt_reset_timeout = 30
#******
# Retry
#******
# To specify the number of times iscsid should retry a login
# if the login attempt fails due to the node.conn[0].timeo.login_timeout
# expiring modify the following line. Note that if the login fails
# quickly (before node.conn[0].timeo.login_timeout fires) because the network
# layer or the target returns an error, iscsid may retry the login more than
# node.session.initial_login_retry_max times.
#
# This retry count along with node.conn[0].timeo.login_timeout
# determines the maximum amount of time iscsid will try to
# establish the initial login. node.session.initial_login_retry_max is
# multiplied by the node.conn[0].timeo.login_timeout to determine the
# maximum amount.
#
# The default node.session.initial_login_retry_max is 8 and
# node.conn[0].timeo.login_timeout is 15 so we have:
#
# node.conn[0].timeo.login_timeout * node.session.initial_login_retry_max =
# 120 seconds
#
# Valid values are any integer value. This only
# affects the initial login. Setting it to a high value can slow
# down the iscsi service startup. Setting it to a low value can
# cause a session to not get logged into, if there are disruptions
# during startup or if the network is not ready at that time.
node.session.initial_login_retry_max = 8
################################
# session and device queue depth
################################
# To control how many commands the session will queue set
# node.session.cmds_max to an integer between 2 and 2048 that is also
# a power of 2. The default is 128.
node.session.cmds_max = 128
# To control the device's queue depth set node.session.queue_depth
# to a value between 1 and 1024. The default is 32.
node.session.queue_depth = 32
##################################
# MISC SYSTEM PERFORMANCE SETTINGS
##################################
# For software iscsi (iscsi_tcp) and iser (ib_iser) each session
# has a thread used to transmit or queue data to the hardware. For
# cxgb3i you will get a thread per host.
#
# Setting the thread's priority to a lower value can lead to higher throughput
# and lower latencies. The lowest value is -20. Setting the priority to
# a higher value, can lead to reduced IO performance, but if you are seeing
# the iscsi or scsi threads dominate the use of the CPU then you may want
# to set this value higher.
#
# Note: For cxgb3i you must set all sessions to the same value, or the
# behavior is not defined.
#
# The default value is -20. The setting must be between -20 and 20.
node.session.xmit_thread_priority = -20
#***************
# iSCSI settings
#***************
# To enable R2T flow control (i.e., the initiator must wait for an R2T
# command before sending any data), uncomment the following line:
#
#node.session.iscsi.InitialR2T = Yes
#
# To disable R2T flow control (i.e., the initiator has an implied
# initial R2T of "FirstBurstLength" at offset 0), uncomment the following line:
#
# The default is No.
node.session.iscsi.InitialR2T = No
#
# To disable immediate data (i.e., the initiator does not send
# unsolicited data with the iSCSI command PDU), uncomment the following line:
#
#node.session.iscsi.ImmediateData = No
#
# To enable immediate data (i.e., the initiator sends unsolicited data
# with the iSCSI command packet), uncomment the following line:
#
# The default is Yes
node.session.iscsi.ImmediateData = Yes
# To specify the maximum number of unsolicited data bytes the initiator
# can send in an iSCSI PDU to a target, edit the following line.
#
# The value is the number of bytes in the range of 512 to (2^24-1) and
# the default is 262144
node.session.iscsi.FirstBurstLength = 262144
# To specify the maximum SCSI payload that the initiator will negotiate
# with the target for, edit the following line.
#
# The value is the number of bytes in the range of 512 to (2^24-1) and
# the default is 16776192
node.session.iscsi.MaxBurstLength = 16776192
# To specify the maximum number of data bytes the initiator can receive
# in an iSCSI PDU from a target, edit the following line.
#
# The value is the number of bytes in the range of 512 to (2^24-1) and
# the default is 262144
node.conn[0].iscsi.MaxRecvDataSegmentLength = 262144
# To specify the maximum number of data bytes the initiator will send
# in an iSCSI PDU to the target, edit the following line.
#
# The value is the number of bytes in the range of 512 to (2^24-1).
# Zero is a special case. If set to zero, the initiator will use
# the target's MaxRecvDataSegmentLength for the MaxXmitDataSegmentLength.
# The default is 0.
node.conn[0].iscsi.MaxXmitDataSegmentLength = 0
# To specify the maximum number of data bytes the initiator can receive
# in an iSCSI PDU from a target during a discovery session, edit the
# following line.
#
# The value is the number of bytes in the range of 512 to (2^24-1) and
# the default is 32768
#
discovery.sendtargets.iscsi.MaxRecvDataSegmentLength = 32768
# To allow the targets to control the setting of the digest checking,
# with the initiator requesting a preference of enabling the checking, uncomment
# one or both of the following lines:
#node.conn[0].iscsi.HeaderDigest = CRC32C,None
#node.conn[0].iscsi.DataDigest = CRC32C,None
#
# To allow the targets to control the setting of the digest checking,
# with the initiator requesting a preference of disabling the checking,
# uncomment one or both of the following lines:
#node.conn[0].iscsi.HeaderDigest = None,CRC32C
#node.conn[0].iscsi.DataDigest = None,CRC32C
#
# To enable CRC32C digest checking for the header and/or data part of
# iSCSI PDUs, uncomment one or both of the following lines:
#node.conn[0].iscsi.HeaderDigest = CRC32C
#node.conn[0].iscsi.DataDigest = CRC32C
#
# To disable digest checking for the header and/or data part of
# iSCSI PDUs, uncomment one or both of the following lines:
#node.conn[0].iscsi.HeaderDigest = None
#node.conn[0].iscsi.DataDigest = None
#
# The default is to never use DataDigests or HeaderDigests.
#
# For multipath configurations, you may want more than one session to be
# created on each iface record. If node.session.nr_sessions is greater
# than 1, performing a 'login' for that node will ensure that the
# appropriate number of sessions is created.
node.session.nr_sessions = 1
#************
# Workarounds
#************
# Some targets like IET prefer after an initiator has sent a task
# management function like an ABORT TASK or LOGICAL UNIT RESET, that
# it does not respond to PDUs like R2Ts. To enable this behavior uncomment
# the following line (The default behavior is Yes):
node.session.iscsi.FastAbort = Yes
# Some targets like Equalogic prefer that after an initiator has sent
# a task management function like an ABORT TASK or LOGICAL UNIT RESET, that
# it continue to respond to R2Ts. To enable this uncomment this line
# node.session.iscsi.FastAbort = No
# To prevent doing automatic scans that would add unwanted luns to the system
# we can disable them and have sessions only do manually requested scans.
# Automatic scans are performed on startup, on login, and on AEN/AER reception
# on devices supporting it. For HW drivers all sessions will use the value
# defined in the configuration file. This configuration option is independent
# of scsi_mod scan parameter. (The default behavior is auto):
node.session.scan = auto
Restart the service
root@proxmox-1:~# systemctl restart iscsid.service
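Optionally, confirm the daemon came back up cleanly before continuing, for example:
# Check the service state and the last few log lines
systemctl status iscsid.service --no-pager
journalctl -u iscsid.service -n 20 --no-pager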
As per SvSAN on our StorMagic KVM stack, set the multipath configuration as below.
Create the below folder path:
root@proxmox-1:~# mkdir /etc/multipath/conf.d/
Create the file per the below:
root@proxmox-1:~# nano /etc/multipath/conf.d/StorMagc.conf
To contain:
root@proxmox-1:~# cat /etc/multipath/conf.d/StorMagc.conf
defaults {
    polling_interval 2
    max_polling_interval 4
    find_multipaths "yes"
}
devices {
    device {
        user_friendly_names "yes"
        no_path_retry 5
        detect_checker "no"
        path_checker "tur"
        path_grouping_policy "group_by_prio"
        detect_prio "no"
        prio "alua"
        prio_args "exclusive_pref_bit"
        hardware_handler "1 alua"
        vendor "StorMagc"
        product "iSCSI Volume"
    }
}
And the same on the other host:
root@proxmox-2:~# cat /etc/multipath/conf.d/StorMagc.conf
defaults {
    polling_interval 2
    max_polling_interval 4
    find_multipaths "yes"
}
devices {
    device {
        user_friendly_names "yes"
        no_path_retry 5
        detect_checker "no"
        path_checker "tur"
        path_grouping_policy "group_by_prio"
        detect_prio "no"
        prio "alua"
        prio_args "exclusive_pref_bit"
        hardware_handler "1 alua"
        vendor "StorMagc"
        product "iSCSI Volume"
    }
}
and restart multipathd
root@proxmox-1:~# systemctl restart multipathd.service
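To confirm multipathd has merged StorMagc.conf into its running configuration, the effective config can be dumped and searched for the device entry, for example:
# The StorMagc device stanza should appear in the effective configuration
multipathd show config | grep -A 12 StorMagc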
Log in to the storage using iscsiadm
Note: Repeat the below steps on both hyperconverged nodes.
From the command line on each host, run the below to perform discovery against the SvSAN VSA iSCSI IPs.
This is the equivalent of adding the IPs to iscsicpl (iSCSI Control Panel) in Microsoft Hyper-V,
i.e. go out and see which targets the host is, or is not, allowed to see.
host1
root@proxmox01:~# iscsiadm -m discovery -t st -p 192.168.1.3
192.168.1.3:3260,1 iqn.2006-06.com.stormagic:b5833e0200000018.storage
192.168.1.4:3260,3 iqn.2006-06.com.stormagic:b5833e0200000018.storage
root@proxmox01:~# iscsiadm -m discovery -t st -p 192.168.1.4
192.168.1.4:3260,3 iqn.2006-06.com.stormagic:b5833e0200000018.storage
192.168.1.3:3260,1 iqn.2006-06.com.stormagic:b5833e0200000018.storage
host2
root@proxmox02:~# iscsiadm -m discovery -t st -p 192.168.1.3
192.168.1.3:3260,1 iqn.2006-06.com.stormagic:b5833e0200000018.storage
192.168.1.4:3260,3 iqn.2006-06.com.stormagic:b5833e0200000018.storage
root@proxmox02:~# iscsiadm -m discovery -t st -p 192.168.1.4
192.168.1.4:3260,3 iqn.2006-06.com.stormagic:b5833e0200000018.storage
192.168.1.3:3260,1 iqn.2006-06.com.stormagic:b5833e0200000018.storage
If the host IQN isn't in the Target ACL, the below will be observed.
The discovery attempt will also populate the initiators on the VSA, so that they can be added to the Target ACL without copying them in manually.
root@proxmox02:~# iscsiadm -m discovery -t st -p 192.168.1.3
iscsiadm: No portals found
root@proxmox02:~# iscsiadm -m discovery -t st -p 192.168.1.4
iscsiadm: No portals found
Log in to the disk. This is the equivalent of logging in the sessions via iscsicpl (iSCSI Control Panel) in Microsoft Hyper-V.
iscsiadm - Linux "iscsiadm" Command Line Options and Examples
-m or --mode
-T or --target
-p or --portal
-l or --login
-n or --name and -v or --value (used here to set node.startup to automatic, so the session is restored at boot)
host1
root@proxmox01:~# iscsiadm -m node -T iqn.2006-06.com.stormagic:b5833e0200000018.storage -p 192.168.1.3 -l -n node.startup -v automatic
Logging in to [iface: default, target: iqn.2006-06.com.stormagic:b5833e0200000018.storage, portal: 192.168.1.3,3260]
Login to [iface: default, target: iqn.2006-06.com.stormagic:b5833e0200000018.storage, portal: 192.168.1.3,3260] successful.
root@proxmox01:~# iscsiadm -m node -T iqn.2006-06.com.stormagic:b5833e0200000018.storage -p 192.168.1.4 -l -n node.startup -v automatic
Logging in to [iface: default, target: iqn.2006-06.com.stormagic:b5833e0200000018.storage, portal: 192.168.1.4,3260]
Login to [iface: default, target: iqn.2006-06.com.stormagic:b5833e0200000018.storage, portal: 192.168.1.4,3260] successful.
host2
root@proxmox02:~# iscsiadm -m node -T iqn.2006-06.com.stormagic:b5833e0200000018.storage -p 192.168.1.3 -l -n node.startup -v automatic
Logging in to [iface: default, target: iqn.2006-06.com.stormagic:b5833e0200000018.storage, portal: 192.168.1.3,3260]
Login to [iface: default, target: iqn.2006-06.com.stormagic:b5833e0200000018.storage, portal: 192.168.1.3,3260] successful.
root@proxmox02:~# iscsiadm -m node -T iqn.2006-06.com.stormagic:b5833e0200000018.storage -p 192.168.1.4 -l -n node.startup -v automatic
Logging in to [iface: default, target: iqn.2006-06.com.stormagic:b5833e0200000018.storage, portal: 192.168.1.4,3260]
Login to [iface: default, target: iqn.2006-06.com.stormagic:b5833e0200000018.storage, portal: 192.168.1.4,3260] successful.
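Before continuing, the active sessions can be checked on each node; two sessions (one per VSA portal) are expected, for example:
# List the active iSCSI sessions
iscsiadm -m session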
List the disks
host1
root@proxmox01:~# ls -n /dev/disk/by-id/
total 0
lrwxrwxrwx 1 0 0 9 Feb 2 18:27 ata-IM2S33D4_2K24291DCC1U -> ../../sda
lrwxrwxrwx 1 0 0 10 Feb 2 18:27 ata-IM2S33D4_2K24291DCC1U-part1 -> ../../sda1
lrwxrwxrwx 1 0 0 10 Feb 2 18:27 ata-IM2S33D4_2K24291DCC1U-part2 -> ../../sda2
lrwxrwxrwx 1 0 0 10 Feb 2 18:27 ata-IM2S33D4_2K24291DCC1U-part3 -> ../../sda3
lrwxrwxrwx 1 0 0 10 Feb 5 17:48 dm-name-mpatha -> ../../dm-2
lrwxrwxrwx 1 0 0 10 Feb 5 09:12 dm-name-pve-root -> ../../dm-1
lrwxrwxrwx 1 0 0 10 Feb 2 18:27 dm-name-pve-swap -> ../../dm-0
lrwxrwxrwx 1 0 0 10 Feb 5 09:12 dm-uuid-LVM-nEFXA26cqgJ57l48RNU2h2Wd56omAkbLaUmf7C51Rwyrw2BDUNkA6L6pXM8I2X02 -> ../../dm-1
lrwxrwxrwx 1 0 0 10 Feb 2 18:27 dm-uuid-LVM-nEFXA26cqgJ57l48RNU2h2Wd56omAkbLeB6FiadwCcQMEb2fPeev93iWhlLCNli8 -> ../../dm-0
lrwxrwxrwx 1 0 0 10 Feb 5 17:48 dm-uuid-mpath-2000339b5833e0002 -> ../../dm-2
lrwxrwxrwx 1 0 0 10 Feb 2 18:27 lvm-pv-uuid-83Mzrx-sQOG-LPsA-bV52-81We-NElC-FNbCm1 -> ../../sda3
lrwxrwxrwx 1 0 0 13 Feb 2 18:27 nvme-eui.343646304e6108110025385900000001 -> ../../nvme0n1
lrwxrwxrwx 1 0 0 13 Feb 2 18:27 nvme-eui.343646304e6108360025385900000001 -> ../../nvme1n1
lrwxrwxrwx 1 0 0 13 Feb 2 18:27 nvme-SAMSUNG_MZ1LB3T8HMLA-00007_S46FNY0N610811 -> ../../nvme0n1
lrwxrwxrwx 1 0 0 13 Feb 2 18:27 nvme-SAMSUNG_MZ1LB3T8HMLA-00007_S46FNY0N610836 -> ../../nvme1n1
lrwxrwxrwx 1 0 0 10 Feb 5 17:48 scsi-2000339b5833e0002 -> ../../dm-2
lrwxrwxrwx 1 0 0 9 Feb 5 17:48 scsi-SStorMagc_iSCSI_Volume_b5833e0200000018 -> ../../sdc
lrwxrwxrwx 1 0 0 10 Feb 5 17:48 wwn-0x000339b5833e0002 -> ../../dm-2
lrwxrwxrwx 1 0 0 9 Feb 2 18:27 wwn-0x5707c18100925c1a -> ../../sda
lrwxrwxrwx 1 0 0 10 Feb 2 18:27 wwn-0x5707c18100925c1a-part1 -> ../../sda1
lrwxrwxrwx 1 0 0 10 Feb 2 18:27 wwn-0x5707c18100925c1a-part2 -> ../../sda2
lrwxrwxrwx 1 0 0 10 Feb 2 18:27 wwn-0x5707c18100925c1a-part3 -> ../../sda3
host2
root@proxmox02:~# ls -n /dev/disk/by-id/
total 0
lrwxrwxrwx 1 0 0 9 Feb 5 09:12 ata-IM2S33D4_2K242L1DE2TY -> ../../sda
lrwxrwxrwx 1 0 0 10 Feb 5 09:12 ata-IM2S33D4_2K242L1DE2TY-part1 -> ../../sda1
lrwxrwxrwx 1 0 0 10 Feb 5 09:12 ata-IM2S33D4_2K242L1DE2TY-part2 -> ../../sda2
lrwxrwxrwx 1 0 0 10 Feb 5 09:12 ata-IM2S33D4_2K242L1DE2TY-part3 -> ../../sda3
lrwxrwxrwx 1 0 0 10 Feb 5 17:48 dm-name-mpatha -> ../../dm-2
lrwxrwxrwx 1 0 0 10 Feb 5 09:12 dm-name-pve-root -> ../../dm-1
lrwxrwxrwx 1 0 0 10 Feb 5 09:06 dm-name-pve-swap -> ../../dm-0
lrwxrwxrwx 1 0 0 10 Feb 5 09:06 dm-uuid-LVM-xfXycqTbuEKqOJWRnQ24fi8hbX6QNPYw7CvMF1Tc50gYWOLfXhB7sTufmBQgnKpE -> ../../dm-0
lrwxrwxrwx 1 0 0 10 Feb 5 09:12 dm-uuid-LVM-xfXycqTbuEKqOJWRnQ24fi8hbX6QNPYwCI6LJxgsVuK340BrsGsdTFbTQatw2tSp -> ../../dm-1
lrwxrwxrwx 1 0 0 10 Feb 5 17:48 dm-uuid-mpath-2000339b5833e0002 -> ../../dm-2
lrwxrwxrwx 1 0 0 10 Feb 5 09:12 lvm-pv-uuid-dv2oQD-FEv3-WNyz-n349-bFha-yLrH-Q2iSVu -> ../../sda3
lrwxrwxrwx 1 0 0 13 Feb 5 09:06 nvme-eui.343646304e6102800025385900000001 -> ../../nvme1n1
lrwxrwxrwx 1 0 0 13 Feb 5 09:06 nvme-eui.343646304e6107270025385900000001 -> ../../nvme0n1
lrwxrwxrwx 1 0 0 13 Feb 5 09:06 nvme-SAMSUNG_MZ1LB3T8HMLA-00007_S46FNY0N610280 -> ../../nvme1n1
lrwxrwxrwx 1 0 0 13 Feb 5 09:06 nvme-SAMSUNG_MZ1LB3T8HMLA-00007_S46FNY0N610727 -> ../../nvme0n1
lrwxrwxrwx 1 0 0 10 Feb 5 17:48 scsi-2000339b5833e0002 -> ../../dm-2
lrwxrwxrwx 1 0 0 9 Feb 5 17:48 scsi-SStorMagc_iSCSI_Volume_b5833e0200000018 -> ../../sdc
lrwxrwxrwx 1 0 0 10 Feb 5 17:48 wwn-0x000339b5833e0002 -> ../../dm-2
lrwxrwxrwx 1 0 0 9 Feb 5 09:12 wwn-0x5707c18100925646 -> ../../sda
lrwxrwxrwx 1 0 0 10 Feb 5 09:12 wwn-0x5707c18100925646-part1 -> ../../sda1
lrwxrwxrwx 1 0 0 10 Feb 5 09:12 wwn-0x5707c18100925646-part2 -> ../../sda2
lrwxrwxrwx 1 0 0 10 Feb 5 09:12 wwn-0x5707c18100925646-part3 -> ../../sda3
Note the newly appeared mpath disk device (dm-name-mpatha).
Note the World Wide Name (WWN) for sda, and add it to the multipath WWIDs file:
host1
root@proxmox01:~# /lib/udev/scsi_id -g -u -d /dev/sda
35707c18100925c1a
root@proxmox01:~# multipath -a 35707c18100925c1a
wwid '35707c18100925c1a' added
root@proxmox01:~# cat /etc/multipath/wwids
# Multipath wwids, Version : 1.0
# NOTE: This file is automatically maintained by multipath and multipathd.
# You should not need to edit this file in normal circumstances.
#
# Valid WWIDs:
/2000339b5833e0002/
/35707c18100925c1a/
host2
root@proxmox02:~# /lib/udev/scsi_id -g -u -d /dev/sda
35707c18100925646
root@proxmox02:~# multipath -a 35707c18100925646
wwid '35707c18100925646' added
root@proxmox02:~# cat /etc/multipath/wwids
# Multipath wwids, Version : 1.0
# NOTE: This file is automatically maintained by multipath and multipathd.
# You should not need to edit this file in normal circumstances.
#
# Valid WWIDs:
/2000339b5833e0002/
/35707c18100925646/
Confirm multipath is running correctly on both nodes and looks like the below:
host1
root@proxmox01:~# multipath -ll
mpatha (2000339b5833e0002) dm-2 StorMagc,iSCSI Volume
size=3.2T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| `- 6:0:0:0 sdb 8:16 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
`- 7:0:0:0 sdc 8:32 active ready running
host2
root@proxmox02:~# multipath -ll
mpatha (2000339b5833e0002) dm-2 StorMagc,iSCSI Volume
size=3.2T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| `- 6:0:0:0 sdb 8:16 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
`- 7:0:0:0 sdc 8:32 active ready running
Create an LVM PV (LVM physical volume) on the multipath device
From one host, noting the disk alias to select (in this instance mpatha):
root@proxmox01:~# pvcreate /dev/mapper/mpatha
Physical volume "/dev/mapper/mpatha" successfully created.
Create a VG (Volume Group) on the multipath device
root@proxmox01:~# vgcreate vg-svsan-storage /dev/mapper/mpatha
Volume group "vg-svsan-storage" successfully created
Further SvSAN disks will appear as mpathb, mpathc, etc.
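The new volume group should also be visible from the other node; if it does not appear immediately, rescan and confirm, for example:
# On the other node: rescan for the new PV/VG and confirm it is visible
pvscan --cache
vgs vg-svsan-storage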
Add the storage to the nodes
Browse to the cluster GUI and add the storage
Specify a name for the datastore (in this example "svsan-storage"), set the base storage to the previously created volume group "vg-svsan-storage", and ensure "Shared" is selected.
If "Shared" is not selected, migrating a guest VM between hosts will copy the disk data as well as the memory.
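Equivalently, the storage can be added from the command line with pvesm; a sketch matching the GUI settings above (run once from any cluster node):
# Add the VG as shared LVM storage across the cluster
pvesm add lvm svsan-storage --vgname vg-svsan-storage --content images,rootdir --shared 1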