This article is intended for administrators who want to configure a storage pool on an SvHCI node for provisioning VMs. The pool may be built on a hardware RAID disk, or on a software RAID 0, 1, or 10 over direct-access (HBA-managed) disks.
Resolution/Information
SvHCI and SvSAN storage consumption
Other hyperconverged products on the market consume individual drives in what are typically called disk groups.
A disk group typically comprises a number of HDDs, up to a maximum of six, with a dedicated SSD acting as a cache disk. Should any one of these drives fail, the whole disk group is marked as failed.
Many of these solutions recommend three nodes for this reason: in a 2-node cluster there is typically only node resiliency, with no internal drive resiliency, i.e. a single fault domain. A single drive failure fails the node.
Both SvHCI and SvSAN, being designed to start at a 2-node configuration, consume disks/drives in a more traditional RAID model, enabling two fault domains even in a 2-node cluster: internal disk resiliency through software or hardware RAID, alongside the cross-node resiliency provided by the synchronous mirroring (replication factor 2) of the virtual disks.
As a result, SvHCI and SvSAN nodes tolerate an individual drive failure, be it SSD or HDD.
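To make the trade-off concrete, the following minimal Python sketch (illustrative only, not an SvHCI or SvSAN tool) works through the standard usable-capacity and drive-failure arithmetic for the RAID levels mentioned above:

```python
# Illustrative sketch only: standard RAID arithmetic, not SvHCI/SvSAN output.

def raid_profile(level: int, disk_count: int, disk_size_tb: float):
    """Return (usable capacity in TB, single-drive failures tolerated)."""
    if level == 0:
        # Striping: all raw capacity is usable, but any drive failure is fatal.
        return disk_count * disk_size_tb, 0
    if level == 1:
        # Mirroring: one disk's worth of capacity; survives all but one disk.
        return disk_size_tb, disk_count - 1
    if level == 10:
        # Striped mirrors: needs an even count of four or more disks; half the
        # raw capacity is usable, and at least one drive failure is tolerated
        # (one per mirror pair, provided no pair loses both members).
        if disk_count < 4 or disk_count % 2:
            raise ValueError("RAID 10 needs an even number of disks, minimum 4")
        return (disk_count * disk_size_tb) / 2, 1
    raise ValueError(f"unsupported RAID level: {level}")

# Example: the 2x 3.94 TB NVMe SSDs from the software RAID 1 walkthrough below.
usable, tolerated = raid_profile(level=1, disk_count=2, disk_size_tb=3.94)
print(f"RAID 1: {usable:.2f} TB usable, tolerates {tolerated} drive failure(s)")
```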
SvHCI and SvSAN also have a Storage Failed mechanism detailed below:
SvHCI & Software RAID
StorMagic SvHCI supports software RAID if a hardware RAID adapter is unavailable.
https://support.stormagic.com/hc/en-gb/articles/17597617525917-SvSAN-and-Software-RAID
Select Storage > Pools
In this example we have 2x 3.94 TB Samsung M.2 NVMe SSDs, which we will configure as a software RAID 1 via SvHCI:
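The RAID 1 itself is created through the SvHCI interface. Purely for illustration, here is a minimal sketch of an equivalent software mirror on a generic Linux host, driving mdadm from Python; the device names /dev/nvme0n1 and /dev/nvme1n1 are assumptions for this example, and this is not SvHCI's internal mechanism:

```python
# Generic Linux illustration only, NOT SvHCI's internal mechanism.
# Device names below are assumptions; requires root and the mdadm package.
import subprocess

devices = ["/dev/nvme0n1", "/dev/nvme1n1"]

# Create a two-disk software RAID 1 (mirror) array as /dev/md0.
subprocess.run(
    ["mdadm", "--create", "/dev/md0", "--level=1",
     "--raid-devices=2", *devices],
    check=True,
)

# Verify the array has assembled and both members are active.
subprocess.run(["mdadm", "--detail", "/dev/md0"], check=True)
with open("/proc/mdstat") as f:
    print(f.read())
```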
SvHCI and Hardware RAID
SvHCI can also manage disks behind a hardware RAID adapter when they are presented as a JBOD.
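Whether the adapter is presenting individual drives (JBOD/pass-through) or a single hardware RAID virtual disk can be confirmed from the host's block device list. Below is a minimal sketch on a generic Linux host, again not an SvHCI-specific command:

```python
# Generic Linux illustration, not an SvHCI command: list physical block
# devices to see how the RAID adapter presents them. In JBOD/pass-through
# mode each drive appears individually; with a hardware RAID virtual disk,
# a single logical device appears instead.
import subprocess

subprocess.run(
    ["lsblk", "--nodeps", "--output", "NAME,SIZE,MODEL,TYPE"],
    check=True,
)
```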