The purpose of this document is to provide basic guidance for resellers and partners who are migrating customers from a VMware ESXi/vSAN 2-node architecture to a VMware ESXi and StorMagic SvSAN 2-node architecture.
Resolution/Information
TARGET AUDIENCE
Resellers – Sales and Technical
Partners – Sales and Technical
SvSAN presents block storage over iSCSI that can be shared back to the same hosts for hyperconverged storage, or to any other iSCSI initiator host on the network. This enables a non-disruptive migration path using the VM migration tools included in all major hypervisors.
IN-PLACE MIGRATION
SvSAN can present non-mirrored storage that can later be converted to mirrored storage, enabling storage high availability.
This allows an in-place migration, detailed in the following steps.
Migration workflow
Due to the restrictions of VMware vSAN, the workflow below, including a reinstall of VMware ESXi, is necessary.
1. Clear node 2 by migrating all guest VMs to node 1.
2. Reboot the cleared node 2 via iLO/iDRAC and reinstall ESXi, additionally scrubbing vSAN partitions.
3. Deploy the SvSAN VSA to node 2 and present shared, unmirrored storage to node 2.
4. Migrate the compute and storage of all guest VMs from node 1 to the newly provisioned SvSAN node 2.
5. Reboot the cleared node 1 via iLO/iDRAC and reinstall ESXi, additionally scrubbing vSAN partitions.
6. Deploy the SvSAN VSA to node 1 and mirror the storage with node 2, mounting it to node 1.
7. Enable VMware High Availability and any other hypervisor features required.
1. CLEARING NODE 2 & MIGRATING GUEST VMs
Place node 2 in Maintenance Mode to migrate VMs to another node in the cluster, or alternatively, live migrate VMs, then place node 2 in Maintenance Mode.
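If preferred, the same step can be performed from an SSH session on the host rather than the vSphere Client. The following is an illustrative sketch using standard esxcli commands; the vSAN data-migration mode shown is an assumption appropriate for a 2-node cluster, where a full data evacuation is not possible:

```shell
# Place node 2 into Maintenance Mode from the ESXi shell.
# On a vSAN member host a vSAN data-migration mode must be supplied; with
# only two nodes, ensureObjectAccessibility preserves object access rather
# than attempting a full evacuation.
esxcli system maintenanceMode set --enable true --vsanmode ensureObjectAccessibility

# Confirm the host is now in Maintenance Mode
esxcli system maintenanceMode get
```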
VMware vSphere 7.0
VMware vSphere 8.0
Ensure the cleared ESXi host (node 2) is in Maintenance Mode before proceeding.
Break the existing VMware ESXi/vSAN cluster
Remove host from inventory
Figure 1 – Place ESXi host (node 2) into Maintenance Mode
Figure 2 – Disabling vSAN in Maintenance Mode
Figure 3 – Maintenance Mode VM warning
Figure 4 – Remove Host (node 2) from Inventory
2. REINSTALLING ESXi TO THE NEWLY CLEARED NODE 2
VMware ESXi 7.0.x install
VMware ESXi 8.0.x install
Figure 5 – ESXi Installer
Figure 6 – ESXi Installer VMFS overwrite
Figure 7 – ESXi Installer – Reboot on completion
After ESXi has been successfully reinstalled on node 2 and the host is manageable via an IP address, log into the ESXi Host Client, not via vCenter, to clear the old partitions on any VMware vSAN cache and capacity disks.
Go to Storage > Devices, select the disk, then select Clear partition table.
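The same clean-up can also be done from an SSH session on the host using partedUtil. This is an illustrative sketch; the naa.* device name below is a placeholder for your actual vSAN cache or capacity disk:

```shell
# Identify the vSAN cache and capacity devices
esxcli storage core device list | grep -i "Display Name"

# Inspect the existing partition table on a device (placeholder device name)
partedUtil getptbl /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxxxxxx

# Writing a fresh GPT label removes all existing partitions, including vSAN's.
# WARNING: this is destructive; double-check the device name first.
partedUtil mklabel /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxxxxxx gpt
```

Repeat for each vSAN cache and capacity disk on the host.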
Figure 8 – Clear vSAN cache and capacity drive partitions
Figure 9 – VSA deploy wizard without vSAN partitions cleared
If the vSAN partitions are not removed, the VSA deploy wizard will not present the disks as available for use, because it treats them as already consumed.
Add the redeployed ESXi host to the existing Datacenter. Note: do NOT add it to the cluster, as vSAN will be reinstalled on the host if it is added to the cluster.
Figure 10 – ESXi host added to the Datacenter, not the cluster
3. HOST CONFIGURATION AND SvSAN DEPLOYMENT ON NODE 2
With the host added to vCenter, configure virtual switches for the ESXi host, based on your organization’s network policies and requirements.
Deploy the StorMagic plugin to vCenter, if not already deployed.
https://stormagic.com/doc/svSAN/6-3-U1/en/Content/vSphere%20Plugin/Plugin_deploy_vsphere.htm
Figure 11 – Deploy SvSAN Plugin to vCenter
Deploy a StorMagic VSA to the newly cleared ESXi host (node 2):
https://stormagic.com/doc/svSAN/6-3-U1/en/Content/vsa-deploy-vs.htm
Figure 12 – Deploy a SvSAN VSA to the newly cleared ESXi host (node 2)
Figure 13 - VSA deployment wizard summary
Create a non-mirrored datastore
As per the documentation linked below, select the single StorMagic VSA and create an unmirrored datastore shared to the ESXi hosts in the cluster.
https://stormagic.com/doc/svSAN/6-3-U1/en/Content/datastore-create-vs.htm
Figure 14 - Datastore creation wizard
Figure 15 – Non-mirrored storage creation message
Figure 16 – Optionally enable caching
Figure 17 – Share the datastore to both, and any additionally desired, ESXi hosts
Figure 18 – Login to the ESXi hosts with the plugin
Figure 19 – Ensure the hosts are authenticated
Figure 20 – Datastore creation wizard summary
4. MIGRATE ALL VMs FROM NODE 1 TO NODE 2
Using VMware compute and storage vMotion, or other migration tools, migrate the VMs from node 1 to the newly reprovisioned ESXi host (node 2) and its newly presented SvSAN storage.
Figure 21 – VM Migration of both compute and storage
Figure 22 – Select the new SvSAN datastore to migrate the guest VMs to
After all VMs are migrated, validate that they are operational before proceeding.
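A quick validation can also be done from node 2's shell. These vim-cmd commands are illustrative; substitute a real VM ID from the first command's output for the placeholder:

```shell
# List all VMs registered on this host, with their VM IDs
vim-cmd vmsvc/getallvms

# Check the power state of a given VM (replace <vmid> with an ID from above)
vim-cmd vmsvc/power.getstate <vmid>
```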
5. CLEARING NODE 1 AND REINSTALLING ESXi
Follow the below process for the remaining ESXi host and cluster (node 1):
1. On the VMware Cluster, disable DRS, if in use.
2. On the VMware Cluster, disable HA, if in use.
3. Under the Cluster > vSAN Services, turn off VMware vSAN.
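For reference, the host-side view of vSAN membership can be checked with esxcli. These commands are illustrative and should normally only be needed if vCenter is unavailable, since the steps above perform the same operation through the vSphere Client:

```shell
# Show this host's current vSAN cluster membership
esxcli vsan cluster get

# Remove the host from the vSAN cluster (normally done via vCenter as above)
esxcli vsan cluster leave
```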
Then, repeat the steps outlined earlier in this guide for clearing and reinstalling ESXi, this time on node 1:
1. Place Host in Maintenance Mode.
2. Remove Host from vSAN cluster.
3. Reinstall ESXi on this host.
4. Clear the old VMware vSAN partitions from the cache and capacity drives.
5. Add this ESXi host to the Datacenter, and provision a new cluster.
Figure 23 – Exit the Maintenance Mode
6. Drag both ESXi hosts into the new cluster with vSAN disabled.
7. Configure virtual networking.
Figure 24 – Example networking utilized for this guide
8. Deploy a StorMagic VSA to the remaining host.
https://stormagic.com/doc/svSAN/6-3-U1/en/Content/vsa-deploy-vs.htm
Figure 25 – hosts in a new cluster, running VSAs
Ensure ESXi or vCenter credentials for all ESXi hosts are entered into each VSA. See the article below for more information:
https://support.stormagic.com/hc/en-gb/articles/5971578201373-SvSAN-and-ESXi-Credentials
Figure 26 – Validate VMware credentials on any/all SvSAN VSAs
If DNS is in use on the cluster, change the ESXi hostname references to IP addresses so the VSAs do not depend on name resolution.
6. CONVERT THE SvSAN NON-MIRRORED DATASTORE TO A MIRRORED DATASTORE(S)
Via the VSA1 WebGUI, add the mirror to VSA2, selecting your SvSAN witness of choice:
https://stormagic.com/resources/data-sheets/svsan-witness-data-sheet/
https://stormagic.com/doc/svsan/6-3-U1/en/Content/target.htm#convert-simple-target
https://stormagic.com/doc/svsan/6-3-U1/en/Content/plex.htm
Figure 27 – Mirror the target
Mount the datastore to the newly deployed ESXi host (node 1).
Figure 28 – Mount the datastore to the newly deployed host (node 1)
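After mounting through the plugin, a storage rescan on node 1 should make the VMFS datastore visible. A quick check from the host's shell, for illustration:

```shell
# Rescan all storage adapters so the newly mounted iSCSI datastore appears
esxcli storage core adapter rescan --all

# Confirm the SvSAN-backed VMFS datastore is listed as mounted
esxcli storage filesystem list
```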
Figure 29 – Selecting any and all additional hosts to mount the volume
Ensure path availability for all hosts to both VSAs:
Figure 30 – iSCSI software adapter ensuring path availability to both VSAs
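Path availability can also be verified from each host's shell. These commands are illustrative; the naa.* device name is a placeholder for the mirrored datastore's device:

```shell
# List active iSCSI sessions -- expect sessions to both VSA target portals
esxcli iscsi session list

# List paths for the mirrored datastore's device; there should be at least
# one path per VSA (placeholder device name)
esxcli storage core path list -d naa.xxxxxxxxxxxxxxxxxxxx
```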
7. ENABLE VMWARE HIGH AVAILABILITY AND OTHER HYPERVISOR FEATURES
With the Datastore now mirrored, enable VMware High Availability and any additional hypervisor features required by your organization.
FURTHER HELP
If you require additional assistance in migrating from VMware vSAN to StorMagic SvSAN, please contact support@stormagic.com.