The following runbook/deployment guide was written for the server configuration below.
Video reference for the HPE ProLiant MicroServer Gen11 servers: an SvHCI install video on similar HPE hardware (2x HPE DL325 Gen11), created by StorMagic Technical Services: https://youtu.be/DqWqOnAI82w
Expected Server Configuration:
Table of Contents
- HPE unboxing and Hardware Setup
- Network(s) Configuration
- Configuring iLO / updating SPP
- Installing StorMagic SvHCI
- Post-Installation Configuration
- SvHCI Cluster Creation
- Post-Deployment Validation
- Validating Redundancy and Uptime
1. HPE unboxing and Hardware Setup
1.1 Unboxing the Server
- Verify all components are present and undamaged.



1.2 Installing Additional Hardware and Connecting Peripherals
- Install RAM, SSDs/HDDs, and additional NICs if needed.
- Connect power cables.
- Attach a monitor, keyboard, and mouse for initial setup or if not using iLO.
2. Network(s) Configuration
Network Traffic types
SvHCI presents synchronously mirrored, block-based disk devices, providing a replication factor of 2 (RF2) for the virtual machine (VM) disks.
SvHCI has two network traffic types, plus a per-interface mirror traffic designation:
- Management – used for management access to the node
- iSCSI – used by iSCSI initiators to access the iSCSI storage targets
- Mirror Preferred/Failover/Excluded – designates which interfaces the nodes use for synchronization and live migration traffic
You can define which traffic type is allowed over each of your network interfaces.
In addition, you can define which interfaces are allowed to be used for mirror traffic, including specifying preferred interfaces.
The use of back-to-back or direct attach (DAC) cables frees up switch ports, enables load balancing, and potentially enables 10 Gb speed without the need for a 10 Gb switch.
Note the back-to-back links are on two logical vSwitches and two different IP subnets. This is important to prevent the host from routing IP traffic out of a NIC that cannot reach the destination over the directly connected cabling.
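As an illustration of that subnet separation, the short Python sketch below checks that two example back-to-back link subnets do not overlap and that the node addresses fall inside them. The link names and addresses are placeholders, not values prescribed by this guide.

    # Illustrative sketch only: checking that two example back-to-back (DAC) mirror
    # links sit on separate, non-overlapping IP subnets. All addresses and link
    # names are placeholders, not values prescribed by this guide.
    import ipaddress

    links = {
        "SAN1 (node1 <-> node2, first DAC link)": ("172.16.1.0/24", ["172.16.1.11", "172.16.1.12"]),
        "SAN2 (node1 <-> node2, second DAC link)": ("172.16.2.0/24", ["172.16.2.11", "172.16.2.12"]),
    }

    networks = []
    for name, (subnet, hosts) in links.items():
        net = ipaddress.ip_network(subnet)
        networks.append((name, net))
        for host in hosts:
            # Each node address must belong to the subnet assigned to its link.
            assert ipaddress.ip_address(host) in net, f"{host} is outside {subnet} ({name})"

    # The two direct-attach links must not share or overlap a subnet, otherwise a node
    # may try to route traffic out of an interface that cannot reach the peer.
    (name_a, net_a), (name_b, net_b) = networks
    assert not net_a.overlaps(net_b), f"{name_a} and {name_b} overlap"
    print("Back-to-back link subnets are distinct and addressing is consistent.")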
- SvHCI fully supports back-to-back or direct attach cabling, which is preferred for the storage/infrastructure network.
- Verify the network cabling is set up per best practices.
- Power on the server if this has not already been done.
- 1x Broadcom BCM5719 1Gb quad port (QP) Base-T NIC
- 1x Broadcom BCM57416 10Gb dual port (DP) Base-T NIC
2.1 Rear view overview
Example cabling for this simplified setup.
3. Configuring iLO / Updating Firmware via the Service Pack for ProLiant (SPP) ISO
The iLO configuration below is required to allow remote access and remote mounting of ISO files for installation.
Configure the iLO port to access the iLO console through a web browser. The iLO console provides out-of-band remote server management and access to advanced management options.
The server is pre-configured with a default, static IP address to access iLO (192.168.0.120).
The iLO default username and password are found on the label attached to the top of the chassis.
The username is Administrator and the password is an eight-character alphanumeric string.
Password guidelines | HPE iLO 6 Security Technology Brief
- Browse to the default iLO IP address (192.168.0.120).
- Log in to iLO with the default credentials from the chassis label (username Administrator, eight-character alphanumeric password).
- Select the iLO Dedicated Network Port tab and set a static IP address for remote management (a quick reachability check of the new address is sketched after this list).
- Apply an iLO Advanced license to allow remote access and remote ISO installation.
- Navigate to Administration > Licensing and enter the license key. Once licensed, "iLO Advanced" should be shown.
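Once the static address and license are applied, an optional way to confirm the new iLO address responds over HTTPS is to query the Redfish service root that HPE iLO exposes, as in the sketch below. This is run from an administrative workstation; the IP address is a placeholder and the requests library must be installed.

    # Optional reachability check of the newly assigned static iLO address over HTTPS,
    # using the Redfish service root exposed by HPE iLO. The IP address below is a
    # placeholder; requests must be installed (pip install requests).
    import requests
    import urllib3

    urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)  # iLO ships with a self-signed certificate

    ILO_IP = "192.168.10.120"  # replace with the static address you assigned

    resp = requests.get(f"https://{ILO_IP}/redfish/v1/", verify=False, timeout=10)
    resp.raise_for_status()
    root = resp.json()
    print("iLO reachable, Redfish version:", root.get("RedfishVersion", "unknown"))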
The SPP (Service Pack for ProLiant) contains updates for the server BIOS, drivers, and firmware, as well as iLO.
As a best practice for firmware and security reasons, it is advised to upgrade to the latest SPP; this process takes about one hour to complete. Complete the following from the iLO interface:
- Download the latest SPP from HPE for Gen11 servers (check the generation of the server in your setup first) - HPE Service Pack for ProLiant (SPP) Home Page
- Click Download SPP, then on the next page click "Obtain Software".
- Log in with the deployment user.
- Download the relevant version
- From iLO, open the HTML5 console (bottom left)
- From the console window, mount the SPP ISO.
- Click Disk Icon > CD/DVD > Local ISO > ISO file
- At this point the ISO will sometimes load automatically; if not, reboot the server, open the boot menu, and select the "iLO ISO" option.
- Follow the instructions to install the SPP; this can take 30 minutes to an hour, and the server will reboot a couple of times.
- Once SPP is updated, reboot the server
- On the boot screen, when given the option, enter the System Utilities menu by pressing F9.
Storage Controller Layout:
This MicroServer Gen11 has two SATA controller options.
- SATA AHCI – this is what we will use for this SvHCI setup.
- Intel VROC – this is also an option.
Intel VROC enables RAID configuration over the internal disks without having a full RAID card in the system. This provides internal node redundancy on the drives through the use of RAID 1 or 10, or RAID 5 (with an additional feature/license).
SvHCI mirrors the storage between the nodes, providing a replication factor of 2 (RF2) similar to a network RAID 1.
As shown below, SvSAN/SvHCI also offers software RAID 0, 1, or 10 as an alternative to utilizing VROC.
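To illustrate how these layers combine, the sketch below works through the usable-capacity arithmetic for a hypothetical node using node-local RAID 10 plus the RF2 node-to-node mirror. The disk count and size are placeholders, not the configuration used in this guide.

    # Illustrative capacity arithmetic only (no product API involved): how node-local
    # software RAID and the RF2 node-to-node mirror each keep a second copy of the data.
    # Disk count and size are placeholders.
    disks_per_node = 4
    disk_size_tb = 1.92                      # e.g. 1.92 TB SATA SSDs

    raw_per_node = disks_per_node * disk_size_tb

    # Node-local RAID 10 (or RAID 1) keeps a second copy of each block inside the node.
    usable_per_node = raw_per_node / 2

    # RF2 mirroring keeps another full copy on the partner node, so the usable capacity
    # of the two-node cluster equals the usable capacity of a single node.
    cluster_usable = usable_per_node

    print(f"Raw capacity per node:    {raw_per_node:.2f} TB")
    print(f"Usable after RAID 10:     {usable_per_node:.2f} TB per node")
    print(f"Cluster usable under RF2: {cluster_usable:.2f} TB")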
If utilizing VROC, configure the RAID via the BIOS System Utilities.
If utilizing SvHCI software RAID, set the Storage Controller Options to SATA AHCI mode.
- System Utilities → System Configuration → BIOS/Platform Configuration → Storage Options → SATA Controller Options.
- Showing the SATA AHCI Support Controller being selected.
This example configuration uses SATA AHCI and SvHCI software RAID
- Showing the Intel VROC SATA Support controller being selected.
- Storage device information.
- Take note: in this example the first device is the one to use for the SvHCI OS.
- Back to System Utilities.
- Proceed to next section.
Note: True up – if building a cluster, please ensure both nodes match the configuration to this point.
4. Installing StorMagic SvHCI
Common installation issues: firewall rules / closed ports. See SvHCI 1.x.x - Port Numbers – StorMagic.
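A quick way to spot closed ports between the nodes (or across an intervening firewall) is a simple TCP connect test like the sketch below. The peer address is a placeholder and the list holds only two well-known example ports; take the authoritative port list from the article linked above.

    # Pre-installation reachability check across the management network/firewall. The
    # peer address is a placeholder and the port list below holds only two well-known
    # examples; take the authoritative list from the port-numbers article above.
    import socket

    PEER = "10.0.0.12"                               # placeholder: the other node's management IP
    PORTS = {443: "HTTPS (web management)", 3260: "iSCSI"}

    for port, label in PORTS.items():
        try:
            with socket.create_connection((PEER, port), timeout=3):
                print(f"{PEER}:{port} ({label}) reachable")
        except OSError as exc:
            print(f"{PEER}:{port} ({label}) NOT reachable: {exc}")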
- Download SvHCI ISO from StorMagic. SvHCI Installer ISO Download – StorMagic
- Select the Disk Icon > CD/DVD.
- Select Local ISO and choose the SvHCI ISO.
- Start / Restart the server.
Optionally, create a bootable USB installer instead: SvHCI Create a bootable USB installer disk with Rufus – StorMagic
- Reboot, pressing F11 for the boot menu.
- Scroll down to the iLO Virtual CD-ROM.
- Follow the steps in the wizard to set a static IP, DNS, gateway, etc.
- Select Manual Install.
- Select 1 to install SvHCI
- Select the drive you want to install SvHCI to.
- Select the adapter to use for your Management Switch/interface. This will enable web based connectivity to the node.
- Enter the rest of the necessary information needed for deployment.
Refer to the Storage Device Information screenshot from above.
Please ensure the correct desired drive is selected for installation. This is destructive to any existing data on the selected disk.
Note: Pressing Ctrl+C during the install will restart the installation wizard should a typo or similar occur.
Note: In the prior screenshot, the installer warns that all data will be lost and asks whether you wish to continue. When Y is entered it starts installing prebuilt GRUB images. If old disks are being reused, or a mistake was made and a new install is needed on disks that already contain data, the installer asks the same question and then scrubs the disks before installing the images. This is expected and very similar to, say, an Ubuntu install asking whether you wish to erase the disks and install the OS.
- Completed installation, console view upon initial boot post installation.
Note: True up – if building a cluster, please ensure both nodes match the configuration to this point.
5. Post-Installation Configuration
- Log in to the SvHCI web interface (https://<Node-IP>).
- Username: admin, Password: password (the default given on installation; you will be required to change it).
- Check to accept the licensing agreement.
- Review and select Next.
- Apply a SvHCI license key.
When applying a license key, the device will reach out to the StorMagic license server for the node to receive the proper license information. If the environment is being set up on a closed site or doesn't yet have outside access, the keys can be applied via an offline method. The steps for offline licensing can be found at the link below.
SvHCI & SvSAN Online and Offline Licensing – StorMagic
- Review that the updated information the node received is correct per your purchase.
- Apply a Hostname and Domain.
- Create a new, strong, secure password, replacing the default.
- Confirm and select Finish. (The password change triggers the need to log in again).
Note: True up – if building a cluster, please ensure both nodes match the configuration to this point.
- Login to the web interface again.
- Username: admin Password: <Your new password>
5.1 SvHCI Networking:
SvHCI Configure networking – StorMagic
- StorMagic SvHCI web interface upon signing back in for the first time with the new settings.
- Network Information on the Management interface that was created during the installation.
- Create a Port Group on the LAN vSwitch for your VMs.
- Apply any necessary VLANs.
- Click apply.
- Create another vSwitch for the first SAN (Storage Area Network); this will be used for the mirror synchronization and VM live migration traffic.
- Select Create (bottom left) to add an interface on this vSwitch for the sync/mirroring traffic.
- Give it a name, uncheck DHCP, and supply an IP address and subnet mask.
- Supply the necessary VLAN if using VLANs.
- Leave the Management box unchecked and check the iSCSI box, which will automatically check the second box as well.
- Set the Mirror Traffic Policy to "Preferred". This makes this interface the primary route for storage traffic, with the management network used only as a worst-case failover.
- Click Create.
- Create another vSwitch for the second SAN (Storage Area Network); this will also be used for the mirror traffic.
- Create a network interface on the SAN2 vSwitch for the storage traffic.
- Give it a name, uncheck DHCP, and supply an IP address and subnet mask.
- Supply the necessary VLAN if using VLANs.
- Leave the Management box unchecked and check the iSCSI box, which will automatically check the second box as well.
- Set the Mirror Traffic Policy to "Preferred". This makes this interface the primary route for storage traffic, with the management network used only as a worst-case failover.
- Click Create.
- This completes a basic but best-practice networking setup for running StorMagic SvHCI.
5.2 SvHCI Storage Pools.
SvHCI & SvSAN Pool licensing – StorMagic
- Create a storage pool
- New Pool being created, showing the direct device mapping and also the performance as it's building the pool.
Note: True up – if building a cluster, please ensure both nodes match the configuration to this point.
At this stage you will want to ensure you have two servers running SvHCI. The next stages include deploying a StorMagic Witness and creating the cluster, so it's a good point to check that we have two standalone hosts ready to form a cluster.
5.3 SvHCI Witness Installation:
Witness for mirrored targets
To provide full data integrity in case of failure of a node/mirrored target, it is recommended to use a witness. A witness holds mirror state and acts as arbiter when the two sides of the mirror are uncertain as to which has the correct state. A witness is a third-party machine with the StorMagic Witness Service deployed to it. The Witness Service can be deployed to either a Windows or Linux OS on either a physical machine, or a virtual machine (off the HA cluster). The witness deployment procedure varies depending on the machine that you chose to host the Witness service. When you use a witness, your mirrored targets should use the Majority mirror isolation policy.
Note: Please refer to the link at our support portal to find the procedure that corresponds to your environment: StorMagic Witness: Deploy, Install, Upgrade and Migrate.
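To make the Majority policy concrete, the toy sketch below (not StorMagic code) shows why two nodes plus a witness give three votes, so a surviving side can still form a majority and keep a mirrored target online.

    # Toy illustration (not StorMagic code) of majority arbitration with two nodes
    # plus a witness: three votes in total, so a strict majority needs at least two.
    def has_majority(votes_reachable: int, total_votes: int = 3) -> bool:
        """Return True when the reachable members form a strict majority."""
        return votes_reachable > total_votes / 2

    scenarios = {
        "Both nodes and the witness up": 3,
        "One node down (survivor still sees the witness)": 2,
        "Node isolated from both its peer and the witness": 1,
    }

    for name, reachable in scenarios.items():
        state = "mirror stays writable" if has_majority(reachable) else "side pauses to protect data"
        print(f"{name}: {reachable}/3 votes -> {state}")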
6. SvHCI Cluster Creation:
SvHCI Cluster Creation – StorMagic
- Select the Discovery tab to view the cluster details.
- Select the Join button.
- Select the other host to add and the Witness to be used.
- Click Apply.
- New cluster view showing two SvHCI hosts and a Witness.
7. Post-Deployment Validation
7.1 Network Connectivity Testing
Test remote access:
- Access iLO: open https://<host's static iLO IP>.
- Access SvHCI: open https://<host's static SvHCI management IP>.
- Access any created VMs to verify access over the VM network / management network.
Test Connectivity:
- You can utilize the onboard Ping tool to run different connectivity tests.
- You can utilize the onboard Traceroute tool to run further connectivity tests. A scripted sweep from an administrative workstation is also sketched below.
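The following is a hedged sketch of such a sweep, checking the management and iLO addresses plus the gateway from an administrative workstation. All addresses are placeholders; note that back-to-back SAN links are reachable only node-to-node, so test those with the onboard Ping tool instead.

    # Scripted connectivity sweep from an administrative workstation (Linux/macOS ping
    # syntax; use "-n"/"-w" on Windows). All addresses are placeholders. Back-to-back
    # SAN links are reachable only node-to-node, so test those with the onboard Ping tool.
    import subprocess

    TARGETS = {
        "node1 management": "10.0.0.11",
        "node2 management": "10.0.0.12",
        "node1 iLO": "10.0.0.21",
        "node2 iLO": "10.0.0.22",
        "default gateway": "10.0.0.1",
    }

    for name, ip in TARGETS.items():
        # "-c 2" sends two echo requests; "-W 2" waits up to two seconds per reply.
        result = subprocess.run(["ping", "-c", "2", "-W", "2", ip], capture_output=True)
        status = "OK" if result.returncode == 0 else "FAILED"
        print(f"{name} ({ip}): {status}")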
Within the SvHCI dashboard there are some performance testing tools under the Networking tab. These tools are basic but can help verify the setup or test for networking issues.
- Network Speed Test.
This is set up with one side acting as a server and the other side as the client; it is then used to test the speed over a specific network interface. The next three screenshots show the complete test: server, client, and results.
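Where an extra, tool-independent check is wanted, the sketch below is a minimal socket-based throughput test (it is not the built-in SvHCI speed test). The port, payload size, and transfer volume are placeholders.

    # Minimal socket-based throughput sketch (not the built-in SvHCI speed test).
    # Run "python net_speed.py server" on one machine and
    # "python net_speed.py client <server-ip>" on the other.
    # The port, chunk size and transfer volume are placeholders.
    import socket
    import sys
    import time

    PORT = 5201            # placeholder TCP port
    CHUNK = 1024 * 1024    # 1 MiB payload per send
    CHUNKS = 512           # number of payloads the client sends (~512 MiB total)

    def server() -> None:
        with socket.socket() as srv:
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind(("0.0.0.0", PORT))
            srv.listen(1)
            conn, addr = srv.accept()
            received, start = 0, time.time()
            with conn:
                while True:
                    data = conn.recv(CHUNK)
                    if not data:
                        break
                    received += len(data)
            secs = time.time() - start
            print(f"Received {received / 2**20:.0f} MiB from {addr[0]} at "
                  f"{received * 8 / secs / 1e9:.2f} Gbit/s")

    def client(host: str) -> None:
        payload = b"\0" * CHUNK
        start = time.time()
        with socket.create_connection((host, PORT)) as conn:
            for _ in range(CHUNKS):
                conn.sendall(payload)
        secs = time.time() - start
        print(f"Sent {CHUNKS} MiB in {secs:.1f}s ({CHUNKS * CHUNK * 8 / secs / 1e9:.2f} Gbit/s)")

    if __name__ == "__main__":
        if len(sys.argv) >= 2 and sys.argv[1] == "server":
            server()
        elif len(sys.argv) >= 3 and sys.argv[1] == "client":
            client(sys.argv[2])
        else:
            print("usage: net_speed.py server | net_speed.py client <server-ip>")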
- Network IO Path test
This utility enables the setting of specific parameters to run a test that provides results for network throughput, IOPS, and latency. The first screenshot is the setup and the second is the results.
7.2 Verify SvHCI Storage Functionality.
- Upload ISOs to start building VMs. SvHCI ISO or RAW Disk img Upload – StorMagic
- Create Windows guest VMs. SvHCI Windows VM Creation – StorMagic
- Create Linux guest VMs. SvHCI Linux Guest VM Creation – StorMagic
Resources for working with guest VMs & SvHCI.
SvHCI Guest VM Resource Handling – StorMagic
VM disk (VMDK/VHD) Copy, Conversion & Import of a Linux guest from VMware to SvHCI – StorMagic
SvHCI Create a VMDK/VHD>RAW Disk Converter Virtual Machine (VM) – StorMagic
8. Validating Redundancy and Uptime
8.1 Ensuring High Availability
Following the installation of the HPE ProLiant MicroServer Gen11 with SvHCI, it is best practice to test the environment to ensure everything was configured correctly and the configuration performs as expected. The following evaluator's guide walks through steps and videos to test and validate the configuration.
- StorMagic SvHCI Evaluators Guide – StorMagic
- High Availability (HA) Virtual Machine (VM) Failover Test
8.2 Monitoring, Maintenance and DRP (Backups).
- Set up one of the options for reporting to receive event notifications (a mail-relay connectivity sketch follows this list). https://support.stormagic.com/hc/en-gb/articles/5203720099997-SvHCI-SvSAN-Reporting-Notifications-Alerts
- Monitor performance via SvHCI Dashboard. From the Targets tab select statistics.
- View the Statistics page for your targets and verify there are no real concerns with performance.
- Check cluster health often: look for synchronization status, proper sessions, and networking issues.
- Update SvHCI and the BIOS as needed, staying up to date with the firmware that both HPE and StorMagic recommend.
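If e-mail is the chosen notification option, the sketch below verifies the SMTP relay accepts mail from the management network before configuring alerts in the SvHCI UI. The relay host, port, sender, and recipient are all placeholders.

    # Optional sketch: verify the SMTP relay used for e-mail notifications is reachable
    # from the management network before configuring alerts in the SvHCI UI.
    # Relay host, port, sender and recipient are all placeholders.
    import smtplib
    from email.message import EmailMessage

    RELAY, PORT = "smtp.example.local", 25

    msg = EmailMessage()
    msg["From"] = "svhci-alerts@example.local"
    msg["To"] = "ops-team@example.local"
    msg["Subject"] = "SvHCI notification relay test"
    msg.set_content("Test message sent while validating the SvHCI alerting setup.")

    with smtplib.SMTP(RELAY, PORT, timeout=10) as smtp:
        smtp.send_message(msg)
    print("Relay accepted the test message.")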
The below screenshots show the firmware upgrade locations. However, when updating anything in a production SvHCI environment, the following guide should be used to help ensure the updates happen while keeping the guest VMs operational: SvHCI Upgrade SvHCI non-disruptively – StorMagic
- SvHCI Systems tab > Upgrade Firmware.
- SvHCI System Firmware upgrade page.
Resources for Setting up backups with Veeam.
Agent Based Veeam Backup of Guest VMs on StorMagic SvHCI – StorMagic
Agent Based Veeam Restore Process of Guest VMs on StorMagic SvHCI – StorMagic
See Also
SvHCI Installation – StorMagic