Video references for the HPE DL145 Gen11 servers:
- SvHCI install and cluster configuration video on similar HPE hardware (2x HPE DL145 Gen11), created by StorMagic Technical Services: SvHCI on HPE DL145 (Install and Cluster Configuration)
- SvHCI install video on similar HPE hardware (2x HPE DL325 Gen11), created by StorMagic Technical Services: https://youtu.be/DqWqOnAI82w
Expected Server Configuration:
Table of Contents
- HPE unboxing and Hardware Setup
- Network(s) Configuration
- Configuring iLO / updating SPP
- Installing StorMagic SvHCI
- Post-Installation Configuration
- SvHCI Cluster Creation
- Post-Deployment Validation
- Validating the Build for Redundancy and Uptime
1. HPE unboxing and Hardware Setup
1.1 Unboxing the Server
- Verify all components are present and undamaged.
1.2 Installing Additional Hardware and Connecting Peripherals
- Install RAM, SSDs/HDDs, additional NICs if needed.
- Connect power cables.
- Attach a monitor, keyboard, and mouse for initial setup or if not using iLO.
2. Network(s) Configuration.
Network Traffic types
SvHCI presents synchronously mirrored, block-based disk devices, providing a replication factor of 2 (RF2) for the virtual machine (VM) disks: each VM disk is written to both nodes, so usable capacity is roughly half of the combined raw capacity.
SvHCI has two network traffic types:
- Management – used for management traffic, such as access to the node's web interface
- iSCSI – used by iSCSI initiators to access the iSCSI storage targets
You can define which traffic types are allowed over each of your network interfaces.
In addition, you can define which interfaces are used for mirror traffic (node synchronization and live migration), marking each as Preferred, Failover, or Excluded.
The use of back-to-back or direct attach (DAC) cables frees up switch ports, enables load balancing, and potentially enables 10 Gb speed without the need for a 10 Gb switch.
Note that the back-to-back links are placed on two separate logical vSwitches and two different IP subnets. This prevents the host from routing IP traffic out of a NIC that cannot reach the intended destination over the directly connected cabling.
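As a quick illustration (a minimal sketch using assumed placeholder addressing, e.g. 172.16.1.0/24 and 172.16.2.0/24, which are not values mandated by SvHCI), the following Python snippet checks that each back-to-back link has both endpoints in one subnet and that the two links do not share a subnet:

```python
# Minimal sketch: verify the two back-to-back mirror links use separate subnets.
# The addresses below are example placeholders, not values required by SvHCI.
import ipaddress

links = {
    "SAN1 (node1 <-> node2)": ("172.16.1.1/24", "172.16.1.2/24"),
    "SAN2 (node1 <-> node2)": ("172.16.2.1/24", "172.16.2.2/24"),
}

networks = []
for name, (a, b) in links.items():
    net_a = ipaddress.ip_interface(a).network
    net_b = ipaddress.ip_interface(b).network
    # Both ends of a single back-to-back link must share one subnet...
    assert net_a == net_b, f"{name}: endpoints are not in the same subnet"
    networks.append(net_a)

# ...and the two links must be on different subnets from each other, otherwise
# the host may try to route traffic out of a NIC that cannot reach the peer.
assert len(set(networks)) == len(networks), "Back-to-back links share a subnet"
print("Back-to-back links are correctly split across:", networks)
```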
- SvHCI fully supports back-to-back or direct attach cabling; this is the preferred approach for the storage/infrastructure network.
- Verify the network cabling is set up per best practices.
- Power on the server if this has not already been done.
- 1x Broadcom BCM5719 1Gb quad-port (QP) Base-T NIC
- 1x Broadcom BCM57414 10Gb/25Gb SFP28 NIC
3. Configuring iLO / updating firmware via the Service Pack for ProLiant (SPP) ISO
iLO is required to allow remote access and remote mounting of ISO files for installation.
Configure the iLO port to access the iLO console through a web browser. The iLO console provides out-of-band remote server management and access to advanced management options.
The server is pre-configured with a default, static IP address to access iLO (192.168.0.120).
The iLO default username and password are found on the label attached to the top of the chassis.
The username is Administrator and the password is an eight-character alphanumeric string.
Password guidelines | HPE iLO 6 Security Technology Brief
- Browse to the default iLO IP address (192.168.0.120).
- Log in to iLO with the default credentials from the chassis label. The username is Administrator and the password is an eight-character alphanumeric string (usually the SKU of the server).
- Select the iLO Dedicated Network Port tab and set a Static IP address for remote management.
- Apply an iLO advanced license to allow remote access and remote ISO installation.
- Navigate to Administration > Licensing and enter the license key. Once licensed, “iLO Advanced” should be shown.
The SPP (Service Pack for ProLiant) updates the server BIOS, drivers, and firmware, as well as iLO. For security reasons it is best to upgrade to the latest version – the process takes about one hour to complete. Complete the following from the iLO interface:
Download the latest SPP from HPE for the Gen11 server (check the generation of the server in your setup first): HPE Service Pack for ProLiant (SPP) Home Page
- Follow the instructions to install the SPP; this can take 30 minutes to an hour and the server will reboot a couple of times.
- Click Download SPP, then on the next page select “Obtain Software”.
- Log in with the deployment user.
- Download the relevant version
- From iLO, open the HTML5 console (bottom left)
- Mount the SPP ISO via iLO Virtual Media, USB, or DVD.
- Reboot and boot from the SPP media.
- Select Automatic Firmware Update and let the system update.
- Once SPP is updated, reboot the server
- On the boot screen, when given the option, enter the System Utilities menu by pressing F9.
- System Utilities → System Configuration → HPE MR416i-p Gen11
3.1 Configuring the Logical Volumes.
It is recommended not to use HPE SSD Smart Path when using RAID 5 or 6: SvSAN and HPE SSD Smart Path – StorMagic
This server model has two storage controllers on board: one for boot and one for capacity.
- System Utilities → System Configuration → HPE MR416i-p Gen11 → Main Menu
- Config Management → Create Logical Drive.
- Select the RAID level desired.
- Select the button that says Select Drives.
- Apply the changes. You will be asked to confirm, and it may look as though the drive is being created; it is not yet.
- Give the Logical Drive a name, verify the size and other settings desired.
- Set the Read Policy to Read Ahead and the Write Policy to Write Back.
- Enable the drive cache so the controller can use it ahead of the storage to assist performance.
- Click Save configuration at the top.
- Confirm yes you want to create the volume.
- The new logical volume (LV) that was created is now shown.
- Return to System Utilities.
- Proceed to next section.
Note: True up – if building a cluster, please match both nodes' configuration up to this point.
4. Installing StorMagic SvHCI
Common installation issues: firewall rules / closed ports. See SvHCI 1.x.x - Port Numbers – StorMagic.
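If firewall problems are suspected, a basic TCP reachability check from a machine on the management network can help narrow things down before starting the install. This is only a hedged sketch: the peer address and ports (443 for HTTPS and 3260 for standard iSCSI) are illustrative assumptions, so consult the port-numbers article above for the authoritative list.

```python
# Minimal sketch: test TCP reachability of a few ports on the peer node.
# PEER and PORTS are placeholders; use the SvHCI port-numbers article for the real list.
import socket

PEER = "192.168.1.12"   # management IP of the other node (placeholder)
PORTS = [443, 3260]     # examples: HTTPS web interface and the standard iSCSI port

for port in PORTS:
    try:
        with socket.create_connection((PEER, port), timeout=3):
            print(f"{PEER}:{port} reachable")
    except OSError as err:
        print(f"{PEER}:{port} NOT reachable ({err})")
```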
- Download SvHCI ISO from StorMagic. SvHCI 1.3.0 - SvHCI Installer ISO – StorMagic
- Select the disk icon > CD/DVD.
- Select Local ISO and choose the SvHCI ISO.
- Start / Restart the server.
Optionally: SvHCI Create a bootable USB installer disk with Rufus – StorMagic
- On the reboot select F11 for the boot menu.
- Move down to the iLO Virtual CD-ROM.
- Follow the steps in the wizard to set a static IP, DNS, gateway, etc.
- Select Manual Install.
- Select 1 to install SvHCI V.x.x.x
- Select the drive you want to install SvHCI on. Use the boot controller for the OS install.
- Select the adapter to use for the management vSwitch that will be created during installation to allow remote access.
- Enter the rest of the necessary information needed for deployment.
Note: Pressing Ctrl+C during the install will restart the installation from the beginning if needed.
Note: In the prior screenshot the installer warns that all data will be lost and asks whether to continue. When Y is entered, it starts installing the prebuilt GRUB images. If old disks are being reused, or a mistake was made and a new install is needed on disks that already contain data, the installer asks the same question, scrubs the disks, and then installs the images. This is expected and very similar to an Ubuntu install asking whether to erase the disks and install the OS.
- Installation complete: console view on the initial boot after installation.
Note: True up – if building a cluster, please match both nodes' configuration up to this point.
5. Post-Installation Configuration
- Log in to the SvHCI web interface (https://<Node-IP>).
- Username: admin, Password: password (the default set at installation, to be changed).
- Check the box to accept the licensing agreement.
- Review and select Next.
- Apply an SvHCI license key.
When a license key is applied, the node reaches out to the StorMagic license server to receive the proper license information. If the environment is being set up on a closed site, or does not yet have outside access, keys can be applied via an offline method. The steps for offline licensing can be found at the link below.
SvHCI & SvSAN Online and Offline Licensing – StorMagic
- Review that the updated information the node received is correct per your purchase.
- Apply a Hostname and Domain.
- Create a new, strong, secure password, replacing the default one created during installation.
- Confirm and select Finish. (The password change triggers the need to log in again).
Note: True up – if building a cluster, please match both nodes' configuration up to this point.
- Login to the web interface again.
- Username: admin Password: <Your new password>
5.1 SvHCI Networking:
SvHCI Configure networking – StorMagic
- StorMagic SvHCI web interface upon signing back in for the first time with the new settings.
- Network Information on the Management interface that was created during the installation.
- Create a Port Group on the LAN vSwitch for your VMs.
- Apply any necessary VLANs.
- Click apply.
- Create another vSwitch for the first SAN (Storage Area Network); this will be used for the mirror traffic.
- Select Create (bottom left) to add an interface on this vSwitch for the sync/mirroring traffic.
- Give it a name, uncheck DHCP, and supply an IP address and subnet mask.
- Supply the necessary VLAN if using VLANs.
- Leave the Management box unchecked and check the iSCSI box, which will automatically check the second box as well.
- Set the Mirror Traffic Policy to "Preferred". This makes it the primary route for the storage traffic; the management network will only be used as a failover in the worst case.
- Click Create.
- Create another vSwitch for the second SAN (Storage Area Network); this will also be used for the mirror traffic.
- Create a network interface on the SAN2 vSwitch for the storage traffic.
- Give it a name, uncheck DHCP, and supply an IP address and subnet mask.
- Supply the necessary VLAN if using VLANs.
- Leave the Management box unchecked and check the iSCSI box, which will automatically check the second box as well.
- Set the Mirror Traffic Policy to "Preferred". This makes it the primary route for the storage traffic; the management network will only be used as a failover in the worst case.
- Click Create.
- This completes a basic, best-practice networking setup for running StorMagic SvHCI.
5.2 SvHCI Storage Pools:
SvHCI & SvSAN Pool licensing – StorMagic
- Storage Pool view. Notice that the pool meant for the shared storage has a generated generic name starting with a P. This means the pool was created during deployment when the storage was handed up raw.
- Select the pool with the generic name starting with P and select Edit Pool.
- Give it a new name and select apply.
- New Storage Pool view with a Boot pool (Local DS) and a Shared Pool for the mirroring of targets.
Note: True up – if building a cluster, please match both nodes' configuration up to this point.
At this stage you will want to ensure you have two servers running SvHCI. The next stages include deploying a StorMagic Witness and creating the cluster, so this is a good point to check that both single hosts are ready to form a cluster.
5.3 SvHCI Witness Installation:
Witness for mirrored targets
To provide full data integrity in case of failure of a mirrored target, it is recommended to use a witness. A witness holds mirror state and acts as arbiter when the two sides of the mirror are uncertain as to which has the correct state. A witness is a third-party machine with the StorMagic Witness Service deployed to it. The Witness Service can be deployed to either a Windows or Linux OS on either a physical machine, or a virtual machine (off the HA cluster). The witness deployment procedure varies depending on the machine that you chose to host the Witness service. When you use a witness, your mirrored targets should use the Majority mirror isolation policy.
Note: Please refer to the link on our support portal to find the procedure that corresponds to your environment: StorMagic Witness: Deploy, Install, Upgrade and Migrate.
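Conceptually, the witness simply provides a third vote so that one side of the mirror can always establish a majority when the other side is unreachable. The sketch below only illustrates that majority logic under assumed inputs; it is not StorMagic's implementation.

```python
# Conceptual sketch of majority (quorum) arbitration with a witness.
# Not StorMagic code - just illustrates why a third vote avoids split-brain.
def can_serve_io(node_up: bool, peer_reachable: bool, witness_reachable: bool) -> bool:
    """A side of the mirror keeps serving I/O only while it holds a majority of the 3 votes."""
    votes = sum([node_up, peer_reachable, witness_reachable])
    return votes >= 2   # 2 of 3 votes = majority

# An isolated node (cannot see peer or witness) stands down...
print(can_serve_io(True, False, False))   # False
# ...while a node that has lost its peer but can still reach the witness carries on.
print(can_serve_io(True, False, True))    # True
```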
6. SvHCI Cluster Creation:
SvHCI Cluster Creation – StorMagic
- Select the Discovery tab to view the cluster details.
- Select the Join button.
- Select the other host to add and the Witness to be used.
- Click Apply.
- New cluster view showing two SvHCI hosts and a Witness.
7. Post-Deployment Validation
7.1 Network Connectivity Testing
Test remote access:
- Access iLO: open https://<Host's static iLO IP>.
- Access SvHCI: open https://<Host's static SvHCI management IP>.
- Access any created VMs to verify access over the VM network / management network.
Test Connectivity:
- You can utilize the onboard Ping tool to run different connectivity tests.
- You can also utilize the onboard Traceroute tool to run connectivity tests.
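As a complement to the onboard tools, a simple sweep from a machine that can reach the relevant subnets can confirm that each node's addresses respond (back-to-back SAN links are normally reachable only from the nodes themselves). The addresses below are placeholders for the scheme chosen in section 5.1; this is a hedged sketch, not a StorMagic utility.

```python
# Minimal sketch: ping each node's management and SAN addresses.
# Placeholder addressing - substitute the IPs assigned in section 5.1.
import subprocess

addresses = {
    "node1-mgmt": "192.168.1.11", "node2-mgmt": "192.168.1.12",
    "node1-san1": "172.16.1.1",   "node2-san1": "172.16.1.2",
    "node1-san2": "172.16.2.1",   "node2-san2": "172.16.2.2",
}

for name, ip in addresses.items():
    # "-c 2" sends two echo requests (Linux/macOS syntax; use "-n 2" on Windows).
    result = subprocess.run(["ping", "-c", "2", ip], capture_output=True)
    status = "OK" if result.returncode == 0 else "FAILED"
    print(f"{name:12s} {ip:15s} {status}")
```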
Within the SvHCI dashboard, the Networking tab includes some performance testing tools. These tools are basic but can help verify the setup or test for networking issues.
- Network Speed Test.
One side is set up as the server and the other as the client, and the test then measures the speed over a specific network interface. The next three images show the complete test: server, client, and results.
- Network IO Path test
This test lets you set specific parameters and provides results for throughput, IOPS, and latency. The first image shows the setup and the second the results.
7.2 Verify SvHCI Storage Functionality.
- Upload ISOs to start building VMs. SvHCI ISO or RAW Disk img Upload – StorMagic
- Create Windows guest VMs. SvHCI Windows VM Creation – StorMagic
- Create Linux guest VMs. SvHCI Linux Guest VM Creation – StorMagic
Resources for working with guest VMs & SvHCI.
SvHCI Guest VM Resource Handling – StorMagic
VM disk (VMDK/VHD) Copy, Conversion & Import of a Linux guest from VMware to SvHCI – StorMagic
SvHCI Create a VMDK/VHD>RAW Disk Converter Virtual Machine (VM) – StorMagic
8. Validating the Build for Redundancy and Uptime
8.1 Ensuring High Availability.
Following the installation of the HPE Gen11 servers with SvHCI, it is always best practice to test the environment and ensure everything was configured properly and that the configuration took effect as expected. Below is a link to the Evaluator's Guide, which has both steps and videos that walk through the testing procedures.
- StorMagic SvHCI Evaluators Guide – StorMagic
- High Availability (HA) Virtual Machine (VM) Failover Test
8.2 Monitoring, Maintenance and DRP (Backups).
- Set up one of the options for reporting to receive event notifications. https://support.stormagic.com/hc/en-gb/articles/5203720099997-SvHCI-SvSAN-Reporting-Notifications-Alerts
- Monitor performance via SvHCI Dashboard. From the Targets tab select statistics.
- View the Statistics page for your targets and verify there are no real concerns with performance.
- Check cluster health often: check for synchronization, proper sessions, and networking issues.
- Update SvHCI and the BIOS as needed, staying up to date with the firmware that both HPE and StorMagic recommend.
The screenshots below show the firmware upgrade locations. However, when updating anything on SvHCI in a production environment, the following guide should be used to help ensure the updates happen while keeping the guest VMs operational. SvHCI Upgrade SvHCI non-disruptively – StorMagic
- SvHCI systems tab >> Upgrade Firmware.
- SvHCI System Firmware upgrade page.
Resources for Setting up backups with Veeam.
Agent Based Veeam Backup of Guest VMs on StorMagic SvHCI – StorMagic
Agent Based Veeam Restore Process of Guest VMs on StorMagic SvHCI – StorMagic
See Also
SvHCI Installation – StorMagic