Configuring and running virtual SAN with StarWind software
Since we decided to run our production servers on a 2-node vSphere cluster connected to a single SAS disk array where all data is stored, we would like this array to be fault tolerant not only at the level of a controller (there are actually two redundant controllers) or a single disk (protected by a RAID array), but as a whole unit.
One approach is to buy a separate storage array which can either be configured to mirror the data from the first array (this is possible only with Fibre Channel controllers) or act as another storage where the data from the primary array is backed up by appropriate software, for example the popular Veeam.
Another approach is to create a virtual SAN. Such a solution consists of local data storage (usually one or more disks in a hardware RAID) on both vSphere servers and software responsible for mirroring data between these two local storages, providing the vSphere cluster with another fully redundant storage array.
Our configuration was set up according to this technical paper published at http://www.starwindsoftware.com.
To be honest, I find the official PDF a little confusing, not only because it contains mostly screenshots of vSphere’s web interface with limited information about what is actually being configured and why at each point. The paper also lacks information about the virtual SAN configuration in general. For example, it doesn’t explicitly mention how the local vSphere servers’ disks should be assigned to the StarWind servers, nor how to configure jumbo frames (which are necessary for iSCSI), and one would miss many other configuration details which can vastly influence the performance of the virtual SAN solution as a whole.
These are the steps we had to go through to get the virtual SAN working:
- Reconfigure both vSphere servers to boot from SD cards, leaving the internal hardware array free to be used for building the virtual SAN
- we proceeded with a backup of the vSphere configuration, a complete vSphere reinstall to the SD card and a restore of the configuration, which turned out to work like a charm, including the correct restore of all interface configurations and VLANs
- NOTES:
- newer IBM/Dell servers support booting from a redundant SD card module
- PXE boot can also be configured instead of SD cards, but I find this method less reliable as more devices would be part of the whole boot process
- Configure vSphere servers to enable DirectPath I/O for their local storage arrays
- we need to enable DirectPath for the local storage so that it can be accessed directly from the virtual server where the StarWind software will be running
- the whole procedure of DirectPath configuration is described here
- the final step is to reboot the host
- NOTES:
- I had to enable DirectPath for both the PERC H200 HW RAID and the 2-port SATA controller to be able to see the local disk array from the Windows 2012 server
- Set up vSphere interfaces to be used with the virtual SAN
- we need at least three physical interfaces on each vSphere server to be used for the new virtual SAN cluster
- These physical interfaces connect both physical vSphere servers together directly with Ethernet cables. We don’t connect the servers through any switch
- I set up the interfaces according to the aforementioned official StarWind technical paper, but I used different IP segments. This is my configuration in a nutshell (a pyVmomi sketch of this port group and VMkernel setup is shown after the step list below):
- physical interface vmnic1 on vSphere server #1 for virtual SAN data synchronization and access to the virtual SAN disk from vSphere server #2:
- add a virtual machine port group named ‘VSAN sync 1’
- add a VMkernel port with MTU 9000 and IP 192.168.12.1 named ‘VSAN ISCSI 1’
- physical interface vmnic2 on vSphere server #1 for virtual SAN data synchronization and access to the virtual SAN disk from vSphere server #2:
- add a virtual machine port group named ‘VSAN sync 2’
- add a VMkernel port with MTU 9000 and IP 192.168.22.1 named ‘VSAN ISCSI 2’
- physical interface vmnic3 on vSphere server #1 for heartbeat data between the two StarWind servers:
- add a virtual machine port group named ‘Heartbeat’
- NOTE:
- this doesn’t have to be a dedicated physical interface reserved for the virtual SAN; using the network which is common for all servers running in the virtual cluster is sufficient
- virtual interface (no physical interface) on vSphere server #1 for local access to the virtual SAN disk from vSphere server #1:
- add a virtual machine port group named ‘VSAN local ISCSI’
- add a VMkernel port with MTU 9000 and IP 172.16.1.1 named ‘VSAN local ISCSI’
- physical interface vmnic1 on vSphere server #2:
- add a virtual machine port group named ‘VSAN sync 1’
- add a VMkernel port with MTU 9000 and IP 192.168.12.2 named ‘VSAN ISCSI 1’
- physical interface vmnic2 on vSphere server #2:
- add a virtual machine port group named ‘VSAN sync 2’
- add a VMkernel port with MTU 9000 and IP 192.168.22.2 named ‘VSAN ISCSI 2’
- physical interface vmnic3 on vSphere server #2:
- add a virtual machine port group named ‘Heartbeat’
- virtual interface (no physical interface) on vSphere server #2:
- add a virtual machine port group named ‘VSAN local ISCSI’
- add a VMkernel port with MTU 9000 and IP 172.16.1.2 named ‘VSAN local ISCSI’
- Create a new virtual server on each vSphere server, install and configure Windows 2012 R2 server
- we need to create two virtual machines – one on each vSphere server
- add 4 interfaces to each new virtual server to access these networks:
- Heartbeat network
- VSAN sync 1
- VSAN sync 2
- VSAN local ISCSI
- we also need to allow access to the local storage by adding the appropriate PCI device with DirectPath support.
- then install Windows 2012 R2 server
- Configure Windows server interfaces and format new storage array
- we need to enable jumbo frame support on the three interfaces intended for iSCSI communication and virtual SAN synchronization. This is actually an easy task which is described, for example, here (a small script to verify that jumbo frames really pass end to end is sketched after the step list below).
- we assign the IP addresses to all three interfaces. Let’s say one of the Windows servers is called server010 and the other server020, so we can assign them these addresses:
- VSAN sync1 interface: 192.168.11.10 for server010
- VSAN sync2 interface: 192.168.21.10 for server010
- VSAN local ISCSI interface: 172.16.1.10 for server010
- …repeat all the steps with the corresponding IP addresses on the second StarWind server
- at this point we should also have access to the local disk storage, which is supposed to be formatted as NTFS and serve as the storage for StarWind virtual disk images
- NOTE:
- note that besides the addresses above, the VSAN sync1 and VSAN sync2 interfaces carry a second IP address on both servers: 192.168.11.XX and 192.168.21.XX are used for data synchronization between the StarWind servers, while the 192.168.12.XX and 192.168.22.XX addresses are dedicated to communication with the vSphere servers (iSCSI data)
- Install and configure StarWind software
- At the time this article was written, StarWind offered a free 2-node vSphere license
- configure automatic storage rescan as described in the official technical paper. This should ensure that a storage adapter rescan is performed by the vSphere servers when one of the StarWind servers becomes inaccessible.
- StarWind configuration is pretty well described in the official technical paper and is pretty straightforward. We create a new virtual SAN disk on the newly formatted local disk, then add the replica and configure the replication node interfaces to use the Heartbeat interface for heartbeat and the VSAN sync 1 and VSAN sync 2 (192.168.11.X and 192.168.21.X) interfaces for data mirroring.
- once the virtual disk is created and fully synced we can proceed with the next step
- Configure vSphere servers to access new virtual SAN disk through ISCSI software adapter
- this step is also pretty well described in the official StarWind technical paper. All we need to do is add the iSCSI software adapter and fill in the appropriate IP addresses for dynamic discovery (a pyVmomi sketch of adding the send targets and rescanning is shown after the step list below):
- On vSphere server #1
- 172.16.1.1
- 192.168.12.2
- 192.168.22.2
- On vSphere server #2
- 172.16.1.2
- 192.168.12.1
- 192.168.22.1
- Then do the full rescan on the adapters and we should see the new virtual SAN disk as an iSCSI disk.
- We should see StarWind’s iSCSI virtual disk after the adapter rescan. We should also see three available paths (VSAN sync 1 path, VSAN sync 2 path and VSAN local ISCSI path) after clicking on the disk and choosing the ‘Manage Paths’ item.
- iSCSI disk access tuning
- another piece of performance tuning worth trying is the one described here
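For anyone who prefers scripting the host networking over clicking through the web client, below is a minimal pyVmomi sketch of the ‘VSAN sync 1’ / ‘VSAN ISCSI 1’ setup from the interface list above. The host name, credentials and the vSwitch name ‘vSwitch1’ are placeholders for illustration only; the official StarWind paper covers the GUI way, so take this as an optional alternative rather than the documented procedure.

```python
# Minimal pyVmomi sketch: create the 'VSAN sync 1' port group and the
# 'VSAN ISCSI 1' VMkernel port with MTU 9000 on one host.
# Host name, credentials and the vSwitch name 'vSwitch1' are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; use proper certificates in production
si = SmartConnect(host='esxi-01.example.local', user='root', pwd='***', sslContext=ctx)
content = si.RetrieveContent()

# Assumes a direct connection to a single ESXi host, so the first host is the right one
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = view.view[0]
netsys = host.configManager.networkSystem

# Raise the MTU of the existing vSwitch so jumbo frames can pass through it
for vsw in netsys.networkInfo.vswitch:
    if vsw.name == 'vSwitch1':
        spec = vsw.spec
        spec.mtu = 9000
        netsys.UpdateVirtualSwitch(vswitchName='vSwitch1', spec=spec)

# Virtual machine port group for the StarWind VM traffic
pg = vim.host.PortGroup.Specification(name='VSAN sync 1', vlanId=0,
                                      vswitchName='vSwitch1',
                                      policy=vim.host.NetworkPolicy())
netsys.AddPortGroup(portgrp=pg)

# Separate port group plus a VMkernel NIC with a static IP and MTU 9000
vmk_pg = vim.host.PortGroup.Specification(name='VSAN ISCSI 1', vlanId=0,
                                          vswitchName='vSwitch1',
                                          policy=vim.host.NetworkPolicy())
netsys.AddPortGroup(portgrp=vmk_pg)
nic_spec = vim.host.VirtualNic.Specification(
    ip=vim.host.IpConfig(dhcp=False, ipAddress='192.168.12.1',
                         subnetMask='255.255.255.0'),
    mtu=9000)
netsys.AddVirtualNic(portgroup='VSAN ISCSI 1', nic=nic_spec)

Disconnect(si)
```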
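Once MTU 9000 is set on the vSwitches, the VMkernel ports and the Windows interfaces, it is worth checking that jumbo frames really pass end to end. A do-not-fragment ping with a large payload is enough; the small Python sketch below just wraps the Windows ping command. The peer addresses are assumed here to be the sync addresses of the second StarWind server and may differ in your setup.

```python
# Quick jumbo-frame check from one StarWind VM to the other (Windows ping syntax).
# 8972 bytes of ICMP payload + 28 bytes of headers = a 9000-byte packet;
# with -f (do not fragment) the ping only succeeds if MTU 9000 works end to end.
import subprocess

# Assumed sync addresses of the second StarWind server; adjust to your addressing
SYNC_PEERS = ['192.168.11.20', '192.168.21.20']

for peer in SYNC_PEERS:
    result = subprocess.run(['ping', '-f', '-l', '8972', '-n', '3', peer],
                            capture_output=True, text=True)
    ok = result.returncode == 0 and 'Packet needs to be fragmented' not in result.stdout
    print(f'{peer}: {"jumbo frames OK" if ok else "jumbo frames NOT passing"}')
```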
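The dynamic discovery targets and the adapter rescan from the iSCSI software adapter step can be scripted as well. The pyVmomi sketch below shows the idea for vSphere server #1 with the discovery addresses listed above; again, the connection details are placeholders and the GUI procedure from the StarWind paper remains the reference.

```python
# Minimal pyVmomi sketch: add dynamic discovery (send target) addresses to the
# software iSCSI adapter of vSphere server #1 and rescan the storage.
# Connection details are placeholders; the target list matches the article.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

DISCOVERY_ADDRESSES = ['172.16.1.1', '192.168.12.2', '192.168.22.2']

ctx = ssl._create_unverified_context()
si = SmartConnect(host='esxi-01.example.local', user='root', pwd='***', sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = view.view[0]
storsys = host.configManager.storageSystem

# Find the software iSCSI adapter (it has to be enabled beforehand)
for hba in storsys.storageDeviceInfo.hostBusAdapter:
    if isinstance(hba, vim.host.InternetScsiHba) and hba.isSoftwareBased:
        targets = [vim.host.InternetScsiHba.SendTarget(address=a, port=3260)
                   for a in DISCOVERY_ADDRESSES]
        storsys.AddInternetScsiSendTargets(iScsiHbaDevice=hba.device, targets=targets)

# Full rescan so the new StarWind iSCSI disk (and its paths) shows up
storsys.RescanAllHba()
storsys.RescanVmfs()

Disconnect(si)
```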
Feel free to comment on this article, especially in case something is not clear regarding the StarWind virtual SAN configuration in general.
4 Responses to “Configuring and running virtual SAN with StarWind software”
Great article. Could you advise what storage/datastore you use to create the StarWind virtual machine for each host?
Hi Martin,
there is a RAID array running on a Dell PERC H200 in both hosts. For this array we are using the hosts’ local disks.
Tomas
Thanks for this! It is great! I could not figure out the official technical guide and this helped clarify a lot!
Two questions though:
1) When setting the IP address for the three network connections on each VM (step 5), what do I put for the Gateway of the Static IP? For example, what gateway for the IP you set of 192.168.11.10?
* Also, is that right that even though the vswitch setup for that network connection has a gateway of 192.168.12.1, you still set this IP as 192.168.11.10 (.11. instead of .12.)?
2) For step 7, is that right that two of the three IPs added for dynamic discovery are the IPs of the opposite host? e.g. for Host 1 you actually put in the Host 2 IPs of 12.2 and 22.2. Can you explain the logic here as I am pretty new to virtual SAN and iSCSI?
Thanks!
Hello,
I’ll try to answer your questions.
1) you don’t need to set a gateway for these subnets, as the traffic between the hosts doesn’t go beyond the IP addresses on this subnet (layer 2 switching, not layer 3 routing)
2) yes, this is right: you use IP 172.16.1.1 for the local host and two IP addresses for the remote host for redundancy
Let me know if anything is unclear.