VSAN for ROBO Changes with VSAN 6.5


Previously I described why I think VSAN and ROBO use cases are a perfect match (see http://vsanteam.info/why-vsan-and-robo-is-a-perfect-match/). However, there are several key changes for VSAN ROBO customers in Virtual SAN 6.5 which I believe make it even more compelling:

  • Support for network crossover cables in 2-node configurations.
  • Extended workload support for physical servers and clustered applications with the introduction of an iSCSI target service.
  • Addition of a VSAN Advanced for ROBO SKU, allowing the Space Efficiency features.
  • Addition of VSAN Standard and VSAN Standard for ROBO SKUs, allowing All-Flash hardware without the Space Efficiency features.

Let’s focus on the first two initially.

In many cases customers have told me that the cost of one or more 10Gb switches at remote sites can be prohibitive. Whilst we have supported 1Gb networking in these scenarios, almost all servers nowadays ship with 10Gb interfaces and using them has become highly desirable for most customers. In addition, for some customers the network infrastructure on site is out of scope or unable to be upgraded during the project. With VSAN 6.5 we now allow customers to use a crossover cable to directly connect the two servers, essentially cutting out the switching costs and providing significant savings, especially when rolling out many sites.


In addition, we now also allow customers to use VSAN as an iSCSI target. The supported use cases are connecting clustered servers (SQL and the like) or physical servers to the VSAN storage capacity.


The reason I say this is an interesting addition for VSAN ROBO customers is that many customers have told me they also have the odd physical server on site which requires some form of redundant storage. By bringing these two features together, customers can now remove the need for external storage for these servers, further reducing the cost of the infrastructure supporting remote office/branch office deployments and reducing the operational overhead of supporting external storage.

The other two changes are purely from a pricing and packaging standpoint: we now allow customers who buy VSAN Standard licences (including VSAN Standard for ROBO) to use All-Flash hardware. Many customers wanted the flexibility to purchase flash technology and didn't feel they should have to pay a premium for it. Note, however, that customers who purchase VSAN Standard are not able to turn on Space Efficiency features such as deduplication, compression and erasure coding.

Lastly, in previous releases VSAN for ROBO was only available as a Standard SKU, meaning customers were unable to use the features mentioned above with the ROBO 25 VM pack. This is a great change for customers who are now standardising on VSAN Advanced, and particularly on All-Flash configurations.

For a further look at what we announced in Virtual SAN 6.5, check here: https://blogs.vmware.com/virtualblocks/2016/10/18/vmware-virtual-san-6-5-whats-new/



10 Must See VSAN Sessions @ VMworld US

VMworld 2016 is only a few weeks away and once again it promises to be a HUGE event. I know many of the presenters are working on some awesome content and just maybe a few surprises. We will have plenty of stands, demos and a huge number of sessions to attend and be part of.

Virtual SAN Booth (2015)

If you are a VSAN customer, partner or just interested to learn more here is my list of non-negotiable sessions I would be adding to my schedule (in no particular order).

A Day in the Life of a VSAN I/O [STO7875]
Duncan Epping, Chief Technologist, VMware
John Nicholson, Senior Technical Marketing Manager, VMware

Virtual SAN – Day 2 Operations [STO7534]
Cormac Hogan, Senior Staff Engineer, VMware
Paudie ORiordan, Staff Engineer, VMware

VSAN Networking Deep Dive and Best Practices [STO8165R]
John Nicholson, Senior Technical Marketing Manager, VMware
Ankur Pai, Sr Manager VSAN R&D, VMware

Extreme Performance Series: Virtual SAN Performance Troubleshooting [STO8743]
Zach Shen, Sr. MTS, VMware, Inc.
Ruijin Zhou, Member of Technical Staff, VMware Inc.

Evolution of VMware Virtual SAN [STO9058]
Christos Karamanolis, VMware Fellow – CTO of Storage and Availability, VMware
Vijay Ramachandran, Sr. Dir Product Management, VMware

Virtual SAN Management Current & Future [STO7904]
Christian Dickmann, VSAN Architect, VMware

Virtual SAN: Introducing the Best HCI Platform for Containers and Cloud-Native Applications [STO8256]
Christian Dickmann, VSAN Architect, VMware
Rawlinson Rivera, Principal Architect Office of CTO – SABU, VMware, Inc

An Industry Roadmap: From storage to data management [STO7903]
Christos Karamanolis, VMware Fellow – CTO of Storage and Availability, VMware

The Future is Here: Turbocharge All Flash Virtual SAN with Next Generation Hardware [STO7953]
Bhumik Patel, Partner Architect, VMware
Rakesh Radhakrishnan, Product Management & Strategy Leader, VMware

Rawlinson Rivera, Principal Architect Office of CTO – SABU, VMware, Inc
John Whitman, Sr. Systems Engineer, VMware

There are many other great VSAN sessions, so you're sure to find something that takes your fancy if these don't. The full content catalog can be viewed here if you are registered: http://www.vmworld.com/uscatalog.jspa

Do you have an interesting story to share or a question? We'd love to speak to you if you come by the VSAN booth. We'll be there the entire event.

Changing your Default VSAN Storage Policy

Storage Policy Based Management is the linchpin of VMware's Software-Defined Storage capability. It provides the ability for a user to define a set of policies, whether based on an SLA, an application, a workload type or other criteria, to be used both at provisioning time and throughout the lifetime of the VM to enforce compliance.

Virtual SAN and Virtual Volumes both make use of SPBM and allow capabilities to be configured once and deployed many times, and indeed changed on the fly if and when requirements change. Importantly, this is done at the granularity of an object, which allows different VMDKs to have different storage characteristics from each other, or Swap to be treated differently from VMDKs, as an example.

Virtual SAN uses a Default Storage Policy which specifies a few basic rules that are applied to each VSAN object as it is created. In conversation this week I was presented with a situation whereby a user wanted to keep the settings of the Default Storage Policy untouched, yet wanted the ability to set a particular 'user created' policy as the default applied to each newly created VSAN object.

The use case here would be to ensure the default VSAN configuration remained largely untouched, so that in the event of a problem the defaults could be restored without any delay. However, there are also other uses for this. When you have more than one VSAN cluster in the vCenter, it is entirely feasible and highly common for the workloads to differ. In this case you may want to define 'user created' policies and specify individual defaults and naming to avoid any confusion. In addition, you might have two vCenters in Linked Mode, both running VSAN clusters, in which case you would have multiple Default VSAN Storage Policies. As the Default Policy cannot be renamed, it might be wise to define your own and once again configure defaults at the Datastore layer.

Screen Shot 2016-06-20 at 9.46.22 PM

Fig 1 – Multiple Virtual SAN Default Storage Policies
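To make the per-datastore default concrete, here is a minimal sketch of the resolution logic in Python. This models the behaviour only; the function and policy names are illustrative and are not SPBM API calls.

```python
# Simplified model of how a default storage policy is resolved per
# datastore. Names are illustrative, not actual SPBM API identifiers.

VCENTER_DEFAULT = "Virtual SAN Default Storage Policy"

def resolve_default_policy(datastore, per_datastore_defaults):
    """Return the policy applied to a new object on this datastore:
    the datastore-level default if one was assigned, otherwise the
    vCenter-wide VSAN default."""
    return per_datastore_defaults.get(datastore, VCENTER_DEFAULT)

# Two clusters/datastores with different workloads in one vCenter:
defaults = {
    "vsanDatastore-Prod": "Prod-FTT2-Policy",   # user-created default
    # "vsanDatastore-Test" intentionally unset -> falls back
}

print(resolve_default_policy("vsanDatastore-Prod", defaults))  # Prod-FTT2-Policy
print(resolve_default_policy("vsanDatastore-Test", defaults))  # vCenter default
```

The point of the sketch is simply that the lookup happens per datastore, which is why assigning defaults at the Datastore layer avoids confusion when multiple clusters share one vCenter.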

This process is quite easy and is actually possible within the UI:

  • Select a Default VSAN Policy from the Datastore UI

Screen Shot 2016-06-20 at 9.14.50 PM

Fig 2 – Assigning Default Datastore Policy

Screen Shot 2016-06-20 at 9.20.46 PM

Fig 3 – Selecting ‘user created’ new default policy

Given that there is a 1:1 mapping of SPBM to vCenter Servers, and therefore one Default VSAN Policy per vCenter, we may need to define our own configuration if we have multiple clusters with different workloads. As each VSAN cluster has a 1:1 mapping to its Datastore, it makes more sense to configure the default policy at each Datastore. Hopefully this post explains why and how to achieve this.

Streamlining the Virtual SAN Configuration

For most VMware customers the integration of vSphere with Virtual SAN provides not only a simple and easy way to deliver enterprise storage and data services to your VMs, but also simplified operations in the datacenter.

As Virtual SAN has grown through the releases there has been a need to accommodate further use cases for our customers: VDI, ROBO, Stretched Clusters and so on. Virtual SAN 6.2 now has a new configuration wizard which provides a more streamlined approach for these more complex configurations.

A quick glance at the new Configuration Wizard shows how we give customers the ability to select the disk claiming method, whether to enable deduplication and compression for the cluster, and which type of VSAN cluster deployment they are after.

Screen Shot 2016-02-16 at 9.19.45 AM

In addition, the wizard also provides validation of the network configuration for Virtual SAN interfaces. Of course, each host in the cluster needs a single VSAN VMkernel interface enabled to be able to participate. You can see from the screenshot below that I have missed one host. I also get information such as which vmk interface, portgroup and IP are assigned.

Screen Shot 2016-02-16 at 9.24.36 AM

If I rectify the misconfigured host by enabling the VSAN traffic type on one of my vmk ports, I can now see that validation passes and I can move on.

Screen Shot 2016-02-16 at 9.26.42 AM

The next step is to claim my disks. Now, if I had chosen 'Automatic' back in the first step I could skip this; however, for All-Flash we do need to claim the different devices manually. Starting in 6.2 there is a faster and more simplified way of bulk claiming disks for the VSAN cluster.

Screen Shot 2016-02-16 at 9.33.13 AM (3)

Screen Shot 2016-02-16 at 9.33.26 AM (3)

Alternatively, I can do this by grouping by disk model or size.

Screen Shot 2016-03-21 at 8.18.11 AM

Either way, once cache and capacity disks have been claimed we are now ready to complete the wizard.

Screen Shot 2016-02-16 at 9.40.11 AM

Depending on the number of hosts in the cluster this will take a minute or two to configure and once completed you will have a fully running VSAN cluster.

For a more in-depth look at creating Stretched Clusters using this wizard, please reference this article: VSAN Stretched Clusters – Click Click done!

For a video on configuring VSAN using this wizard see https://www.youtube.com/watch?v=pAFPP98XEtk 

VSAN Stretched Clusters – Click Click done!

Creating Stretched Clusters could not be easier with VSAN. In fact, the process of configuring different types of VSAN clusters, with features like deduplication and compression enabled, is now a snap with the new Configuration Wizard.

In this article I will spend a few minutes walking through the wizard to get my Stretched Cluster set up. But first, here is a quick description of my environment. It is pretty standard: two DCs in the metro area about 30km apart, and a head office where I will run my witness VM.

Screen Shot 2016-03-20 at 9.03.18 AM

So let's get started. First you need to download and deploy the witness appliance. It is a basic appliance configuration and will ask you what size appliance to create based on the number of VMs to be managed. Grab it from the vSphere download link on the website.

Screen Shot 2016-03-20 at 7.00.35 AM (2)
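As a rough illustration of that size selection, here is a short Python sketch. The VM-count tiers (Tiny, Medium, Large) are approximate values as I recall them for the 6.x appliance, so treat them as assumptions and verify against the download page before choosing.

```python
# Hedged sketch of witness appliance size selection by managed VM
# count. Tier thresholds are approximate/illustrative, not official.

def witness_size(vm_count):
    """Pick an appliance size for the given number of VMs."""
    if vm_count <= 10:
        return "Tiny"      # small ROBO / 2-node deployments
    elif vm_count <= 500:
        return "Medium"    # typical stretched cluster
    return "Large"         # very large clusters

print(witness_size(8))    # Tiny
print(witness_size(200))  # Medium
```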

As it stands I have four hosts in the cluster but have yet to configure VSAN. Commencing the configuration wizard, we are asked if we want to create a standard, 2-node or Stretched Cluster. In VSAN 6.2, as mentioned before, we streamlined the configuration of all of these deployments into one wizard.

Here I’m simply going to select Automatic claiming of my disks and select Stretched Cluster.

Screen Shot 2016-03-20 at 8.32.59 AM

We now have validation of the VSAN networks for each host in the cluster, and I can see clearly that my VSAN VMkernel network has been correctly configured, as per any normal VSAN host.

Screen Shot 2016-03-20 at 8.33.11 AM


First I need to place the hosts into the correct Fault Domains, which I can do with a single click. I have also renamed both of my Fault Domains to reference the locations of the Datacenters.

Screen Shot 2016-03-20 at 8.40.42 AM

So now my Fault Domains mirror what my physical deployment represents: I have two Dell hosts in the City Datacenter and two in the Clayton Datacenter.

Screen Shot 2016-03-20 at 8.41.11 AM

Of course, with VSAN Stretched Clusters I require access to the Witness appliance which I downloaded and deployed earlier. This appliance actually runs back at my main site. In this step I simply need to select the Witness for the cluster. You can see that this Witness is essentially a virtual ESXi host; however, we make it easy to differentiate by using the blue icon.

Screen Shot 2016-03-20 at 8.41.30 AM

Depending on my choices I may need to ensure I claim the storage on this Witness appliance. It is important to do this as VSAN today always expects to find a similar configuration across data nodes and witness nodes. Hence we have created the appliance with a cache disk and a capacity disk.

Screen Shot 2016-03-20 at 6.57.19 AM

Verify the settings and complete.

Screen Shot 2016-03-20 at 8.41.38 AM

Now, if we navigate over to the Stretched Cluster management we can verify that it is correctly enabled. We can also see that the preferred Fault Domain is the Clayton Datacenter. This means that in the event of a split-brain scenario, the Clayton Datacenter and the Witness will form a quorum and the City Datacenter will be isolated. In this case HA will power on VMs in the Clayton Datacenter. The Preferred site is also denoted by the yellow star on the Clayton node.
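The quorum behaviour described above can be modelled with a few lines of Python. This is a deliberately simplified sketch of the voting logic, under the assumption of one preferred site, one secondary site and one witness; it is not how VSAN implements it internally.

```python
# Simplified model of split-brain handling in a stretched cluster:
# when the inter-site link fails, the preferred fault domain plus the
# witness form a quorum (2 of 3 votes) and the other site is isolated.

def surviving_site(preferred, other, witness_reachable_from):
    """Return which fault domain keeps running VMs after a partition.
    witness_reachable_from is the set of sites that can still reach
    the witness."""
    if preferred in witness_reachable_from:
        return preferred   # preferred + witness = quorum
    if other in witness_reachable_from:
        return other       # preferred site is down entirely
    return None            # no quorum anywhere -> cluster offline

# Inter-site link severed, both sites still see the witness:
print(surviving_site("Clayton", "City", {"Clayton", "City"}))  # Clayton
# Preferred site fails completely:
print(surviving_site("Clayton", "City", {"City"}))             # City
```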

Today a Stretched Cluster can tolerate only one failure; however, other mechanisms such as SRM and vSphere Replication can be used to provide further protection against failures if required.

Screen Shot 2016-03-20 at 6.58.46 AM

Monitoring the health of a Virtual SAN cluster is paramount, and a Stretched Cluster is no exception. VSAN now has some additional Health Checks which help to monitor a Stretched Cluster effectively. Below is a grab of the checks now in place to ensure you stay happy and healthy.

I have introduced some artificial latency to demonstrate what happens here. As you will note, the documented requirement for RTT latency between the two datacenters is up to 5ms.

Screen Shot 2016-03-20 at 7.13.56 AM
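A quick sanity check of the inter-site link requirements can be sketched in Python. The 5ms RTT limit is from the requirement above; the bandwidth formula B = Wb × md × mr (data multiplier 1.4, resynchronization multiplier 1.25) is my reading of the Bandwidth Sizing Guide linked at the end of this post, so verify both against the current guides before sizing a real deployment.

```python
# Hedged sketches of the stretched-cluster link checks: the 5 ms RTT
# requirement, and the sizing guide's bandwidth formula B = Wb*md*mr
# with md (data multiplier) = 1.4 and mr (resync multiplier) = 1.25.

MAX_RTT_MS = 5.0

def rtt_ok(rtt_ms):
    """True if the measured round-trip time meets the requirement."""
    return rtt_ms <= MAX_RTT_MS

def required_bandwidth_mbps(write_bandwidth_mbps, md=1.4, mr=1.25):
    """Inter-site bandwidth needed for a given VM write bandwidth."""
    return write_bandwidth_mbps * md * mr

print(rtt_ok(3.2))                   # within the 5 ms limit
print(required_bandwidth_mbps(400))  # Mbps needed for 400 Mbps of writes
```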

If I increase the latency I see an error such as the one below. In addition, VSAN 6.2 now also raises Health Alarms on the Summary pages.

Screen Shot 2016-03-20 at 7.34.53 AM

Alarms can be programmed to send email or SNMP alerts so that other people or systems are notified when things aren't working correctly.

Screen Shot 2016-03-20 at 7.37.34 AM


Other considerations for Stretched Clusters include different networking topologies, Host Groups and Rules, and HA settings. All are covered in the Stretched Cluster guide below.


There is a wealth of information available for Stretched Clusters today:

VSAN 6.2 Stretched Cluster Guide https://www.vmware.com/files/pdf/products/vsan/VMware-Virtual-SAN-6.2-Stretched-Cluster-Guide.pdf

VSAN Stretched Cluster Bandwidth and Sizing Guide http://www.vmware.com/files/pdf/products/vsan/vmware-virtual-san-6.1-stretched-cluster-bandwidth-sizing.pdf

VSAN Stretched Cluster Performance and Best Practices http://www.vmware.com/files/pdf/techpaper/vmware-virtual-san-stretched-cluster.pdf

VSAN Stretched Clusters and SRM https://www.vmware.com/files/pdf/techpaper/Stretched_Clusters_and_VMware_vCenter_Site_Recovery_Manage_USLTR_Regalix.pdf

Designing a VSAN Stretched Cluster http://www.yellow-bricks.com/2015/09/23/designing-a-virtual-san-stretched-cluster

VSAN Stretched Cluster demo http://www.yellow-bricks.com/2015/09/10/virtual-san-stretched-clustering-demo

VSAN Stretched Cluster supported network topologies http://cormachogan.com/2015/09/10/supported-network-topologies-for-vsan-stretched-cluster


Hopefully you can now see how simple and intuitive it is to create VSAN clusters with 6.2.