10 Must See VSAN Sessions @ VMworld US

VMworld 2016 is only a few weeks away and once again it promises to be a HUGE event. I know many of the presenters are working on some awesome content and just maybe a few surprises. We will have plenty of stands, demos and a huge number of sessions to attend and be part of.


Virtual SAN Booth (2015)

If you are a VSAN customer, partner or just interested in learning more, here is my list of non-negotiable sessions I would add to my schedule (in no particular order).

A Day in the Life of a VSAN I/O [STO7875]
Duncan Epping, Chief Technologist, VMware
John Nicholson, Senior Technical Marketing Manager, VMware

Virtual SAN – Day 2 Operations [STO7534]
Cormac Hogan, Senior Staff Engineer, VMware
Paudie ORiordan, Staff Engineer, VMware

VSAN Networking Deep Dive and Best Practices [STO8165R]
John Nicholson, Senior Technical Marketing Manager, VMware
Ankur Pai, Sr Manager VSAN R&D, VMware

Extreme Performance Series: Virtual SAN Performance Troubleshooting [STO8743]
Zach Shen, Sr. MTS, VMware, Inc.
Ruijin Zhou, Member of Technical Staff, VMware Inc.

Evolution of VMware Virtual SAN [STO9058]
Christos Karamanolis, VMware Fellow – CTO of Storage and Availability, VMware
Vijay Ramachandran, Sr. Dir Product Management, VMware

Virtual SAN Management Current & Future [STO7904]
Christian Dickmann, VSAN Architect, VMware

Virtual SAN: Introducing the Best HCI Platform for Containers and Cloud-Native Applications [STO8256]
Christian Dickmann, VSAN Architect, VMware
Rawlinson Rivera, Principal Architect Office of CTO – SABU, VMware, Inc

An Industry Roadmap: From storage to data management [STO7903]
Christos Karamanolis, VMware Fellow – CTO of Storage and Availability, VMware

The Future is Here: Turbocharge All Flash Virtual SAN with Next Generation Hardware [STO7953]
Bhumik Patel, Partner Architect, VMware
Rakesh Radhakrishnan, Product Management & Strategy Leader, VMware

Rawlinson Rivera, Principal Architect Office of CTO – SABU, VMware, Inc
John Whitman, Sr. Systems Engineer, VMware

There are many other great VSAN sessions, so you're sure to find something that takes your fancy if these don't. The full content catalog can be viewed at http://www.vmworld.com/uscatalog.jspa if you are registered.

Do you have an interesting story to share, or a question? We'd love to speak with you if you come by the VSAN booth. We'll be there for the entire event.

Changing your Default VSAN Storage Policy

Storage Policy Based Management (SPBM) is the linchpin of VMware's Software-Defined Storage capability. It allows a user to define a set of policies, whether based on an SLA, an application, a workload type or other criteria, that are applied both at provisioning time and throughout the lifetime of the VM to enforce compliance.

Virtual SAN and Virtual Volumes both make use of SPBM and allow capabilities to be configured once and deployed many times, and indeed changed on the fly if and when requirements change. Importantly, this is done at the granularity of an object, which allows different VMDKs to have different storage characteristics from each other, or Swap to be configured differently from VMDKs, as an example.

Virtual SAN uses a Default Storage Policy which specifies a few basic rules that are applied to each VSAN object as it is created. In a conversation this week I was presented with a situation whereby a user wanted to keep the settings of the Default Storage Policy unchanged, but wanted the ability to set a particular 'user created' policy as the default applied to each newly created VSAN object.

The use case here would be to ensure the default VSAN configuration remained largely untouched, so that in the event of a problem the defaults could be restored without delay. There are also other uses for this. When you have more than one VSAN cluster in the vCenter, it's entirely feasible and highly common for their workloads to be different. In this case you may want to define a 'user created' policy and specify individual defaults and naming to avoid any confusion. In addition, you might have two vCenters in Linked Mode, both running a VSAN cluster, in which case you would have multiple Default VSAN Storage Policies. As the Default Policy cannot be renamed, it might be wise to define your own and once again configure defaults at the Datastore layer.

Screen Shot 2016-06-20 at 9.46.22 PM

Fig 1 – Multiple Virtual SAN Default Storage Policies

This process is quite easy and is actually possible within the UI:

  • Select a Default VSAN Policy from the Datastore UI

Screen Shot 2016-06-20 at 9.14.50 PM

Fig 2 – Assigning Default Datastore Policy

Screen Shot 2016-06-20 at 9.20.46 PM

Fig 3 – Selecting ‘user created’ new default policy

Given that there is a 1:1 mapping of SPBM to vCenter Servers, and therefore one Default VSAN Policy per vCenter, we may need to define our own configuration if we have multiple clusters with different workloads. As each VSAN cluster has a 1:1 mapping to its datastore, it makes more sense to configure the default policy at each datastore. Hopefully this post explains why and how to achieve this.
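The per-datastore default described above can be modeled as a simple lookup: each vSAN datastore can carry its own default policy, falling back to the vCenter-wide default when none has been set. A minimal sketch, with hypothetical names (this is not the SPBM API, which lives in the vSAN management SDK):

```python
# Illustrative model of per-datastore default policy resolution.
# All names below are invented for illustration.

VC_DEFAULT = "Virtual SAN Default Storage Policy"

# Per-datastore overrides, e.g. one per VSAN cluster/datastore.
datastore_defaults = {
    "vsanDatastore-Prod": "Prod-FTT2-Policy",
    "vsanDatastore-VDI": "VDI-FTT1-Policy",
}

def default_policy(datastore: str) -> str:
    """Return the policy applied to newly created objects on a datastore."""
    return datastore_defaults.get(datastore, VC_DEFAULT)

print(default_policy("vsanDatastore-Prod"))   # Prod-FTT2-Policy
print(default_policy("vsanDatastore-Test"))   # falls back to the vCenter default
```

The point of the model is the fallback: renaming the built-in default is not possible, so per-datastore overrides are where your own naming and defaults live.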

Streamlining the Virtual SAN Configuration

For most VMware customers the integration of vSphere with Virtual SAN not only provides a simple and easy way to deliver enterprise storage and data services to your VMs, but also simplifies operations in the datacenter.

As Virtual SAN has grown through the releases there has been a need to accommodate further use cases for our customers: VDI, ROBO, Stretched Clusters and more. Virtual SAN 6.2 now has a new configuration wizard which provides a more streamlined approach for these more complex configurations.

A quick glance at the new Configuration Wizard shows how we provide customers the ability to select the disk claiming method, whether to enable deduplication and compression for the cluster and specifically which type of VSAN cluster deployment they might be after.

Screen Shot 2016-02-16 at 9.19.45 AM

In addition, the wizard also provides validation of the network configuration for Virtual SAN interfaces. Of course, each host in the cluster needs to have a VSAN VMkernel interface enabled to be able to participate. You can see from the screenshot below that I have missed one host. I also get information such as which vmk interface, portgroup and IP are assigned.

Screen Shot 2016-02-16 at 9.24.36 AM

If I rectify the misconfigured host by enabling the VSAN traffic type on one of my vmk ports, I can see that validation now passes and I can move on.

Screen Shot 2016-02-16 at 9.26.42 AM
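The check the wizard performs here boils down to confirming that every host has at least one VMkernel interface tagged for VSAN traffic. A rough sketch of that validation (the data structures are invented for illustration; the wizard itself gathers this from each ESXi host):

```python
# Hypothetical inventory of VMkernel interfaces per host.
hosts = {
    "esxi-01": [{"vmk": "vmk0", "services": ["management"]},
                {"vmk": "vmk2", "services": ["vsan"]}],
    "esxi-02": [{"vmk": "vmk0", "services": ["management"]}],  # VSAN not enabled
}

def validate_vsan_network(hosts):
    """Return the hosts that lack a VSAN-tagged VMkernel interface."""
    return [name for name, vmks in hosts.items()
            if not any("vsan" in vmk["services"] for vmk in vmks)]

print(validate_vsan_network(hosts))   # ['esxi-02']
```

Tagging `vmk0` (or another vmk port) for VSAN traffic on the flagged host empties the list, which is exactly what "validation passes" means in the wizard.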

The next step is to claim my disks. If I had chosen 'Automatic' back in the first step I could skip this; however, for All-Flash we do need to claim the different devices manually. Starting in 6.2 there is a faster and more simplified way of bulk-claiming disks for the VSAN cluster.

Screen Shot 2016-02-16 at 9.33.13 AM (3)

Screen Shot 2016-02-16 at 9.33.26 AM (3)

Alternatively, I can do this by grouping disks by model or size.

Screen Shot 2016-03-21 at 8.18.11 AM

Either way, once cache and capacity disks have been claimed we are now ready to complete the wizard.

Screen Shot 2016-02-16 at 9.40.11 AM

Depending on the number of hosts in the cluster this will take a minute or two to configure and once completed you will have a fully running VSAN cluster.

For a more in-depth look at creating Stretched Clusters using this wizard, please see the article VSAN Stretched Clusters – Click Click done!

For a video on configuring VSAN using this wizard see https://www.youtube.com/watch?v=pAFPP98XEtk 

VSAN Stretched Clusters – Click Click done!

Creating Stretched Clusters could not be easier with VSAN. In fact the process of configuring different types of VSAN clusters, with features like deduplication and compression enabled is now a snap in the new Configuration Wizard.

In this article I will spend a few minutes walking through the wizard to get my Stretched Cluster set up. But first, here is a quick description of my environment. It's pretty standard: two datacenters in the metro area, about 30km apart, and a head office where I will run my witness VM.

Screen Shot 2016-03-20 at 9.03.18 AM

So let's get started. First you need to download and deploy the witness appliance. It's a basic appliance configuration, and during deployment it will ask what size appliance to create based on the number of VMs to be managed. Grab it from the vSphere download link on the website.

Screen Shot 2016-03-20 at 7.00.35 AM (2)

As it stands I have 4 hosts in the cluster but am yet to configure VSAN. Commencing the configuration wizard, we are asked if we want to create a standard, 2 node or Stretched Cluster. In VSAN 6.2, as mentioned before, the configuration of all of these deployments has been streamlined into one wizard.

Here I’m simply going to select Automatic claiming of my disks and select Stretched Cluster.

Screen Shot 2016-03-20 at 8.32.59 AM

We now have validation of the VSAN networks for each host in the cluster and I can see clearly my VSAN VMkernel network has been correctly configured, as per any normal VSAN host.

Screen Shot 2016-03-20 at 8.33.11 AM

First, I have renamed both of my Fault Domains to reference the locations of the datacenters. I then need to place the hosts into the correct Fault Domain, and I can do this with a single click.

Screen Shot 2016-03-20 at 8.40.42 AM

So now my Fault Domains mirror what my physical deployment represents: I have two Dell hosts in the City Datacenter and two in the Clayton Datacenter.

Screen Shot 2016-03-20 at 8.41.11 AM

Of course, with VSAN Stretched Clusters I require access to the Witness appliance which I downloaded and deployed earlier. This appliance actually runs back at my main site. In this step I simply need to select the Witness for the cluster. You can see that this Witness is essentially a virtual ESXi host; however, we have made it easy to differentiate by using a blue icon.

Screen Shot 2016-03-20 at 8.41.30 AM

Depending on my choices I may need to ensure I claim the storage on this Witness appliance. It is important to do this, as VSAN today always expects to find a similar configuration across data nodes and witness nodes. Hence we have created the appliance with a cache disk and a capacity disk.

Screen Shot 2016-03-20 at 6.57.19 AM

Verify the settings and complete.

Screen Shot 2016-03-20 at 8.41.38 AM

Now if we navigate over to the Stretched Cluster Management we can verify that it is correctly enabled. We can also see that the preferred Fault Domain is the Clayton Datacenter. This means that in the event of a split-brain scenario, the Clayton Datacenter and the Witness will form a quorum and the City Datacenter will be isolated. In this case HA will power on VMs in the Clayton Datacenter. The Preferred Site is also denoted by the yellow star on the Clayton node.
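The split-brain behavior described above is just majority voting: each data site and the witness hold votes, and only a partition with a strict majority keeps its objects accessible. A simplified sketch under that assumption (the vote values are illustrative):

```python
# Simplified vote counting for a vSAN stretched cluster split-brain.
# When the inter-site link fails but both sites still reach the witness,
# the witness sides with the preferred fault domain, giving it a majority.

VOTES = {"Clayton": 1, "City": 1, "Witness": 1}

def has_quorum(partition):
    """True if this partition holds a strict majority of the total votes."""
    return sum(VOTES[m] for m in partition) > sum(VOTES.values()) / 2

# Inter-site link down: witness joins the preferred site (Clayton).
print(has_quorum({"Clayton", "Witness"}))   # True: VMs restart here via HA
print(has_quorum({"City"}))                 # False: this site is isolated
```

This is why the witness lives at a third site: whichever data site it can still reach after a partition automatically becomes the surviving majority.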

Today a Stretched Cluster can tolerate only one site failure; however, other mechanisms such as SRM and vSphere Replication can be used to provide further protection against failures if required.


Screen Shot 2016-03-20 at 6.58.46 AM

Monitoring the health of a Virtual SAN cluster is paramount, and a Stretched Cluster is no exception. VSAN now includes some additional Health Checks which help to monitor a Stretched Cluster effectively. Below is a grab of the checks we now have to ensure you stay happy and healthy.

I have introduced some artificial latency to demonstrate what happens here. As you will note, the documented requirement for RTT latency between the two datacenters is up to 5ms.

Screen Shot 2016-03-20 at 7.13.56 AM

If I increase the latency I see an error such as the one below. In addition, VSAN 6.2 now also raises Health Alarms on the Summary pages.

Screen Shot 2016-03-20 at 7.34.53 AM
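Conceptually, this health check compares measured site-to-site round-trip times against the 5ms limit mentioned above. A toy version of that comparison (the sampling and threshold logic here are illustrative, not the actual health service implementation):

```python
# Compare measured inter-site RTT samples against the documented 5 ms limit.
RTT_LIMIT_MS = 5.0

def site_latency_status(samples_ms):
    """Green if the average round-trip time stays within the limit."""
    avg = sum(samples_ms) / len(samples_ms)
    return "green" if avg <= RTT_LIMIT_MS else "red"

print(site_latency_status([1.2, 1.4, 1.1]))   # green: healthy link
print(site_latency_status([8.5, 9.1, 7.9]))   # red: raise a health alarm
```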

Alarms can be configured to send email or SNMP alerts so that other people or systems are notified when things aren't working correctly.

Screen Shot 2016-03-20 at 7.37.34 AM

 

Other considerations for Stretched Clusters include different networking topologies, Host Groups and Rules, and HA settings. All are covered in the Stretched Cluster Guide linked below.

 

There is a wealth of information available for Stretched Clusters today:

VSAN 6.2 Stretched Cluster Guide https://www.vmware.com/files/pdf/products/vsan/VMware-Virtual-SAN-6.2-Stretched-Cluster-Guide.pdf

VSAN Stretched Cluster Bandwidth and Sizing Guide http://www.vmware.com/files/pdf/products/vsan/vmware-virtual-san-6.1-stretched-cluster-bandwidth-sizing.pdf

VSAN Stretched Cluster Performance and Best Practices http://www.vmware.com/files/pdf/techpaper/vmware-virtual-san-stretched-cluster.pdf

VSAN Stretched Clusters and SRM https://www.vmware.com/files/pdf/techpaper/Stretched_Clusters_and_VMware_vCenter_Site_Recovery_Manage_USLTR_Regalix.pdf

Designing a VSAN Stretched Cluster http://www.yellow-bricks.com/2015/09/23/designing-a-virtual-san-stretched-cluster

VSAN Stretched Cluster demo http://www.yellow-bricks.com/2015/09/10/virtual-san-stretched-clustering-demo

VSAN Stretched Cluster supported network topologies http://cormachogan.com/2015/09/10/supported-network-topologies-for-vsan-stretched-cluster

 

Hopefully you can now see how simple and intuitive it is to create VSAN clusters with 6.2.

Virtual SAN 6.2 – What’s in a release?

The Virtual SAN 6.2 release was a milestone for the company. In this release we focused not only on bringing enterprise-class data efficiency features, but also on increasing the simplicity of VSAN operations in the vSphere Web Client for customers.

I can tell you that the amount of work that goes into such a release takes an all-hands effort.

Although we make every effort to ensure our partners and customers have access to the latest material we develop for a release, it can sometimes be hard to keep up. I've collected here a list of resources that you will hopefully find useful.

New product enhancements include:

  • All-Flash Deduplication/Compression: Deduplication eliminates duplicate copies of repeating data within the same disk group. This feature is enabled or disabled for the whole cluster. Deduplication and compression happen during de-staging from the caching tier to the capacity tier (deduplication first, then compression). The main value is storage savings that can range from 2x to 7x depending on the workloads.
  • All-Flash Erasure Coding (RAID5/RAID6): Erasure coding (EC) is a method of data protection in which data is broken into fragments that are expanded and encoded with a configurable number of redundant pieces and stored across different nodes. This provides the ability to recover the original data even if some fragments are missing. Erasure coding provides a much more storage-efficient way of delivering FTT=1 and FTT=2 on VSAN. This is a setting per VMDK/object (RAID5 is 3+1 and RAID6 is 4+2).
  • Performance monitoring: Comprehensive performance monitoring in vCenter UI with common metrics across all levels (cluster, individual physical SSD/HDD, virtual disks)
  • Capacity reporting: A VSAN specific capacity view to report cluster-wide space utilization as well as detailed breakdown by data/object types. When dedup/compression/EC is enabled, it shows normalized space utilization and savings of these features
  • Health Service enhancements: proactive rebalance from UI, event based VC alarming, etc.
  • SDK for Automation & Third party integration: VSAN management SDK extended from the vSphere API to deploy/configure/manage/monitor VSAN. This will be available through many language bindings (SOAP, .NET, Java, Perl, Python, and Ruby) and will include code samples.
  • Software Checksum: End-to-end checksums help provide data integrity by detecting and repairing data corruption that could be caused by bit rot or other issues in the physical storage media. Checksums are enabled by default, but may be enabled or disabled on a per virtual machine/object basis via SPBM.
  • Quality of Service (QoS): Set IOPS Limits: This provides the ability to set the maximum IOPS a VMDK can take (This will be a hard limit). This is a setting per VMDK, through Storage Policy-Based Management (SPBM). Customers wanting to mix diverse workloads will be interested in being able to keep workloads from impacting each other and avoiding the noisy neighbor issue.
  • IPv6 Support: Support for pure IPv6 environments
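The space-efficiency claim for erasure coding in the list above is easy to quantify: a RAID-1 mirror at FTT=1 consumes 2x the raw capacity (3x at FTT=2), while RAID-5 (3+1) consumes about 1.33x and RAID-6 (4+2) consumes 1.5x. A quick calculation using those stripe widths:

```python
# Raw capacity consumed per usable GB for each protection scheme,
# using the stripe widths given in the feature list (3+1 and 4+2).

def overhead(data_fragments, parity_fragments):
    """Raw-to-usable ratio for a mirrored or erasure-coded layout."""
    return (data_fragments + parity_fragments) / data_fragments

schemes = {
    "FTT=1 mirror (RAID-1)": overhead(1, 1),   # 2.00x
    "FTT=2 mirror (RAID-1)": overhead(1, 2),   # 3.00x
    "FTT=1 RAID-5 (3+1)":    overhead(3, 1),   # 1.33x
    "FTT=2 RAID-6 (4+2)":    overhead(4, 2),   # 1.50x
}

for name, ratio in schemes.items():
    print(f"{name}: {ratio:.2f}x raw per usable GB")
```

In other words, moving an FTT=1 VMDK from mirroring to RAID-5 cuts its raw footprint from 2x to 1.33x, before any deduplication or compression savings are applied.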

Updated Collateral

There will be more updates in the coming weeks, and there are also some blogs to look at for more information.

There are plenty of others. The best place to stay up to date is to bookmark our Technical Resources page https://www.vmware.com/products/virtual-san/resources or follow us on Twitter @vmwarevsan