7 May 2013

vSphere file systems: VMFS 5, plus VMFS vs. NFS

Storage admins are no doubt familiar with traditional Windows file systems (NTFS) and Linux file systems (ext3) running on servers with these operating systems installed. But they likely aren't as conversant with the most widely used file system for VMware's vSphere/ESXi hypervisor, VMFS.

VMFS owes its status as the most popular file system for VMware to the fact that it was purpose-built for virtualization. By enabling advanced vSphere features (like Storage vMotion) or powerful VM features (such as snapshots), this cluster-aware file system is a key (but often overlooked) piece of vSphere that must be considered to ensure a successful virtual infrastructure. And the latest version, VMFS 5, delivers a number of updates.

You may be wondering why we need a new file system just for vSphere when NFS could suffice. There are a number of things that make VMFS special and necessary. Consider the following:

    Unlike other file systems, VMFS was designed solely for housing virtual machines.
    Multiple ESXi servers can read/write to the file system at the same time.
    ESXi servers can be connected to or disconnected from the file system without disrupting the other servers using it or the virtual machines inside it.
    VMFS' on-disk file locking ensures that two hosts don't try to power on the same virtual machine at the same time.
    It was designed to have performance as close as possible to native SCSI, even for demanding applications.
    In the event of a host failure, VMFS recovers quickly thanks to distributed journaling.
    VMFS can be run on top of iSCSI or Fibre Channel.
    Unlike NFS, which operates at the file level, VMFS is a block-level file system.
    Point-in-time snapshots can be taken of each virtual machine to preserve the OS and application state prior to installing patches or upgrades. Snapshots are also used by backup and recovery applications to perform backup without downtime to the VM.
    Running low on disk space? VMFS allows you to hot-add new virtual disks to running virtual machines.
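As one illustration of that last point, the sketch below prints the vmkfstools commands an administrator could run from an ESXi host's shell to create a thin-provisioned virtual disk and later grow it while the VM is running. The datastore and VM names are hypothetical examples, and the commands are echoed rather than executed, so the sketch is safe to paste anywhere.

```shell
# Dry-run sketch (hypothetical datastore "datastore1", VM "winvm01"):
# each command is echoed, not executed; remove the leading 'echo' to run
# these for real from an ESXi host's shell.
VMDK="/vmfs/volumes/datastore1/winvm01/winvm01_1.vmdk"

# Create a 10 GB thin-provisioned virtual disk on the VMFS datastore
echo vmkfstools -c 10G -d thin "$VMDK"

# Grow the same disk to 20 GB while it is attached to a running VM; the
# guest OS must then extend its own partition to use the new space
echo vmkfstools -X 20G "$VMDK"
```

Note that growing the VMDK only enlarges the container; the guest still has to extend its file system onto the new space.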

You can't run Windows computers on VMFS, but you can run lots of Windows virtual machines, stored in virtual machine disk files (known as VMDKs) inside VMFS. You can think of the virtual disks that represent each virtual machine as being mounted SCSI disks. This enables you to run any operating system inside a virtual machine disk on your SAN, even if that OS is DOS and wasn't designed to run on, say, an iSCSI SAN.
VMFS vs. NFS

While VMware supports the use of both VMFS (SAN-based block storage) and NFS (NAS-based file storage) for vSphere shared storage, new features have usually arrived on VMFS first (and on NFS later). Today there are no significant functional differences between the two, but most people at VMware recommend VMFS (which makes sense, as the company designed it for exactly this purpose). For more information on the VMFS vs. NFS debate, see this post by NetApp's Vaughn Stewart.

No matter the option you choose, by using shared storage with vSphere, you can use the following advanced features (assuming your version of vSphere is licensed for them):

    vMotion, which moves running virtual machines from one host to another
    Storage vMotion, which moves running virtual machine disk files from one vSphere datastore to another
    Storage Distributed Resource Scheduler (SDRS), which rebalances virtual machine disk files when a vSphere datastore is running slow (high latency) or is running out of storage capacity
    vSphere High Availability, whereby, if a host fails, the virtual machines are automatically restarted on another host

Keep in mind that shared storage formatted with VMFS (or NFS) is required for these advanced features. While you might have local VMFS storage on each host, that local storage won't enable these features on its own, unless you use a virtual storage appliance (VSA), such as the vSphere Storage Appliance, to provide shared storage without a physical disk array.
New in VMFS 5

With the release of vSphere 5, VMFS has been updated with a number of new features. They are:

    New partition table; GUID partition table (GPT) is used instead of master boot record (MBR)
    Larger volume size, with support for volumes as large as 64 TB
    Unified block size of 1 MB
    Smaller sub-block size (8 KB, down from 64 KB in VMFS 3)
    The capability to upgrade from VMFS 3 to VMFS 5 without disruption to hosts or VMs

While the benefits of these changes may not be immediately evident, together they deliver the largest volume sizes yet and the most efficient version of the file system to date.
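The first two items in the list are linked: MBR records a partition's size in a 32-bit sector count, which with 512-byte sectors caps a partition at 2 TiB, while GPT's 64-bit addressing is what lets VMFS 5 reach 64 TB volumes. A quick shell-arithmetic check of that MBR ceiling:

```shell
# MBR stores a 32-bit sector count; with 512-byte sectors that caps a
# partition at 2 TiB, which is why VMFS 5 moved to GPT for 64 TB volumes.
SECTOR_BYTES=512
MBR_MAX_SECTORS=$((1 << 32))                         # 32-bit LBA field
MBR_LIMIT_BYTES=$((MBR_MAX_SECTORS * SECTOR_BYTES))
MBR_LIMIT_TIB=$((MBR_LIMIT_BYTES / 1024 / 1024 / 1024 / 1024))
echo "MBR partition ceiling: ${MBR_LIMIT_TIB} TiB"   # prints 2 TiB
```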

For more information on VMFS 5, see VMware's What's New in VMware vSphere 5 Storage.
Configuring VMFS

Assuming your server virtualization environment uses VMFS, how do you find out its capacity, VMFS version and block size? It's easy: In the vSphere Client, go to the Datastores and Datastore Clusters inventory view. Click on each datastore and you'll see basic information on the Summary tab; for more detail, click the Configuration tab, as shown below.



As you can see in the screenshot, this local storage is formatted with VMFS 5.54 and has a block size of 1 MB. There is a single path to the datastore, and it has just one extent.
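For admins who prefer the command line, the same details are available from the ESXi shell. The sketch below only echoes the commands rather than running them (the datastore name is a hypothetical example), so it can be pasted anywhere without side effects.

```shell
# Dry-run sketch: prints (rather than runs) the ESXi shell commands that
# report a datastore's VMFS version, capacity, block size and extents.
# "datastore1" is a hypothetical name; drop the 'echo' on a real host.

# -P queries the file system; -h prints sizes in human-readable units
QUERY_CMD="vmkfstools -Ph /vmfs/volumes/datastore1"

# Lists the device and partition backing each VMFS datastore's extents
EXTENT_CMD="esxcli storage vmfs extent list"

echo "$QUERY_CMD"
echo "$EXTENT_CMD"
```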

If a datastore is running an older version of VMFS (such as VMFS 3), this is also where you would upgrade it.

If you click Properties for the datastore, shown below, you can manage the paths, add extents to increase the size of the volume or enable Storage I/O Control (SIOC).



Using the Datastore Browser (accessed from the datastore's Summary tab), shown below, you can look inside a VMFS or NFS datastore to see what it contains.



Unlike with Windows or Linux file systems, you won't see any operating system files inside your VMFS datastores. Instead, you'll just see folders for each of the virtual machines and, inside those, you'll find the virtual machine VMX configuration file and VMDK file (among other, less important VM files).
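As a sketch of what that folder layout typically looks like, the snippet below prints an illustrative listing for a hypothetical VM named winvm01; the exact set of files varies with the VM's state and configuration.

```shell
# Illustrative listing of a typical VM folder on a VMFS datastore, as an
# admin might see with:  ls /vmfs/volumes/datastore1/winvm01/
# The VM name "winvm01" is hypothetical; exact files vary per VM.
FILES="winvm01.vmx winvm01.vmdk winvm01-flat.vmdk winvm01.nvram winvm01.vswp vmware.log"
for f in $FILES; do
    case "$f" in
        *.vmx)        note="VM configuration file" ;;
        *-flat.vmdk)  note="virtual disk data (the actual blocks)" ;;
        *.vmdk)       note="virtual disk descriptor" ;;
        *.nvram)      note="virtual BIOS settings" ;;
        *.vswp)       note="swap file, present while the VM is powered on" ;;
        *.log)        note="VM runtime log" ;;
    esac
    printf '%-18s %s\n' "$f" "# $note"
done
```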
