VMware vSphere 8: Identify NFS, iSCSI, and SAN Storage Access Protocols

In this post, we’ll identify and discuss the storage access protocols that are used in VMware vSphere 8.0. This post covers Objective 1.3 of the VMware 2V0-21.23 Exam Preparation Guide, where we’ll learn about the different storage access protocols such as NFS, iSCSI, and Fibre Channel SAN.

In traditional storage environments, the ESXi storage management process starts with storage space that the storage administrator pre-allocates on different storage systems. vSphere 8.0 supports several storage types, including local storage and networked storage. The following are some of the storage types that VMware vSphere 8 supports:

Local Storage

Local storage consists of one or more internal hard disks directly attached to the system, and it appears as a local datastore to the ESXi host. It does not require a network to communicate with the host; it only needs a cable connected to the storage device and, when required, a compatible HBA in the host.

Networked Storage

Networked storage consists of external storage systems that the ESXi host uses to store virtual machine files remotely; these storage systems are accessed over a high-speed storage network. Because the storage devices are shared over the network, the datastores on them are called shared datastores, and multiple ESXi hosts can access them simultaneously.

ESXi supports multiple networked storage technologies, and VMware vSphere 8 also supports virtualized shared storage such as vSAN. vSAN transforms the internal storage resources of the ESXi hosts into shared storage that enables features such as High Availability (HA) and vMotion for VMs.

Fibre Channel (FC) SAN

Fibre Channel (FC) storage, or FC SAN, stores virtual machine files remotely on a Fibre Channel storage area network (SAN): a specialized high-speed network that connects your ESXi hosts to high-performance storage devices. The network uses the Fibre Channel protocol to transport SCSI traffic from the VMs to the FC SAN devices.

To connect to the FC SAN, the ESXi host must be equipped with Fibre Channel host bus adapters (HBAs), and Fibre Channel switches are needed to route the storage traffic. If the ESXi host contains Fibre Channel over Ethernet (FCoE) adapters, you can connect to your shared Fibre Channel devices over an Ethernet network.


Fibre Channel (FC) Storage
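
As a quick way to verify FC connectivity from the management side, the sketch below lists the Fibre Channel HBAs each host presents, using pyVmomi (VMware’s Python SDK for the vSphere API). This is a minimal sketch, not a definitive implementation; the vCenter address and credentials are placeholders you would replace with your own:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab convenience; validate certificates in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
try:
    view = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        for hba in host.config.storageDevice.hostBusAdapter:
            # FibreChannelHba matches FC HBAs; the FCoE adapter type inherits from it
            if isinstance(hba, vim.host.FibreChannelHba):
                print("%s: %s model=%s wwpn=%016x" % (
                    host.name, hba.device, hba.model, hba.portWorldWideName))
    view.Destroy()
finally:
    Disconnect(si)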

Internet SCSI (iSCSI)

Internet SCSI, or iSCSI, stores virtual machine files on remote iSCSI storage devices. It encapsulates SCSI storage traffic in the TCP/IP protocol so it can travel over standard TCP/IP networks instead of a specialized FC network. With an iSCSI connection, your ESXi host serves as the initiator that communicates with a target located on remote iSCSI storage systems.

ESXi offers the following types of iSCSI connections:

  • Hardware iSCSI: Hardware iSCSI allows your ESXi host to connect to network storage through a third-party adapter that offloads the iSCSI and network processing. Hardware adapters can be dependent or independent.
  • Software iSCSI: Your host uses a software-based iSCSI initiator in the VMkernel to connect to storage. With this type of iSCSI connection, your host needs only a standard network adapter for network connectivity (a configuration sketch follows the figure below).
iSCSI Storage
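
To make the software iSCSI path concrete, here is a hedged pyVmomi sketch that enables the VMkernel software initiator on an ESXi host and adds a dynamic-discovery (Send Targets) portal. The ESXi hostname, credentials, and the target address 192.0.2.50 are placeholders, not defaults:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab convenience; validate certificates in production
si = SmartConnect(host="esxi01.example.com", user="root",
                  pwd="changeme", sslContext=ctx)
try:
    host = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.HostSystem], True).view[0]
    ss = host.configManager.storageSystem

    ss.UpdateSoftwareInternetScsiEnabled(True)  # turn on the VMkernel software initiator
    ss.RefreshStorageSystem()                   # re-read adapter info after the change

    # Locate the software iSCSI adapter (vmhbaNN) that the initiator exposes
    sw_hba = next(h for h in ss.storageDeviceInfo.hostBusAdapter
                  if isinstance(h, vim.host.InternetScsiHba) and h.isSoftwareBased)

    # Point dynamic discovery (Send Targets) at the storage system's portal
    target = vim.host.InternetScsiHba.SendTarget(address="192.0.2.50", port=3260)
    ss.AddInternetScsiSendTargets(iScsiHbaDevice=sw_hba.device, targets=[target])
    ss.RescanAllHba()  # scan for LUNs behind the new target
finally:
    Disconnect(si)

After the rescan, any LUNs the target presents to this initiator appear as new storage devices on the host.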

Network-attached Storage (NAS)

Network Attached Storage, or NAS, stores virtual machine files on remote file servers accessed over a standard TCP/IP network. The NFS client built into ESXi uses Network File System (NFS) protocol versions 3 and 4.1 to communicate with NAS/NFS servers; for network connectivity, the host requires only a standard network adapter.

An NFS volume can be mounted directly on the ESXi host, and VMs can be managed and stored on NFS datastores in the same way as on VMFS datastores.
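
As a concrete example, the vSphere API mounts an NFS export through the host’s datastore system. Below is a minimal pyVmomi sketch using CreateNasDatastore(); the NFS server address, export path, and datastore name are illustrative values you would replace with your own:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab convenience; validate certificates in production
si = SmartConnect(host="esxi01.example.com", user="root",
                  pwd="changeme", sslContext=ctx)
try:
    host = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.HostSystem], True).view[0]

    spec = vim.host.NasVolume.Specification(
        remoteHost="nfs.example.com",    # NAS server address
        remotePath="/exports/vmstore",   # exported path on the server
        localPath="nfs-datastore01",     # datastore name as shown in vSphere
        accessMode="readWrite",
        type="NFS")                      # "NFS" = version 3, "NFS41" = version 4.1

    ds = host.configManager.datastoreSystem.CreateNasDatastore(spec)
    print("Mounted %s (type %s)" % (ds.name, ds.summary.type))
finally:
    Disconnect(si)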

The NFS Storage figure below depicts a virtual machine using an NFS datastore to store its files. In this configuration, the host connects through a regular network adapter to the NAS server, which stores the virtual disk files.


NFS Storage

Virtual Machine File System (VMFS)

The datastores that you deploy on block storage devices use the native vSphere Virtual Machine File System (VMFS) format. It is a special high-performance file system format that is optimized for storing virtual machines.
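
One simple way to see the VMFS/NFS split in an environment is to list every datastore’s file system type and capacity. A minimal pyVmomi sketch, again with placeholder connection details:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab convenience; validate certificates in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
try:
    view = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        s = ds.summary  # s.type is "VMFS", "NFS", "NFS41", "vsan", etc.
        print("%s: type=%s capacity=%.1f GiB free=%.1f GiB hosts=%d" % (
            s.name, s.type, s.capacity / 2**30, s.freeSpace / 2**30, len(ds.host)))
    view.Destroy()
finally:
    Disconnect(si)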

Shared Serial Attached SCSI (SAS)

Shared SAS storage stores VMs on direct-attached SAS storage systems that offer shared access to multiple hosts in the vSphere environment, allowing these hosts to access the same VMFS datastore on a LUN.

NVMe over Fabrics Storage

VMware NVMe over Fabrics (NVMe-oF) provides distance connectivity between a host and a target storage device on a shared storage array. vSphere supports the NVMe over RDMA (with RoCE v2 technology), NVMe over Fibre Channel (FC-NVMe), and NVMe over TCP transports.

A device, or LUN, is identified by its UUID name. If multiple hosts share a LUN, it must be presented to all of them with the same UUID.

How Do Virtual Machines (VMs) Access Storage?

When a virtual machine communicates with its virtual disk stored on a datastore, it issues SCSI commands. Because datastores can exist on various types of physical storage, these commands are encapsulated into other forms, depending on the protocol that the ESXi host uses to connect to the storage device.

ESXi supports the Fibre Channel (FC), Internet SCSI (iSCSI), Fibre Channel over Ethernet (FCoE), and NFS protocols. Regardless of the type of storage device your host uses, the virtual disk always appears to the VM as a mounted SCSI device. The virtual disk hides the physical storage layer from the VM’s operating system (OS), which allows you to run operating systems that are not certified for specific storage equipment, such as SAN, inside the virtual machine.
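
You can observe this abstraction from the management side: each virtual disk is simply a file on some datastore, regardless of the protocol behind it. The following pyVmomi sketch prints a VM’s virtual disks and their backing files; the VM name web01 is hypothetical:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab convenience; validate certificates in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
try:
    view = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "web01")  # hypothetical VM name
    for dev in vm.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualDisk):
            # For file-backed disks, backing.fileName looks like
            # "[datastore1] web01/web01.vmdk"; the guest only sees a generic disk
            print("%s: %s (%d GiB)" % (dev.deviceInfo.label,
                                       dev.backing.fileName,
                                       dev.capacityInKB // (1024 * 1024)))
    view.Destroy()
finally:
    Disconnect(si)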

Virtual Machines accessing different types of storage
