Section 3 - Configure and Administer Advanced vSphere 6.x Storage
VMFS3 (datastore) creation is not supported in vSphere 6.x.
Physical Raw Device Mapping - allows the virtual machine to have direct access to the LUN attached to it.
Objective 3.1 - Manage vSphere Storage Virtualization
List of Storage Adapters: SCSI, iSCSI, RAID, Fibre Channel, FCoE, Ethernet
Device drivers are part of the VMkernel and are accessed directly by ESXi.
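e.g. to list the storage adapters present on a host (a sketch from the ESXi Shell):
esxcli storage core adapter list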
SCSI INQUIRY Identifier: The host issues the SCSI INQUIRY command and uses the page 83 (Device Identification) data to generate a unique identifier:
- naa.number
- t10.number
- eui.number
Path-based identifier (when the device does not return page 83 information):
- mpx.path
e.g.:
mpx.vmhba1.C0.T0.L0
Note: This is created for local devices during boot and is not unique or persistent.
Legacy Identifier:
- vml.number
The number digits are unique to the device and can be taken from a part of the page 83 information if available.
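e.g. to see which identifiers (naa/t10/eui/mpx) have been assigned to each device (ESXi Shell sketch; the device ID shown is a placeholder):
esxcli storage core device list
esxcli storage core device list --device=naa.60003ff44dc75adc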
A hardware iSCSI adapter offloads the network and iSCSI processing from the host.
- Dependent Hardware iSCSI Adapter
-- Depends on VMware networking and the iSCSI management interfaces within VMware
-- Depends upon the host’s network configuration for IP and MAC
- Independent Hardware iSCSI Adapter
-- These adapters are independent of the host and VMware
-- Provides its own configuration management for address assignment
- Software iSCSI Adapter (built into VMware’s code; uses host resources)
Statements regarding iSCSI adapters...
... Dependent Hardware iSCSI adapters require VMkernel networking
... Independent Hardware iSCSI adapters do not require VMkernel networking
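e.g. to list the iSCSI adapters on a host and enable the software iSCSI adapter (ESXi Shell sketch):
esxcli iscsi adapter list
esxcli iscsi software set --enabled=true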
Virtual Disk Thin Provisioning:
- Allows you to create virtual disks with a logical size that initially differs from the physical space used on the datastore
- Can lead to over-provisioning of storage resources
Array Thin Provisioning:
- Thin provision a LUN at the array level
- Allows you to create a LUN on your array with a logical size that initially differs from the physical space allocated
- Array thin provisioning is not ESXi-aware without the vSphere Storage APIs for Array Integration (VAAI)
- Using VAAI you can monitor space on thin-provisioned LUNs and tell the array when files are freed so it can reclaim the space (see the unmap example below)
Opinion: If your array supports array thin provisioning and VAAI, then use array thin provisioning and thick disks within vSphere.
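e.g. to ask a VAAI-capable array to reclaim freed blocks on a thin-provisioned LUN (ESXi Shell sketch; the datastore name is a placeholder):
esxcli storage vmfs unmap --volume-label=Datastore01 --reclaim-unit=200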
Zoning:
- Use single-initiator zoning or single-initiator single-target zoning
- Reduces the number of LUNs and targets presented to a particular host
- Controls/isolates paths in your SAN fabric
- Prevents unauthorized systems from accessing targets and LUNs
LUN masking:
- Limits which hosts can see which LUNs
- Can be done at the array layer or the VMware layer
Scan/Rescan storage:
- When adding a new storage device
- After adding/removing iSCSI targets
Rescan Storage actions:
- Scan for new Storage Devices
- Scan for new VMFS Volumes
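e.g. the equivalent rescan actions from the ESXi Shell (a sketch):
esxcli storage core adapter rescan --all   # scan for new storage devices
esxcli storage filesystem rescan           # scan for new VMFS volumes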
Configure FC/iSCSI LUNs as ESXi boot devices:
- Each host must have access to its own boot LUN
- Individual ESXi hosts should not be able to see boot LUNs other than their own
- Check with the vendor how to configure the storage adapter to boot from SAN
- The iSCSI software adapter (or a dependent adapter) can be used if it supports iBFT (iSCSI Boot Firmware Table)
- Configure the boot sequence in the BIOS
Create an NFS share for use with vSphere:
- Create a storage volume (and optional folder)
- Create an export for that volume (or folder) allowing the IP of the host(s) read/write access to the storage
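e.g. to mount the export as a datastore from the host side (ESXi Shell sketch; server IP, export path and datastore names are placeholders):
esxcli storage nfs add --host=192.168.1.50 --share=/export/ds01 --volume-name=nfs-ds01
esxcli storage nfs41 add --hosts=192.168.1.50 --share=/export/ds01 --volume-name=nfs41-ds01   # NFS v4.1 variant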
4 different storage filters in vSphere that are enabled by default (set as vCenter Server advanced settings):
- config.vpxd.filter.vmfsFilter
- config.vpxd.filter.rdmFilter
- config.vpxd.filter.SameHostAndTransportsFilter
- config.vpxd.filter.hostRescanFilter
Authentication methods for Outgoing and Incoming CHAP (iSCSI):
- None
- Use unidirectional CHAP if required by target
- Use unidirectional CHAP unless prohibited by target
- Use unidirectional CHAP
- Use bidirectional CHAP
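e.g. checking and setting CHAP on an iSCSI adapter from the ESXi Shell (a sketch; the adapter, name and secret are placeholders, and option names can vary between builds - check the command help):
esxcli iscsi adapter auth chap get --adapter=vmhba65
esxcli iscsi adapter auth chap set --adapter=vmhba65 --direction=uni --level=required --authname=iqn.1998-01.com.vmware:esx01 --secret=MySecret123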
Use case for an Independent hardware iSCSI initiator:
- A very heavy iSCSI environment with a lot of I/O (e.g. OLTP)
Use case for a Dependent hardware iSCSI initiator:
- A high iSCSI I/O environment
Use case for a Software iSCSI initiator:
- Low cost
Use cases for array thin provisioning:
- Uniformity
- Less overhead
- Ease of use
Objective 3.2 - Configure Software-defined Storage
Before you can configure Virtual SAN (VSAN):
- You need at least 3 hosts to form a VSAN cluster
- Each host requires a minimum of 6 GB memory
- Make sure devices/firmware are listed in the VMware Compatibility Guide
- Ensure you have the proper disks for the intended configuration (SAS and SSD, or just SSD)
- Storage devices used for VSAN must be local to the ESXi host with no pre-existing partitions
- Each disk group needs one SSD for caching plus capacity devices: SAS/magnetic drives in a hybrid configuration, or SSDs in an all-flash configuration
- Ensure enough space to account for availability requirements
- The latest on-disk format (2.0) of VSAN requires 1% capacity per device
- In all-flash configurations, mark the SSD devices that will be used for capacity (see the tagging sketch below)
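e.g. tagging a flash device so VSAN uses it for capacity rather than cache in an all-flash setup (ESXi Shell sketch; the device ID is a placeholder):
esxcli vsan storage tag add --disk=naa.500a07510f86d6b3 --tag=capacityFlash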
Note i: VSAN will never put more than one replica of the same object in the same fault domain.
Note ii: Can Enable/Disable Virtual SAN Fault Domains
Configure the VSAN network:
- VSAN doesn’t support multiple VMkernel adapters on the same subnet
- Multicast must be enabled on the physical switches
-- Allows metadata to be exchanged between the different hosts in a VSAN cluster
-- Enables the heartbeat connection between the hosts in the VSAN cluster
- Segment VSAN traffic on its own VLAN
- Use fault domains to spread data across multiple hosts
- Use 10 GbE adapter(s) on your physical hosts
- Create a port group on a virtual switch specifically for VSAN traffic
- Ensure the physical adapter used for VSAN is assigned as an active uplink on the port group
- Need a unique multicast address for each VSAN cluster on the same layer 2 network
Note i: Virtual SAN is enabled (tick ‘Turn ON’) on the cluster
Note ii: ‘Add disks to storage’ (disk claiming) mode: Automatic or Manual
Note iii: Licensing for VSAN is required
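e.g. to check VSAN cluster membership and which VMkernel interface carries VSAN traffic (ESXi Shell sketch):
esxcli vsan cluster get
esxcli vsan network list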
Creating a VVOL (VMware Virtual Volume) - high level steps:
- Register Storage Providers for the Virtual Volumes*
- Create a Virtual Datastore**
- Verify that the protocol endpoints exist
- (Optional) Change the PSP for the protocol endpoint
*Usually the VASA provider URL for the underlying storage array
**Select VVOL as the type of datastore
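e.g. to verify protocol endpoints and storage containers from a host (ESXi Shell sketch, assuming the esxcli vvol namespace available in ESXi 6.x):
esxcli storage vvol protocolendpoint list
esxcli storage vvol storagecontainer list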
VM Storage Policies -> Create a new VM storage policy...
Note: Storage Policies can have tag-based rules
Note: If you see zero storage policies, you need to enable VM storage policies
Note: Options include: Compatible and Incompatible
Note: The Virtual SAN network is used for HA network heartbeat configuration on a Virtual SAN cluster
Two virtual machine states indicative of a fault in the Virtual SAN cluster...
... The virtual machine is non-compliant and the compliance status of some of its objects is noncompliant
... The virtual machine object is inaccessible or orphaned
Objective 3.3 - Configure vSphere Storage Multi-pathing and Failover
Identify available Storage Load Balancing options - path statuses:
- Active -- used for active I/O*
- Standby -- the path will become active if the active path fails
- Disabled -- the path is disabled and can’t accept data
- Dead -- the path currently has no connectivity to the datastore/device
*The active path currently accepting data will be marked “Active (I/O)”
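e.g. to see the state of every path, or just the paths to one device (ESXi Shell sketch; the device ID is a placeholder):
esxcli storage core path list
esxcli storage core path list --device=naa.60003ff44dc75adc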
Identify available Storage Multi-pathing policies:
- 2 types of multi-pathing policy are available to storage devices:
-- PSP = Path Selection Policy
-- SATP = Storage Array Type Plugin
- The 3 types of PSPs available through the Native Multipathing Plugin (NMP) are:
-- Round Robin (most common): I/O rotates through the available active paths, switching to the next path after a set number of I/Os (the default is 1000)
-- Most Recently Used (MRU): sends all I/O down the first working path that is discovered at boot time. There is no failback after failover. This PSP is generally used for active/passive storage arrays.
-- Fixed: sends all I/O down the path that you set as the preferred path (or - if no path is set - the first working path discovered at boot time). There is failback.
Note: A SATP is the plugin that gets associated with the different paths to a device. Typically the SATP relates to the storage vendor or to the type of storage array that the device is connected to. (See the esxcli sketch below for viewing/changing a device’s PSP.)
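e.g. to view the SATP/PSP in use per device and change a device’s PSP to Round Robin (ESXi Shell sketch; the device ID is a placeholder):
esxcli storage nmp device list
esxcli storage nmp device set --device=naa.60003ff44dc75adc --psp=VMW_PSP_RR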
Identify features of Pluggable Storage Architecture (PSA):
- VMware NMP (Native Multi-Pathing Plugin)
- VMware SATP (VMware Storage Array Type Plugin)
- VMware PSP (Path Selection Policy plugin)
- Third-Party MPP (Multi-Pathing Plugin)
- Third-Party SATP (Storage Array Type Plugin)
Image: Pluggable Storage Architecture (PSA)
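e.g. to list the multipathing plugins registered with the PSA on a host (ESXi Shell sketch):
esxcli storage core plugin list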
Some things the PSA framework is responsible for (when coordinating the VMware NMP and any Third-Party MPPs):
- Provides logical device and physical path I/O statistics
- Loads and unloads multipathing plugins
- Routes I/O requests for a specific logical device to the MPP managing that device
- Handles I/O queueing to the physical HBAs
- Handles physical path discovery and removal
- Implements logical device bandwidth sharing between virtual machines
Multi-pathing modules (the NMP and any third-party MPPs) provide the following:
- Manage physical path claiming and unclaiming
- Manage creation, registration and deregistration of logical devices
- Associate physical paths with logical devices
- Support path failure detection and remediation
- Process I/O requests to logical devices
Considerations for Storage Multipathing...
... The default SATP for iSCSI or FC devices is VMW_SATP_DEFAULT_AA and the default PSP is VMW_PSP_FIXED
... If VMW_SATP_ALUA is assigned to a specific storage device, but the device is not ALUA-aware, performance could be degraded
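e.g. to list the SATPs with their default PSPs, and to change the default PSP for a given SATP (ESXi Shell sketch; the SATP shown is just an example):
esxcli storage nmp satp list
esxcli storage nmp satp set --satp=VMW_SATP_ALUA --default-psp=VMW_PSP_RR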
Objective 3.4 - Perform Advanced VMFS and NFS Configurations and Upgrades
Identify VMFS5 capabilities:
- VMFS5 datastore capacity is 64TB
- Block size is standardized at 1MB
- Greater than 2TB storage devices for each VMFS5 extent
- Supports virtual machines with greater than 2TB disks
- Greater than 2TB Raw Device Mappings (RDMs)
- Support for small files (1KB)
- Ability to reclaim physical storage space on thin provisioned storage devices
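e.g. to check a datastore’s VMFS version, block size and capacity (ESXi Shell sketch; the datastore name is a placeholder):
vmkfstools -Ph /vmfs/volumes/Datastore01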
Note: Any VMFS datastore you want to put in maintenance mode must be in a datastore cluster.
Note: A preferred path can only be set on a datastore that is using the Fixed path selection policy.
NFS datastores support NFS v3 and NFS v4.1
Identify available Raw Device Mappings (RDM) solutions:
- RDMs come in two flavours, virtual compatibility mode and physical compatibility mode
- An RDM is a file that exists inside a VMFS volume which manages all the metadata for the raw device
- Some use cases for using RDMs:
-- Storage resource management software
-- SAN management agents
-- Replication software
-- Microsoft Failover Clustering
Note: Whenever software needs direct access to the SCSI device, the RDM needs to be in physical compatibility mode.
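e.g. creating RDM mapping files with vmkfstools (ESXi Shell sketch; the device ID and paths are placeholders):
vmkfstools -z /vmfs/devices/disks/naa.60003ff44dc75adc /vmfs/volumes/Datastore01/vm01/vm01_prdm.vmdk   # physical compatibility
vmkfstools -r /vmfs/devices/disks/naa.60003ff44dc75adc /vmfs/volumes/Datastore01/vm01/vm01_vrdm.vmdk   # virtual compatibility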
To Disable VAAI: Host > Manage > Settings > Advanced System Settings
- DataMover.HardwareAcceleratedMove = 0
- DataMover.HardwareAcceleratedInit = 0
- VMFS3.HardwareAcceleratedLocking = 0
Note: To enable VAAI, set the values back to 1.
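e.g. the same settings via the ESXi Shell (a sketch):
esxcli system settings advanced set --option=/DataMover/HardwareAcceleratedMove --int-value=0
esxcli system settings advanced set --option=/DataMover/HardwareAcceleratedInit --int-value=0
esxcli system settings advanced set --option=/VMFS3/HardwareAcceleratedLocking --int-value=0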
Use case for multiple VMFS/NFS Datastores:
- Datastores sit on backend storage that has physical disks configured in a particular way
- Disk contention could be a problem
- HA and resiliency
Objective 3.5 - Setup and Configure Storage I/O Control
Datastore Capabilities -> Edit -> Enable Storage I/O Control
Manage the method by which the SIOC congestion threshold is set:
- Percentage of peak throughput
- Manual (a latency value in milliseconds)
Note: Can choose to ‘Exclude I/O Statistics’ from SDRS
3 different graphs to monitor SIOC:
- Storage I/O Control Normalized Latency
- Storage I/O Control Aggregate IOPS
- Storage I/O Control Activity
Storage I/O Control will not function correctly if...
... Two different datastores share the same spindles
Miscellaneous
To create a 3TB VMDK on a 2TB VMFS5 datastore on an ESXi 6.x host, two possible actions...
... Increase the LUN on which the VMFS5 datastore resides to more than 3TB, and then grow the datastore to use the added capacity.
... Map a new LUN that is larger than 1TB to the ESXi host, then add the new LUN as an extent to the VMFS5 datastore.
Two solutions to eliminate potential sources of SCSI reservation conflicts (which are causing slow performance)...
... Reduce the number of snapshots
... Upgrade the host to the latest BIOS