Tuesday, 16 August 2016

VCP6-DCV Exam Cram Notes: Section 10 of 10

Section 10: Administer and Manage vSphere Virtual Machines

When deploying multiple Windows 2003 virtual machines from the same template, to avoid network conflicts...
... Customize the guest operating system
... Copy the Microsoft Sysprep tools onto the vCenter Server system

Objective 10.1 - Configure Advanced vSphere Virtual Machine Settings

Identify available virtual machine configuration settings:
- VM Hardware (in a pure vSphere 6 environment, upgrade VMs to hardware version 11)
- Guest Operating System
- VMware Tools: upgrade automatically or not
- Virtual CPU
-- up to 128 vCPUs
-- specify hot-add or not
-- set Hyperthreading Sharing Mode (Any, None or Internal)
-- set limits, reservations and shares
- Virtual Memory
-- up to 4TB of RAM
-- specify hot-add or not
-- specify memory allocation with a NUMA node
-- set limits, reservations and shares
- Swap file location (Default, Virtual machine directory, Datastore specified by host)
- Network Adapters
- Parallel and Serial Port devices
- Fibre Channel NPIV settings
- Hard Disks
-- different types of SCSI controllers
-- 3 provisioning types (thin, lazy zeroed and eager zeroed)
-- Raw Device Mappings (RDMs)
-- Disk shares
- CD/DVD drives
- Floppy drives

Two options for changing the virtual machine swap file location...
... Store in the host’s swapfile datastore
... Always store with the virtual machine

Interpret virtual machine configuration files:
.vmx - Virtual machine configuration file
.vmxf - Additional configuration file
.nvram - Stores the BIOS state
.log - Log file for the VM
.vmdk - Descriptor file for a virtual disk
-flat.vmdk - Data disk file
-delta.vmdk - Snapshot data disk files
.vswp - Memory swap file
.vmss - Stores state when the vm is suspended
.vmsd - Snapshot file; stores metadata
.vmsn - Stores state of the vm during snapshot
.ctk - used for changed block tracking
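
The file-type list above can be captured as a small lookup table. The following Python helper is purely illustrative (the function and constant names are made up); note that the `-flat.vmdk` and `-delta.vmdk` suffixes must be checked before the plain `.vmdk` descriptor:

```python
# Map a VM file name to its role, per the file types listed above.
# Suffix order matters: "-flat.vmdk" and "-delta.vmdk" end in ".vmdk"
# too, so they are checked first.
VM_FILE_TYPES = [
    ("-flat.vmdk", "data disk file"),
    ("-delta.vmdk", "snapshot data disk file"),
    (".vmx", "virtual machine configuration file"),
    (".vmxf", "additional configuration file"),
    (".nvram", "BIOS state"),
    (".log", "VM log file"),
    (".vmdk", "virtual disk descriptor file"),
    (".vswp", "memory swap file"),
    (".vmss", "suspended state"),
    (".vmsd", "snapshot metadata"),
    (".vmsn", "snapshot state"),
    (".ctk", "changed block tracking"),
]

def classify_vm_file(name: str) -> str:
    """Return the role of a VM file based on its suffix."""
    for suffix, role in VM_FILE_TYPES:
        if name.endswith(suffix):
            return role
    return "unknown"
```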

Identify virtual machine DirectPath I/O feature:
DirectPath I/O allows a VM to access the physical PCI functions
- you can have up to 6 PCI devices that a VM can access
- DirectPath I/O does not support:
-- can’t hot-add to the VM
-- no HA support
-- no FT support
-- snapshots are not supported
-- no vMotion
- DirectPath I/O is enabled on the VM by selecting the PCI device that you want to pass through

Objective 10.2 - Create and Manage a Multi-Site Content Library

Configure Content Library to work across sites:
vCenter Inventory lists -> Content Libraries
Content Library -> Actions -> Edit Settings... -> Tick “Publish this library externally”

Content Library authentication:
vCenter Inventory lists -> Content Libraries
Content Library -> Actions -> Edit Settings... -> Tick “Enable user authentication for access to this library” -> Enter password
Note: Error - The “Sync Library” operation failed ... Any sites subscribed to this content library prior to enabling authentication will need to be re-configured.

Set/Configure Content Library roles:
- To control permissions/roles on a content library you need to set the permission on the root level
Administration -> Global permissions -> + -> Add...
Assigned Role: Content Library Administrator

Content Library Administrator can:
- Create, edit and delete local or subscribed libraries
- Synchronize a subscribed library and synchronize items in a subscribed library
- View the item types supported by the library
- Configure the global settings for the library
- Import items to a library
- Export library items

Types of Content Library:
- Local content library
- Subscribed content library

Storage for your content library can be:
- local system path
- location to an NFS share
- an existing datastore

Objective 10.3 - Configure and Maintain a vCloud Air Connection

Identify vCenter Server and vCloud Air connection requirements:
- To connect vCenter Server and vCloud Air you’ll need vCloud Connector (VCC)
- VCC provides you with a single interface to manage many public and private “clouds”
- VCC lets you sync your content library to vCloud Air
- VCC provides offline data transfer from your private datacenter to vCloud Air
- VCC allows for datacenter extension
- VCC user interface comes in the form of a vCenter plugin and is available in the Web Client
- VCC server is an appliance deployed in the private datacenter and handles communication to vCloud Air
- VCC nodes are responsible for data transfer between private datacenter and vCloud Air instance

Requirements before you can install VCC in your private datacenter:
Note: VCC is already in your vCloud Air instance (taken care of by VMware)
- vSphere and vSphere Client 4.0 U3 or higher
- For datacenter extension with VCC, need vShield Manager 5.1.2 or higher
- IE 8 or 9, or Chrome 22 or 23
- Ports: 80, 443, 8190, 5480

Configure vCenter Server connection to vCloud Air:
- Deploy the VCC OVA
- Navigate to https://VCC_NAME_or_IP:5480
- Default login is admin / vmware
- via vCenter tab configure vSphere Web Client extension*
*registers VCC extension in vSphere Web client
- deploy VCC node in your vSphere environment (same OVA as for VCC)
- via Nodes tab configure connection to the deployed Nodes

Connection types to vCloud Air network:
- Standard Connection: over internet and IPsec VPN, point-to-point
- Dedicated Connection: secure private link, point-to-point or multi-point

Configure replicated objects in vCloud Air Disaster Recovery services:
- requires vSphere Replication
- need to connect vSphere Replication to a Cloud provider*
*requires the address for your cloud provider
Note: Ensure the ‘Cloud Connection State’ shows Connected
- complete configuration in the vSphere Web Client -> Replicate to a cloud provider

VCP6-DCV Exam Cram Notes: Section 9 of 10

Section 9 - Configure and Administer vSphere Availability Solutions

Objective 9.1 - Configure Advanced vSphere HA Features

Enable/Disable Advanced vSphere HA Settings:
vSphere HA > Edit > Edit Cluster Settings > Advanced Options > Add
das.isolationAddress[...] - the address to ping to determine if a host is isolated from the network
das.useDefaultIsolationAddress {true/false} - by default HA uses the default gateway of the console network as an isolation address
das.isolationShutdownTimeout - period of time the system waits for a VM to shut down before powering it off (only applies if host’s isolation response is ‘Shut Down’). Default value 300 seconds.
das.slotMemInMb - Defines the maximum bound on the memory slot size
das.slotCPUinMhz - Defines the maximum bound on the CPU slot size
das.vmMemoryMinMb - Defines the default memory resource value assigned to a VM if its memory reservation is 0 (default 0 MB)
das.vmCPUminMhz - Defines a default CPU resource value assigned to a VM if its CPU reservation is 0 (default 32 MHz)
das.ioStatsInterval - Change the default I/O stats interval for VM Monitoring sensitivity (default 120 seconds)
das.ignoreInsufficientHbDatastore - Disables config issues created if host does not have sufficient heartbeat datastores
das.heartbeatDsPerHost - Change number of heartbeat datastores required (default 2)
fdm.isolationPolicyDelaySec - Wait time in seconds before executing the isolation policy
das.respectVmVmAntiAffinityRules - Determines if HA enforces VM-VM anti-affinity rules
das.maxResets - Max number of reset attempts made by VMCP
das.maxTerminates - Max number of retries made by VMCP for VM termination
das.terminateRetryIntervalSec - If VMCP fails to terminate a VM, wait time in seconds before retry
das.config.fdm.reportFailoverFailEvent - When set to 1, enables generation of a detailed per-VM event when an attempt by HA to restart a VM is unsuccessful (default 0)
vpxd.das.completeMetaDataUpdateIntervalSec - seconds after a VM-Host affinity rule is set during which HA can restart a VM (default 300 seconds)
das.config.fdm.memReservationMb - by default vSphere HA agents run with memory limit of 250 MB - use this option to lower the limit

vSphere HA utilizes the concept of master and slave hosts to build out a HA cluster:
- hosts communicate with each other using network heartbeats every second
- master host is responsible for detecting failure of slave hosts in the cluster
- if the master stops receiving heartbeats from a slave, it checks the slave’s datastore heartbeats and attempts an ICMP ping of the slave’s management IP; if all checks fail, the slave is declared failed and its VMs are restarted
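
The master’s failure-detection sequence above can be sketched as a decision function. This is a simplified illustration of the logic described in these notes, not the FDM agent’s actual implementation; all names are invented:

```python
def assess_slave(network_heartbeat: bool,
                 datastore_heartbeat: bool,
                 ping_reply: bool) -> str:
    """Simplified vSphere HA master-side check of a slave host.

    Mirrors the sequence above: network heartbeats first, then
    datastore heartbeats, then an ICMP ping of the management IP.
    """
    if network_heartbeat:
        return "healthy"
    if datastore_heartbeat:
        # Host is alive but unreachable over the management network:
        # isolated, or in a network partition.
        return "isolated-or-partitioned"
    if ping_reply:
        # No heartbeats at all, but the management IP still answers.
        return "unreachable"
    # All checks failed: declare the host failed; its VMs are restarted.
    return "failed"
```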

Identify ‘VM Overrides’:
Each VM in a vSphere HA cluster is assigned cluster default settings for:
- VM Restart Priority
- Host Isolation Response
- VM Component Protection
- VM Monitoring

VM Component Protection (VMCP) Settings: Protects against datastore accessibility failures
- PDL: Permanent Device Loss - storage device reports datastore is no longer accessible
- APD: All Paths Down - transient or unknown accessibility loss, or unidentified delay in I/O processing
Configure: Tick ‘Protect against Storage Connectivity Loss’
- PDL Failures: VM is automatically failed over to a new host unless VMCP is configured to only ‘Issue Events’
- APD Events: ‘Delay for VM failover’, then Conservative or Aggressive restart

Objective 9.2 - Configure Advanced vSphere DRS Features

Identify Distributed Resource Scheduler (DRS) Affinity Rules:
- VM-Host Affinity (or Anti-Affinity)
- VM-VM Affinity (or Anti-Affinity)
Affinity = DRS tries to keep (groups) together
Anti-Affinity = DRS tries to keep (groups) apart
Note: Types of DRS Group = ‘Host Group’ or ‘VM Group’

VM-Host rule specifications:
- Must run on hosts in group
- Should run on hosts in group
- Must not run on hosts in group
- Should not run on hosts in group

VM-VM rule types:
- Separate Virtual Machines
- Keep Virtual Machines Together

Note: Create a VM-to-VM anti-affinity rule to ensure that, in the event of host hardware failure, the VMs are not on the same host at the same time.

Configure Distributed Resource Scheduler (DRS) Automation Levels:
Manual: Initial placement - Recommended host is displayed
Manual: Migration - Recommendation is displayed
Partially Automated: Initial placement - Automatic
Partially Automated: Migration - Recommendation is displayed
Fully Automated: Initial Placement - Automatic
Fully Automated: Migration - Recommendation is executed automatically
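
The automation-level table above can be expressed as a simple lookup. A hypothetical Python sketch (the names are illustrative, not a vSphere API):

```python
# DRS automation levels, per the table above:
# level -> (initial placement behaviour, migration behaviour)
DRS_AUTOMATION = {
    "Manual": ("recommendation displayed", "recommendation displayed"),
    "Partially Automated": ("automatic", "recommendation displayed"),
    "Fully Automated": ("automatic", "executed automatically"),
}

def drs_behaviour(level: str, operation: str) -> str:
    """Look up what DRS does for 'initial placement' or 'migration'."""
    placement, migration = DRS_AUTOMATION[level]
    return placement if operation == "initial placement" else migration
```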

VCP6-DCV Exam Cram Notes: Section 8 of 10

Section 8 - Deploy and Consolidate vSphere Data Center

Objective 8.1 - Deploy ESXi Hosts Using AutoDeploy

Identify ESXi AutoDeploy requirements:
- if using EFI on hosts they must be switched to BIOS compatibility mode
- if using VLANs, the UNDI driver must be set to tag frames with the proper VLAN in BIOS
- TFTP server is required
- ESXi port groups must be configured with correct VLANs
- VIBs, image profiles and auto deploy rules/rule sets are stored in a repository (2GB is recommended space)
- require DHCP server, and for AutoDeploy replace the gpxelinux.0 file with undionly.kpxe.vmw-hardwired
- setup remote syslog or use existing syslog server
- install ESXi dump collector
- AutoDeploy does not support pure IPv6 environment (deploy with IPv4)

Note: AutoDeploy server is included with the management node of the vCenter Server (Windows or the vCenter Server Appliance)

Configure AutoDeploy:
- Auto Deploy service Startup Type set to Automatic
- Configure TFTP server: Web Client ‘Download TFTP Boot Zip’ and unzip it to TFTP server directory
- Set DHCP server to point to the TFTP server
- DHCP server options:
-- 66: IP Address of TFTP server
-- 67: Boot file “undionly.kpxe.vmw-hardwired”
- Set ESXi servers to network or PXE boot
- Set up the image profile, and write a rule that assigns that image profile to hosts

PowerCLI cmdlets for AutoDeploy:
- Add-EsxSoftwareDepot
- Get-EsxImageProfile
- New-DeployRule
- Add-DeployRule
- Get-DeployRule
- Copy-DeployRule

Note: On subsequent reboots, the ESXi host will then be re-provisioned via vCenter

Objective 8.2 - Customize Host Profile Settings

Create/Edit/Remove/Import/Export/Attach/Apply/Check Compliance/Remediate via: vSphere Web Client > ‘Host Profiles‘ icon

Objective 8.3 - Consolidate Physical Workloads using VMware Converter

Convert Physical Workloads using VMware Converter:
Install VMware Converter Standalone on the Physical Machine
- Source system: Powered-on machine
- Local or Remote: Local
- Destination: VMware Infrastructure virtual machine
- Data to copy: copy all disks and maintain layout
- Install VMware tools
- sync data (stop services on source machine first)

Note: During conversion can extend/remove/add a new volume, change memory and CPU configuration, add/remove networking...

Interpret and correct errors during conversion:
VM fails to boot after conversion...
... check SCSI controller selected during conversion process
... BSOD try running Windows repair
Conversion fails at 2%...
... check firewall, IPs, DNS resolution
VMware Converter Standalone log files:
- W2008: c:\users\all users\application data\vmware\vmware converter enterprise\logs
- W2012: c:\programdata\vmware\vmware vcenter converter standalone\logs

To maintain a physical NIC’s MAC address for a P2V’d server...
... Configure the MAC address for the vNIC using the vSphere Web Client

VCP6-DCV Exam Cram Notes: Section 7 of 10

Section 7 - Troubleshooting a vSphere Deployment

Fault Tolerance (VM) - provides continuous availability for applications in the event of server failure.

Three valid use cases for Fault Tolerance...
... Protecting business critical applications
... Clustering custom applications which have no other supported method
... Reducing complexity compared to other clustering solutions

Fault tolerance...
... Requires a dedicated 10 Gbit NIC
... Supports thin provisioned disks

Three vSphere features that are supported with Fault Tolerance...
... vSphere Data Protection
... vMotion
... Enhanced vMotion Compatibility

Three features or devices incompatible with Fault Tolerance...
... N_Port ID Virtualization (NPIV)
... CD-ROM backed by a physical device
... 3D enabled Video Devices

The following features are not supported with FT:
Snapshots, Storage vMotion, Linked clones, Virtual SAN, Virtual Volumes, VM Component Protection, Storage-based policy management and I/O filters.

kernelLatency - a data counter that can be used to identify suspected issues with VMs on a host trying to send more throughput to the storage system than the host’s configuration supports

Objective 7.1 - Troubleshoot vCenter Server, ESXi Hosts and VMs

Troubleshoot Common Installation Issues:
Make sure your hosts meet the hardware requirements as well as the VMware HCL.

Monitor ESXi System Health:
The Common Information Model (CIM) allows for a standard framework to manage computing resources and presents information via the vSphere Client.

Note: Execute Reset Sensors from the host’s Hardware Status tab in vCenter - to remove all the CIM data

ESXi Log Files and Locations:
/var/log/auth.log = ESXi Shell authentication success and failure log
/var/log/dhclient.log = DHCP client service log
/var/log/esxupdate.log = ESXi patch and update installation log
/var/log/lacp.log = Link Aggregation Control Protocol log
/var/log/hostd.log = Host management service log (includes VM and host Tasks and Events, communication with vSphere Client, vCenter, SDK)
/var/log/hostd-probe.log = Host management service responsiveness checker
/var/log/rhttproxy.log = HTTP connections proxied on behalf of other ESXi host webservices
/var/log/shell.log = ESXi Shell usage logs, including enable/disable and every command entered
/var/log/sysboot.log = Early VMkernel startup and module loading
/var/log/boot.gz = A compressed file that contains boot log information
/var/log/syslog.log = Management service initialization, watchdogs, scheduled tasks and DCUI use
/var/log/usb.log = USB device arbitration events
/var/log/vobd.log = VMkernel Observation events
/var/log/vmkernel.log = Core VMkernel logs (including device discovery, storage and networking device and driver events, and VM startup)
/var/log/vmkwarning.log = A summary of Warning and Alert log messages from VMkernel.log
/var/log/vmksummary.log = A summary of ESXi host startup/shutdown, hourly heartbeat with uptime, number of VMs running, service resource consumption
/var/log/Xorg.log = Video acceleration

Note: vpxa = vCenter Server Agent

vCenter Log Locations...
... on Windows - C:\ProgramData\VMware\VMware VirtualCenter\Logs
... on the vCenter Server Appliance - /var/log/vmware/vpx

vCenter Log Files:
vpxd.log = Main vCenter Server log
vpxd-profiler.log = Profiled metrics for operations performed in vCenter Server*
*Used by VPX Operational Dashboard (VOD) at https://VCHostnameOrIP/vod/index.html
vpxd-alert.log = Non-fatal info logged about vpxd process
cim-diag.log & vws.log = CIM monitoring info
drmdump = actions proposed and taken by DRS
ls.log = Health reports for the Licensing Services extension
vimtool.log = Dump of string used during installation of vCenter Server
stats.log = historical performance data collection from ESXi hosts
sms.log = Health reports for Storage Monitoring Service extension
eam.log = Health reports for ESX Agent Monitor extension
catalina.date.log = connectivity information and status of the VMware Web Management Services
jointool.log = Health status of VMwareVCMSDS service and individual ADAM database objects, and replication logs between linked-mode vCenter Servers

Identify Common Command Line Interface (CLI) Commands:
esxtop - used for real time performance monitoring and troubleshooting
vmkping - (like ping) allows for sending traffic out a specified vmkernel interface
esxcli network namespace - used for monitoring or configuring ESXi networking
esxcli storage namespace - used for monitoring or configuring ESXi storage
vmkfstools - allows for management of VMFS volumes and virtual disks

vimtop interactive keys:
g - display the top four physical CPUs
f - display an overview of all available CPUs
o - network view
k - disk view
m - display memory overview information

To power off a virtual machine while connected to an ESXi host using SSH:
> vim-cmd vmsvc/power.off VMID
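
The VMID argument comes from `vim-cmd vmsvc/getallvms`, whose first column is the Vmid and second the VM name. A hypothetical Python helper to pull the VMID out of that output (the sample format and function name are illustrative):

```python
from typing import Optional

def find_vmid(getallvms_output: str, vm_name: str) -> Optional[int]:
    """Extract the numeric VMID for a named VM from
    `vim-cmd vmsvc/getallvms` output (column 1 = Vmid, column 2 = Name)."""
    for line in getallvms_output.splitlines()[1:]:  # skip the header row
        fields = line.split()
        if len(fields) >= 2 and fields[1] == vm_name:
            return int(fields[0])
    return None
```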

Identify Fault Tolerance Network Latency Issues:
- Use dedicated 10-Gbit network for Fault Tolerance traffic
- Use the vmkping command to verify low sub-millisecond network latency

Objective 7.2 - Troubleshooting vSphere Storage and Network Issues

Troubleshoot Physical Network Adapter Configuration Issues:
- Be sure that physical NICs that are assigned to a virtual switch are configured the same on the physical switch (speed, VLANs, MTU...)
- If using IP Hash for Load Balancing method, make sure the physical switch side has link aggregation enabled
- If using beacon probing for network failover detection, standard practice is to use a minimum of 3 uplinks

Troubleshoot Virtual Switch and Port Group Configuration Issues:
- Port Group/dvPort Groups - case sensitivity is required across hosts
- vSwitch settings must be the same across hosts (e.g. otherwise vMotion will fail)

Troubleshoot Common Network Issues - areas:
- Virtual Machine
- ESX/ESXi Host Networking (uplinks)
- vSwitch or dvSwitch Configuration
- Physical Switch Configuration

Troubleshoot VMFS Metadata Consistency:
Use the vSphere On-disk Metadata Analyzer (VOMA) to identify and fix incidents of metadata corruption (for VMFS datastores or a virtual flash resource):
# esxcli storage vmfs extent list   (identify the device backing the datastore)
# voma -m vmfs -f check -d /vmfs/devices/disks/naa.1234567...   (run a metadata check against that device)

Identify Storage I/O Constraints:
Disk Metric: Threshold (ms): Description
KAVG: 2: The amount of time the command spends in the VMkernel
DAVG: 25: This is the average response time per command being sent to the device
GAVG: 25: This is the response time as it is perceived by the guest OS
Note: If KAVG is > 0 it usually means I/O is backed up in a device or adapter queue.
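
The thresholds in the table above lend themselves to a simple check. A hypothetical Python sketch using exactly those values (the function name is made up):

```python
# Disk latency thresholds in milliseconds, from the table above.
THRESHOLDS_MS = {"KAVG": 2, "DAVG": 25, "GAVG": 25}

def flag_latency(metrics: dict) -> list:
    """Return the names of disk metrics that exceed their thresholds.

    `metrics` maps counter names (KAVG/DAVG/GAVG) to observed values
    in milliseconds; unknown counters are ignored.
    """
    return [name for name, value in metrics.items()
            if value > THRESHOLDS_MS.get(name, float("inf"))]
```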

Objective 7.3 - Troubleshoot vSphere Upgrades

Monitor tab -> System Logs -> Export Systems Logs
Choose ESX/ESXi hosts you want to export logs from
(Optional selection) Include vCenter Server and vSphere Web Client Logs
Specify which system logs are to be exported:
- Storage
- ActiveDirectory
- VirtualMachines
- System
- Userworld
- Performance Snapshot
Download Log Bundle!

Note: CLI Tool> vm-support

Configure vCenter Logging Options: Logging settings
Select level of detail that vCenter Server uses for log files:
- none = Disable logging
- error = Errors only
- warning = Errors and Warnings
- info = Normal logging (Default)
- verbose = Verbose
- trivia = Extended Verbose

Objective 7.4 - Troubleshoot and Monitor vSphere Performance

Describe How Tasks and Events are Viewed in vCenter Server:
Monitor tab -> Tasks or Events

Identify Critical Performance Metrics:
Critical points to monitor are: CPU, Memory, Networking, and Storage

Explain Common Memory Metrics:
Metric = Description
SWR/s and SWW/s = Measured in megabytes, these counters represent the rate at which the ESXi host is swapping memory in from disk (SWR/s) and swapping memory out to disk (SWW/s)
SWCUR = This is the amount of swap space currently used by the virtual machine
SWTGT = This is the amount of swap space that the host expects the virtual machine to use
MCTL = Indicates whether the balloon driver is installed in the virtual machine
MCTLSZ = Amount of physical memory that the balloon driver has reclaimed
MCTLTGT = Maximum amount of memory that the host wants to reclaim via the balloon driver
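
The balloon and swap counters above can be read together as a quick memory-pressure check. A hypothetical sketch; the rule of thumb that any ballooning or swapping signals host memory pressure is an assumption on my part, not an official VMware threshold:

```python
def memory_pressure_signs(mctl: bool, mctlsz_mb: float,
                          swcur_mb: float) -> list:
    """Interpret the esxtop memory counters listed above.

    mctl      -> MCTL: is the balloon driver installed?
    mctlsz_mb -> MCTLSZ: memory reclaimed by the balloon driver (MB)
    swcur_mb  -> SWCUR: swap space currently used by the VM (MB)
    Returns human-readable observations (assumption-based heuristics).
    """
    signs = []
    if not mctl:
        signs.append("balloon driver not installed (install VMware Tools)")
    if mctlsz_mb > 0:
        signs.append("ballooning active: host is reclaiming guest memory")
    if swcur_mb > 0:
        signs.append("VM has memory swapped to disk")
    return signs
```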

Explain Common CPU Metrics:
Metric = Description
%Used = Percentage of physical CPU time used by a group of worlds
%RDY = Percentage of time a group was ready to run but was not provided CPU resources
%CSTP = Percentage of time the vCPUs of a virtual machine spent in the co-stopped state, waiting to be co-started
%SYS = Percentage of time spent in the ESXi VMkernel on behalf of the world/resource pool

Explain Common Network Metrics:
Metric = Description
MbTX/s = Amount of data transmitted in Mbps
MbRX/s = Amount of data received in Mbps
%DRPTX = Percentage of outbound packets dropped
%DRPRX = Percentage of inbound packets dropped

Explain Common Storage Metrics:
Metric = Description
DAVG = Average amount of time it takes a device to service a single I/O request
KAVG = The average amount of time it takes the VMkernel to service a disk operation
GAVG = The total latency seen from the virtual machine when performing an I/O request
ABRT/s = Number of commands aborted per second

Identify Host Power Management Policy:
Power Management Policy = Description
Not supported = Not supported / Disabled in BIOS
High Performance = The VMkernel detects certain power management features, but will not use them unless the system BIOS requests them for power capping or thermal events
Balanced (Default) = The VMkernel uses the available power management features conservatively to reduce host energy consumption with minimal compromise to performance
Low Power = The VMkernel aggressively uses available power management features to reduce host energy consumption at the risk of lower performance
Custom = The VMkernel bases its power management policy on the values of several advanced configuration parameters

Identify CPU/Memory Contention Issues - Monitor Performance through ESXTOP

Troubleshoot Enhanced vMotion Compatibility (EVC) Issues:
- EVC mode ensures that all ESXi hosts in a cluster present the same CPU level/feature set to VMs, even if the CPUs on the hosts differ
Note: CPUs still need to be of the same CPU manufacturer.

ESXi 6.0 Supports these EVC Modes:
AMD Opteron Generation: 1, 2, 3, 3 (no 3DNow!), 4, “Piledriver”
Intel Generation: “Merom”, “Penryn”, “Nehalem”, “Westmere”, “Sandy Bridge”, “Ivy Bridge”, “Haswell”

Overview Charts: Display multiple data sets in one panel to easily evaluate different resource statistics, display thumbnail charts for child objects, and display charts for a parent and a child object
Advanced Charts: Display more information than overview charts, are configurable, and can be printed or exported to a spreadsheet

Objective 7.5 - Troubleshoot HA and DRS Configurations and Fault Tolerance

HA Requirements:
- All hosts must be licensed for vSphere HA
- You need at least 2 hosts in the cluster
- All hosts should be configured with static IP, or, if using DHCP, address must be persistent across reboots
- There should be at least 1 management network in common among all hosts
- All hosts should have access to the same VM networks and datastores
- For VM monitoring to work, VMware tools must be installed
- supports both IPv4 and IPv6

DRS Requirements:
- Shared Storage: can be either SAN or NAS
- Place the disks of VMs on datastores that are accessible by all hosts
- Processor Compatibility: same vendor (AMD or Intel), and supported family for EVC
Note: CPU Compatibility Masks - you can hide certain CPU features from the VM to prevent vMotion failing due to incompatible CPUs

vMotion Requirements:
- The virtual machine configuration file for ESXi hosts must reside on VMFS
- vMotion does not support raw disks, or migrations of applications using MSCS
- vMotion requires a private GbE (minimum) migration network between all of the vMotion enabled hosts

Verify vMotion/Storage vMotion Configuration:
- Proper networking (VMkernel interface for vMotion)
- CPU compatibility
- Shared storage access across all hosts

Note: When migrating a virtual machine, 3 available options...
... Change compute resource only
... Change storage only
... Change both compute resource and storage

Verify HA Network Configuration:
- On ESXi hosts in the cluster, vSphere HA communications, by default, travel over VMkernel networks, except those marked for use with vMotion

Verify HA/DRS Cluster Configuration:
You can monitor for errors by looking at the Cluster Operational Status and Configuration Issues screens

Troubleshoot HA Capacity Issues: The 3 Admission Control Policies:
- Host failures the cluster tolerates (default): Configure vSphere HA to tolerate a specified number of host failures
- Percentage of cluster resources reserved as failover spare capacity: Configure vSphere HA to perform admission control by reserving a specific percentage of cluster CPU and memory resources for recovery from host failure
- Specify failover hosts
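
The percentage-based policy above can be sketched as an admission check: a VM may power on only if, after adding its reservations, the unreserved CPU and memory still cover the configured failover percentage. This is a simplified illustration (it ignores per-VM overhead memory, and all names are made up), not the actual vSphere HA algorithm:

```python
def can_power_on(total_cpu_mhz: float, total_mem_mb: float,
                 reserved_cpu_mhz: float, reserved_mem_mb: float,
                 failover_pct: float,
                 vm_cpu_mhz: float, vm_mem_mb: float) -> bool:
    """Sketch of the 'percentage of cluster resources reserved' policy.

    Reserves failover_pct of total cluster CPU and memory as spare
    capacity; the new VM is admitted only if both resources still
    leave that much unreserved after its reservations are added.
    """
    spare_cpu = total_cpu_mhz * failover_pct / 100
    spare_mem = total_mem_mb * failover_pct / 100
    cpu_ok = total_cpu_mhz - (reserved_cpu_mhz + vm_cpu_mhz) >= spare_cpu
    mem_ok = total_mem_mb - (reserved_mem_mb + vm_mem_mb) >= spare_mem
    return cpu_ok and mem_ok
```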

When troubleshooting HA, look for:
- Failed or disconnected hosts
- Oversized VMs with high CPU/memory reservations (affects slot sizes)
- Lack of capacity/resources

Troubleshoot HA Redundancy Issues:
- Design in redundancy for a cluster’s HA network traffic (either by NIC teaming, preferably to separate physical switches, or via a secondary management network attached to a different virtual switch)

If, after a host failure, a virtual machine has not restarted, two possible reasons...
... Virtual machine was not protected by HA at the time of the failure
... Insufficient spare capacity on available hosts

Interpret the DRS Resource Distribution Graph and Target/Current Host Load Deviation:
- Accessed from Summary tab at cluster level, under section for VMware DRS “View Resource Distribution Chart”
- The DRS Resource Distribution Chart is used to display both memory and CPU metrics for each host in the cluster
- The DRS process runs every 5 minutes and analyses resource metrics on each host across the cluster

Troubleshoot DRS Load Imbalance/Overcommit Issues:
- host failure
- vCenter Server is unavailable and VMs are powered on via host connection, or changes are made to hosts or VMs
- cluster becomes invalid if user reduces reservation on a parent resource pool while a VM is in the process of failing over

Troubleshoot Storage vMotion Migration Issues:
- The VM’s disks must be in persistent mode or be RDMs
- For Virtual Compatibility Mode RDMs, you can migrate the mapping file, or convert to thick/thin during migration, as long as destination is not NFS
- For Physical Compatibility Mode RDMs, you can migrate mapping file only

Two scenarios that can cause Storage DRS to be disabled on a virtual disk...
... The disk is a CD-ROM/ISO file
... The virtual machine is a template

vMotion Resource Maps:
Provide a visual representation of hosts, datastores, and networks associated with the VM. Also which hosts in the VM’s cluster or datacenter are compatible.

Identify Fault Tolerance Requirements:
- physical CPUs must be compatible with vMotion or EVC
- physical CPUs must support hardware MMU virtualization (Intel EPT or AMD RVI)
- dedicated 10 Gbit network for FT logging
- vSphere Standard and Enterprise allow up to 2 vCPUs for FT
- vSphere Enterprise Plus allows up to 4 vCPUs for FT

Features NOT supported if a VM is protected by Fault Tolerance:
- VM snapshots
- Storage vMotion
- Linked Clones
- Virtual SAN
- VM Component Protection (VMCP)
- Virtual Volume datastores
- Storage-based policy management
- I/O filters

When disabling Distributed Resource Scheduler (DRS) Cluster on vSphere 6.x Cluster...
... The resource pool hierarchy of the DRS cluster is removed
... The affinity settings of the DRS cluster are removed and not maintained when DRS is re-enabled

Features supported when using Fault Tolerance in vSphere 6.x (include)...
... vMotion
... vSphere Distributed Switches


If the vSphere Client is connected directly to an ESXi host - an administrator is unable to access the Clone Virtual Machine wizard

If you do not see the Hardware Status tab in the vSphere Web Client, two possible explanations...
... The Hardware Status Plug-In is disabled
... The VMware VirtualCenter Management Webservices service is not running

To address the warning “This host currently has no management network redundancy”...
... Add an additional uplink to the management vmknic
... Include the advanced HA parameter das.ignoreRedundantNetWarning

Two conditions that can cause orphaned VMs...
... The virtual machine was deleted outside of vCenter Server
... The ESXi host has lost access to the storage device

Three changes that could result in a Network rollback operation...
... Changing the IP settings of management VMkernel network adapters
... Changing the MTU of a distributed switch
... Updating the VLAN of the management VMkernel network adapter

Change the Data Collection Level to 3 - to review device statistics to troubleshoot an issue (for device level information)

When attempting to power on a virtual machine and getting “Unable to access a file since it is locked”, two actions to address...
... Investigate the logs for both the host and the virtual machine
... Reboot the host the virtual machine is running on

When attempting to migrate a virtual machine with a USB device attached, the compatibility check fails with the error message “Currently connected device uses backing path which is not accessible”, two resolutions...
... Make sure that the devices are not in the process of transferring data
... Re-add and enable vMotion for each affected USB device

In a Fully Automated Distributed Resource Scheduler (DRS) cluster with vMotion enabled, virtual machines are never migrated, three scenarios...
... DRS is disabled on the virtual machine
... Moving the virtual machine will violate an affinity rule
... Virtual machine has a local device mounted

DRS does not move a virtual machine when it is initially powered on despite insufficient resources on the host, three possible causes...
... DRS is disabled on the virtual machine
... The virtual machine has a device mounted
... The virtual machine has fault tolerance enabled

The following scenarios can cause Storage DRS to be disabled on a virtual disk...
... A virtual machine's swap file is host-local
... A certain location is specified for a virtual machine's .vmx swap file
... The relocate or Storage vMotion operation is currently disabled for the virtual machine in vCenter Server
... The home disk of a virtual machine is protected by vSphere HA and relocating will cause loss of vSphere HA protection
... The disk is a CD-ROM/ISO file
... If the disk is an independent disk, Storage DRS is disabled (except in the case of relocation or clone placement)
... If the virtual machine has system files on a separate datastore from the home datastore (legacy), Storage DRS is disabled on the home disk
... If the virtual machine has a disk whose base/redo files are spread across separate datastores (legacy), Storage DRS for the disk is disabled
... The virtual machine has hidden disks
... The virtual machine is a template
... The virtual machine is vSphere Fault Tolerance-enabled
... The virtual machine is sharing files between its disks
... The virtual machine is being Storage DRS-placed with manually specified datastores
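The exclusion rules above can be sketched as a simple checker. This is an illustrative sketch only; the dictionary keys (`is_cdrom_iso`, `vm_is_template`, etc.) are hypothetical field names, not a VMware API.

```python
# Hypothetical checker mirroring some of the Storage DRS exclusion rules above.
# Input keys are illustrative, not real vSphere object properties.
def sdrs_disabled_reasons(disk):
    reasons = []
    if disk.get("is_cdrom_iso"):
        reasons.append("CD-ROM/ISO file")
    if disk.get("is_independent"):
        reasons.append("independent disk")
    if disk.get("is_hidden"):
        reasons.append("hidden disk")
    if disk.get("vm_is_template"):
        reasons.append("VM is a template")
    if disk.get("vm_ft_enabled"):
        reasons.append("VM is Fault Tolerance-enabled")
    if disk.get("swap_is_host_local"):
        reasons.append("host-local swap file")
    return reasons

assert sdrs_disabled_reasons({"vm_is_template": True}) == ["VM is a template"]
assert sdrs_disabled_reasons({}) == []
```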

VCP6-DCV Exam Cram Notes: Section 6 of 10

Section 6 - Backup and Recover a vSphere Deployment

Benefits of using VMware Data Protection (VDP)...
... Support for guest-level backups and restores of Microsoft SQL Server, Exchange Server, and SharePoint Server.
... Support for advanced storage services including replication, encryption, deduplication, and compression.
... Direct access to VDP configuration integrated into the vSphere Client

Objective 6.1 - Configure and Administer a vSphere Backup/Restore/Replication Solution

A VMware snapshot:
- Represents the state of a virtual machine at the time it was taken
- Includes the files and memory state of a virtual machine’s guest operating system
- Includes the settings and configuration of a virtual machine and its virtual hardware
- Is stored as a set of files in the same directory as other files that comprise a virtual machine
- Should be taken when testing something with unknown or potentially harmful effects
- Can take up as much disk space as the virtual machine itself

Identify VMware Data Protection (VDP) Requirements - software:
- Minimum requirement is vCenter Server 5.1; vCenter Server 5.5 or later is recommended
- VDP 6.0 supports the VCA and Windows vCenter Server
- Deploy VDP appliances on shared VMFS5 or later datastores to avoid block size limitations
- Make sure VMs are running hardware version 7 or later to support Change Block Tracking (CBT)
- Install VMware Tools on each VM that VDP will backup
- Unsupported VM disk types: Independent, RDM Independent Virtual Compatibility Mode, RDM Physical Compatibility Mode

VDP is deployed based on disk capacity - options:
0.5TB, 1TB, 2TB, 4TB, 6TB, 8TB

Based on disk/repository sizing the CPU/Memory resources minimum requirements:
0.5TB, 4 CPU*, 4GB RAM, 873GB Disk Space
1 TB, 4 CPU*, 4GB RAM, 1.6TB Disk Space
2TB, 4 CPU*, 4GB RAM, 3TB Disk Space
4TB, 4 CPU*, 8GB RAM, 6TB Disk Space
6TB, 4 CPU*, 10GB RAM, 9TB Disk Space
8TB, 4 CPU*, 12GB RAM, 12TB Disk Space
*CPU is 2GHz or better
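The sizing table above is easy to memorise as a lookup, capacity in TB to (vCPUs, RAM in GB, disk space). A quick sketch:

```python
# VDP appliance sizing from the notes: capacity (TB) -> (vCPUs, RAM GB, disk space)
VDP_SIZING = {
    0.5: (4, 4, "873GB"),
    1:   (4, 4, "1.6TB"),
    2:   (4, 4, "3TB"),
    4:   (4, 8, "6TB"),
    6:   (4, 10, "9TB"),
    8:   (4, 12, "12TB"),
}

def vdp_requirements(capacity_tb):
    cpus, ram_gb, disk = VDP_SIZING[capacity_tb]
    return f"{cpus} x 2GHz CPUs, {ram_gb}GB RAM, {disk} disk"

assert vdp_requirements(2) == "4 x 2GHz CPUs, 4GB RAM, 3TB disk"
```

Note how CPU count is constant at 4; only RAM and disk grow with repository size.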

Explain VMware Data Protection Sizing Guidelines:
- Up to 400 virtual machines supported per VDP appliance
- Up to 20 VDP appliances supported per vCenter
- Backup daily, weekly, monthly, or yearly

VMware Data Protection offering in vSphere 6:
- Agentless virtual machine backup
- Integration with EMC Data Domain for additional scale, efficiency, and reliability
- Agent support for application-consistent backups and restores of Microsoft Exchange, SQL Server and SharePoint
- Granular File Level Restores (FLR)
- Deployment of external proxies, enabling as many as 24 parallel backup operations
Note: vDP 6.0 appliance default login is root/changeme
Note: Configure appliance via https://IP_of_APPLIANCE:8543/vdp-configure
Note: The appliance is registered with vCenter

vSphere Replication (VR) Architecture:
- included feature with vSphere
- provides hypervisor-based VM replication
- allows for replicating to unlike storage
- supports both VCA and standard vCenter Server
- bundled components:
-- plug-in for vSphere Web Client
-- Embedded database that stores replication configuration and management info
-- vSphere Replication server that provides the core of the VR infrastructure
-- vSphere Replication management server
--- Configures the vSphere Replication server
--- Enables, manages, and monitors replications
--- Authenticates user and checks their permissions to perform vSphere Replication operations
- Topology 1: source site to target site
- Topology 2: single site from one cluster to another
- Topology 3: multiple source sites to a shared remote target site
Note: vSphere Replication appears under Home > vSphere Replication icon in vSphere Web Client
Note: The VAMI (Virtual Appliance Management Interface) of the vSphere Replication appliance is https://IP_of_APPLIANCE:5480
Note: By default the vSphere Replication appliance will use self-signed certs
Note: Configuring vSphere Replication is simple: right-click the VM(s) > All vSphere Replication Actions > Configure Replication

Image: Topology 1 from source site to target site

Identify vSphere Replication Compression Methods:
Data compression is supported for vSphere Replication if the environment meets certain requirements. For full support of end-to-end compression, both the Source and Target ESXi hosts need to be running ESXi 6.x. If you have a “mixed” environment of 6.x hosts and earlier, the ability to compress data will be limited.
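The rule above reduces to a version check on both ends. A minimal sketch, assuming major-version numbers as input (the "none" case for two pre-6.x hosts is my reading, not stated explicitly above):

```python
# Sketch of the compression-support rule: full end-to-end compression only
# when both source and target hosts run ESXi 6.x; mixed environments are limited.
def vr_compression_support(source_major, target_major):
    if source_major >= 6 and target_major >= 6:
        return "full"
    if source_major >= 6 or target_major >= 6:
        return "limited"
    return "none"  # assumption: no compression with two pre-6.x hosts

assert vr_compression_support(6, 6) == "full"
assert vr_compression_support(6, 5) == "limited"
```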

Recover a VM using vSphere Replication:
- This is a manual task
- Prior to attempting recovery, ensure the VM at the source site is powered off
- vSphere Replication > Incoming Replication > Recover
- Recovery options:
-- Synchronize recent changes
-- Use latest available data
-- (Additionally:  Point in time recovery utilizing VM Snapshots)

Perform Failback using vSphere Replication: After successful recovery in target (DR) vCenter site, to perform failback:
- log into target site and configure new replication in reverse (target site to source site)
- disks on source site are (can be) used as replication seeds

VCP6-DCV Exam Cram Notes: Section 5 of 10

Section 5 - Administer and Manage vSphere 6.x Resources

Objective 5.1 - Configure Advanced/Multilevel Resource Pools

Describe a Resource Pool hierarchy:
- By default, all resources for a host or cluster will exist in the root resource pool
- In the resource pool hierarchy, there are 3 types of resource pools:
-- parent (root is the parent)
-- siblings
-- child
IMPORTANT: Don’t use resource pools as a way to logically group virtual machines (use folders for this)
Note: The ‘expandable reservation’ parameter allows a child resource pool to request additional resources from its parent should the child need them.

Describe vFlash architecture:
- vFlash Read cache allows for integrations and management of flash devices that are locally attached to ESXi servers
- vFlash enables pooling of one or more flash devices into a single pool of flash resources
- vFlash allows for write-through caching mode (read cache)
- vFlash provides read caching on a per-vmdk basis
- When enabling vFlash on a vmdk, a subset of the vFlash resource pool will be placed in the data path for that VMDK
- A VM must be powered on for data to be sitting in the vFlash cache
- Requires Enterprise+ licensing
- Works with HA and DRS - migrate cache or not (rewarm)
Note: vFlash Resource Pool configured via “Virtual Flash Resource Management” and “Virtual Flash Host Swap Cache Configuration”
Note: Assign vFlash resources to VMDKs via VM > Edit Settings > “Virtual flash read cache”

Evaluate appropriate shares, reservations, and limits for a Resource Pool based on virtual machine workloads:
- Shares: The shares you allocate to a resource pool are relative to the shares of its siblings and to its parent’s total resources
- Reservations are the minimum amount of resources the resource pool will get
Note: Contention occurs if you have overcommitted the resources in your DRS cluster, or during short-term spikes.
- Expandable reservations gives you flexibility
- A limit is the maximum amount of resources a resource pool can consume
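A worked example of how shares behave under contention: sibling pools split the parent's capacity in proportion to their share values.

```python
# Under contention, siblings receive capacity in proportion to their shares.
def allocate_by_shares(capacity, shares):
    total = sum(shares.values())
    return {name: capacity * s / total for name, s in shares.items()}

# Two sibling pools with 6000 and 2000 shares contending for 8000 MHz of CPU:
alloc = allocate_by_shares(8000, {"prod": 6000, "test": 2000})
assert alloc == {"prod": 6000.0, "test": 2000.0}
```

Reservations and limits then clamp these proportional amounts: a pool never drops below its reservation nor rises above its limit.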

VCP6-DCV Exam Cram Notes: Section 4 of 10

Section 4 - Upgrade a vSphere Deployment to 6.x

Platform Services Controller - enables an Administrator to authenticate into multiple vCenter Servers with a single login using Enhanced Link Mode.

Objective 4.1 - Perform ESXi Host and Virtual Machine Upgrades

Minimum hardware and system resources for ESXi 6.0:
- Supported server platform (refer to VMware Compatibility Guide)
- 2 CPU cores
- 64-bit x86 processor (released post Sept ’06)
- NX/XD bit enabled in BIOS
- Minimum 4GB RAM (recommended 8GB)
- For 64-bit VMs, support for hardware virtualization (Intel VT-x or AMD RVI)
- One or more Gigabit or faster Ethernet interfaces
- SCSI disk or a local, non-network, RAID LUN with un-partitioned space for VMs
- Minimum storage: 1GB boot device (5.2GB when booting from local disk, FC or iSCSI LUN - 4GB is used for scratch partition)
Note: If space cannot be found for scratch partition, /scratch is located on the ESXi host ramdisk.
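A pre-flight check against these minimums might look like the sketch below. The `host` dict is a hypothetical inventory record for illustration; ESXi does not expose its specs this way.

```python
# Illustrative pre-flight check against the ESXi 6.0 minimums listed above.
def meets_esxi6_minimums(host):
    return (host["cpu_cores"] >= 2        # 2 CPU cores minimum
            and host["is_64bit"]          # 64-bit x86 processor
            and host["nx_xd_enabled"]     # NX/XD bit enabled in BIOS
            and host["ram_gb"] >= 4       # 4GB minimum (8GB recommended)
            and host["nics_gbe"] >= 1)    # one or more GbE interfaces

assert meets_esxi6_minimums(
    {"cpu_cores": 2, "is_64bit": True, "nx_xd_enabled": True,
     "ram_gb": 8, "nics_gbe": 2})
assert not meets_esxi6_minimums(
    {"cpu_cores": 2, "is_64bit": True, "nx_xd_enabled": True,
     "ram_gb": 2, "nics_gbe": 1})
```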

Supported upgradeable (to version 6.0) versions of vSphere Distributed Switch (vDS) are versions 5.x or later. New features in version 6.0:
- Network I/O Control: Support for per-VM Distributed vSwitch bandwidth reservations
Note: Upgrade to vDS 6.0 is non-disruptive (right-click vSphere Distributed Switch -> Upgrade -> Upgrade Distributed Switch)

Upgrade VMware Tools, can be completed either:
- manually
- configure virtual machines to ‘Check and upgrade VMware Tools before each power on’
- VMware Update Manager (Predefined baseline: VMware Tools Upgrades to Match Host)

Upgrade Virtual Machine Hardware:
ESXi 6.0+: hardware version 11
ESXi 5.5+: hardware version 10
ESXi 5.1+: hardware version 9
ESXi 5.0+: hardware version 8
ESXi 4.0+: hardware version 7
ESXi 3.5+: hardware version 4
ESXi 2.x+: hardware version 3
Note 1: VM needs to be powered off for VM Hardware upgrade
Note 2: 7 is the maximum VM hardware version a VM built using vSphere 4.1 would be at before it gets moved to ESXi 6.0.
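The hardware-version table condensed into a lookup, useful for working through upgrade-path questions like Note 2:

```python
# Table above: minimum ESXi release -> maximum VM hardware version it supports.
HW_VERSION = {"6.0": 11, "5.5": 10, "5.1": 9, "5.0": 8,
              "4.0": 7, "3.5": 4, "2.x": 3}

def max_hw_version(esxi_release):
    return HW_VERSION[esxi_release]

# A VM built on vSphere 4.1 tops out at hardware version 7
# until it is moved to ESXi 6.0 and upgraded (powered off) to 11:
assert max_hw_version("4.0") == 7
assert max_hw_version("6.0") == 11
```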

Upgrade an ESXi Host Using vCenter Update Manager (VUM):
- Both vCenter Server and vSphere Update Manager must have already been upgraded to vSphere 6.0
- You can upgrade ESXi 5.0, 5.1, and 5.5 hosts directly to ESXi 6.0
- You cannot use VUM to upgrade a host to ESXi 6.0 if the host was previously upgraded from ESX 3.x or ESX 4.x
- Hosts must have more than 350MB free space in /boot to support VUM upgrade process

Remember: ESXi 5.0 - is the minimum version supported for upgrade to ESXi 6.x using Update Manager
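The VUM preconditions above combine into one eligibility check. A sketch under the assumption that you already know the host's version, /boot free space, and upgrade history:

```python
# Sketch of the VUM host-upgrade preconditions from the notes above.
def vum_upgrade_ok(host_version, boot_free_mb, upgraded_from_3x_or_4x):
    if host_version < (5, 0):
        return False              # ESXi 5.0 is the minimum for upgrade to 6.x
    if upgraded_from_3x_or_4x:
        return False              # legacy ESX 3.x/4.x upgrade history blocks VUM
    return boot_free_mb > 350     # /boot needs more than 350MB free

assert vum_upgrade_ok((5, 5), 400, False)
assert not vum_upgrade_ok((5, 0), 300, False)
assert not vum_upgrade_ok((4, 1), 500, False)
```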

The following vSphere components are upgraded by VUM:
- ESX and ESXi kernel
- Virtual Machine hardware
- VMware Tools
- Virtual Appliances

Note: Must use the vSphere Client (not Web) for VUM: vSphere Client -> Solutions and Applications -> Update Manager (use ‘Import ESXi Image’ or create baseline via ‘Baselines and Groups’)

Determine Whether an In-Place Upgrade is Appropriate in a Given Upgrade Scenario: when you upgrade ESXi 5.X hosts that have custom VIBs to version 6.0, the custom VIBs are migrated.

Methods supported for direct upgrade from ESXi 5.x to 6.0 are:
- vSphere Update Manager
- Interactive upgrade from CD, DVD, or USB drive
- Scripted upgrade
- vSphere Auto Deploy (reprovision)
- The esxcli command

The access types supported for script retrieval by the host when performing ESXi scripted installation...
... HTTP
... FTP

Objective 4.2 - Perform vCenter Server Upgrades

Identify Steps Required to Upgrade a vSphere Implementation:
- Read the vSphere release notes
- Verify that your system meets vSphere hardware and software requirements
- Verify that you have backed up your configuration
- If your vSphere system includes VMware solutions or plug-ins, verify that they are compatible with the vCenter Server or vCenter Server Appliance version to which you are upgrading
- Upgrade vCenter Server

1. Upgrade each vCenter Single Sign-On one at a time (if separate)
2. Upgrade each vCenter Server one at a time
3. Upgrade each ESXi host one at a time

Identify Upgrade Requirements for vCenter:
- If vCenter Server service is not running as Local System, verify the service user account:
-- Member of the Administrators Group
-- Log on as a service
-- Act as part of the operating system (if domain user)
- Verify LOCAL SERVICE account has read permission on the folder in which vCenter Server is installed and on the HKLM registry

vCenter Server for Windows Hardware Requirements:
- Platform Services Controller (PSC): 2 CPUs and 2GB RAM
- Tiny: 10 hosts/10 VMs - 2 CPUs and 8GB RAM
- Small: 100 hosts/1000 VMs - 4 CPUs and 16GB RAM
- Medium: 400 hosts/4,000 VMs - 8 CPUs and 24GB RAM
- Large: 1,000 hosts/10,000 VMs - 16 CPUs and 32GB RAM
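Picking the right deployment size from the table is just a smallest-fit lookup. A sketch (the same tiers also apply to the appliance table further down, apart from the Tiny CPU count):

```python
# Sizing tiers above: name -> (max hosts, max VMs, CPUs, RAM GB).
VC_SIZING = {
    "tiny":   (10, 10, 2, 8),
    "small":  (100, 1000, 4, 16),
    "medium": (400, 4000, 8, 24),
    "large":  (1000, 10000, 16, 32),
}

def pick_size(hosts, vms):
    # Dicts preserve insertion order, so this returns the smallest fitting tier.
    for name, (max_hosts, max_vms, cpus, ram) in VC_SIZING.items():
        if hosts <= max_hosts and vms <= max_vms:
            return name
    raise ValueError("inventory exceeds the Large tier")

assert pick_size(50, 500) == "small"
assert pick_size(900, 9000) == "large"
```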

vCenter Server for Windows Software Requirements:
- Supported operating system
- 64-bit system DSN for vCenter Server to connect to the external database

vCenter Server for Windows Database Requirements:
- Up to 20 hosts and 200 VMs, can use the bundled PostgreSQL DB
- vCenter Server supports Oracle and MS SQL

vCenter Server Appliance Requirements:
- ESXi host 5.0 or later
- Synchronize clocks
- Use FQDN

vCenter Server Appliance (VCA) Hardware Requirements:
- Platform Services Controller (PSC): 2 CPUs and 2GB RAM
- Tiny: 10 hosts/10 VMs - 2 CPUs and 8GB RAM
- Small: 100 hosts/1000 VMs - 4 CPUs and 16GB RAM
- Medium: 400 hosts/4,000 VMs - 8 CPUs and 24GB RAM
- Large: 1,000 hosts/10,000 VMs - 16 CPUs and 32GB RAM

Note: VCA is SUSE 11 U3
Note: PostgreSQL supports up to 1,000 hosts and 10,000 VMs
Note: VCA supports ONLY Oracle for external connected databases
Note: For embedded PSC, add hardware requirements to hardware requirements for vCenter

Upgrade vCenter Server Appliance (VCA) using vcsa-setup

Identify the Methods of Upgrading vCenter:
- vSphere 5.5 and earlier using Simple Install option
- vSphere 5.5 and earlier using Custom Install option

Image: Simple Install

Image: Custom Install

Identify/Troubleshoot vCenter Upgrade Errors:
Log location for Windows Based vCenter Server:
- C:\ProgramData\VMware\CIS\logs
- C:\Users\USERNAME\AppData\Local\Temp

Log Collection for VCA:
- Access appliance shell
- run pi shell to access Bash
- in Bash run vc-support.sh to generate support bundle
- .tgz is in /var/tmp
- Determine which firstboot script failed

VCP6-DCV Exam Cram Notes: Section 3 of 10

Section 3 - Configure and Administer Advanced vSphere 6.x Storage

VMFS3 (datastore) creation is not supported in vSphere 6.x

Physical Raw Device Mapping - allows the virtual machine to have direct access to the LUN attached to it.

Objective 3.1 - Manage vSphere Storage Virtualization

List of Storage Adapters: SCSI, iSCSI, RAID, Fibre Channel, FCoE, Ethernet

Device drivers are part of the VMkernel and are accessed directly by ESXi.

SCSI INQUIRY Identifier: The host uses the SCSI INQUIRY command in order to use the page 83 information (Device Identification) to generate a unique identifier:
- naa.number
- t10.number
- eui.number

Path-based identifier (when device does not return page 83 information):
- mpx.path
e.g.: mpx.vmhba1:C0:T0:L0
Note: This is created for local devices during boot and is not unique or persistent.

Legacy Identifier:
- vml.number
The number digits are unique to the device and can be taken from a part of the page 83 information if available.
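The three naming schemes above can be told apart by prefix alone, which is a handy exam shortcut. A small classifier sketch:

```python
# Classify a storage device identifier by its prefix, per the schemes above.
def identifier_type(device_id):
    prefixes = {"naa.": "SCSI INQUIRY (page 83)",
                "t10.": "SCSI INQUIRY (page 83)",
                "eui.": "SCSI INQUIRY (page 83)",
                "mpx.": "path-based (local, not unique or persistent)",
                "vml.": "legacy"}
    for prefix, kind in prefixes.items():
        if device_id.startswith(prefix):
            return kind
    return "unknown"

assert identifier_type("mpx.vmhba1:C0:T0:L0") == "path-based (local, not unique or persistent)"
assert identifier_type("naa.600508b1001c3a1f") == "SCSI INQUIRY (page 83)"
```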

A hardware iSCSI adapter offloads the network and iSCSI processing from the host.
- Dependent Hardware iSCSI Adapter
-- Depend on VMware networking and iSCSI management interfaces within VMware
-- Depend upon host’s network configuration for IP and MAC
- Independent Hardware iSCSI Adapter
-- These types of adapters are independent from the host and VMware
-- Provides its own configuration management for address assignment
- Software iSCSI Adapter (built into VMware’s code; uses host resources)

Statements regarding iSCSI adapters...
... Dependent Hardware iSCSI adapters require VMkernel networking
... Independent Hardware iSCSI adapters do not require VMkernel networking

Virtual Disk Thin Provisioning:
- Allows you to create virtual disks of a logical size that initially differs from the physical space used on a datastore
- Can lead to over-provisioning of storage resources

Array Thin Provisioning:
- Thin provision a LUN at the array level
- Allows you to create a LUN on your array with a logical size that initially differs from the physical space allocated.
- Array thin provisioning is not ESXi aware without using the storage APIs for array integration (VAAI).
- Using VAAI you can monitor space on the thin provisioned LUNs, and tell the array when files are freed to reclaim free space.

Opinion: If your array supports array thin provisioning and VAAI then use array thin provisioning and thick disks within vSphere.

Zoning:
- Use single-initiator zoning or single-initiator single-target zoning
- Reduces the number of LUNs and targets presented to a particular host
- Controls/isolates paths in your SAN fabric
- Prevents unauthorized systems from accessing targets and LUNs

LUN masking:
- Limits which hosts can see which LUNs
- Can be done at the array layer or the VMware layer

Scan/Rescan storage:
- When adding a new storage device
- After adding/removing iSCSI targets

Rescan Storage actions:
- Scan for new Storage Devices
- Scan for new VMFS Volumes

Configure FC/iSCSI LUNs as ESXi boot devices:
- Each host must have access to their own boot LUN
- Individual ESXi hosts should not be able to see boot LUNs other than their own
- Check with vendor to configure storage adapter to boot from SAN
- iSCSI software adapter (or dependent adapter) can be used if it supports iBFT (iSCSI Boot Firmware Table)
- Configure boot sequence in BIOS

Create NFS share for use with vSphere:
- Create storage volume (and optional folder)
- Create an export for that volume (or folder) allowing IP of host(s) read/write access to the storage

4 different storage filters in vSphere 5 that are enabled by default:
- config.vpxd.filter.vmfsFilter
- config.vpxd.filter.rdmFilter
- config.vpxd.filter.SameHostAndTransportsFilter
- config.vpxd.filter.hostRescanFilter

Authentication methods for Outgoing and Incoming CHAP (iSCSI):
- None
- Use unidirectional CHAP if required by target
- Use unidirectional CHAP unless prohibited by target
- Use unidirectional CHAP
- Use bidirectional CHAP

Use case for Independent hardware iSCSI initiator:
- if you have a very heavy iSCSI environment with a lot of I/O (OLTP)

Use case for Dependent hardware iSCSI initiator:
- a high iSCSI I/O environment

Use case for Software iSCSI initiator:
- Low cost

Use case for array thin provisioning:
- Uniformity
- Less overhead
- ease of use

Objective 3.2 - Configure Software-defined Storage

Before you can configure Virtual SAN (VSAN):
- You need at least 3 hosts to form a VSAN cluster
- Each host requires minimum of 6 GB memory
- Make sure devices/firmware are listed in VMware Compatibility Guide
- Ensure you have the proper disks needed for the intended configuration (SAS and SSD, or just SSD)
- Storage device for VSAN must be local to ESXi host with no pre-existing partitions
- Each disk group will need one SAS drive and one SSD drive, or two SSD (one SSD is for caching)
- Ensure enough space to account for availability requirements
- The latest format 2.0 of VSAN requires 1% capacity per device
- In all flash configurations, untag the SSD devices that will be used for capacity
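A pre-flight sketch of the cluster minimums above; the host dicts are hypothetical inventory records, not VSAN API objects:

```python
# Illustrative check of the VSAN cluster minimums listed above.
def vsan_cluster_ok(hosts):
    if len(hosts) < 3:
        return False                                  # need at least 3 hosts
    return all(h["ram_gb"] >= 6                       # 6GB memory minimum
               and h["has_local_ssd"]                 # SSD required for caching
               for h in hosts)

nodes = [{"ram_gb": 32, "has_local_ssd": True} for _ in range(3)]
assert vsan_cluster_ok(nodes)
assert not vsan_cluster_ok(nodes[:2])   # only 2 hosts - no cluster
```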

Note i: VSAN will never put more than one replica of the same object in the same fault domain.
Note ii: Can Enable/Disable Virtual SAN Fault Domains

Configure the VSAN network:
- VSAN doesn’t support multiple VMkernel adapters on the same subnet
- Multicast must be enabled on the physical switches
-- Allows metadata to be exchanged between the different hosts in a VSAN cluster
-- Enables the heartbeat connection between the hosts in the VSAN cluster
- Segment VSAN traffic on its own VLAN
- Use fault domains to spread data across multiple hosts
- Use 10 GbE adapter(s) on your physical hosts
- Create a port group on a virtual switch specifically for VSAN traffic
- Ensure the physical adapter used for VSAN is assigned as an active uplink on the port group
- Need a unique multicast address for each VSAN cluster on same layer 2 network

Note i: Virtual SAN is enabled (tick ‘Turn ON’) on the cluster
Note ii: Add disks to storage modes: Automatic or Manual
Note iii: Licensing for VSAN is required

Creating a VVOL (VMware Virtual Volume) - high level steps:
- Register Storage Providers for the Virtual Volumes*
- Create a Virtual Datastore**
- Verify that the protocol endpoints exist
- (Optional) Change the PSP for the protocol endpoint

*Usually VASA provider URL for the underlying storage array
**Select VVOL as the type of datastore

VM Storage Policies -> Create a new VM storage policy...
Note: Storage Policies can have tag-based rules
Note: If you see zero storage policies, you need to enable VM storage policies
Note: Options include: Compatible and Incompatible

Note: The Virtual SAN network - is used for HA Network Heartbeat configuration on a Virtual SAN cluster

Two virtual machine states indicative of a fault in the Virtual SAN cluster...
... The virtual machine is non-compliant and the compliance status of some of its objects is noncompliant
... The virtual machine object is inaccessible or orphaned

Objective 3.3 - Configure vSphere Storage Multi-pathing and Failover

Identify available Storage Load Balancing options - paths status:
- Active -- used for active I/O*
- Standby -- The path will become active if the active path fails
- Disabled -- this means the path is disabled and can’t accept data
- Dead -- this path currently has no connectivity to the datastore/device
*Active path currently accepting data will be marked “Active (I/O)”

Identify available Storage Multi-pathing policies:

- 2 types of multi-pathing policy available to storage devices:
-- PSP = Path Selection Policy
-- SATP = Storage Array Type Plugin

- The 3 types of PSPs available through the Native Multipathing Plugin (NMP) are:
-- Round Robin (most common): I/Os are rotated across the available active paths at a set interval (default: 1000 I/Os per path before switching)
-- Most Recently Used (MRU): sends all I/O down the first working path that is discovered at boot time. There is no failback after failover. This PSP is generally used for active/passive storage arrays.
-- Fixed: sends all I/O down the path that you set as the preferred path (or - if no path is set - first working path discovered at boot time.) There is failback.
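A toy simulation of the Round Robin behaviour, assuming the default interval of 1000 I/Os per path before rotating; path names are made up for illustration:

```python
# Toy Round Robin PSP: send `iops_limit` I/Os down a path, then rotate
# to the next active path (default interval: 1000 I/Os).
def round_robin(paths, total_ios, iops_limit=1000):
    sent = {p: 0 for p in paths}
    current = 0
    for _ in range(total_ios):
        sent[paths[current]] += 1
        if sent[paths[current]] % iops_limit == 0:
            current = (current + 1) % len(paths)   # rotate to the next path
    return sent

# 2000 I/Os across two paths end up split evenly, 1000 each:
assert round_robin(["vmhba1:C0:T0:L0", "vmhba2:C0:T0:L0"], 2000) == {
    "vmhba1:C0:T0:L0": 1000, "vmhba2:C0:T0:L0": 1000}
```

By contrast, MRU and Fixed would send all 2000 I/Os down a single path.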

Note: A SATP is the plugin that gets associated with the different paths to a device. Typically the SATP relates to the storage vendor or to the type of storage array that the device is connected to.

Identify features of Pluggable Storage Architecture (PSA):
- VMware NMP (Native Multi-Pathing Plugin)
- VMware SATP (VMware Storage Array Type Plugin)
- VMware PSP (Path Selection Policy plugin)
- Third-Party MPP (Multi-Pathing Plugin)
- Third-Party SATP (Storage Array Type Plugin)

Image: Pluggable Storage Architecture (PSA)

Some things the VMware NMP or Third-Party MPP are responsible for:
- Provides logical and physical path I/O statistics
- Loads and unloads Multipathing plugins
- Routes I/O requests for a specific logical device to the MPP managing that device
- Handles I/O queuing to the physical HBAs
- Handles physical path discovery and removal
- Implements logical device bandwidth sharing between virtual machines

Multi-Pathing modules provide the following:
- Manage physical path claiming and unclaiming
- Manage creation, registration and deregistration of logical devices
- Associate physical paths with logical devices
- Support path failure detection and remediation
- Process I/O requests to logical devices

Considerations for Storage Multipathing...
...  The default SATP for iSCSI or FC devices is VMW_SATP_DEFAULT_AA and the default PSP is VMW_PSP_FIXED
... If VMW_SATP_ALUA is assigned to a specific storage device, but the device is not ALUA-aware, performance could be degraded

Objective 3.4 - Perform Advanced VMFS and NFS Configurations and Upgrades

Identify VMFS5 capabilities:
- VMFS5 datastore capacity is 64TB
- Block size is standardized at 1MB
- Greater than 2TB storage devices for each VMFS5 extent
- Supports virtual machines with greater than 2TB disks
- Greater than 2TB Raw Device Mappings (RDMs)
- Support for small files (1KB)
- Ability to reclaim physical storage space on thin provisioned storage devices

Note: Any VMFS datastore you want to put in maintenance mode must be in a datastore cluster.
Note: Preferred path can only be set on a datastore that is using the Fixed Path selection policy.

NFS datastores support NFS v3 and NFS v4.1

Identify available Raw Device Mappings (RDM) solutions:
- RDMs come in two flavours, virtual compatibility mode and physical compatibility mode
- An RDM is a file that exists inside a VMFS volume which manages all the metadata for the raw device
- Some use cases for using RDMs:
-- Storage resource management software
-- SAN management agents
-- Replication software
-- Microsoft Failover Clustering
Note: Whenever software needs direct access to the SCSI device the RDM needs to be in physical compatibility mode.

To Disable VAAI: Host > Manage > Settings > Advanced System Settings
- HardwareAcceleratedMove = 0
- HardwareAcceleratedInit = 0
- HardwareAcceleratedLocking = 0
Note: To Enable VAAI set back to 1.
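The toggle above as a small helper. Note the fully qualified setting paths (`DataMover.*`, `VMFS3.*`) are my addition based on where these options live in Advanced System Settings; verify against your build before scripting them:

```python
# The three VAAI advanced settings above: 0 disables, 1 enables each primitive.
# Fully qualified names are an assumption - confirm in Advanced System Settings.
def vaai_settings(enabled):
    v = 1 if enabled else 0
    return {"DataMover.HardwareAcceleratedMove": v,
            "DataMover.HardwareAcceleratedInit": v,
            "VMFS3.HardwareAcceleratedLocking": v}

assert vaai_settings(False)["DataMover.HardwareAcceleratedMove"] == 0
assert all(v == 1 for v in vaai_settings(True).values())
```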

Use case for multiple VMFS/NFS Datastores:
- Datastores sit on backend storage that have physical disks configured in a particular way
- Disk contention could be a problem
- HA and resiliency

Objective 3.5 - Setup and Configure Storage I/O Control

Datastore Capabilities -> Edit -> Enable Storage I/O Control

Manage the method of which SIOC is implemented:
- Percentage of peak throughput
- Manual (based on milliseconds)
Note: Can choose to ‘Exclude I/O Statistics’ from SDRS

3 different graphs to monitor SIOC:
- Storage I/O Control Normalized Latency
- Storage I/O Control Aggregate IOPs
- Storage I/O Control Activity

Storage I/O Control will not function correctly if...
... Two different datastores share the same spindles


To create a 3TB VMDK on a 2TB VMFS5 datastore on an ESXi 6.x host, two possible actions...
... Increase the LUN on which the VMFS5 datastore resides to more than 3TB, and then grow the datastore to use the added capacity.
... Map a new LUN that is larger than 1TB to the ESXi host then add the new LUN as an extent to the VMFS5 datastore.
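The arithmetic behind the second option: a VMFS5 datastore's capacity is the sum of its extents, so adding an extent larger than 1TB to the 2TB datastore pushes total capacity past the 3TB the VMDK needs.

```python
# A VMFS5 datastore spans its extents; total capacity is their sum.
def datastore_capacity_tb(extents_tb):
    return sum(extents_tb)

assert datastore_capacity_tb([2, 1.5]) > 3       # 2TB datastore + 1.5TB extent: fits
assert not datastore_capacity_tb([2, 0.5]) > 3   # extent too small for a 3TB VMDK
```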

Two solutions to eliminate potential sources of SCSI reservation conflicts (which are causing slow performance)...
... Reduce the number of snapshots
... Upgrade the host to the latest BIOS