The following work-in-progress is a place for me to put some notes around topic headings based on the NS0-156 syllabus.
Note: There are also updates in here for Clustered ONTAP 8.2.
1. Clustered ONTAP Features
The scale-out architecture of Clustered ONTAP makes it possible for storage administrators to add more than two storage systems to a cluster.
Clustered ONTAP Storage Scalability features include:
- Capacity scaling - allowing more than two storage systems to be combined with larger disk shelves
- Operational scaling - allowing the entire cluster to be administered through a single resilient interface
- Performance scaling - by using Flash Cache, SSDs, and Quality of Service
Clustered ONTAP 8.1 supports the following five data protocols: CIFS, FC, FCoE, iSCSI, and NFS.
Clustered ONTAP 8.1 supports the following four features:
- Data Protection Mirrors
- Load Sharing Mirrors
- NDMP Tape Backup
- Non-Disruptive Upgrades
2. Hardware
Clustered ONTAP requires 2 cluster switches (cluster-interconnect switches).
Note: CDOT 8.2 introduces switchless clusters and single-node clusters, which don't require cluster interconnect switches.
The NetApp CN1610 (10G) is used as the Cluster Interconnect Switch.
The NetApp CN1601 (1G) is used as the Management Switch (or as the cluster interconnect for FAS2240 clusters).
Supported Cisco switches for Clustered ONTAP 8.1 are the:
- Cisco Nexus 5010 (supports a maximum of 18 nodes in a cluster)
- Cisco Nexus 5020
The Cisco Catalyst 2960 is a supported management switch that can be used only on the management network for clusters that have 24 nodes.
Platform mixing support in a Clustered ONTAP cluster includes:
- A pair of FAS6280s and a pair of FAS3240s
Chart: Scale-Out Clusters - Homogeneous
3. Set up and manage Aggregates/Volumes/LUNs
3.1 Aggregates
Data ONTAP 8.1 and later aggregates default to 64-bit and have 0% aggr reserve set.
When upgrading a FAS controller with a 32-bit root aggregate from Data ONTAP 8.0.2P1 to Data ONTAP 8.1, the root aggregate remains 32-bit.
Converting an aggregate from 32-bit to 64-bit is supported when adding disks.
To expand an aggregate from 32-bit to 64-bit in place, the expansion is triggered by adding disks to exceed 16TB.
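As a sketch of that in-place expansion from the clustershell (aggregate name and disk count are hypothetical), disks are added with the 64-bit upgrade option:

storage aggregate add-disks -aggregate aggr1 -diskcount 8 -64bit-upgrade normal

Running the same command with -64bit-upgrade check can be used to preview the impact of the conversion before committing to it.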
3.2 Volumes
Volumes can be moved from 32-bit aggregates to 32-bit
aggregates and 64-bit aggregates.
A node’s vol0 volume:
- Cannot be moved with the volume move command
- Contains RDB databases and log files
- Is not part of the namespace
- One exists on every node in the cluster
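In contrast with vol0 (which cannot be moved), a minimal volume move sketch, assuming a hypothetical Vserver vs1, volume vol1, and destination aggregate aggr2:

volume move start -vserver vs1 -volume vol1 -destination-aggregate aggr2
volume move show

The volume move show command reports the state and progress of the move.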
A Virtual Server (Vserver or, now, SVM or Storage Virtual Machine) provides protocol access to data.
3.3 Storage Efficiency
Clustered ONTAP 8.1 supports the following storage efficiency features:
- Compression
- Deduplication
- FlexClone
- Thin Provisioning
Data ONTAP 8.1 supports deduplication up to the maximum volume size for the platform. FlexVol structures can be deduplicated.
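A minimal sketch of enabling deduplication and compression on a volume (Vserver and volume names are hypothetical):

volume efficiency on -vserver vs1 -volume vol1
volume efficiency modify -vserver vs1 -volume vol1 -compression true
volume efficiency show -vserver vs1 -volume vol1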
4. Set up and manage clusters/High Availability/Epsilon
The OnCommand System Manager GUI can be used to manage a cluster.
In a 2-node cluster with Cluster HA Failover enabled, neither node has Epsilon.
Before shutting down two nodes of a 4-node cluster, set both nodes that are coming down to "ineligible".
Storage Failover allows disks to fail over between HA partners.
If one node of a 2-node cluster is shut down to replace a NIC, and all the RDB rings fall out of quorum, issue the cluster ha show command from the clustershell to diagnose (most likely, cluster ha has not been enabled!)
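A minimal diagnosis sketch for the 2-node case from the clustershell:

cluster ha show
cluster ha modify -configured true
cluster show

cluster ha show reports whether cluster HA is configured, cluster ha modify -configured true enables it on a 2-node cluster, and cluster show displays each node's health and eligibility.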
Some LIF failover behaviours:
- A Data (Ethernet) LIF fails over when the cluster is in quorum.
- A Cluster Management (Ethernet) LIF fails over when a port is down.
- A Data (Ethernet) LIF will not revert back to its home port by default.
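To send a data LIF back to its home port manually, or to change the default revert behaviour, something like the following can be used (Vserver and LIF names are hypothetical):

network interface revert -vserver vs1 -lif data_lif1
network interface modify -vserver vs1 -lif data_lif1 -auto-revert true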
When the VLDB goes out-of-quorum on the node serving as the master of the VLDB ring:
i. A new VLDB ring master is elected
ii. Clients can write to volumes on that node
iii. Volumes cannot be moved to or from that node
iv. Clients can read from volumes hosted on that node
5. Manage and troubleshoot SAN performance
A SAN cluster in Clustered ONTAP 8.1 can have a maximum of 4 nodes.
Note: 8.1.1 introduced support for 6 nodes in a SAN cluster. 8.2 introduced support for 8 nodes.
Clustered ONTAP 8.1 SAN requires that ALUA (Asymmetric Logical Unit Access) is enabled on both the initiator and target.
6. Configure and manage SAN (FC/iSCSI) components
The -data-protocol parameter must be properly defined when configuring an iSCSI logical interface in Clustered ONTAP 8.1.
To identify the target IQN in Clustered ONTAP 8.1 and later, use the command:
vserver iscsi show
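A sketch of creating an iSCSI LIF with the -data-protocol parameter set (all names and addresses are hypothetical):

network interface create -vserver vs1 -lif iscsi_lif1 -role data -data-protocol iscsi -home-node node1 -home-port e0c -address 192.168.1.10 -netmask 255.255.255.0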
7. Configure, administer and troubleshoot NAS (CIFS/NFS)
A NAS cluster can have up to 24 nodes (12 HA pairs).
Four characteristics of a NAS LIF:
1. It has a logical interface connection
2. It can be associated with an Ethernet port
3. It can be associated with an interface group (ifgrp)
4. It can be associated with an IP address
Two methods to expose volumes to NAS clients:
i. Mount directories of a volume in a namespace
ii. Mount the required volumes in a namespace
Qtrees and directories can be used to define paths (beyond what volumes and namespaces alone would provide).
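A sketch of mounting a volume into the namespace (Vserver, volume, and junction path are hypothetical):

volume mount -vserver vs1 -volume vol1 -junction-path /vol1
volume show -vserver vs1 -fields junction-path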
Only Data LIFs of protocol type CIFS or NFS are able to fail over or migrate to other nodes in the cluster.
Two ways to improve operational availability of NAS application data:
- Provide multiple data logical interfaces
- Utilize the volume move feature
7.1 CIFS
Clustered ONTAP 8.1 supports SMB versions 1.0 and 2.0.
Steps to take before establishing a CIFS share:
1. Create a volume
2. Mount the volume within the Vserver namespace
3. Configure an export policy
4. Configure name mapping
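As a sketch of the first two steps above plus the share creation itself (all names and sizes are hypothetical):

volume create -vserver vs1 -volume cifs_vol -aggregate aggr1 -size 100g
volume mount -vserver vs1 -volume cifs_vol -junction-path /cifs_vol
vserver cifs share create -vserver vs1 -share-name data -path /cifs_vol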
7.2 NFS
Clustered ONTAP 8.1 supports NFS v3, NFS v4.0, and NFS v4.1.
In order to create an NFS export within Cluster-Mode, an export policy and rules under a Vserver (SVM) must be defined.
With NFS exports in Clustered ONTAP, export policies and rules are stored in the RDB.
Note: Exports can only be persistent!
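A sketch of defining an export policy and rule, then applying the policy to a volume (names and client subnet are hypothetical):

vserver export-policy create -vserver vs1 -policyname expol1
vserver export-policy rule create -vserver vs1 -policyname expol1 -clientmatch 192.168.1.0/24 -rorule sys -rwrule sys -protocol nfs
volume modify -vserver vs1 -volume vol1 -policy expol1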
8. Set up and maintain Snapshot copies
Clustered ONTAP LUN Clone technology can clone an individual LUN inside a volume.
FlexClone and LUN Clone will lock a Snapshot copy.
9. Configure, administer and troubleshoot SnapMirror
Establishing a SnapMirror relationship sets up the relationship but does not start the initial transfer.
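A sketch of those two distinct stages (source and destination paths are hypothetical): snapmirror create sets up the relationship, and snapmirror initialize starts the baseline transfer:

snapmirror create -source-path vs1:vol1 -destination-path vs2:vol1_dp -type DP
snapmirror initialize -destination-path vs2:vol1_dp
snapmirror show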
In a SnapMirror LS mirror relationship:
- Client requests to write data are denied unless accessing the admin share
- Client requests to read data are redirected to the LS mirror destination volumes
A Data Protection (DP) type SnapMirror destination volume can be on a different disk type than the source volume.
An intercluster LIF (think international) allows replication between two clusters.
In order to establish cluster peer relations between two clusters, the intercluster interface on each node must be able to communicate with the intercluster interface on each node in the peer cluster.
To support a cross-cluster SnapMirror relationship, a Cluster Peer Relationship must be configured.
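A sketch of configuring that Cluster Peer Relationship (the intercluster LIF addresses of the remote cluster are hypothetical):

cluster peer create -peer-addrs 10.10.20.10,10.10.20.11
cluster peer show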
A SnapMirror destination Vserver must be created with the same language type as the source Vserver.
10. Configure SnapVault
SnapMirror configured with -type XDP is SnapVault!
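A sketch of a SnapVault relationship in 8.2 (paths are hypothetical; XDPDefault is assumed here as the built-in vault policy):

snapmirror create -source-path vs1:vol1 -destination-path vs2:vol1_sv -type XDP -policy XDPDefault
snapmirror initialize -destination-path vs2:vol1_sv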
11. Configure Remote Support Agent (RSA)
To configure the Remote Support Agent (RSA), use:
system services web
The NetApp RSA initiates a secure connection to NetApp Support.
The RSA is supported on the RLM and SP modules.
12. Understand significance of Vserver-aware tape backup
NDMP backups can be performed from the node that owns the volume.
13. Understand underlying LIF structure and benefits
All logical interfaces are associated with ports.
14. Describe Infinite Volumes
NetApp Infinite Volume is a software abstraction hosted over Data ONTAP 8.1.1 operating in Cluster-Mode.
“How infinite is an Infinite Volume?” It provides a single mount point that can scale to 20PB or 2 billion files, and it integrates with NetApp’s proven efficiency technologies and products, such as deduplication, compression, NetApp SnapMirror replication technology, and System Manager.
15. Describe Virtual Storage Tier
VST automatically identifies and stores hot data blocks in Flash while storing cold data on slower, lower-cost media.
Controller level: Storage controller-based Flash (NetApp Flash Cache) retains hot, random read data.
Disk-subsystem level: NetApp Flash Pool technology uses a hybrid model with a combination of SSDs and HDDs in a NetApp aggregate. Hot random read data is cached and repetitive write data is automatically stored on SSDs.
Server level: NetApp Flash Accel technology extends VST into the server. It uses any server-side Flash device (PCI-e Flash card or SSD) as a local cache that off-loads I/O from networks and back-end storage to deliver optimum I/O efficiency to your busiest applications while freeing up server CPU and memory resources.
16. Describe pNFS 4.1 pathing and support
NFS v4.1 pNFS reduces traffic over the cluster network.
17. Describe Single-node and Two-node configurations
CDOT 8.2 introduces switchless clusters and single-node clusters, which don't require cluster interconnect switches.
18. Identify new transition tools
The DTA2800 is a hardware appliance used for online/offline SAN (FC/iSCSI) migration from NetApp/3rd-party storage to NetApp.
The VTW (Volume Transition Wizard) is a NetApp Professional Services (PS)-only, CLI-based tool to migrate NAS (CIFS/NFS) from Data ONTAP 8.1 7-Mode to Clustered ONTAP 8.1.
The 7MTT (7-Mode Transition Tool) is a GUI-driven tool (it also has a CLI) to migrate NAS from Data ONTAP 7.3.3+ 7-Mode to Clustered ONTAP 8.2.