A few additional notes
scribbled together whilst studying for various Clustered ONTAP certifications (the intention is to aim for the NS0-504...)
Contents:
1. General Architecture
2. SnapMirror
3. SAN (FC and iSCSI)
4. NAS (CIFS and NFS)
5. Troubleshooting
6. Miscellaneous
7. More Miscellaneous (3rd Oct)
8. Some 7-Mode Commands and Their Clustered ONTAP Equivalents
9. Further Reading
## 1. General Architecture ##
The scale out feature of Clustered ONTAP allows a storage
administrator to add more than two storage systems to a cluster.
Three storage features that can be scaled by storage
administrators in Clustered Data ONTAP:
- Capacity scaling, by allowing more than two storage
systems to be combined with larger disk shelves
- Operational scaling, by allowing the entire cluster to
be administered through a single resilient interface
- Performance scaling, by using Flash Cache, SSDs, and
Quality of Service
Data ONTAP 8.1 Cluster-Mode supports CIFS, FC, FCoE, iSCSI, NFS
Clustered ONTAP supports:
- Data protection mirrors
- Load sharing mirrors
- NDMP tape backup
- Non-disruptive upgrades
Every logical interface (LIF) is associated with a port.
Protocol access to data is provided via a virtual server (Vserver).
Storage failover (SFO) allows disks to fail over between HA partners.
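As a quick sketch, SFO state can be checked and exercised from the clustershell (the cluster and node names below are hypothetical):
cluster1::> storage failover show
cluster1::> storage failover takeover -ofnode node02
cluster1::> storage failover giveback -ofnode node02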
2 cluster switches are required for a NetApp cluster.
The following Cisco Nexus switches are supported as
cluster switches:
- Cisco Nexus 5010 (supports up to 18 nodes)
- Cisco Nexus 5020
The following Cisco switch is supported as a management
switch:
- Cisco Catalyst 2960 (used only for clusters that have
24 nodes)
[Chart: Scale-out Clusters - Homogeneous]
[Table: Clustered Data ONTAP Platform Support]
For example: A pair of FAS6280s and a pair of FAS3240s is a
supported combination in a Cluster-Mode cluster.
## 2. SnapMirror ##
SnapMirror types in Clustered ONTAP 8.2
- DP: Data protection relationship.
- LS: Load-sharing relationship.
- XDP: Vault relationship.
- RST: Temporary relationship created during a restore
operation, and deleted if the operation completes successfully.
- TDP: 7-mode to Cluster-Mode transition data protection
relationship.
A Data Protection (DP) type SnapMirror destination volume can be on
a different disk type
than the source volume.
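For example, a DP relationship could be created and baselined like this (the Vserver and volume names are made up; the destination volume must already exist as a data-protection volume):
cluster1::> snapmirror create -source-path vs1:vol1 -destination-path vs2:vol1_dp -type DP
cluster1::> snapmirror initialize -destination-path vs2:vol1_dp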
SnapMirror LS mirrors:
- client requests to write data are denied unless accessing the admin share
- client requests to read data are redirected to the LS mirror destination volumes
snapmirror initialize
  Throttle (KB/sec): -throttle|-k {integer|unlimited}
snapmirror modify
  Throttle (KB/sec): -throttle {integer|unlimited}
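For instance, to cap an existing relationship at roughly 10 MB/sec (the destination path is hypothetical):
cluster1::> snapmirror modify -destination-path vs2:vol1_dp -throttle 10240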
set -privilege advanced
storage failover show -fields local-takeover-info
storage failover show -fields partner-takeover-info
NFO_DISK_SHELF_ENABLED = "Negotiated failover for disk shelf module"
An intercluster LIF allows replication between two
clusters.
To support cross-cluster SnapMirror relationships a
cluster peer relationship must be configured.
A SnapMirror destination Vserver must be created with the
same language type as the source Vserver.
In order to establish a cluster peer relationship between two
clusters, the intercluster interface on each node must be able to communicate
with the intercluster interface on each node in the peer cluster.
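A rough sketch of the peering steps, using the 8.1/8.2-style syntax (ports, addresses, and node names are assumptions): create an intercluster LIF on each node, then peer the clusters:
cluster1::> network interface create -vserver node01 -lif ic1 -role intercluster -home-node node01 -home-port e0c -address 192.168.1.10 -netmask 255.255.255.0
cluster1::> cluster peer create -peer-addrs 192.168.1.20,192.168.1.21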
## 3. SAN (FC and iSCSI) ##
The maximum number of nodes in a SAN cluster in CDOT 8.1
is 4, in CDOT 8.1.2 it went up to 6, and in CDOT 8.2 it went up to 8.
To identify the target IQN in Data ONTAP 8.1
Cluster-Mode and later use:
vserver iscsi show
The data-protocol must be properly defined when
configuring an iSCSI logical interface in Data ONTAP 8.1 Cluster-Mode and
later.
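For example, an iSCSI data LIF might be created like this (names and addressing are hypothetical); note that iSCSI LIFs do not fail over between nodes:
cluster1::> network interface create -vserver vs1 -lif iscsi_lif1 -role data -data-protocol iscsi -home-node node01 -home-port e0d -address 192.168.2.10 -netmask 255.255.255.0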
To operate Data ONTAP 8.1 Cluster-Mode SAN, ALUA (Asymmetric Logical
Unit Access) must be configured on both the initiator and target.
LUN clone in Clustered ONTAP can clone an individual LUN
inside a volume without cloning the entire volume.
## 4. NAS (CIFS and NFS) ##
A cluster can have 24 NAS nodes.
Data ONTAP Cluster-Mode 8.1.X supports SMB 1.0 and SMB 2.0.
Four steps an administrator must take before establishing
a CIFS share:
- create a volume
- configure an export policy
- configure name mapping
- mount a volume within the Vserver namespace
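The four steps above, plus the share creation itself, might look like this (all names, patterns, and sizes are made up):
cluster1::> volume create -vserver vs1 -volume cifs_vol -aggregate aggr1 -size 100g
cluster1::> vserver export-policy create -vserver vs1 -policyname cifs_pol
cluster1::> vserver name-mapping create -vserver vs1 -direction win-unix -position 1 -pattern "DOMAIN\\(.+)" -replacement "\1"
cluster1::> volume mount -vserver vs1 -volume cifs_vol -junction-path /cifs_vol
cluster1::> vserver cifs share create -vserver vs1 -share-name share1 -path /cifs_vol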
Data ONTAP 8.1 Cluster-Mode supports NFSv3, NFSv4.0, and
NFSv4.1.
Data LIFs of protocol CIFS and NFS are able to failover
or migrate to other nodes in the cluster.
A NAS LIF:
- has a logical interface connection
- can be associated with an Ethernet port
- can be associated with an interface group (ifgrp)
- can be associated with an IP address
In order to create an NFS export in Cluster-Mode an
export policy and rules under a Vserver must be defined.
With NFS exports in Clustered ONTAP, export policies and
rules are stored in the RDB (exports cannot be temporary!)
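A minimal export policy and rule, applied to a volume (the policy name and client subnet are assumptions):
cluster1::> vserver export-policy create -vserver vs1 -policyname nfs_pol
cluster1::> vserver export-policy rule create -vserver vs1 -policyname nfs_pol -clientmatch 192.168.0.0/24 -rorule sys -rwrule sys -protocol nfs
cluster1::> volume modify -vserver vs1 -volume vol1 -policy nfs_pol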
To expose volumes to NAS clients you can:
- mount directories of a volume in a namespace
- mount the required volumes in a namespace
NFSv4.1 pNFS reduces traffic over the cluster network.
To improve operational availability of NAS application
data:
- provide multiple data logical interfaces
- utilize the volume move feature
## 5. Troubleshooting ##
True statements about LIF failover:
- a data Ethernet LIF fails over when the cluster is in
quorum
- a cluster management Ethernet LIF fails over when a
port is down
- a data Ethernet LIF will not revert back by default to
its home port
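Since a data LIF does not revert automatically by default, it can be sent home manually or set to auto-revert (the LIF name is hypothetical):
cluster1::> network interface revert -vserver vs1 -lif nas_lif1
cluster1::> network interface modify -vserver vs1 -lif nas_lif1 -auto-revert true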
Before shutting down two nodes of a 4-node cluster, set
both nodes that are coming down to “ineligible”.
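For example, before halting node03 and node04 of a 4-node cluster (node names hypothetical; this may require advanced privilege):
cluster1::> system node modify -node node03 -eligibility false
cluster1::> system node modify -node node04 -eligibility false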
If the VLDB is out-of-quorum on the node serving as the
master of the VLDB ring:
- A new VLDB ring master is elected
- Clients can write to volumes on that node
- Volumes cannot be moved to or from that node
- Clients can read from volumes hosted on that node
If one node of a 2-node cluster is shut down to replace a
NIC and all RDB rings fall out of quorum, run cluster ha show from the
clustershell (cluster HA is only used in 2-node clusters; clusters with 4 or more nodes have
cluster HA disabled.)
## 6. Miscellaneous ##
Data ONTAP 8.1 Cluster Mode (and 7-Mode) aggregates are
64-bit by default.
In Data ONTAP 8.1, system root aggregates default to 64-bit
and 0% aggr reserve.
The GUI-based tool “OnCommand System Manager” can be used
to manage a cluster.
The "system services web" command configures the Remote Support Agent (RSA).
In a 2-node cluster with cluster HA enabled, neither node
has Epsilon.
The vol0 volume:
- Cannot be moved with the volume move command
- Contains RDB databases and log files
- Is not part of the namespace
- One exists on every node in the cluster
Volumes can be moved from 32-bit aggregates to 32-bit and
64-bit aggregates.
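For example, a non-disruptive move of a volume onto a 64-bit aggregate (Vserver, volume, and aggregate names are hypothetical):
cluster1::> volume move start -vserver vs1 -volume vol1 -destination-aggregate aggr64
cluster1::> volume move show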
Four storage efficiency features in Clustered ONTAP 8.1:
- compression
- deduplication
- flexclone
- thin provisioning
In Data ONTAP 8.1: Deduplication is supported up to the
maximum volume size for the platform.
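As an illustration, deduplication could be enabled and kicked off on a volume like this (names hypothetical):
cluster1::> volume efficiency on -vserver vs1 -volume vol1
cluster1::> volume efficiency start -vserver vs1 -volume vol1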
The following two Clustered Data ONTAP clone technologies
will lock a Snapshot copy:
- FlexClone
- LUN Clone
NDMP backups can be performed from the node that owns the
volume.
## 7. More Miscellaneous (3rd Oct) ##
Cisco NX-OS: the status of NPIV can be checked with the
command:
show npiv status
Brocade FabOS, the command below shows NPIV capability
and status:
portcfgshow
To examine the local FC topology (it is not possible to query
NPIV status on the switch directly from ONTAP):
node run -node node01 fcp topology show
Only use virtual WWPNs in zone configuration:
network interface show
Unix or Linux Host Utilities Kit (HUK):
sanlun lun show all
sanlun lun show -p
*the -p flag shows path information
NetApp SnapDrive
for Linux:
Set vsadmin password:
snapdrive config set vsadmin vs
View snapdrive config:
snapdrive config list
## 8. Some 7-Mode Commands and Their Clustered ONTAP Equivalents ##
7-Mode Command → Clustered ONTAP Equivalent
iscsi session show → vserver iscsi session show
iscsi interface show → vserver iscsi interface show
iscsi security show → vserver iscsi security show
snap delta → system node run -node {nodename|local} snap delta
fcadmin config → system node run -node {nodename|local} fcadmin config
sis on /vol/volname → sis on -vserver vs1 -volume vol1
sis start -s /vol/volname → volume efficiency start -vserver vs1 -volume vol1 -scan-old-data true
df -s → df -s
portset create → lun portset create
fcp topology → system node run -node {nodename|local} fcp topology
fcp zone show → system node run -node {nodename|local} fcp zone show
fcp stats → statistics show -object fcp*
lun stats → statistics show -object lun*
cf takeover → storage failover takeover -ofnode NODENAME
## 9. Further Reading ##