Notes on NVA-1117-Deploy “FlexPod Datacenter with Microsoft Exchange 2016, SharePoint 2016, and NetApp AFF A300”
Mostly CLI commands (abridged) with some brief notes on NVA-1117.
Image: FlexPod Datacenter for Microsoft Exchange 2016 and Microsoft SharePoint 2016
5 Deployment Procedures
5.1 NetApp Storage Configuration
Details required: CLUSTER_NAME, LICENSES, CLUSTER_MGMT_IP/MASK/GW/PORT, NODEx_IP/MASK/GW, NODEx_SP_IP/MASK/GW, CLUSTER_PASSWORD, CLUSTER_DNS_DOMAIN, NAMESERVER_IPs, LOCATION, NODEx_NAME, NODEx_AGGR_NAME, NFS_VLAN_ID, iSCSI_A_VLAN_ID, iSCSI_B_VLAN_ID...
Basic Setup
cluster setup
disk show
disk assign -disk DISK -owner NODEx
disk zerospares
aggr create -aggregate Nx_AGGR1 -nodes NODEx -diskcount ?
system node run -node NODEx aggr options Nx_AGGR1 nosnap on
system node run -node NODEx snap delete -A -a -f Nx_AGGR1
storage failover show
cluster ha show
storage failover hwassist show
storage failover modify -hwassist-partner-ip NODEy_IP -node NODEx
ucadmin show
net port modify -node * -port e0e..e0g -flowcontrol-admin none
net int modify -vserver CLUSTER_NAME -lif cluster_mgmt -auto-revert true
system service-processor network modify -node NODEx -address-family IPv4 -enable true -dhcp none -ip-address NODEx_SP_IP -netmask NODEx_SP_MASK -gateway NODEx_SP_GW
system node run -node * options cdpd.enable on
timezone TIMEZONE
date DATE
cluster time-service ntp server create -server NTP_SERVER_IP
system node autosupport modify -node * -state enable -mail-hosts MAIL_HOST -transport https -support enable -to STORAGE_TEAM
Node Networking Setup
broadcast-domain remove-ports -broadcast-domain Default -ports ...
broadcast-domain show
broadcast-domain create -broadcast-domain Infra_NFS -mtu 9000
broadcast-domain create -broadcast-domain Infra_iSCSI-A -mtu 9000
broadcast-domain create -broadcast-domain Infra_iSCSI-B -mtu 9000
ifgrp create -node NODEx -ifgrp a0a -distr-func port -mode multimode_lacp
ifgrp add-port -node NODEx -ifgrp a0a -port e0e
ifgrp add-port -node NODEx -ifgrp a0a -port e0f
ifgrp add-port -node NODEx -ifgrp a0a -port e0g
ifgrp add-port -node NODEx -ifgrp a0a -port e0h
ifgrp show
network port modify -node * -port a0a -mtu 9000
network port vlan create -node NODEx -vlan-name a0a-NFS_VLAN_ID
broadcast-domain add-ports -broadcast-domain NFS_BD -ports NODEx:a0a-NFS_VLAN_ID, NODEy...
network port vlan create -node NODEx -vlan-name a0a-iSCSI_A_VLAN_ID
network port vlan create -node NODEx -vlan-name a0a-iSCSI_B_VLAN_ID
broadcast-domain add-ports -broadcast-domain iSCSI_A_BD -ports NODEx:a0a-iSCSI_A_VLAN_ID, NODEy...
broadcast-domain add-ports -broadcast-domain iSCSI_B_BD -ports NODEx:a0a-iSCSI_B_VLAN_ID, NODEy...
Configure SNMP
snmp contact SNMP_CONTACT
snmp location SNMP_LOCATION
snmp init 1
options snmp.enable on
snmp traphost add OCUM_SERVER_FQDN
snmp community delete all
snmp community add ro SNMP_COMMUNITY
security snmpusers
security login create -user-or-group-name snmpv3user -authmethod usm -application snmp
Note: This requires the authoritative entity's engine ID. Select md5 as the authentication protocol and des as the privacy protocol; protocol passwords must be at least 8 characters.
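As a sketch of the interactive dialog behind the note above (the prompt text in comments is paraphrased from the ONTAP dialog, not captured verbatim, and may vary by release):

```shell
# SNMPv3 user creation is interactive; answers follow the note above.
security login create -user-or-group-name snmpv3user -authmethod usm -application snmp
# -> Enter the authoritative entity's EngineID [local EngineID]: accept default
# -> Authentication protocol (none, md5, sha) [none]: md5
# -> Authentication protocol password: minimum 8 characters
# -> Privacy protocol (none, des) [none]: des
# -> Privacy protocol password: minimum 8 characters
```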
Configure HTTPS Access
Note: Secure access to the cluster, controllers, and SVMs is configured by default with self-signed certificates.
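If a CA-signed certificate is preferred over the default self-signed one, a rough sketch of the swap follows (CLUSTER_FQDN and CLUSTER_NAME are placeholders; the exact parameter set of these commands varies by ONTAP release, so check your version's reference):

```shell
# Generate a CSR on the cluster (common name and key size are placeholders):
security certificate generate-csr -common-name CLUSTER_FQDN -size 2048
# Submit the CSR to your CA, then install the signed server certificate:
security certificate install -vserver CLUSTER_NAME -type server
# Enable SSL with the new certificate (additional -certificate/-serial
# fields may be required to associate it):
security ssl modify -vserver CLUSTER_NAME -server-enabled true
```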
Set Up Storage VM (NFS)
vserver create -vserver Infra-SVM -rootvolume rootvol -aggregate N1_AGGR1 -rootvolume-security-style unix
vserver remove-protocols -vserver Infra-SVM -protocols fcp,cifs,ndmp
vserver modify -vserver Infra-SVM -aggr-list N1_AGGR1,N2_AGGR1...
nfs create -vserver Infra-SVM -udp disabled
vserver nfs modify -vserver Infra-SVM -vstorage enabled
vserver nfs show
iscsi create -vserver Infra-SVM
iscsi show
Note: Turn on the SVM vstorage parameter for the NetApp NFS VAAI plug-in.
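To confirm the vstorage setting took effect, one read-only check (field name as in ONTAP 9; verify against your release):

```shell
# Shows only the vstorage field for the SVM; expect "enabled".
vserver nfs show -vserver Infra-SVM -fields vstorage
```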
Create a volume on each node to serve as a load-sharing mirror of the infrastructure SVM's root volume:
volume create -vserver Infra-SVM -volume rootvol_m0x -aggregate Nx_AGGR1 -size 1GB -type DP
job schedule interval create -name 15min -minutes 15
snapmirror create -source-path Infra-SVM:rootvol -destination-path Infra-SVM:rootvol_m0x -type LS -schedule 15min
snapmirror initialize-ls-set -source-path Infra-SVM:rootvol
Create a rule for each ESXi host in the default export policy, and assign the policy to the SVM root volume:
vserver export-policy rule create -vserver Infra-SVM -policyname default -ruleindex 1 -clientmatch ESXI_HOST_1_NFS_IP -rorule sys -rwrule sys -superuser sys -allow-suid false
...
volume modify -vserver Infra-SVM -volume rootvol -policy default
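The elided rules follow the same pattern, one per host with an incremented rule index; a hypothetical second rule (the index and host placeholder are illustrative):

```shell
# Rule for a second ESXi host; ESXI_HOST_2_NFS_IP is a placeholder.
vserver export-policy rule create -vserver Infra-SVM -policyname default -ruleindex 2 -clientmatch ESXI_HOST_2_NFS_IP -rorule sys -rwrule sys -superuser sys -allow-suid false
```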
Create FlexVol Volumes:
volume create -vserver Infra-SVM -volume infra_datastore_1 -aggregate N1_AGGR1 -size 1TB -state online -policy default -junction-path /infra_datastore_1 -space-guarantee none -percent-snapshot-space 0 -snapshot-policy none
volume create -vserver Infra-SVM -volume infra_swap -aggregate n02_ssd01 -size 100GB -state online -policy default -junction-path /infra_swap -space-guarantee none -percent-snapshot-space 0 -snapshot-policy none
volume create -vserver Infra-SVM -volume esxi_boot -aggregate n02_ssd01 -size 500GB -state online -policy default -space-guarantee none -percent-snapshot-space 0 -snapshot-policy none
snapmirror update-ls-set -source-path Infra-SVM:rootvol
Create Boot LUNs for the ESXi Hosts:
volume efficiency on -vserver Infra-SVM -volume esxi_boot
lun create -vserver Infra-SVM -volume esxi_boot -lun VM-Host-Infra-01 -size 15GB -ostype vmware -space-reserve disabled
...
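The elided LUNs repeat the same command per host; a hypothetical second boot LUN (the name VM-Host-Infra-02 is illustrative):

```shell
# Boot LUN for a second ESXi host, same sizing and options as the first.
lun create -vserver Infra-SVM -volume esxi_boot -lun VM-Host-Infra-02 -size 15GB -ostype vmware -space-reserve disabled
```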
Create iSCSI LIFs (two on each node):
network interface create -vserver Infra-SVM -lif iscsi_lif01a -role data -data-protocol iscsi -home-node NODE01 -home-port a0a-iSCSI_VLAN_A -address var_node01_iscsi_lif01a_ip -netmask var_node01_iscsi_lif01a_mask -status-admin up -failover-policy disabled -firewall-policy data -auto-revert false
...
Create NFS LIFs (one per node):
network interface create -vserver Infra-SVM -lif nfs_infra_node_1 -role data -data-protocol nfs -home-node var_node01 -home-port a0a-var_nfs_vlan_id -address var_node01_nfs_ip -netmask var_node01_nfs_mask -status-admin up -failover-policy broadcast-domain-wide -firewall-policy data -auto-revert true
...
Add Infrastructure SVM Administrator:
network interface create -vserver Infra-SVM -lif vsmgmt -role data -data-protocol none -home-node var_node02 -home-port e0M -address var_svm_mgmt_ip -netmask var_svm_mgmt_mask -status-admin up -failover-policy broadcast-domain-wide -firewall-policy mgmt -auto-revert true
network route create -vserver Infra-SVM -destination 0.0.0.0/0 -gateway var_svm_mgmt_gateway
network route show
security login password -username vsadmin -vserver Infra-SVM
security login unlock -username vsadmin -vserver Infra-SVM
Configure iSCSI Boot:
igroup create -vserver Infra-SVM -igroup VM-Host-Infra-01 -protocol iscsi -ostype vmware -initiator <<var_vm_host_infra_01_iqn>>
...
lun map -vserver Infra-SVM -volume esxi_boot -lun VM-Host-Infra-01 -igroup VM-Host-Infra-01 -lunid 0
...
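To sanity-check the igroups and LUN mappings afterward, a few read-only show commands (available in ONTAP 9; verify against your release):

```shell
# List the boot LUNs, their mappings, and the initiator groups.
lun show -vserver Infra-SVM -volume esxi_boot
lun mapping show -vserver Infra-SVM
igroup show -vserver Infra-SVM
```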
Set Up SVM for Exchange (and SharePoint) Workload
Note: For explanations, see above.
vserver create -vserver Work-SVM -rootvolume rootvol -aggregate n02_ssd01 -rootvolume-security-style unix
vserver remove-protocols -vserver Work-SVM -protocols fcp,cifs,ndmp
vserver modify -vserver Work-SVM -aggr-list n01_ssd01,n02_ssd01
nfs create -vserver Work-SVM -udp disabled
vserver nfs modify -vserver Work-SVM -vstorage enabled
vserver nfs show
iscsi create -vserver Work-SVM
iscsi show
Load-Sharing Mirrors of SVM Root Volume (example below for a 2-node cluster):
volume create -vserver Work-SVM -volume rootvol_m01 -aggregate n01_ssd01 -size 1GB -type DP
volume create -vserver Work-SVM -volume rootvol_m02 -aggregate n02_ssd01 -size 1GB -type DP
snapmirror create -source-path Work-SVM:rootvol -destination-path Work-SVM:rootvol_m01 -type LS -schedule 15min
snapmirror create -source-path Work-SVM:rootvol -destination-path Work-SVM:rootvol_m02 -type LS -schedule 15min
snapmirror initialize-ls-set -source-path Work-SVM:rootvol
Configure NFSv3 (for X hosts) and an NFS VMware datastore:
vserver export-policy rule create -vserver Work-SVM -policyname default -ruleindex X -clientmatch esxi_hostX_nfs_ip -rorule sys -rwrule sys -superuser sys -allow-suid false
...
volume modify -vserver Work-SVM -volume rootvol -policy default
volume create -vserver Work-SVM -volume VM_datastore_1 -aggregate n01_ssd01 -size 1TB -state online -policy default -junction-path /VM_datastore_1 -space-guarantee none -percent-snapshot-space 0
snapmirror update-ls-set -source-path Work-SVM:rootvol
iSCSI LIFs (two per node) and NFS LIFs (one per node):
network interface create -vserver Work-SVM -lif iscsi_lif01a -role data -data-protocol iscsi -home-node node01 -home-port a0a-iSCSI-A_vlan_id -address node01_iscsi_lif01a_ip -netmask node01_iscsi_lif01a_mask -status-admin up -failover-policy disabled -firewall-policy data -auto-revert false
...
network interface create -vserver Work-SVM -lif nfs_work_node_1 -role data -data-protocol nfs -home-node node01 -home-port a0a-nfs_vlan_id -address nfs_work_1 -netmask nfs_work_1_mask -status-admin up -failover-policy broadcast-domain-wide -firewall-policy data -auto-revert true
...
network interface show
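The elided LIFs mirror the ones shown; for instance, a hypothetical fabric-B iSCSI LIF on node01 (the LIF name and address placeholders are illustrative):

```shell
# Second iSCSI LIF on node01, homed on the iSCSI-B VLAN port.
network interface create -vserver Work-SVM -lif iscsi_lif01b -role data -data-protocol iscsi -home-node node01 -home-port a0a-iSCSI-B_vlan_id -address node01_iscsi_lif01b_ip -netmask node01_iscsi_lif01b_mask -status-admin up -failover-policy disabled -firewall-policy data -auto-revert false
```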
Add Workload SVM Administrator:
network interface create -vserver Work-SVM -lif vsmgmt -role data -data-protocol none -home-node node02 -home-port e0M -address svm_mgmt_ip -netmask svm_mgmt_mask -status-admin up -failover-policy broadcast-domain-wide -firewall-policy mgmt -auto-revert true
network route create -vserver Work-SVM -destination 0.0.0.0/0 -gateway svm_mgmt_gateway
network route show
security login password -username vsadmin -vserver Work-SVM
security login unlock -username vsadmin -vserver Work-SVM
5.3 Cisco Nexus Storage Networking Configuration
FlexPod Cisco Nexus iSCSI Storage for vSphere on ONTAP
Repeat on both Nexus 9396PX-A and 9396PX-B after completing the setup script:
config t
feature interface-vlan
feature lacp
feature vpc
feature lldp
spanning-tree port type network default
spanning-tree port type edge bpduguard default
spanning-tree port type edge bpdufilter default
port-channel load-balance src-dst l4port
ntp server var_global_ntp_server_ip use-vrf default
ntp source var_switch_ntp_ip
ntp master 3
ip route 0.0.0.0/0 var_ib-mgmt-vlan_gateway
copy run start
vlan var_ib-mgmt_vlan_id
name IB-MGMT-VLAN
exit
vlan var_native_vlan_id
name Native-VLAN
exit
vlan var_vmotion_vlan_id
name vMotion-VLAN
exit
vlan var_nfs_vlan_id
name NFS-VLAN
exit
vlan var_iscsi-a_vlan_id
name iSCSI-A-VLAN
exit
vlan var_iscsi-b_vlan_id
name iSCSI-B-VLAN
exit
Add NTP Distribution Interface (on A & B):
interface Vlan var_ib-mgmt_vlan_id
ip address var_switch_ntp_ip/var_ib-mgmt_vlan_netmask_length
no shutdown
exit
Add Individual Port Descriptions for Troubleshooting (example):
interface Eth1/Y
description ...
exit
Create Port Channels (on A & B):
interface Po10
description vPC peer-link
exit
interface Eth1/47-48
channel-group 10 mode active
no shutdown
exit
interface Po11
description var_node01
exit
interface Eth1/5-6
channel-group 11 mode active
no shutdown
exit
interface Po12
description var_node02
exit
interface Eth1/7-8
channel-group 12 mode active
no shutdown
exit
interface Po13
description var_ucs_clustername-A
exit
interface Eth1/1
channel-group 13 mode active
no shutdown
exit
interface Eth1/3
channel-group 13 mode active
no shutdown
exit
interface Po14
description var_ucs_clustername-B
exit
interface Eth1/2
channel-group 14 mode active
no shutdown
exit
interface Eth1/4
channel-group 14 mode active
no shutdown
exit
copy run start
Configure Port Channels (on A & B):
interface Po10
switchport mode trunk
switchport trunk native vlan var_native_vlan_id
switchport trunk allowed vlan var_ib-mgmt_vlan_id, var_nfs_vlan_id, var_vmotion_vlan_id, var_iscsi-a_vlan_id, var_iscsi-b_vlan_id
spanning-tree port type network
exit
interface Po11
switchport mode trunk
switchport trunk native vlan var_native_vlan_id
switchport trunk allowed vlan var_nfs_vlan_id, var_iscsi-a_vlan_id, var_iscsi-b_vlan_id
spanning-tree port type edge trunk
mtu 9216
exit
interface Po12
switchport mode trunk
switchport trunk native vlan var_native_vlan_id
switchport trunk allowed vlan var_nfs_vlan_id, var_iscsi-a_vlan_id, var_iscsi-b_vlan_id
spanning-tree port type edge trunk
mtu 9216
exit
interface Po13
switchport mode trunk
switchport trunk native vlan var_native_vlan_id
switchport trunk allowed vlan var_ib-mgmt_vlan_id, var_nfs_vlan_id, var_vmotion_vlan_id, var_iscsi-a_vlan_id, var_iscsi-b_vlan_id
spanning-tree port type edge trunk
mtu 9216
exit
interface Po14
switchport mode trunk
switchport trunk native vlan var_native_vlan_id
switchport trunk allowed vlan var_ib-mgmt_vlan_id, var_nfs_vlan_id, var_vmotion_vlan_id, var_iscsi-a_vlan_id, var_iscsi-b_vlan_id
spanning-tree port type edge trunk
mtu 9216
exit
copy run start
Configure Virtual Port Channels for A:
vpc domain var_nexus_vpc_domain_id
role priority 10
peer-keepalive destination var_nexus_B_mgmt0_ip source var_nexus_A_mgmt0_ip
peer-switch
peer-gateway
delay restore 150
auto-recovery
exit
interface Po10
vpc peer-link
exit
interface Po11
vpc 11
exit
interface Po12
vpc 12
exit
interface Po13
vpc 13
exit
interface Po14
vpc 14
exit
copy run start
Configure Virtual Port Channels for B:
vpc domain var_nexus_vpc_domain_id
role priority 20
peer-keepalive destination var_nexus_A_mgmt0_ip source var_nexus_B_mgmt0_ip
peer-switch
peer-gateway
delay restore 150
auto-recovery
exit
interface Po10
vpc peer-link
exit
interface Po11
vpc 11
exit
interface Po12
vpc 12
exit
interface Po13
vpc 13
exit
interface Po14
vpc 14
exit
copy run start
Uplink into Existing Network Infrastructure:
Depending on the
available network infrastructure, several methods and features can be used to
uplink the FlexPod environment. If an existing Cisco Nexus environment is present, NetApp recommends using vPCs
to uplink the Cisco Nexus 9396PX switches included in the FlexPod environment
into the infrastructure. The previous
procedures can be used to create an uplink vPC to the existing
environment.
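An uplink vPC can be built the same way as Po10-Po14 above; as one possible sketch, where the port-channel number, member ports, and allowed VLANs are assumptions to adapt to the existing network:

```shell
# Hypothetical uplink vPC (Po20 on Eth1/49-50); repeat on both switches.
interface Po20
description Uplink to existing network
switchport mode trunk
switchport trunk native vlan var_native_vlan_id
switchport trunk allowed vlan var_ib-mgmt_vlan_id
spanning-tree port type network
vpc 20
exit
interface Eth1/49-50
channel-group 20 mode active
no shutdown
exit
copy run start
```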
5.6 NetApp VSC 6.2.1 Deployment
6 Solution Verification
6.1 Exchange 2016 Verification
Microsoft Exchange LoadGen 2013 Verification.