This blog post, “Create a NetApp NFS SVM”, may help you if you find yourself in an environment with tightly controlled change management, where you are required to document your implementation and rollback plans in greater detail. Personally, I do not like describing how I navigate UIs and wizards, so I prefer to use ONTAP’s command-line interface.
Create a NetApp NFS SVM
The process starts by creating a Storage Virtual Machine called “svm_1”. We need to specify a root volume, the hosting aggregate, the security style (unix for NFS), the language, and the IPspace:
vserver create -vserver svm_1 -rootvolume svm_1_root -aggregate node_1_SAS_aggr -rootvolume-security-style unix -language C.UTF-8 -ipspace Default
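Before proceeding, you can optionally confirm that the SVM was created and is in the expected state (the exact output fields vary slightly between ONTAP releases):

vserver show -vserver svm_1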
Configure the NFS SVM
Since the SVM will be used to host NFSv3 volumes, we enable NFS and disable the protocols that are not needed:
vserver add-protocols -vserver svm_1 -protocols nfs
vserver remove-protocols -vserver svm_1 -protocols fcp,ndmp,iscsi
Then we create Logical Interfaces (LIFs): one for management and two for data (one per node in a two-node cluster). Since each aggregate is owned by one node, having a data LIF on each node offers the shortest path and lowest latency:
network interface create -vserver svm_1 -lif svm_1_admin_lif1 -role data -data-protocol none -firewall-policy mgmt -home-node node_1 -home-port a0a-3579 -address 10.6.190.31 -netmask 255.255.255.0 -status-admin up -auto-revert true
network interface create -vserver svm_1 -lif svm_1_nfs_lif1 -role data -data-protocol nfs -firewall-policy data -home-node node_1 -home-port a0a-2372 -address 10.250.172.20 -netmask 255.255.255.0 -status-admin up -auto-revert true
network interface create -vserver svm_1 -lif svm_1_nfs_lif2 -role data -data-protocol nfs -firewall-policy data -home-node node_2 -home-port a0a-2372 -address 10.250.173.20 -netmask 255.255.255.0 -status-admin up -auto-revert true
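At this point it is worth checking that all three LIFs are up and sitting on their home ports; the following will list them along with their addresses and current node:

network interface show -vserver svm_1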
The following commands configure the default gateway and the DNS servers:
network route create -vserver svm_1 -destination 0.0.0.0/0 -gateway 10.6.190.1
vserver services name-service dns create -vserver svm_1 -domains ukconsult.local -name-servers 10.4.1.10,10.4.1.11 -state enabled
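Both routing and name resolution can be verified before moving on; a broken route or DNS entry here tends to surface later as confusing errors:

network route show -vserver svm_1
vserver services name-service dns show -vserver svm_1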
Next, since this SVM will be used to host VMDKs, we do not need any storage-based snapshots, so they can be disabled. The following command is set at SVM level and sets the default for any newly created FlexVols:
vserver modify -vserver svm_1 -snapshot-policy none
The following command joins the SVM to an Active Directory domain. This creates an AD computer account for it, which allows us to grant access via AD groups later. Of course, if you have already joined another SVM to the domain and are tunnelling authentication requests via that SVM, this step is not needed.
vserver active-directory create -vserver svm_1 -account-name svm_1 -domain ukconsult.local -ou CN=Computers
To serve NFS clients, an NFS server needs to be created:
vserver nfs create -vserver svm_1 -v3 enabled -v4.0 disabled -v4.1 disabled
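A quick sanity check that only v3 is enabled (the field names below match the ONTAP 9 CLI):

vserver nfs show -vserver svm_1 -fields v3,v4.0,v4.1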
Create an export policy
Next, we create an export policy, which controls access to the FlexVols that are part of this SVM:
vserver export-policy create -vserver svm_1 -policyname UAT-Cluster
Once the policy has been created, we can populate it with an entry for each VMware ESXi host, where the IP addresses below are the NFS VMkernel IP addresses of each host:
vserver export-policy rule create -vserver svm_1 -policyname UAT-Cluster -ruleindex 1 -protocol nfs -clientmatch 10.250.172.101,10.250.173.101 -rorule sys -rwrule sys -superuser any
vserver export-policy rule create -vserver svm_1 -policyname UAT-Cluster -ruleindex 2 -protocol nfs -clientmatch 10.250.172.102,10.250.173.102 -rorule sys -rwrule sys -superuser any
vserver export-policy rule create -vserver svm_1 -policyname UAT-Cluster -ruleindex 3 -protocol nfs -clientmatch 10.250.172.103,10.250.173.103 -rorule sys -rwrule sys -superuser any
vserver export-policy rule create -vserver svm_1 -policyname UAT-Cluster -ruleindex 4 -protocol nfs -clientmatch 10.250.172.104,10.250.173.104 -rorule sys -rwrule sys -superuser any
vserver export-policy rule create -vserver svm_1 -policyname UAT-Cluster -ruleindex 5 -protocol nfs -clientmatch 10.250.172.105,10.250.173.105 -rorule sys -rwrule sys -superuser any
vserver export-policy rule create -vserver svm_1 -policyname UAT-Cluster -ruleindex 6 -protocol nfs -clientmatch 10.250.172.106,10.250.173.106 -rorule sys -rwrule sys -superuser any
vserver export-policy rule create -vserver svm_1 -policyname UAT-Cluster -ruleindex 7 -protocol nfs -clientmatch 10.250.172.107,10.250.173.107 -rorule sys -rwrule sys -superuser any
vserver export-policy rule create -vserver svm_1 -policyname UAT-Cluster -ruleindex 8 -protocol nfs -clientmatch 10.250.172.108,10.250.173.108 -rorule sys -rwrule sys -superuser any
Finally, there is one more entry we need to create to allow access. All parent volumes in the junction path must allow root read access, and this is done via the export policy of the SVM root volume:
vserver export-policy rule create -vserver svm_1 -policyname default -ruleindex 1 -protocol nfs -clientmatch 0.0.0.0/0 -rorule sys -rwrule never -superuser sys
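To review the complete rule set across both policies, including the default-policy rule just added:

vserver export-policy rule show -vserver svm_1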
Create a FlexVol
At this point, we can create a FlexVol in which the hosts will store their data. In the following command, we specify which aggregate will hold the data, configure auto-sizing, assign the export policy, and so on. Once the volume is created, we can turn on volume efficiency (on FAS systems) and optionally enable compression for UAT workloads as required. On AFF systems, volume efficiency is enabled by default.
volume create -vserver svm_1 -volume nfs_vol_1 -aggregate node_1_SAS_aggr -size 4TB -min-autosize 4TB -max-autosize 6TB -autosize-mode grow_shrink -junction-path /nfs_vol_1 -security-style unix -space-guarantee none -percent-snapshot-space 0 -policy UAT-Cluster -autosize-grow-threshold-percent 90
volume efficiency on -vserver svm_1 -volume nfs_vol_1
Optionally, enable compression:
volume efficiency modify -vserver svm_1 -volume nfs_vol_1 -compression true
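Before presenting the volume to vSphere, it is worth double-checking the junction path, export policy and efficiency state:

volume show -vserver svm_1 -volume nfs_vol_1 -fields size,junction-path,policy
volume efficiency show -vserver svm_1 -volume nfs_vol_1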
Mount in vSphere
This is the point where the FlexVols we created can be presented to vCenter by mounting them as datastores. Again, we use the command line, this time PowerCLI: we first connect to vCenter, then mount the datastore on all hosts in our cluster, using the IP address of the NFS LIF owned by the node hosting the aggregate in question:
Connect-VIServer vc01.ukconsult.local
Get-Cluster UAT_Cluster_01 | Get-VMHost | New-Datastore -Nfs -Name nfs_vol_1 -Path /nfs_vol_1 -NfsHost 10.250.172.20
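Finally, a quick check from the PowerCLI side that the datastore is mounted and reports the expected capacity:

Get-Datastore -Name nfs_vol_1 | Select-Object Name, CapacityGB, FreeSpaceGB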