NetApp ONTAP Advanced Drive Partitioning (ADP) – How to adopt root-data-data partitioning in newly installed shelves

Table of contents

  1. ADP
  2. Objective
  3. Solution

ADP

There are a few quirks around ADP, and I had to follow the process below to ensure that a newly added shelf uses the same partitioning scheme as the original factory-formatted one.

NetApp Advanced Drive Partitioning (ADP) is a technology used in NetApp storage systems to improve disk utilization, performance, and data protection. Instead of dedicating whole disks to the root aggregates, ADP creates multiple partitions on each drive, with each partition serving a specific purpose. With root-data-data partitioning, a disk is split into three: one small root partition and two data partitions, one owned by each cluster node. The result is that disk space is used more efficiently, which translates into higher usable capacity and lower storage costs.
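You can see how an individual disk has been carved up with the partition-ownership view of "storage disk show" (the disk name below is just a placeholder for one of your partitioned disks):

storage disk show -disk 1.0.0 -partition-ownership

For a root-data-data disk this lists the container disk plus its P1/P2/P3 partitions together with the node that owns each one.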

Objective

When I installed the first additional shelf, the "-simulate true" option of the "storage aggregate add-disks" command showed that the newly added disks would not be partitioned the same way as the disks in the original shelf, which were split into a 23.4GB root partition and two 435.29GB partitions, one for each node's data aggregate.
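The dry run that revealed the problem would have looked something like the following (reconstructed here for illustration; the aggregate and disk names match the commands used later in this post):

storage aggregate add-disks -aggregate node2_SSD_1_aggr -disklist 1.1.12,1.1.13,1.1.14,1.1.15 -simulate true

Because "-simulate true" only reports what would happen, it is a safe way to check the intended partitioning before committing.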

Solution

To ensure the newly added shelf followed the same partitioning scheme, I created a new RAID group that included one of the existing, already partitioned spares (1.0.23). The new RAID group adopted the existing root-data-data partitioning, and the disks from the new shelf (1.1.12, 1.1.13, 1.1.14, 1.1.15) were formatted with the same scheme.

storage aggregate add-disks -aggregate node2_SSD_1_aggr -disklist 1.0.23,1.1.12,1.1.13,1.1.14,1.1.15 -raidgroup new

The second time I had to install a shelf to the same system, I no longer had any partitioned spares, which complicated things a little bit more.
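If you are not sure whether any partitioned spares are left, the spare inventory can be checked with:

storage aggregate show-spare-disks

In this output, partitioned spares are listed with separate data and root usable sizes, while unpartitioned spares show a single whole-disk size.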

Initially, I tried to follow the NetApp Knowledge Base article "How to add partitioned disks to a new raidgroup", which suggests that a "storage disk replace" operation can vacate a disk while preserving the partition scheme, but for some reason that did not work for me.

Instead, I used the following diagnostic-privilege command to copy the partition scheme from an existing disk to one of the new disks, creating a partitioned spare:
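Note that "storage disk create-partition" is only available at the diagnostic privilege level (hence the "cluster::*>" prompt below), so you need to elevate the session first and should drop back to admin when done:

set -privilege diagnostic
... run the create-partition command ...
set -privilege admin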

cluster::*> storage disk create-partition -source-disk 2.0.0 -target-disk 2.1.10

Warning: Disk "2.1.10" will be partitioned as follows:

              Usable     Container  Container
Partition     Size       Type       Name       Owner
------------- ---------- ---------- ---------- ----------
2.1.10.P1     435.3GB    spare      Pool0      node2
2.1.10.P2     435.3GB    spare      Pool0      node1
2.1.10.P3     23.39GB    spare      Pool0      node1

Do you want to continue? {y|n}: y

Disk "2.1.10" is now partitioned.
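Before using the new spare, its partition layout and ownership can be double-checked with the same partition-ownership view mentioned earlier:

storage disk show -disk 2.1.10 -partition-ownership

The output should show the three partitions with the same owners as in the warning above.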

Having a partitioned spare allowed me to create a new RAID group using the same method as the first time I added a shelf; as before, I first checked the plan with the "-simulate true" option:

storage aggregate add-disks -aggregate node1_SSD_1_aggr -disklist 2.1.10,2.2.12,2.2.13,2.2.14,2.2.15 -raidgroup new -simulate true

Disks would be added to aggregate "node1_SSD_1_aggr" on node "node1" in the following manner:

First Plex

RAID Group rg2, 5 disks (block checksum, raid_dp)
                                                 Usable    Physical
Position   Disk                      Type        Size      Size
---------- ------------------------- ---------- --------- ---------
shared     2.1.10                    SSD                -         -
shared     2.2.12                    SSD                -         -
shared     2.2.13                    SSD          435.3GB   435.3GB
shared     2.2.14                    SSD          435.3GB   435.3GB
shared     2.2.15                    SSD          435.3GB   435.3GB

Aggregate capacity available for volume use would be increased by 1.15TB.

The following disks would be partitioned: 2.2.12, 2.2.13, 2.2.14, 2.2.15.
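Since the simulated layout matched the first shelf, the actual expansion is simply the same command without the "-simulate true" option:

storage aggregate add-disks -aggregate node1_SSD_1_aggr -disklist 2.1.10,2.2.12,2.2.13,2.2.14,2.2.15 -raidgroup new

After this completes, the new disks are partitioned with the same root-data-data scheme and the aggregate grows by the capacity shown in the simulation.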
