How to define PCI DSS-compliant custom RBAC for NetApp storage

A few general NetApp concepts

A NetApp cluster is a group of storage nodes that work together to provide high availability and performance to the storage environment.

The storage systems in a cluster can be divided into Storage Virtual Machines (SVMs), which are isolated storage entities that provide storage services to clients. Each SVM can have its own IP addresses, protocols, security settings, and storage resources, allowing for a flexible and scalable storage infrastructure.

Export policies in a NetApp cluster are used to control access to storage resources within an SVM. These policies specify which clients are allowed to access the storage and what level of access they have.
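As a brief illustration (the SVM name, policy name, and client subnet below are hypothetical), an export-policy rule granting read-write access to a client network can be created like this:

vserver export-policy rule create -vserver svm1 -policyname default -clientmatch 192.168.10.0/24 -rorule sys -rwrule sys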

Role-based Access Control (RBAC) is used to manage user access to storage resources in a NetApp cluster. RBAC allows administrators to define roles and assign permissions to those roles, which are then assigned to users. This allows for fine-grained control over user access to storage resources, making it possible to achieve compliance with regulations such as the Payment Card Industry Data Security Standard (PCI DSS). For example, RBAC can be used to enforce security best practices such as allowing only authorized users to access sensitive data and limiting the types of actions that users can perform on the data.
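As a minimal sketch of how this looks in practice (the role name and group name here are hypothetical), a read-only auditor role could be defined and granted to a domain group like this:

security login role create -role Audit_ReadOnly -cmddirname DEFAULT -access readonly
security login create -user-or-group-name domain\Auditors -role Audit_ReadOnly -application ssh -authmethod domain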

Payment Card Industry Data Security Standard (PCI DSS)

The Payment Card Industry Data Security Standard (PCI DSS) is a set of security standards designed to ensure that all companies that accept, process, store or transmit credit card information maintain a secure environment. The PCI DSS requirements for audited domain access specifically relate to the control and monitoring of access to sensitive information.

The PCI DSS requires that organizations implement access control measures to prevent unauthorized access to cardholder data. This includes access to both systems and applications that store, process or transmit cardholder data. In order to achieve compliance, organizations must:

  • Assign a unique ID to each person with computer access
  • Restrict access to cardholder data to only those individuals who need it for their job responsibilities
  • Control physical access to cardholder data
  • Regularly monitor and test security systems and processes
  • Maintain an audit trail of system activity, including all access to cardholder data

What does this mean in practice for a payments company

In addition to vendor-supplied default accounts having to be disabled, payments companies need to satisfy the PCI DSS requirements for audited domain access. This can be quite simple if you dedicate a whole NetApp cluster to your PCI DSS environment, but you may find it more cost-effective to dedicate only one Storage Virtual Machine (SVM) instead.

Our focus here is the latter case, where only one Storage Virtual Machine (SVM) is used by the PCI DSS environment. This is how we set it up.

NetApp CLI configuration examples

In a nutshell, due to limitations in the implementation of System Manager’s web server, only cluster administrators can access System Manager, but we cannot allow such global administrators under PCI DSS. Instead, administrators from our non-PCI domain will have administrative privileges only over the non-PCI SVMs, and the PCI administrators (from a separate domain) will have admin (vsadmin) access to the PCI SVM.

This is how we set up the non-PCI side of this

We join one of our non-PCI SVMs to a domain the usual way and configure tunnelling so that the other SVMs can use this authentication source too (nothing new here):

vserver active-directory create -vserver non_PCI_SVM_1 -account-name non_PCI_SVM_1 -domain domain.com -ou CN=Computers

security login domain-tunnel create -vserver non_PCI_SVM_1
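You can confirm which SVM is acting as the authentication tunnel with:

security login domain-tunnel show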

Then we configure our non-PCI custom role. Note the extra text at the end of each command, where we use -query to limit the effect of the command to a selected list of non-PCI SVMs:

-query "-vserver ClusterSVM,non_PCI_SVM_1,non_PCI_SVM_2,non_PCI_SVM_3"

security login role create -role Non-PCI_Admin -cmddirname DEFAULT -access readonly

security login role create -role Non-PCI_Admin -cmddirname "cluster" -access all -query "-vserver ClusterSVM,non_PCI_SVM_1,non_PCI_SVM_2,non_PCI_SVM_3"
security login role create -role Non-PCI_Admin -cmddirname "event" -access all -query "-vserver ClusterSVM,non_PCI_SVM_1,non_PCI_SVM_2,non_PCI_SVM_3"
security login role create -role Non-PCI_Admin -cmddirname "job" -access all -query "-vserver ClusterSVM,non_PCI_SVM_1,non_PCI_SVM_2,non_PCI_SVM_3"
security login role create -role Non-PCI_Admin -cmddirname "lun" -access all -query "-vserver ClusterSVM,non_PCI_SVM_1,non_PCI_SVM_2,non_PCI_SVM_3"
security login role create -role Non-PCI_Admin -cmddirname "network" -access all -query "-vserver ClusterSVM,non_PCI_SVM_1,non_PCI_SVM_2,non_PCI_SVM_3"
security login role create -role Non-PCI_Admin -cmddirname "qos" -access all -query "-vserver ClusterSVM,non_PCI_SVM_1,non_PCI_SVM_2,non_PCI_SVM_3"
security login role create -role Non-PCI_Admin -cmddirname "security" -access all -query "-vserver ClusterSVM,non_PCI_SVM_1,non_PCI_SVM_2,non_PCI_SVM_3"
security login role create -role Non-PCI_Admin -cmddirname "snapmirror" -access all -query "-vserver ClusterSVM,non_PCI_SVM_1,non_PCI_SVM_2,non_PCI_SVM_3"
security login role create -role Non-PCI_Admin -cmddirname "storage" -access all -query "-vserver ClusterSVM,non_PCI_SVM_1,non_PCI_SVM_2,non_PCI_SVM_3"
security login role create -role Non-PCI_Admin -cmddirname "system" -access all -query "-vserver ClusterSVM,non_PCI_SVM_1,non_PCI_SVM_2,non_PCI_SVM_3"
security login role create -role Non-PCI_Admin -cmddirname "volume" -access all -query "-vserver ClusterSVM,non_PCI_SVM_1,non_PCI_SVM_2,non_PCI_SVM_3"
security login role create -role Non-PCI_Admin -cmddirname "vserver" -access all -query "-vserver ClusterSVM,non_PCI_SVM_1,non_PCI_SVM_2,non_PCI_SVM_3"
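Once all the entries are in place, it is worth reviewing the resulting role definition before granting it to anyone:

security login role show -role Non-PCI_Admin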

Once the custom role is defined, we can grant access to the non-PCI administrators in the usual way:

security login create -user-or-group-name domain\NetApp_non-PCI_Admins -role Non-PCI_Admin -application http -authmethod domain
security login create -user-or-group-name domain\NetApp_non-PCI_Admins -role Non-PCI_Admin -application ontapi -authmethod domain
security login create -user-or-group-name domain\NetApp_non-PCI_Admins -role Non-PCI_Admin -application ssh -authmethod domain
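To confirm the login entries were created for all three applications, you can filter the login table by the group name:

security login show -user-or-group-name domain\NetApp_non-PCI_Admins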

Optional: check with "vserver services web access show -vserver ClusterSVM" and, if the entries for Non-PCI_Admin are missing, use these commands:
vserver services web access create -vserver ClusterSVM -name sysmgr -role Non-PCI_Admin
vserver services web access create -vserver ClusterSVM -name security -role Non-PCI_Admin
vserver services web access create -vserver ClusterSVM -name rest -role Non-PCI_Admin

Now onto the PCI SVM

Things are simpler here. We join the SVM to the Active Directory domain in our PCI DSS environment and grant access using the pre-defined vsadmin role:

vserver active-directory create -vserver PCI_SVM -account-name PCI_SVM -domain pcidomain.com -ou CN=Computers

security login create -vserver PCI_SVM -user-or-group-name pcidomain\NetApp_PCI_Admins -role vsadmin -application http -authmethod domain
security login create -vserver PCI_SVM -user-or-group-name pcidomain\NetApp_PCI_Admins -role vsadmin -application ontapi -authmethod domain
security login create -vserver PCI_SVM -user-or-group-name pcidomain\NetApp_PCI_Admins -role vsadmin -application ssh -authmethod domain
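As before, you can verify the PCI-side login entries on the SVM itself:

security login show -vserver PCI_SVM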

The difference here is that the PCI administrators can only reach the PCI SVM over SSH (and ONTAPI), due to the System Manager limitation described above.
