Friday, February 3, 2012

Symmetrix VMAX Auto Provisioning Groups

Pre-2018 post from old blog...please check support.emc.com for latest information.

For those new to Symmetrix VMAX, Auto Provisioning is probably what you'll assume has been the norm for Symmetrix administrators for years, but if you've experienced provisioning on the previous models - presenting many devices to large clusters over many front-end ports - you'll recall the endless hours you put in to ensure everything was mapped and masked properly.

Those days are gone!


With Auto Provisioning the entire provisioning mechanism has been vastly simplified and is now extremely rapid with some new SYMCLI commands and presentation wizards in Unisphere for VMAX.

Auto Provisioning Groups are a very welcome paradigm shift for storage administrators when presenting and reclaiming devices from the array. There are a few concepts to understand first, and we'll cover each of them in turn before getting into Auto-Provisioning itself.

If you've been doing this on Symmetrix prior to VMAX then skip the first two sections as they will only tell you how to suck eggs!

Symmetrix Device Mapping and Masking 101

For a Symmetrix Logical Volume (which I'll call a symvolid) to be presented to a host, it must first be mapped to a front-end port and given a channel address, which we'll call a LUN number - this task is known as mapping.

Once the devices have been mapped to a front-end port and given a LUN number, an HBA must then be granted access to them - this task is known as masking. On DMX the front-end ports must be VCM-enabled and on VMAX ACLX-enabled; when a port is VCM- or ACLX-enabled, device masking must be carried out to grant access.

On Symmetrix it is recommended to map the symvolid to at least two front-end ports and then mask it to at least two HBAs.  Prior to Auto-Provisioning this procedure took a few commands each of which took up to 20 minutes to perform on the array.

With Auto-Provisioning - storage can be presented to even the largest of clusters in minutes.

Channel Addressing and Dynamic LUN Addressing

With Solutions Enabler 7.0 and Enginuity 5772 the Symmetrix gained a feature known as Dynamic LUN Addressing (DLA), which enables storage administrators to dynamically assign LUN addresses on a per-HBA level - and now at the initiator group level on VMAX.  Previously you could only ever use the channel address assigned to the device on the front-end ports, and this at times caused havoc for operating systems like Windows and ESXi that are restricted to an address range of 00-FF (0-255).  If you had a Windows box and masked out a device with a LUN number higher than FF, you simply couldn't discover the device as the address was beyond what Windows can natively address. DLA is optional on 5772 and 5773 but is the default with Auto-Provisioning (5874 and higher), and it empowers storage administrators to serve hosts with restricted address ranges from FA ports that have more than 256 devices. In a nutshell, with DLA it really doesn't matter what channel address is assigned to the device on the port, as DLA will modify this at an initiator group level.
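
As a rough sketch of what this looks like in practice (the group, device and address names here are purely illustrative, and the -lun option assumes a Solutions Enabler release that supports requesting a specific dynamic LUN address - check the symaccess man page for your version), you can ask for a particular address when adding a device to a storage group and then confirm what was actually assigned from the masking view; storage groups and masking views are covered below.

symaccess -sid 1234 -type storage -name SG_HOSTNAME add dev 2AB -lun 0A <<< hypothetical example - request dynamic LUN address 0A for device 2AB regardless of its channel address on the port
symaccess -sid 1234 show view MV_HOSTNAME <<< the view output lists the dynamic LUN addresses the hosts will actually see (hypothetical view name)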


...enter Auto-Provisioning Groups

Auto-Provisioning introduces device management by joining groups of objects in a masking view. Let's take a look at the groups. In the examples shown we'll have a two-node ESXi cluster, each host with its own unique boot-from-SAN device plus a shared device accessible by both hosts. Host 1 will have four local devices and Host 2 will have one.


Initiator Groups contain the World Wide Port Name (WWPN) objects of the HBAs used by your host or cluster.

Auto-Provisioning: Initiator Group
Auto-Provisioning: Cascaded Initiator Group

  • a VMAX can contain up to 8192 initiator groups
  • cascaded initiator groups allow for groupings of initiator groups and can only go one deep - a really handy feature for clusters
  • cascaded initiator groups have a maximum of 1024 initiators total
  • a maximum of 32 Fibre Channel initiators or 8 iSCSI names, or a combination of both, per group
  • an initiator can be a direct member of only one group
  • an initiator group can be a member of more than one masking view
  • port flags can be set at the initiator group level
  • in general it's more efficient to group all HBAs from one host in an initiator group
  • the consistent_lun parameter ensures a device is assigned the same LUN address on every port it is presented to for that initiator group - it's generally what I use
  • akin to the Hosts folder on Navi/Unisphere Storage Groups
  • suggested naming conventions: IG_HOSTNAME or CIG_CLUSTERNAME

We'll first create the initiator groups unique to each ESXi host for the local devices, and then a cascaded initiator group for the shared-devices masking view that will present the cluster shared storage to both hosts.

symaccess -sid 1234 create -type initiator -name IG_ESXiNODE1 -consistent_lun <<< create the group and name it for the host
symaccess -sid 1234 -type initiator -name IG_ESXiNODE1 add -wwn 211100243581230d <<< add the WWN of the first HBA to the group for the host
symaccess -sid 1234 -wwn 211100243581230d set hba_flags on D -enable <<< toggle any port flags at the initiator level
symaccess -sid 1234 -type initiator -name IG_ESXiNODE1 add -wwn 211100243541230e
symaccess -sid 1234 -wwn 211100243541230e set hba_flags on D -enable
symaccess -sid 1234 create -type initiator -name IG_ESXiNODE2 -consistent_lun
symaccess -sid 1234 -type initiator -name IG_ESXiNODE2 add -wwn 211100243581123f
symaccess -sid 1234 -wwn 211100243581123f set hba_flags on D -enable
symaccess -sid 1234 -type initiator -name IG_ESXiNODE2 add -wwn 211100243541123e
symaccess -sid 1234 -wwn 211100243541123e set hba_flags on D -enable
symaccess -sid 1234 create -type initiator -name CIG_ESXiCLUS -consistent_lun <<< create the cascaded initiator group and name it (this is the parent object)
symaccess -sid 1234 -type initiator -name CIG_ESXiCLUS add -ig IG_ESXiNODE1 <<< add the initiator groups (these are the child objects)
symaccess -sid 1234 -type initiator -name CIG_ESXiCLUS add -ig IG_ESXiNODE2
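
If you want to sanity-check what you've built before moving on, the show and list forms of symaccess are handy (a hedged example - output varies a little between Solutions Enabler versions):

symaccess -sid 1234 list -type initiator <<< list all initiator groups on the array
symaccess -sid 1234 show CIG_ESXiCLUS -type initiator <<< show the cascaded group and its child initiator groups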


Port Groups contain front-end port objects from the array that are to be used by your host or cluster.


Auto-Provisioning: Port Group


There are generally two ways to implement port groups, which I term rigid or elastic. Rigid is where you configure a fixed number of port groups, each with a fixed number of ports. Elastic is where you have a defined or modelled port group layout but create a new port group for every host based on those definitions or models, and can grow or shrink these on a per-host basis by adding or removing ports. Each has pros and cons from a day-to-day management perspective, but my preference would be to go with the elastic approach.

  • a VMAX can contain up to 8192 port groups
  • the maximum number of ports in a group equals the maximum number of front-end ports in the VMAX - currently 128 for an 8-engine box
  • ports can be a member of more than one group which is handy if you decide to implement a port group per server or cluster
  • port groups can be a member of more than one masking view
  • for availability and performance port groups should contain a minimum of two ports from different directors, and with thin provisioning a minimum of four ports should be considered - remember that for platforms like ESXi, increasing the number of ports beyond four decreases the maximum number of LUNs the host can address as it is limited to 256 LUNs / 1024 paths
  • loosely akin to the advanced HBA registration in Navi/Unisphere Storage Groups
  • suggested naming conventions: PG_NAME (rigid) or PG_HOSTNAME (elastic) or PG_CLUSTERNAME (elastic)

In this example we will create three rigid port groups on this VMAX - two we'll use for the local device masking (each with four ports) and the third (with eight ports) for the cluster shared device masking.

symaccess -sid 1234 create -type port -name PG_01_4MEM <<< create the 4 member port group
symaccess -sid 1234 -type port -name PG_01_4MEM add -dirport 8E:0 <<< add ports to the group
symaccess -sid 1234 -type port -name PG_01_4MEM add -dirport 7E:0
symaccess -sid 1234 -type port -name PG_01_4MEM add -dirport 9E:1
symaccess -sid 1234 -type port -name PG_01_4MEM add -dirport 10E:1
symaccess -sid 1234 create -type port -name PG_02_4MEM
symaccess -sid 1234 -type port -name PG_02_4MEM add -dirport 8F:0
symaccess -sid 1234 -type port -name PG_02_4MEM add -dirport 7F:0
symaccess -sid 1234 -type port -name PG_02_4MEM add -dirport 9F:1
symaccess -sid 1234 -type port -name PG_02_4MEM add -dirport 10F:1
symaccess -sid 1234 create -type port -name PG_07_8MEM <<< create the 8 member port group
symaccess -sid 1234 -type port -name PG_07_8MEM add -dirport 8E:0 <<< add ports to the group
symaccess -sid 1234 -type port -name PG_07_8MEM add -dirport 7E:0
symaccess -sid 1234 -type port -name PG_07_8MEM add -dirport 9E:1
symaccess -sid 1234 -type port -name PG_07_8MEM add -dirport 10E:1
symaccess -sid 1234 -type port -name PG_07_8MEM add -dirport 8F:0
symaccess -sid 1234 -type port -name PG_07_8MEM add -dirport 7F:0
symaccess -sid 1234 -type port -name PG_07_8MEM add -dirport 9F:1
symaccess -sid 1234 -type port -name PG_07_8MEM add -dirport 10F:1
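
Again, a quick check that the port groups look the way you expect (same hedged show/list approach as before):

symaccess -sid 1234 list -type port <<< list all port groups on the array
symaccess -sid 1234 show PG_07_8MEM -type port <<< show the director:port members of the 8 member group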


Storage Groups contain device objects from the array that are to be presented to your host or cluster.


Auto-Provisioning: Storage Group

  • a VMAX can contain up to 8192 storage groups
  • the maximum number of devices per group is 4096
  • a device can be a member of more than one storage group
  • a storage group can be a member of more than one masking view
  • devices are assigned dynamic LUN addresses using the Dynamic LUN Addressing (DLA) feature
  • used by FAST DG and FAST VP to associate the content of a storage group with a tiering policy
  • akin to the LUNs folder on Navi/Unisphere Storage Groups
  • suggested naming conventions: SG_HOSTNAME or SG_CLUSTERNAME

In this example we have three storage groups: two for the unique local devices and one for the cluster shared devices.

symaccess -sid 1234 create -type storage -name SG_ESXiNODE1 <<< create the storage group for each host
symaccess -sid 1234 create -type storage -name SG_ESXiNODE2
symaccess -sid 1234 -type storage -name SG_ESXiNODE1 add dev 11A <<< add devices to the storage group
symaccess -sid 1234 -type storage -name SG_ESXiNODE1 add dev 11B
symaccess -sid 1234 -type storage -name SG_ESXiNODE1 add dev 11C
symaccess -sid 1234 -type storage -name SG_ESXiNODE1 add dev 11D
symaccess -sid 1234 -type storage -name SG_ESXiNODE2 add dev 222
symaccess -sid 1234 create -type storage -name SG_ESXiCLUS <<< create the storage group for the cluster shared devices
symaccess -sid 1234 -type storage -name SG_ESXiCLUS add dev 333 <<< add devices to the cluster shared devices storage group
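
And the same verification idea for storage groups:

symaccess -sid 1234 list -type storage <<< list all storage groups on the array
symaccess -sid 1234 show SG_ESXiNODE1 -type storage <<< confirm the devices in the group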


Masking Views knit these groups of objects together on the array and automatically perform the mapping of the devices in the storage group to the front-end ports in the port group and the masking of the devices in the storage group to the initiators in the initiator group.

Note that the masking view diagram below uses the same names as the SYMCLI examples.


Auto-Provisioning: Masking View


  • a single view can only contain one initiator group, one port group and one storage group
  • akin to Navi/Unisphere Storage Groups
  • suggested naming conventions: MV_HOSTNAME or MV_CLUSTERNAME

The final SYMCLI commands we need to run create the views. Here we have three: one for the local storage on each node and one for the cluster shared devices.

symaccess -sid 1234 create view -name MV_ESXiNODE1 -sg SG_ESXiNODE1 -ig IG_ESXiNODE1 -pg PG_01_4MEM <<< create the masking view for the host
symaccess -sid 1234 create view -name MV_ESXiNODE2 -sg SG_ESXiNODE2 -ig IG_ESXiNODE2 -pg PG_02_4MEM
symaccess -sid 1234 create view -name MV_ESXiCLUS -sg SG_ESXiCLUS -ig CIG_ESXiCLUS -pg PG_07_8MEM <<< create the masking view for the cluster shared devices
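
Once the views are created it's worth confirming the mapping and masking they performed - a quick check along these lines:

symaccess -sid 1234 list view <<< list all masking views on the array
symaccess -sid 1234 show view MV_ESXiCLUS <<< show the initiator, port and storage groups behind the cluster view along with the assigned LUN addresses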

New Storage Presentation and Reclamation

So you've configured your masking views, your cluster is in production, and you get a request to present more storage. With Auto Provisioning, to present additional devices all you have to do is add the device to the storage group and Auto Provisioning will automatically map and mask it for you.

symaccess -sid 1234 -type storage -name SG_ESXiCLUS add dev 786  <<< new device presented to all cluster nodes in one command

Storage Reclamation is also fairly straightforward. If you need to remove a device or a number of devices from a host or cluster you simply remove the device from the storage group and you have the option to unmap in the same command.

symaccess -sid 1234 -type storage -name SG_ESXiCLUS remove dev 786 -unmap  <<< device removed from all cluster nodes and unmapped from all front-end ports
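
And when a host or cluster is decommissioned entirely, the view and its groups can be torn down too - a rough sketch, assuming the delete forms below behave the same on your Solutions Enabler version (check the symaccess man page before running this against production):

symaccess -sid 1234 delete view -name MV_ESXiCLUS -unmap <<< remove the masking view and unmap its devices from the front-end ports
symaccess -sid 1234 delete -type storage -name SG_ESXiCLUS <<< then tidy up the groups that are no longer needed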


Further information on Auto Provisioning and Dynamic LUN Addressing can be found @ http://support.emc.com/

1 comment:

  1. symaccess -sid 1234 -type storage -name SG_ESXiCLUS remove dev 786 -unmap <<< device removed from all cluster nodes and unmapped from all front-end ports
     Is there any way we can remove device 786 from a child storage group only, not from the parent or other child groups?


