Expanding an Xsan volume

05 Dec 2023

We just expanded a volume for a customer, adding twice their existing storage.  Apple does have a short section on the process in the Xsan Management Guide for Mac v1.2, but there are several other things to consider when doing this.

The first is sizing.  When expanding a volume, you are adding storage pools.  Ideally, for performance, all the storage pools should be the same size.  In this case, the existing volume was a pair of Infortrend head nodes and JBODs, and we were adding four more head node and JBOD sets, so it was easy to make our new storage pools match the existing ones.  If the existing volume is built on older storage, matching pool sizes may be harder.

Another concern is labeling the LUNs so we can specify which go into which storage pool.  This is important to ensure a storage pool isn’t spread across multiple RAID controllers.  The volume will have multiple pools, so the volume as a whole spans RAID controllers, but we want each storage pool to use the fewest RAID controllers possible for its number of LUNs.  A foolproof way to do this is to only bring one LUN online on the fibre fabric at a time.  With Infortrend storage, this means creating a Host LUN Mapping for the Volume.  We use the default mapping to all fibre interfaces and then use Fibre Channel zoning on the fibre switch to control which clients see which LUNs.  Once a LUN map is in place, an MDC generally needs to be restarted to see it.  By default you will get a Finder warning about an unreadable disk; click Ignore.

We covered the labeling process in Create a new Xsan on macOS Monterey.  The short version: for each LUN, run `sudo cvlabel -c > ~/Desktop/labels.txt`, edit the file to change `CvfsDisk_UNKNOWN` to a useful name, delete all the other lines, save the file, and then run `sudo cvlabel ~/Desktop/labels.txt`, following the prompts to apply the new label.
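As a rough sketch of one labeling pass (the device path and the exact fields cvlabel prints are illustrative and vary by version; Prod_Data9 is one of the LUN names from this project):

sudo cvlabel -c > ~/Desktop/labels.txt
# labels.txt will contain one line per visible LUN, something like:
#   CvfsDisk_UNKNOWN /dev/rdisk4 # ... sectors ...
# Change CvfsDisk_UNKNOWN to the name you want (e.g. Prod_Data9),
# delete every other line, save, then apply the label:
sudo cvlabel ~/Desktop/labels.txt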

Now that all the LUNs are on the fibre fabric, we need to make sure all the relevant computers can see them.  Using your fibre switch interface, add the LUNs to the proper zones for the MDCs and any volume clients.  For us this means creating aliases for each head node, adding those aliases to each connected computer’s zone, and enabling the new zone config. 
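The exact steps depend on your switch vendor and interface.  Purely as an illustrative sketch, on a Brocade fabric the alias and zone work might look roughly like this (the alias, WWPN, zone, and config names here are all made up for the example):

alicreate "Infortrend_Head5_P0", "21:00:00:aa:bb:cc:dd:01"
zoneadd "MDC1_zone", "Infortrend_Head5_P0"
zoneadd "Client1_zone", "Infortrend_Head5_P0"
cfgsave
cfgenable "Production_cfg"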

We will now create the command to expand the volume (but not run it yet!).  The command is `sudo xsanctl editVolume <volumeName> --storagePool <poolName> --addLUNs <lunName> <lunName> --storagePool ...`.  There will be a `--storagePool ...` section for each storage pool being added, and each of those will need an `--addLUNs ...` section for the makeup of that storage pool.  You can also use `--addLUN <lunName> --addLUN <lunName> ...` to be more explicit about it if you want.  Our command ended up being `sudo xsanctl editVolume Production --storagePool DataPool5 --addLUN Prod_Data9 --addLUN Prod_Data10 --storagePool DataPool6 --addLUN Prod_Data11 --addLUN Prod_Data12 --storagePool DataPool7 --addLUN Prod_Data13 --addLUN Prod_Data14 --storagePool DataPool8 --addLUN Prod_Data15 --addLUN Prod_Data16 --storagePool DataPool9 --addLUN Prod_Data17 --addLUN Prod_Data18 --storagePool DataPool10 --addLUN Prod_Data19 --addLUN Prod_Data20 --storagePool DataPool11 --addLUN Prod_Data21 --addLUN Prod_Data22 --storagePool DataPool12 --addLUN Prod_Data23 --addLUN Prod_Data24`.  This added 8 storage pools to the existing 4, each pool made of 2 LUNs, matching the existing setup.
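Here is the same command broken across lines with shell continuations, which is easier to proofread before running (the pool and LUN names are specific to this volume):

sudo xsanctl editVolume Production \
  --storagePool DataPool5 --addLUN Prod_Data9 --addLUN Prod_Data10 \
  --storagePool DataPool6 --addLUN Prod_Data11 --addLUN Prod_Data12 \
  --storagePool DataPool7 --addLUN Prod_Data13 --addLUN Prod_Data14 \
  --storagePool DataPool8 --addLUN Prod_Data15 --addLUN Prod_Data16 \
  --storagePool DataPool9 --addLUN Prod_Data17 --addLUN Prod_Data18 \
  --storagePool DataPool10 --addLUN Prod_Data19 --addLUN Prod_Data20 \
  --storagePool DataPool11 --addLUN Prod_Data21 --addLUN Prod_Data22 \
  --storagePool DataPool12 --addLUN Prod_Data23 --addLUN Prod_Data24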

Now that we know what we are going to run, we will back up the current config and stop the volume.  To create the backup, we can run `cd /Library/Preferences; sudo tar cvf ~/Desktop/mdc.tar Xsan`.  Then, to stop the volume, try `sudo xsanctl stopVolume <volumeName>` for each volume that will be expanded.  In our environment, we have some clients that like to hold onto the volume and this command fails (partially our own fault for having automount launch daemons).  We can force it with `sudo cvadmin -F $xsan_volume -e "clientunmount hard 1 1"`, but we chose to shut down the Xsan clients.  Once the volume is stopped, we can run the command we planned above.  `cvfsck` will be run on the volume and you will see an ASCII progress bar and status message that will eventually look like:

Step:  1 of  1
Step Description: COMPLETED
[|||||||||||||||||||||||||||||||||||||||100%|||||||||||||||||||||||||||||||||||]

At this point, you can start the volume again with `sudo xsanctl startVolume <volumeName>` and reboot your clients.
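To sanity-check the expansion, you can list the volume’s stripe groups and check the new capacity.  A minimal sketch, assuming a volume named Production mounted at /Volumes/Production:

# List stripe groups; the new storage pools (DataPool5 through DataPool12) should appear
sudo cvadmin -F Production -e "show"
# Confirm the mounted volume reflects the added capacity
df -h /Volumes/Production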


Eric Hemmeter