Posts Tagged ‘force’

Force10 MXL: Firmware Upgrade

March 19, 2015

Uploading new firmware image

MXL switches keep two firmware images – A and B. You can set either one of them to be active. Use the following command to list the firmware versions of all stack members and see which image is active:

 # show boot system stack-unit all


To upload firmware you will need a TFTP server. You can use TFTPD64 (also known as TFTPD32), which can be downloaded from Philippe Jounin's page here.
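
If you'd rather serve the image from a Linux box instead, a minimal sketch with tftpd-hpa looks something like this (the package name and the /srv/tftp root directory are distro-dependent assumptions, so check your distribution's defaults):

$ sudo apt-get install tftpd-hpa
$ sudo cp FTOS-XL-9.5.0.0P2.bin /srv/tftp/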

If the active firmware image is A, upload the new firmware to B. You'll also be asked whether you want to upload the firmware to all switches in the stack.

# upgrade system tftp://10.0.0.1/FTOS-XL-9.5.0.0P2.bin b:


At the time of the upgrade the latest version was 9.7, but it was only two weeks old and wasn't recommended for production use. Version 9.6 had a major bug. As a result, version 9.5SP2 was chosen for the upgrade.

Double-check that the new firmware has been successfully distributed to all switches:

 # show boot system stack-unit all


Backing up startup config

Make sure you do not miss this step. If something goes wrong and the switch loses its config, you'll have to recreate the whole configuration from scratch. Imagine the consequences.

# copy start tftp://10.0.0.1/MXL_01.01.15.conf
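
It may also be worth keeping a copy of the running config in case it differs from the startup config. Assuming the same Cisco-like copy syntax used above, and with an example file name, that would be something like:

# copy running-config tftp://10.0.0.1/MXL_run_01.01.15.conf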

Reloading the stack

Once the firmware is uploaded and the config is saved, reload the stack. Be mindful that this is a disruptive procedure and all links connected to the stack will go down. A reboot shouldn't take more than a couple of minutes, but make sure you do it after hours.

# conf t
# boot system stack-unit all primary system b:
# exit
# copy run start
# reload
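
Once the stack comes back up, a quick sanity check doesn't hurt: show version reports the running FTOS release, and the same show boot command used earlier shows which image each unit booted from:

# show version
# show boot system stack-unit all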

Confirm that the active firmware image is now B, and that concludes the switch stack upgrade.


How to move aggregates between NetApp controllers

September 25, 2013


DISCLAIMER: I ACCEPT NO RESPONSIBILITY FOR ANY DAMAGE OR CORRUPTION OF DATA THAT MAY OCCUR AS A RESULT OF CARRYING OUT THE STEPS DESCRIBED BELOW. YOU DO THIS AT YOUR OWN RISK.

 

We had an issue with high CPU usage on one of the NetApp controllers serving a couple of NFS datastores to a VMware ESX cluster. The HA pair of FAS2050s had two shelves, both of them owned by the first controller. The obvious solution for us was to reassign the disks from one of the shelves to the other controller to balance the load. But how do you do this non-disruptively? Here is the plan.

In our setup we had two controllers (filer1, filer2) and two shelves (shelf1, shelf2), both assigned to filer1, with two aggregates, each on its own shelf (aggr0 on shelf1, aggr1 on shelf2). Say we want to reassign the disks from shelf2 to filer2.

The first step is to migrate all of the VMs from shelf2 to shelf1, because the operation is obviously disruptive to the hosts accessing data on the target shelf. Once all VMs are evacuated, offline all volumes and the aggregate to prevent any data corruption (you can't take an aggregate offline from the online state, so change it to restricted first).
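
Assuming 7-Mode syntax, and with vol1 and vol2 as placeholder volume names, the sequence looks roughly like this:

filer1> vol offline vol1

filer1> vol offline vol2

filer1> aggr restrict aggr1

filer1> aggr offline aggr1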

If you prefer to reassign the disks in two steps, as described in NetApp Professional Services Tech Note #021: Changing Disk Ownership, don't forget to disable automatic ownership assignment on both controllers, otherwise the disks will be assigned back to the same controller right after you unown them:

> options disk.auto_assign off

This is not necessary if you change ownership in one step, as shown below.

The next step is to actually reassign the disks. Since they are already part of an aggregate, you will need to force the ownership change:

filer1> disk assign 1b.01.00 -o filer2 -f

filer1> disk assign 1b.01.01 -o filer2 -f

filer1> disk assign 1b.01.nn -o filer2 -f

If you do not force the disk reassignment, you will get an error:

Assign request failed for disk 1b.01.0. Reason:Disk is part of a failed or offline aggregate or volume. Changing its owner may prevent aggregate or volume from coming back online. Ownership may be changed only by using the appropriate force option.

When all the disks have been moved across to filer2, the new aggregate will show up in the list of aggregates on filer2 and you'll be able to bring it online. If you can't see the aggregate, force the filer to rescan the drives by running:

filer2> disk show
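
Assuming the aggregate keeps its original name after the move (ONTAP may rename it if filer2 already has an aggregate with the same name), bringing it online is a single command:

filer2> aggr online aggr1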

The old aggregate will still be seen in the list on filer1. You can safely remove it:

filer1> aggr destroy aggr1
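
And if you disabled automatic ownership assignment earlier, remember to turn it back on on both controllers:

> options disk.auto_assign on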