
How to Configure VNX Unisphere Domains

March 7, 2016

[Image: unisphere_domain]

VNX storage arrays have a concept of Unisphere Domains, which lets you manage multiple arrays from one Unisphere GUI. To manage two or more arrays from a single pane of glass you need to join their storage domains. There are typically two scenarios:

  1. Joining arrays of the same generation, such as VNX to VNX or Clariion to Clariion (VNX1 and VNX2 are considered one generation)
  2. Joining arrays of different generations, such as Clariion to VNX
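Before joining anything, it's worth checking which domain each array currently belongs to. If you have Navisphere Secure CLI installed, a quick way to do this (the SP IP below is a placeholder for your array's Service Processor; add -User/-Password/-Scope flags or a Navisphere security file for authentication):

# naviseccli -h 192.168.1.10 domain -list

This lists all systems in the domain the target array is a member of.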

Same generation arrays

When joining same generation arrays into a single domain you get the benefit of consistent domain-wide settings across all arrays in the domain, such as DNS, NTP, LDAP and Global Users. If you go to the Unisphere Home screen and click on the Domain button you will find where all domain-wide settings are configured. Once set up, these settings propagate to all systems within the domain.

[Image: domain_settings]

There’s also a concept of a Domain Master, which keeps and distributes the domain-wide settings. The Domain Master can be changed manually if you wish to do so, using the Select Domain Master wizard.

To add a new system to the same domain simply click on Add/Remove Systems and follow the wizard.
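The same wizard actions are also exposed through the Navisphere CLI, which is handy for scripting. A hedged sketch (the IPs are placeholders, and the exact switches can differ between FLARE/VNX OE releases, so verify with naviseccli's built-in help first):

# naviseccli -h 192.168.1.10 domain -add 192.168.1.20
# naviseccli -h 192.168.1.10 domain -setmaster 192.168.1.20

The first command joins the array at 192.168.1.20 to the domain; the second promotes it to Domain Master.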

Different generation arrays

It’s very uncommon to see a Clariion these days, but if you still have one and want to have a single management interface across both your Clariion and VNX arrays you have to use Multi-Domains. You won’t get the benefit of having the same domain-wide settings, but if you have just 2 or 3 arrays it’s not really that hard to set them up manually.

To add a new domain to a Multi-Domain configuration click on Manage Multi-Domain Configurations, specify the VNX Service Processor IP and assign a name. The system will be added to the list of Selected Domains.

[Image: add_vnx]

Always add the other domain to the Multi-Domain configuration from the system running the highest release of Unisphere within the Multi-Domain; otherwise you’ll get the following error:

This version of user interface software does not support the management server software versions on the provided system.

[Image: add_vnx_error]
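To find out which software release each array is running before picking the system to add from, you can query the management agent (the SP IP is again a placeholder):

# naviseccli -h 192.168.1.10 getagent

The Revision field in the output shows the array's software version.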

Once the system is added you will see both arrays in the systems list and will be able to manage both from one Unisphere interface. For the sake of demonstration I used two VNX arrays in the screenshot below, but the same process applies to Clariion arrays.

[Image: two_arrays]

Local and global users

Unisphere has two types of user accounts – local and global. A local account can manage only the system you have connected to, while a global account can manage all systems within the same domain.

By default, when an array is installed, global security is initialized and one global user is created. There are no local user accounts on the system by default, which is fine, because each array is created as a member of its own local domain.

In a Multi-Domain configuration you need to make sure you’re logging in to Unisphere with an account that exists in every domain being managed. Otherwise, each time you log in to Unisphere you will have to manually log in to the remote domain on the domain tab, which is quite annoying.

If you have different accounts on each of the arrays, make them consistent across all systems.
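Account management is also available from the CLI. A sketch of listing the existing users and creating a global account (the username, password and role are examples, and the accepted values for -scope and -role differ between releases, so double-check against naviseccli's help output before running):

# naviseccli -h 192.168.1.10 security -list
# naviseccli -h 192.168.1.10 security -adduser -user admin2 -password Password123 -scope global -role administrator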

[Image: domain_login]

Conclusion

In this post we went through the Unisphere single domain and multi-domain configurations in a few simple steps. If you want to know more about Unisphere domain management, refer to the EMC white paper “Domain Management with EMC Unisphere for VNX”.


Force10 MXL Switch: Stacking

March 3, 2015

Overview

There are two typical scenarios for stacking MXLs – within the chassis and across chassis. In both cases it’s recommended to use a ring topology. Daisy chaining is also supported, but not desirable because of the lack of redundancy.

In this post I will be describing the more common case, which is intra-chassis stacking. For inter-chassis stacking configuration you can refer to Dell or Force10 documentation.

Cabling

[Image: dell_chassis]

In my case I have four MXL switches in bays A1, B1, B2 and A2. Cabling is simple: you daisy chain all the switches and then plug the last switch into the first one, closing the ring.

Stack roles and unit numbers

When the stack is built, each switch is assigned an ID starting from 0 and a role in the stack. There are three roles: Master, Standby and Member:

  • Master – the switch you’ll use for all configuration. If you currently have IPs assigned to all your MXL switches, all of them except one will be reset and only the Master will be accessible via SSH.
  • Standby – the switch that takes over if the Master fails. The Master’s IP address is transferred to the Standby in a failover scenario and the stack continues to be managed via the same IP.
  • Member – provides port capacity and doesn’t play any additional role in the stack.

When you plug the cables in, assign stack ports and restart the switches, they go through an election process and automatically pick up roles as well as IDs. The switches follow an algorithm that assigns stack IDs and roles, but it has nothing to do with the interconnect bay IDs in the chassis or the order in which you cable the switches. You end up with pretty much random numbering.
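You can see what the election came up with by checking the stack summary, which shows each unit’s ID and its role (Management, Standby or Member):

# show system brief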

If order matters, you’ll have to reboot the switches one by one in a particular order to have the desired IDs assigned. In that case IDs are assigned sequentially in a controlled fashion.
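On the FTOS releases I’ve worked with there are also commands to renumber a unit and to influence the Master election directly; treat the following as a sketch and verify the syntax against your documentation, since renumbering reloads the unit:

# stack-unit 0 renumber 2
# conf
# stack-unit 2 priority 14

The renumber command runs from EXEC privilege mode and changes unit 0 into unit 2, while the priority command is a configuration mode setting – the unit with the highest priority wins the next Master election.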

Stack configuration

If you don’t have any additional 40GbE modules in slots 0 and 1, then you’ll end up with two QSFP+ ports on the built-in module – ports 33 and 37 (refer to my Force10 MXL Switch: Port Numbering post for port numbering details). All you need to do is designate them as stack ports on all switches, save the config and reboot.

# stack-unit 0 stack-group 0
# stack-unit 0 stack-group 1
# copy run start
# reload

By default each switch is unit 0 in its own stack, and a stack-group is basically just a 40GbE stack port. You can have a maximum of six such ports, numbered from 0 to 5. To check that the stack ports have been enabled run:

# do show system stack-unit 0 stack-group configured

[Image: enabled_ports]

It could be that your 40GbE ports are in quad 10GbE mode and are not shown. You’ll need to convert them back to 40GbE mode to proceed. To show the list of available ports, type in the command below. Note that the switch shows empty expansion slots as stack ports as well (ports 0/41 and 0/45), which is a bit confusing.

# show system stack-unit 0 stack-group

[Image: port_list]
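If a port does turn out to be in quad 10GbE mode, converting it back to 40GbE is done from configuration mode followed by a reload. A sketch for built-in port 33 (check the exact syntax for your FTOS release):

# no stack-unit 0 port 33 portmode quad
# copy run start
# reload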

After a reboot, the switches will join the stack and get a role and an ID. This process is automatic by default. To see if the stack ports have come up after a reboot, type:

# show system stack-port status

[Image: stack_up]

Conclusion

In my example I let the switches go through the election process and select roles and IDs on their own. If you want to control the assignment process, refer to Dell and Force10 documentation for instructions.

Now you may wonder: if unit IDs are assigned automatically, how do you know which stack unit corresponds to which chassis bay? The trick is to pull the system inventory and map the units by Service Tag, which is also shown in the Chassis Management Controller:

# show system brief
# show inventory