Posts Tagged ‘Filer’

Present NetApp iSCSI LUN to Linux host

March 7, 2012

Consider the following scenario (which is in fact a real case). You have a High Performance Computing (HPC) cluster where users generate a hell of a lot of research data. The local hard drives on the frontend node are almost never enough. There are two options. The first is presenting an NFS share to both the frontend and all compute nodes. Since compute nodes usually connect only to the private network for communication with the frontend and don't have public IP addresses, this means a lot of reconfiguration, not to mention possible security implications.

The simpler solution here is to use iSCSI. Unlike the NFS option, which requires direct connectivity from every node, with iSCSI you mount a LUN on the frontend, and the compute nodes then work with it as an ordinary NFS share over the private network. This boils down to configuring an iSCSI LUN on the NetApp filer and bringing up an iSCSI initiator in Linux.

iSCSI configuration consists of several steps. First of all you need to create a FlexVol volume where your LUN will reside, and then create a LUN inside of it. The second step is creating an initiator group, which enables connectivity between the NetApp and a particular host. And as the last step you map the LUN to the initiator group, which lets the Linux host see this LUN. In case you have disabled iSCSI, don't forget to enable it on the required interface.

vol create scratch aggrname 1024g
lun create -s 1024g -t linux /vol/scratch/lun0
igroup create -i -t linux hpc
igroup add hpc linux_host_iqn
lun map /vol/scratch/lun0 hpc
iscsi interface enable if_name
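
To double-check the filer side before moving to the host, the standard show commands will do (hpc is the igroup created above):

lun show -m
igroup show hpc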

Linux host configuration is simple. Install the iscsi-initiator-utils package and add it to init so it starts on boot. The iSCSI IQN which the OS uses for connecting to iSCSI targets is read from /etc/iscsi/initiatorname.iscsi upon startup. After the iSCSI initiator is up and running you need to initiate the discovery process, and if everything goes fine you will see a new hard drive in the system (I had to reboot). Then you just create a partition, make a file system and mount it.
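
On a RHEL/CentOS-style frontend the install and startup part would look roughly like this (package and service names are the usual ones for that family; adjust for your distribution):

yum install iscsi-initiator-utils
chkconfig iscsi on
service iscsi start
cat /etc/iscsi/initiatorname.iscsi

The last command prints the IQN that goes into the igroup on the filer.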

iscsiadm -m discovery -t sendtargets -p nas_ip
fdisk /dev/sdc
mke2fs -j /dev/sdc1
mount /dev/sdc1 /state/partition1/home
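
If you want the mount to come back after a reboot, an fstab entry along these lines is a common approach (the device name is the one from above and may differ on your system; _netdev delays the mount until the network and iSCSI services are up):

/dev/sdc1  /state/partition1/home  ext3  _netdev,defaults  0 0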

I use it for the home directories in the ROCKS cluster suite. ROCKS automatically exports /home through NFS to the compute nodes, which in turn mount it via autofs. If you intend to use this volume for other purposes, you will need to configure your own NFS export.
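
A custom export would be a sketch like the following entry in /etc/exports on the frontend (the path and private subnet are placeholders), followed by exportfs -a to apply it:

/state/partition1/scratch 10.1.0.0/255.255.0.0(rw,async,no_root_squash)
exportfs -a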

Random DC pictures

January 19, 2012

Several pictures of server room hardware with no particular topic.


10kVA APC UPS.

The UPS's Network Management Card (NMC), with a temperature sensor, connected to the LAN.

Here you can see the battery extenders (white plugs). They allow the UPS to support a 5kVA load for 30 minutes.

Two Dell PowerEdge 1950 servers with 8 cores and 16GB RAM each, configured as a VMware High Availability (HA) cluster.

Each server uses three virtual LANs. Each virtual LAN has its own NIC, which in turn has a multi-path connection to the Cisco switches over two cables, six cables per server in total.

Two Cisco switches which provide the LAN connections for the NetApp filers, Dell servers, Sun tape library and the APC NMC card. The two switches are tied together by an optical cable, and the uplink is a 2Gb/s trunk.

HP rack with 9 HP ProLiant servers, HP autoloader and MSA 1500 storage.

HP autoloader with 8 cartridges.

HP MSA 1500 storage, which is entirely Fibre Channel.

Hellova cables.

Sun StorageTek SL500

October 22, 2011

I took several pictures of the tape library which serves as the primary backup facility in our data centre. Here is what you can achieve with this library in its maximum configuration:

  • 18 drives with more than 9TB/hour throughput.
  • Up to 575 cartridges and 862TB of uncompressed storage.

Our configuration is rather small: 2 drives and 45TB of storage.




Here you can see how the robotic arm performs the library inventory by reading barcodes with an infrared scanner:


Both tape drives and the robot are connected to the NetApp filer with SCSI cables. All data is backed up from the disk shelves directly to the tape library via the NDMP protocol. There is no need to feed the data through a backup server, which eliminates any LAN congestion.




Here are the connections to the NetApp:


Security on NetApp Filer

October 9, 2011

Storage systems usually store data critical for the organization: databases, mailboxes, employee files, etc. Typically you don't provide access to the NAS from the Internet. If the Filer has a real IP address to provide CIFS or NFS access inside the organization, you can simply block all incoming connections from the outside world on the border firewall. But what if a network engineer messes up the firewall configuration? If you don't take even simple security measures, all your organization's data is at risk.

Here I'd like to describe the basic means of securing a NetApp Filer:

  • Disable rsh:

options rsh.enable off

  • Disable telnet:

options telnet.enable off

  • Restrict SSH access to particular IP addresses. Take into consideration that if you have enabled AD authentication, the Administrator user and Administrators group will implicitly have access to SSH.

options ssh.access host=ip_address_1,ip_address_2

  • You can configure the Filer to allow file access via the HTTP protocol. If you don't have an HTTP license or you don't use HTTP, then disable it:

options httpd.enable off

  • Even if you don't have an HTTP license you can still access the NetApp FilerView web interface to manage the Filer. You can access it via SSL or a plain connection; obviously SSL is more secure:

options httpd.admin.enable off

options httpd.admin.ssl.enable on

  • Restrict access to FilerView:

options httpd.admin.access host=ip_address_1,ip_address_2

  • If you don’t use SNMP then disable it:

options snmp.enable off

  • I'm using NDMP to back up the Filers' data. It's done through a virtual network. I restrict NDMP to work only between the Filers (we have two of them) and the backup server, and only through a particular virtual interface:

On Filer1:

options ndmpd.access "host=backup_server_ip,filer2_ip_address AND if=interface_name"

options ndmpd.preferred_interface interface_name

On Filer2:

options ndmpd.access "host=backup_server_ip,filer1_ip_address AND if=interface_name"

options ndmpd.preferred_interface interface_name

  • Disable other services you don’t use:

options snapmirror.enable off

options snapvault.enable off

  • The module responsible for SSH and FilerView SSL connections is called SecureAdmin. You probably won't need to configure it since it's enabled by default. You can verify that SSH2 and SSL connections are enabled with:

secureadmin status
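
If either of them shows up as inactive, SecureAdmin can be initialized with its interactive setup (it generates the SSH host keys and a self-signed SSL certificate):

secureadmin setup ssh
secureadmin setup ssl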

  • Make sure all built-in users have strong passwords. You can list the built-in users with:

useradmin user list

  • By default the Filer has home directory CIFS shares for all users. If you don't use them, disable them by deleting:

/etc/cifs_homedir.cfg

  • The Filer also has ETC$ and C$ default shares. I'd highly recommend restricting access to these shares to the local Filer Administrator user only. In fact, if you have enabled AD authentication, the domain Administrator user and Administrators group will also implicitly have access to these shares, even if you don't specify them in the ACL. Delete all existing permissions and add:

cifs access etc$ filer_system_name\Administrator Full Control
cifs access c$ filer_system_name\Administrator Full Control
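
The existing entries can be removed with cifs access -delete. Which entries you actually have to delete depends on what cifs shares reports for your Filer, so the group name below is only an example:

cifs shares etc$
cifs access -delete etc$ everyone
cifs access -delete c$ everyone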

Basically this is it. Now you can say that you know how to configure simple NetApp security.

Mad IT workdays

February 10, 2010

Today I needed to get Symantec Storage Exec to work with a NetApp filer. This software lets you enforce file blocking and allocation policies on the filer's volumes.

I spent the whole day resolving numerous problems while integrating them:

  1. When I was trying to install Symantec, the installer said that it had been interrupted and the installation had to be rolled back. I couldn't find ANY information regarding this issue anywhere on the Internet. I found posts describing similar problems with other Symantec products, but they didn't help. Then somehow I found the installation log and searched for lines from it. Finally I ran into this solution: http://seer.entsupport.symantec.com/docs/284901.htm. So the Symantec uninstaller, for some damn reason, left keys in the registry, and the installer couldn't install the product a second time because of its own leftovers.
  2. Then I got "Can't connect to host (err=10061)" after adding the filer to the list of managed appliances. This link (http://seer.entsupport.symantec.com/docs/326973.htm) says I need to enable HTTP access. What? We don't even have an HTTP license. After half an hour of playing with the filer's configuration options I found out that it's not access to the httpd server, it's access to the magic filer administration area, which is governed by the httpd.admin.enable option (don't forget to also add the Storage Exec server IP to httpd.admin.access).
  3. The next error was "HTTP POST authorization failed" in Storage Exec and "HTTP XML Authentication failed" on the filer side. It turned out that I also needed a filer user with the same user name and password as the account under which Storage Exec runs. This user should be in the filer's Administrators group. The filer-side commands for points 2 and 3 are sketched after this list.
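
For reference, the filer-side part of points 2 and 3 boils down to roughly the following (the IP address and user name are placeholders; the local account must match the Windows account Storage Exec runs under):

options httpd.admin.enable on
options httpd.admin.access host=storage_exec_server_ip
useradmin user add storage_exec_user -g Administrators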

Symantec's documentation doesn't say a word about any of this. It says nothing about access to the filer's administrative area or the required user accounts. You have to figure it all out by yourself. I think Symantec's docs leave too much to be desired, and that's the mildest way to put it. There is also very little information about Storage Exec on the Internet. It seems that not many people are actually using it.