Posts Tagged ‘NAS’

Scripted CIFS Shares Migration

March 8, 2018

I don’t usually blog about Windows Server and Microsoft products in general, but the need for file server migration comes up in my work quite frequently, so I thought I’d make a quick post on that topic.

There are many use cases: it can be a migration from a NAS storage array to a Windows Server, or between an on-premises file server and the cloud. Every such migration involves copying data and recreating shares. Doing it manually is almost impossible unless you have only a handful of shares. If you want to replicate all NTFS and share-level permissions consistently from source to destination, scripting is almost the only way to go.

Copying data

I'm sure there are plenty of tools that can perform this task accurately and efficiently. But if you don't have any special requirements, such as data-in-transit encryption, Robocopy is probably the simplest tool to use. It comes with every Windows Server installation and, starting from Windows Server 2008, supports multithreading.

Below are the command line options I use:

robocopy \\file_server\source_folder D:\destination_folder /E /ZB /DCOPY:T /COPYALL /R:1 /W:1 /V /TEE /MT:128 /XD "System Volume Information" /LOG:D:\robocopy.log

Most of them are common, but there are a few worth pointing out:

  • /MT – use multithreading; by default each Robocopy process runs 8 threads. If you're dealing with lots of small files, this can significantly improve performance.
  • /R:1 and /W:1 – Robocopy doesn't copy locked files, to avoid data inconsistencies. The default behaviour is to keep retrying until the file is unlocked. That's important for the final data synchronisation (see the sketch after this list), but for data seeding I recommend one retry and a one-second wait to avoid unnecessary delays.
  • /COPYALL and /DCOPY:T will copy all file and directory attributes and permissions, as well as timestamps.
  • /XD "System Volume Information" is useful if you're copying an entire volume. If you don't exclude the System Volume Information folder, you may end up copying deduplication and DFSR data, which, in addition to wasting disk space, will break these features on the destination server.
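
For the final synchronisation pass before cutover, here is a sketch of what it might look like (same paths as above; /MIR instead of /E so that files deleted on the source are also removed from the destination, and more generous retry settings, since by that point users should no longer be working on the source):

robocopy \\file_server\source_folder D:\destination_folder /MIR /ZB /DCOPY:T /COPYALL /R:3 /W:30 /V /TEE /MT:128 /XD "System Volume Information" /LOG:D:\robocopy_final.log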

Robocopy is typically scheduled to run at certain times of the day, preferably after hours. You can put it in a batch script and schedule it using Windows Task Scheduler. Just keep in mind that if you configure the job to stop after running for a certain number of hours, Task Scheduler will stop only the batch script, while the Robocopy process will keep running. As a workaround, you can schedule another job with the following command to kill all Robocopy processes at a certain time of day, say 6 a.m.:

taskkill /f /im robocopy.exe
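
Both jobs can be created from the command line with schtasks. A minimal sketch, assuming the seeding command lives in a hypothetical D:\scripts\robocopy_seed.cmd, starts at 10 p.m. and gets killed at 6 a.m.:

schtasks /create /tn "Robocopy Seeding" /tr "D:\scripts\robocopy_seed.cmd" /sc daily /st 22:00 /ru SYSTEM
schtasks /create /tn "Robocopy Kill" /tr "taskkill /f /im robocopy.exe" /sc daily /st 06:00 /ru SYSTEM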

Duplicating shares

For copying CIFS shares I've been using the "sharedup" utility from EMC's "CIFS Tools" collection. To get the tool, register a free account on https://support.emc.com. You can do that even if you're not an EMC customer and don't own an EMC storage array. From there you will be able to search for and download CIFS Tools.

If your source and destination file servers are completely identical, you can use sharedup to duplicate CIFS shares in one command. But that's rarely the case: often you want to exclude some of the shares, or change paths because your disk drives or directory structure have changed. Sharedup supports input and output file command line options, so you can generate a list of shares first, edit it, and then import the shares to the destination file server.

To generate the list of shares, first run:

sharedup \\source_server \\destination_server ALL /SD /LU /FO:D:\shares.txt /LOG:D:\sharedup.log

The resulting file will have records similar to this:

#
@Drive:E
:Projects ;Projects ;C:\Projects;
#
@Drive:F
:Home;Home;C:\Home;

Delete the shares you don't want to migrate and update the target paths from C:\ to wherever your data actually is. Don't change the "@Drive:E" headers; they specify the location of the source share, not the destination. It's also worth noting that you won't see permissions listed anywhere in this file: it lists shares and share paths only, and permissions are checked and copied at runtime.

Once you’re happy with the list, use the following command to import shares to the destination file server:

sharedup \\source_server \\destination_server ALL /R /SD /LU /FI:D:\shares.txt /LOG:D:\sharedup.log

For server-local users and groups, sharedup will check whether they exist on the destination. So if you run into errors similar to the following, make sure to first create those groups on the destination file server:

10:13:07 : WARNING : The local groups "WinRMRemoteWMIUsers__" and "source_server_WinRMRemoteWMIUsers__" do not exist on the \\destination_server server !
10:13:09 : WARNING : Please use lgdup utility to duplicate the missing local user(s) or group(s) from \\source_server to \\destination_server.
10:13:09 : WARNING : Unable to initialize the Security Descriptor translator
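
If you'd rather create the missing groups by hand instead of using lgdup, a quick sketch run on the destination server would be (group names taken from the warning above):

net localgroup "WinRMRemoteWMIUsers__" /add
net localgroup "source_server_WinRMRemoteWMIUsers__" /add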

Conclusion

I created this post as a personal how-to note, but I'd love to hear whether it's helped anyone else. And if you have better tool suggestions for this task, please let me know!


Jumbo Frames justified?

March 27, 2012

When it comes to VMware on NetApp, boosting performance by implementing Jumbo Frames is always taken into consideration. However, it's not clear whether it really has any significant impact on latency and throughput.

Officially, VMware doesn't support Jumbo Frames for NAS and iSCSI. This means that using Jumbo Frames to carry storage traffic from the VMkernel interface to your storage system is a solution not tested by VMware; however, it actually works. To use Jumbo Frames you need to activate them throughout the whole communication path: guest OS, virtual NIC (change from E1000 to Enhanced vmxnet), virtual switch and VMkernel port, physical Ethernet switch, and storage. It's a lot of work and it's disruptive at several points, which is not a good idea for a production infrastructure. So I decided to take a look at benchmarks before committing a great amount of time and effort to it.
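
Just to illustrate the scope of the change, here is a minimal sketch of the MTU part alone, assuming a classic ESX host with a dedicated storage vSwitch and port group and a NetApp data interface named e0a (all names and addresses are placeholders, and the VMkernel port usually has to be recreated to pick up the new MTU):

On the ESX host:

esxcfg-vswitch -m 9000 vSwitch1
esxcfg-vmknic -a -i 192.168.1.10 -n 255.255.255.0 -m 9000 "Storage"

On the NetApp filer:

ifconfig e0a mtusize 9000

The physical switch ports in between also need jumbo frames enabled, which is done differently on every vendor's hardware.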

VMware and NetApp have a technical report, TR-3808-0110, called "VMware vSphere and ESX 3.5 Multiprotocol Performance Comparison Using FC, iSCSI, and NFS". Section 2.2 clearly states that:

  • Using NFS with jumbo frames enabled using both Gigabit and 10GbE generated overall performance that was comparable to that observed using NFS without jumbo frames and required approximately 6% to 20% fewer ESX CPU resources compared to using NFS without jumbo frames, depending on the test configuration.
  • Using iSCSI with jumbo frames enabled using both Gigabit and 10GbE generated overall performance that was comparable to slightly lower than that observed using iSCSI without jumbo and required approximately 12% to 20% fewer ESX CPU resources compared to using iSCSI without jumbo frames depending on the test configuration.
Another important statement here is:
  • Due to the smaller request sizes used in the workloads, it was not expected that enabling jumbo frames would improve overall performance.

I believe that 4K and 8K request sizes are a fair assumption for a virtual infrastructure. If you move large amounts of data through your virtual machines it may make sense for you, but I feel it's not reasonable to implement Jumbo Frames for a virtual infrastructure in general.

The other finding of the report is that Jumbo Frames decrease CPU load, but if you use TOE NICs, that benefit again becomes largely moot.

VMware supports jumbo frames with the following NICs: Intel (82546, 82571), Broadcom (5708, 5706, 5709), Netxen (NXB-10GXxR, NXB-10GCX4), and Neterion (Xframe, Xframe II, Xframe E). We use Broadcom NetXtreme II BCM5708 and Intel 82571EB NICs, so implementing Jumbo Frames is not going to be a problem. Maybe I'll try to test it myself when I have some free time.


Export share in ROCKS

March 14, 2012

In my previous post I described how you can present an iSCSI LUN to a Linux host. I had moved all home directories to that share, but later came to the conclusion that making a separate share would be better. Users should have the ability to quickly compile applications in their home directories, and if home directories are also used as target storage for computational data, the iSCSI network link can become a bottleneck during computation and slow everything down. That's why I decided to separate them. This requires exporting an additional share, which can be done very easily in ROCKS.

1. Mount the LUN to say /export/scratch
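
A minimal sketch, assuming the iSCSI LUN shows up on the frontend node as /dev/sdb1 (your device name will differ) and already carries a filesystem:

mkdir -p /export/scratch
mount /dev/sdb1 /export/scratch

Add a matching line to /etc/fstab if you want the mount to survive a reboot.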

2. Create the export by adding the following (all on one line) to /etc/exports

/export/scratch 192.168.111.128(rw,async,no_root_squash) 192.168.111.0/255.255.255.0(rw,async)

3. Restart nfs

/etc/rc.d/init.d/nfs restart

4. Add a line to /etc/auto.share

scratch master.local:/export/&

5. Update 411 config

make -C /var/411

Now the share is accessible to all compute nodes at /share/scratch.
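
To verify, you can check the automount from one of the compute nodes (compute-0-0 here is just an example node name):

ssh compute-0-0 ls /share/scratch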

The same process is described in the ROCKS FAQ here.

NetApp Guts

October 15, 2011

Today I took several pictures of our NetApp FAS3020 active/active cluster to give you an idea of what a NetApp essentially is from a hardware point of view.

Here are some highlights of the FAS3020 series:

  • Maximum Raw Capacity: 84TB
  • Maximum Disk Drives (FC, SATA, or mix): 168
  • Controller Architecture: 32-bit
  • Cache Memory: 4GB
  • Maximum Fibre Channel Ports: 20
  • Maximum Ethernet Ports: 24
  • Storage Protocols: FCP, iSCSI, NFS, CIFS

General view.


Two filers in an active/active high-availability cluster configuration. If one filer fails, the second takes over without loss of service.

The filers are connected to four disk shelves, 15 TB in total. The first pair is populated with Fibre Channel hard drives (DS14mk4 FC) and the second with SATA (DS14mk2 AT). You can see the FC drives in the picture below.

Even though NetApp supports iSCSI, it's NAS by nature. Each filer has four FC ports for disk shelf connectivity (0a through 0d) and four GbE ports for network connections (e0a through e0d).

The filers are connected with two cluster interconnect cables, which very much resemble InfiniBand. This interconnect is used for the HA heartbeat.

Meters of FC cables.

Power is connected to a 10,000 VA APC UPS. Power cables are tied up to prevent accidental unplugging.

Here is the NetApp motherboard, which has two CPU sockets and four memory slots.

The NetApp chassis also includes two power supplies, two fan modules, an LCD display, and a backplane that ties everything together.

The FC shelves are equipped with ESH4 modules, and the SATA shelves with AT-FCX modules.


Security on NetApp Filer

October 9, 2011

Storage systems usually store data critical to the organization: databases, mailboxes, employee files, etc. Typically you don't provide access to NAS from the Internet. If the Filer has a publicly routable IP address to provide CIFS or NFS access inside the organization, you can simply block all incoming connections from the outside world on the perimeter firewall. But what if a network engineer messes up the firewall configuration? If you don't take even simple security measures, all of your organization's data is at risk.

Here I'd like to describe some basic measures to secure a NetApp Filer:

  • Disable rsh:

options rsh.enable off

  • Disable telnet:

options telnet.enable off

  • Restrict SSH access to particular IP addresses. Take into consideration that if you have enabled AD authentication, the Administrator user and Administrators group will implicitly have SSH access.

options ssh.access host=ip_address_1,ip_address_2

  • You can configure the Filer to allow file access over HTTP. If you don't have an HTTP license or don't use HTTP, disable it:

options http.enable off

  • Even if you don't have an HTTP license, you can access the NetApp FilerView web interface to manage the Filer. You can access it over SSL or over a plain connection; SSL is obviously more secure:

options http.admin.enable off

options http.admin.ssl.enable on

  • Restrict access to FilerView:

options httpd.admin.access host=ip_address_1,ip_address_2

  • If you don’t use SNMP then disable it:

options snmp.enable off

  • I'm using NDMP to back up the Filer's data. It's done over a dedicated virtual network. I restrict NDMP to work only between the Filers (we have two of them) and the backup server, and only over a particular virtual interface:

On Filer1:

options ndmpd.access "host=backup_server_ip,filer2_ip_address AND if=interface_name"

options ndmpd.preferred_interface interface_name

On Filer2:

options ndmpd.access "host=backup_server_ip,filer1_ip_address AND if=interface_name"

options ndmpd.preferred_interface interface_name

  • Disable other services you don’t use:

options snapmirror.enable off

options snapvault.enable off

  • The module responsible for SSH and FilerView SSL connections is called SecureAdmin. You probably won't need to configure it, since it's enabled by default. You can verify whether ssh2 and ssl connections are enabled with:

secureadmin status

  • Make sure all built-in users have strong passwords. You can list built-in users with:

 useradmin user list

  • By default the Filer has home directory CIFS shares for all users. If you don't use them, disable them by deleting:

/etc/cifs_homedir.cfg

  • The Filer also has default ETC$ and C$ shares. I'd highly recommend restricting access to these shares to the local Filer Administrator user only. In fact, if you have enabled AD authentication, the domain Administrator user and Administrators group will also implicitly have access to these shares, even if you don't specify them in the ACL. Delete all existing permissions and add:

cifs access share etc$ filer_system_name\Administrator Full Control
cifs access share c$ filer_system_name\Administrator Full Control
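
To double-check the result, you can display each share together with its access list:

cifs shares etc$
cifs shares c$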

Basically, that's it. Now you can say that you know how to configure basic NetApp security.