Archive for the ‘NetApp’ Category
November 20, 2015
I come across this issue too often: you need to fetch some information for a customer from the My AutoSupport website and can't, because the last AutoSupport message is from half a year ago.
Check AutoSupport State
When you list the AutoSupport history on the target system you see something similar to this:
# autosupport history show

Mail Server Configuration
If AutoSupport is configured to use SMTP, as in this case, then the first place to check is obviously the mail server. The most common cause of the issue is a blocked relay.
There are two things you need to make sure are configured: the NetApp controllers' management IPs must be whitelisted on the mail server, and authentication must be disabled.
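Before touching Exchange, it's worth confirming that the relay itself is the problem. A quick sketch of a relay check from an admin host (the host names and addresses here are hypothetical; the point is to send without `login()`, exactly as the filer does):

```python
import smtplib
from email.message import EmailMessage

def build_test_message(sender, recipient):
    """Build a minimal test message mimicking an AutoSupport e-mail."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = "AutoSupport relay test"
    msg.set_content("If this arrives, unauthenticated relay works.")
    return msg

def check_relay(host, sender, recipient, port=25):
    """Try to send without authentication, as the filer does.
    Raises SMTPRecipientsRefused (or similar) if relay is blocked."""
    msg = build_test_message(sender, recipient)
    with smtplib.SMTP(host, port, timeout=10) as smtp:
        smtp.send_message(msg)  # deliberately no smtp.login()

# Example (hypothetical names):
# check_relay("mail.example.com", "filer1@example.com", "autosupport@netapp.com")
```

If this raises a recipients-refused error, the Receive Connector is rejecting the relay and the Exchange steps below apply.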
To set this up on an Exchange server, go to Exchange Management Console > Server Configuration > Hub Transport, select a Receive Connector (or create one if you don't already have one for whitelisting), go to its properties and add the NetApp IPs on the Network tab.

Then make sure to enable Externally Secured authentication type on the Authentication tab.

Confirm AutoSupport is Working
To confirm that the issue is fixed, send an AutoSupport message either from OnCommand System Manager or right from the console, and make sure that the status shows "sent-successful".
# options autosupport.doit Test
# autosupport history show

Tags:ASUP, authentication, AutoSupport, controllers, Exchange, failed, fix, history, Hub Transport, IP, issue, mail server, message, NetApp, Receive Connector, relay, smtp, troubleshoot, White List
Posted in NetApp | Leave a Comment »
September 25, 2013

DISCLAIMER: I ACCEPT NO RESPONSIBILITY FOR ANY DAMAGE OR CORRUPTION OF DATA THAT MAY OCCUR AS A RESULT OF CARRYING OUT STEPS DESCRIBED BELOW. YOU DO THIS AT YOUR OWN RISK.
We had an issue with high CPU usage on one of the NetApp controllers serving a couple of NFS datastores to a VMware ESX cluster. The HA pair of FAS2050s had two shelves, both of them owned by the first controller. The obvious solution for us was to reassign the disks of one of the shelves to the other controller to balance the load. But how do you do this with minimal disruption? Here is the plan.
In our setup we had two controllers (filer1, filer2) and two shelves (shelf1, shelf2), both assigned to filer1, with two aggregates, each on its own shelf (aggr0 on shelf1, aggr1 on shelf2). Say we want to reassign the disks of shelf2 to filer2.
The first step is to migrate all of the VMs from shelf2 to shelf1, because the operation is obviously disruptive to the hosts accessing data on the target shelf. Once all VMs are evacuated, offline all volumes and the aggregate to prevent any data corruption (you can't take an aggregate offline from the online state, so change it to restricted first).
If you prefer to reassign disks in two steps, as described in NetApp Professional Services Tech Note #021: Changing Disk Ownership, don't forget to disable automatic ownership assignment on both controllers; otherwise the disks will be assigned back to the same controller right after you unown them:
> options disk.auto_assign off
It’s not necessary if you change ownership in one step as shown below.
The next step is to actually reassign the disks. Since they are already part of an aggregate, you will need to force the ownership change:
filer1> disk assign 1b.01.00 -o filer2 -f
filer1> disk assign 1b.01.01 -o filer2 -f
…
filer1> disk assign 1b.01.nn -o filer2 -f
If you do not force disk reassignment you will get an error:
Assign request failed for disk 1b.01.0. Reason:Disk is part of a failed or offline aggregate or volume. Changing its owner may prevent aggregate or volume from coming back online. Ownership may be changed only by using the appropriate force option.
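Typing the assign command for every disk is tedious. A small sketch to generate the command list, following the 1b.01.nn naming pattern above (the disk prefix and shelf size here are hypothetical; adjust to your loop and disk count):

```python
def assign_commands(prefix, count, new_owner, force=True):
    """Generate forced 'disk assign' commands for disks
    prefix.00 .. prefix.(count-1)."""
    flag = " -f" if force else ""
    return [
        f"disk assign {prefix}.{i:02d} -o {new_owner}{flag}"
        for i in range(count)
    ]

# A 14-disk shelf on loop 1b.01, moving everything to filer2:
for cmd in assign_commands("1b.01", 14, "filer2"):
    print(cmd)
```

Paste the output into the filer1 console, or feed it over an SSH session from a script.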
When all the disks are moved across to filer2, the new aggregate will show up in the list of aggregates on filer2 and you'll be able to bring it online. If you can't see the aggregate, force the filer to rescan the drives by running:
filer2> disk show
The old aggregate will still be visible in the list on filer1. You can safely remove it:
filer1> aggr destroy aggr1
Tags:aggregate, assignment, controller, corruption, CPU, datastore, disk, ESX, FAS, Filer, force, load balancing, migrate, NetApp, NFS, non-disruptively, offline, online, own, ownership, reassign, restricted, shelf, unown, VM, vmware, volume
Posted in NetApp, VMware | Leave a Comment »
August 9, 2013
NetApp has quite a few features related to replication and clustering:
- HA pairs (including mirrored HA pairs)
- Aggregate mirroring with SyncMirror
- MetroCluster (Fabric and Stretched)
- SnapMirror (Sync, Semi-Sync, Async)
It’s easy to get lost here. So let’s try to understand what goes where.

SnapMirror
SnapMirror is volume-level replication, which normally works over an IP network (SnapMirror can work over FC, but only with FC-VI cards, and that is not widely used).
The asynchronous version of SnapMirror replicates data according to a schedule. SnapMirror Sync uses NVLOG shipping (described briefly in my previous post) to synchronously replicate data between two storage systems. SnapMirror Semi-Sync is in between and synchronizes writes at the Consistency Point (CP) level.
SnapMirror provides protection from data corruption inside a volume. But with SnapMirror you don't get automatic failover of any sort: you need to break the SnapMirror relationship and present the data to clients manually, then resynchronize the volumes once the problem is fixed.
SyncMirror
SyncMirror mirrors aggregates and works at the RAID level. You can configure mirroring between two shelves of the same system to prevent an outage in case of a shelf failure.
SyncMirror uses the concept of plexes to describe mirrored copies of data. You have two plexes: plex0 and plex1. Each plex consists of disks from a separate pool: pool0 or pool1. Disks are assigned to pools depending on cabling, and disks in each of the pools must be in separate shelves to ensure high availability. Once the shelves are cabled, you enable SyncMirror and create a mirrored aggregate using the following syntax:
> aggr create aggr_name -m -d disk-list -d disk-list
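The key constraint is that the two `-d` disk lists must come from different pools. A toy check of the rule ONTAP enforces when you create a mirrored aggregate (disk names and pool assignments here are hypothetical, standing in for what cabling determines):

```python
def validate_mirrored_aggr(plex0_disks, plex1_disks, disk_pool):
    """Check SyncMirror's rule: each plex draws all of its disks
    from a single pool, and the two plexes use different pools.
    disk_pool maps disk name -> pool number (0 or 1)."""
    pools0 = {disk_pool[d] for d in plex0_disks}
    pools1 = {disk_pool[d] for d in plex1_disks}
    if len(pools0) != 1 or len(pools1) != 1:
        return False  # a plex mixes disks from both pools
    return pools0 != pools1  # the plexes must sit in separate pools

# Hypothetical cabling: shelf on adapter 0a -> pool0, shelf on 0c -> pool1
pool = {"0a.16": 0, "0a.17": 0, "0c.16": 1, "0c.17": 1}
print(validate_mirrored_aggr(["0a.16", "0a.17"], ["0c.16", "0c.17"], pool))  # True
```

If a plex ends up with disks from both pools, a single shelf failure could take out both copies, which defeats the purpose of the mirror.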
HA Pair
An HA pair is basically two controllers which both have connections to their own and their partner's shelves. When one of the controllers fails, the other one takes over; it's called Cluster Failover (CFO). Controller NVRAMs are mirrored over the NVRAM interconnect link, so even data which hasn't been committed to disk isn't lost.
MetroCluster
MetroCluster provides failover at the storage system level. It uses the same SyncMirror feature underneath to mirror data between two storage systems (instead of two shelves of the same system, as in a pure SyncMirror implementation). Now even if a storage controller fails together with all of its storage, you are safe: the other system takes over and continues to service requests.
An HA pair, by contrast, can't fail over when a disk shelf fails, because the partner doesn't have a copy of the data to service requests from.
Mirrored HA Pair
You can think of a Mirrored HA Pair as an HA pair with SyncMirror running between the two systems. You can implement almost the same configuration on an HA pair with SyncMirror inside (not between) the systems, since the odds of a whole storage system (controller plus shelves) going down are very low. But mirroring between two systems can give you more peace of mind.
It cannot fail over like MetroCluster when one of the storage systems goes down; the whole process is manual. The reasonable question here is: why can't it fail over if it has a copy of all the data? Because MetroCluster is a separate piece of functionality, which performs all the checks and carries out the cutover to the mirror. It's called Cluster Failover on Disaster (CFOD). SyncMirror is only a mirroring facility and doesn't even know that the cluster exists.
Further Reading
Tags:aggregate, Async, asynchronous, cabling, CFO, CFOD, cluster, Consistency Point, controller, CP, cutover, fabric, failover, failure, FC, FC-VI, fiber channel, give back, HA, high availability, interconnect, MetroCluster, mirror, NetApp, NVLOG, NVMEM, NVRAM, pair, plex, pool, RAID, replication, semi-sync, shelf, snapmirror, storage, stretched, sync, synchronous, SyncMirror, take over
Posted in NetApp | 3 Comments »
August 8, 2013
In my previous post I gave a high level overview of how NetApp uses NVRAM for write caching. Now I want to move on to read caching in main memory and NetApp Flash Cache/Flash Pool features.
Layers of Memory
The first layer of read caching in NetApp is main memory. For example, there is 4GB of ECC main memory in a FAS3140, as opposed to 512MB of NVRAM; it's 12GB for a FAS3220. You can check the amount of main memory in your filer by running:
> sysconfig -a
But if you have a random-read-intensive environment and main memory is too small for your workloads, then instead of buying additional spindles you can consider using the Flash Cache or Flash Pool features.
Flash Cache (formerly PAM – Performance Acceleration Module) is a PCIe card with flash memory chips on board. The most recent Flash Cache II modules have 2TB of flash. And you can fit as many as 4 such cards in high-end 6xxx NetApp series (or 8 x 1TB cards).
Flash Pool is basically an SSD RAID group which is combined with HDD RAID groups in the same aggregate to provide caching capabilities. Data is copied (not moved) from HDDs to SSDs to give faster access to more frequently used (hot) data blocks. Both Flash Cache and Flash Pool use FIFO logic to eject less frequently used (cold) data from cache.
Flash Cache
Flash Cache is a second level of read cache memory. When the filer decides to evict cached data from main memory, it's actually moved to the Flash Cache card. Similarly, when a client needs to read a data block and the filer doesn't have it in main memory, it now first looks in Flash Cache; only if it's not there is the data retrieved from disk.
Flash Cache can operate in three modes: Metadata Caching, Normal User Data Caching (default) and Low-Priority Data Caching. The first mode caches only metadata, the second caches metadata and data blocks, and the last one lets you cache data which is not normally cached: writes and sequential reads.

In fact, when a write request comes into the system, it's actually cached in main memory first and then logged in NVRAM. When a CP occurs, the data is sent to the hard drives and after that becomes the first target for eviction from main memory. If you enable Low-Priority Data Caching, this data goes to the Flash Cache card instead. It's not write caching per se, because the writes have already been sent to disk. But it helps in workloads where data that has just been written may need to be read again within a short period of time. It's called read-write caching.
Caching sequential reads is generally not a good idea, because they overwrite large amounts of cache. But if your environment does benefit from it, you can again use the Low-Priority Data Caching option.
Flash Pool
Flash Pool has one significant difference from Flash Cache: it works at the aggregate level, not at the system level as Flash Cache does. If you have only one stack of shelves and one aggregate, it makes no difference, but that's almost never the case.
Read Caching
Flash Pool uses essentially the same mechanism for read caching. When data is first accessed, it goes to main memory. When the filer needs to free up space in main memory, blocks are moved to the SSDs as part of a Consistency Point.

NetApp uses a scanner to evict blocks from the SSD cache (see figure above). When the cache gets full, the scanner kicks in and reduces each block's temperature by one level. Blocks with the lowest temperature are evicted. Each time a block is accessed by a client, its temperature is incremented.
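The eviction scanner described above can be sketched as a tiny simulation (the temperature scale and block names are made up for illustration; real ONTAP internals differ):

```python
class SsdReadCache:
    """Toy model of the Flash Pool eviction scanner: a hit warms a
    block up, a scanner run cools every block, coldest go first."""
    MAX_TEMP = 3  # hypothetical cap on block temperature

    def __init__(self):
        self.temp = {}  # block id -> temperature

    def access(self, block):
        """Client read: raise the block's temperature by one level."""
        t = self.temp.get(block, 0)
        self.temp[block] = min(t + 1, self.MAX_TEMP)

    def scanner_run(self):
        """Cool all blocks by one level; evict those that drop below zero."""
        evicted = [b for b, t in self.temp.items() if t - 1 < 0]
        self.temp = {b: t - 1 for b, t in self.temp.items() if t - 1 >= 0}
        return evicted

cache = SsdReadCache()
cache.access("hot"); cache.access("hot")  # temperature 2
cache.access("cold")                      # temperature 1
cache.scanner_run()        # cools: hot -> 1, cold -> 0, nothing evicted yet
print(cache.scanner_run())  # cold drops below zero first: ['cold']
```

The effect is that a block survives as many scanner runs as it has accumulated accesses, which is what keeps hot data on the SSDs.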
Write caching
Flash Pool can be used for write caching of partial overwrites, in contrast to Flash Cache, which is purely a read cache.
WAFL is optimized for writes, because the filer can place data anywhere in the file system. When new data comes in, the filer performs a so-called "full write", which writes a full stripe of data plus parity. But when part of a stripe needs to be overwritten, all the other data blocks in the stripe have to be read from disk to recalculate parity, which is a very expensive operation. Flash Pool can be used to cache partial overwrites and further optimize performance.
If write caching is enabled, this data is written to the SSDs instead of the HDDs as part of a Consistency Point. And unlike read-cached data, it exists temporarily only on the SSDs.
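The cost of a partial overwrite versus a full-stripe write can be illustrated with some back-of-the-envelope disk-operation counting (the RAID group width and the "read the untouched blocks to recompute parity" model follow the description above; the numbers are illustrative, not measured):

```python
def write_ops(stripe_width, blocks_to_write, parity_disks=2):
    """Rough disk-op count for writing part of a RAID stripe.

    Full stripe: just write all data and parity blocks; parity is
    computed in memory. Partial: first read the untouched data
    blocks to recompute parity, then write data plus parity.
    """
    if blocks_to_write == stripe_width:
        reads = 0
    else:
        reads = stripe_width - blocks_to_write  # fetch the missing blocks
    writes = blocks_to_write + parity_disks
    return reads + writes

# A 14+2 RAID-DP group:
print(write_ops(14, 14))  # full stripe: 16 ops, no reads
print(write_ops(14, 2))   # partial: 12 reads + 4 writes = 16 ops for only 2 new blocks
```

Note that under this model, overwriting just 2 blocks costs as many disk operations as writing the entire stripe, which is exactly why caching partial overwrites on SSD pays off.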

After each scanner run, a write-cache block's temperature is decremented by one. When a cached write is overwritten, its temperature goes back to normal, but it can't go higher than that. When a block is about to be evicted, it's read back into main memory and then written to the HDDs as part of the next CP.
Policies
Flash Pool read/write policies are almost the same as Flash Cache ones. Read policies are: meta, random read (default), random-read-write, none. Write policies: random-write, none (default).
Notes
Flash Pool and Flash Cache can be combined in one system and configured at a per-volume level. But Flash Cache can't be used for volumes which are already cached by Flash Pool; it's either-or.
NetApp filers have a built-in Predictive Cache Statistics (PCS) feature, which allows you to analyse your workload and predict whether the storage system will benefit from additional cache.
Further Reading
TR-3832: Flash Cache Best Practices Guide
TR-3801: Introduction to Predictive Cache Statistics
TR-4070: Flash Pool Design and Implementation Guide
Differences between NetApp Flash Cache and Flash Pool
Tags:aggregate, cache, Consistency Point, CP, evict, flash, Flash Cache, Flash Pool, HDD, memory, metadata, NetApp, NVMEM, NVRAM, PAM, partial overwrite, PCIe, PCS, Performance Acceleration Module, policy, pool, Predictive Cache Statistics, random, scanner, sequential, SSD, stripe, temperature, volume, workload
Posted in NetApp | 10 Comments »
August 5, 2013
In one of my previous posts I spoke about three basic types of NetApp Virtual Storage Console restores: datastore restore, VM restore and backup mount. The last, least used, but very underrated feature is Single File Restore (SFR), which lets you restore individual files from VM backups. You can do the same thing by mounting the backup, connecting the .vmdk to a VM and restoring the files, but SFR is a more convenient way to do it.
Workflow
SFR is pretty much an out-of-the-box feature and is installed with VSC. When you create an SFR session, you specify an email address to which VSC sends an .sfr file and a link to Restore Agent. Restore Agent is a separate application which you install in the VM where you want to restore files to (the destination VM). You load the .sfr file into Restore Agent, and from there you are able to mount the source VM's .vmdks and map them to the OS.
VSC uses the same LUN cloning feature here. When you click "Mount" in Restore Agent, the LUN is cloned, mapped to an ESX host, and the disk is connected to the VM on the fly. You copy all the data you want, then click "Dismount" and the LUN clone is destroyed.
Restore Types
There are two types of SFR restores: Self-Service and Limited Self-Service. The only difference between them is that with a Self-Service session the user can choose the backup, while with Limited Self-Service the backup is chosen by the admin during creation of the SFR session. The latter is used when the destination VM doesn't have a connection to the SMVI server, which means that Restore Agent cannot communicate with SMVI and control the mount process. In that case the LUN clone is deleted only when you delete the SFR session, not when you dismount all the .vmdks.
There is another restore type mentioned in the NetApp documentation, called Administrator Assisted restore. It's hard to say what NetApp means by that. I think its workflow is the same as Self-Service, but the administrator sends the .sfr link to himself and does all the work. It brings a bit of confusion, because there is an "Admin Assisted" column on the SFR setup tab. What it actually does, I believe, is this: when a Port Group is configured as Admin Assisted, it forces SFR to create a Limited Self-Service session every time you create an SFR job; you won't have the option to choose Self-Service at all. So if you have port groups that don't have connectivity to VSC, check the Admin Assisted option next to them.
Notes
Keep in mind that SFR doesn't support VMs with IDE drives. If you try to create an SFR session for a VM that has IDE virtual hard drives connected, you will see all sorts of errors.
Tags:assisted, backup, clone, datastore, dismount, ESX, ESXi, IDE, limited, link, LUN, map, mount, NetApp, port group, restore, Restore Agent, self-service, session, SFR, Single File Restore, SMVI, virtual machine, Virtual Storage Console, VM, vmdk, VSC
Posted in NetApp, VMware | Leave a Comment »
July 19, 2013
Overview
NetApp storage systems use several types of memory for data caching. Non-volatile battery-backed memory (NVRAM) is used for write caching, whereas main memory and flash memory (in the form of either an extension PCIe card or SSD drives) are used for read caching. Before going to the hard drives, all writes are cached in NVRAM. NVRAM is split in half, and each time 50% of NVRAM fills up, writes are cached to the second half while the first half is written to disks. If NVRAM doesn't fill up within a 10-second interval, a flush is forced by a system timer.
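The half-and-half scheme can be sketched as a small simulation (the capacities and the flush callback are illustrative stand-ins, not ONTAP internals):

```python
class Nvram:
    """Toy model of split NVRAM: log into the active half; when it
    fills (or the 10-second timer fires), trigger a CP and switch halves."""

    def __init__(self, half_capacity, flush_to_disk):
        self.half_capacity = half_capacity
        self.flush_to_disk = flush_to_disk  # called with the full half's entries
        self.halves = ([], [])
        self.active = 0

    def log_write(self, entry):
        self.halves[self.active].append(entry)
        if len(self.halves[self.active]) >= self.half_capacity:
            self._consistency_point()

    def timer_tick(self):
        """Every 10 seconds: force a CP even if the half isn't full."""
        if self.halves[self.active]:
            self._consistency_point()

    def _consistency_point(self):
        full = self.active
        self.active = 1 - self.active   # new writes go to the other half
        self.flush_to_disk(self.halves[full])
        self.halves[full].clear()

disk = []
nv = Nvram(half_capacity=2, flush_to_disk=disk.extend)
for w in ["w1", "w2", "w3"]:
    nv.log_write(w)
print(disk)      # ['w1', 'w2']  (first half flushed at the CP)
nv.timer_tick()
print(disk)      # ['w1', 'w2', 'w3']
```

The point of the switch is that incoming writes never wait for the flush: one half is always accepting new log entries while the other is being committed to disk.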
To be more precise, when a data block comes into a NetApp system, it's actually written to main memory and then journaled in NVRAM. NVRAM here serves as a backup in case the filer fails. When the data has been written to disks as part of a so-called Consistency Point (CP), the write blocks which were cached in main memory become the first target to be evicted and replaced by other data.
Caching Approach
NetApp is frequently criticized for the small amount of write cache in its systems. For example, a FAS3140 has only 512MB of NVRAM; a FAS3220 has a bit more at 1.6GB. In mirrored HA or MetroCluster configurations, NVRAM is mirrored via the NVRAM interconnect adapter: half of the NVRAM is used for local operations and the other half for the partner's, so the amount of write cache becomes even smaller. In the FAS32xx series, NVRAM has been integrated into main memory and is now called NVMEM. You can check the amount of NVRAM/NVMEM in your filer by running:
> sysconfig -a
There are two answers to the question of why NetApp puts less cache in its controllers. The first one is given in a white paper called "Optimizing Storage Performance and Cost with Intelligent Caching". It states that NetApp uses a different approach to write caching compared to other vendors. Most often, when a data block comes in, cache is used to keep the 8KB data block, as well as an 8KB inode and an 8KB indirect block for large files. This way, the write cache can be thought of as part of the physical file system, because it mimics its structure. NetApp, on the other hand, uses a journaling approach: when a data block is received by the filer, the 8KB data block is cached along with a 120B header, which contains all the information needed to replay the operation. After each cache flush a Consistency Point (CP) is created, which is a special type of consistent file system snapshot. If the controller fails, the only thing that needs to be done is reverting the file system to the latest consistency point and replaying the log.
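The space difference between the two approaches is easy to put in numbers, using the figures above (an 8KB block plus an 8KB inode plus an 8KB indirect block, versus an 8KB block plus a 120-byte replay header):

```python
KB = 1024

def classic_cache_cost(blocks):
    """Per cached write: 8KB data + 8KB inode + 8KB indirect block."""
    return blocks * (8 * KB + 8 * KB + 8 * KB)

def journal_cache_cost(blocks):
    """Per cached write: 8KB data + 120B replay header."""
    return blocks * (8 * KB + 120)

n = 10_000  # cached write blocks
print(classic_cache_cost(n) // KB)  # 240000 KB
print(journal_cache_cost(n) // KB)  # 81171 KB, roughly a third
```

So for large-file writes the journaling scheme needs roughly a third of the NVRAM that the "mimic the file system" scheme would, which is one way to read the smaller NVRAM sizes.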
But this white paper was written in 2010, and cache journaling is no longer a feature unique to NetApp; many vendors now use it. The other answer, which makes more sense, was found in one of the toaster mailing list archives here: NVRAM weirdness (UNCLASSIFIED). I'll just quote the answer:
The reason it’s so small compared to most arrays is because of WAFL. We don’t need that much NVRAM because when writes happen, ONTAP writes out single complete RAID stripes and calculates parity in memory. If there was a need to do lots of reads to regenerate parity, then we’d have to increase the NVRAM more to smooth out performance.
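The quoted point, parity for a complete stripe computed purely in memory, boils down to an XOR across the data blocks. A minimal single-parity illustration (RAID-DP adds a second, diagonal parity, which is omitted here; the two-byte blocks are just toy data):

```python
from functools import reduce

def stripe_parity(data_blocks):
    """XOR all data blocks together, byte-column by byte-column.
    No disk reads are needed when every block of the stripe is
    already sitting in memory."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*data_blocks))

blocks = [b"\x0f\x00", b"\xf0\x00", b"\xff\x01"]
parity = stripe_parity(blocks)
print(parity.hex())  # '0001'

# Any single lost block is recoverable by XORing the rest with parity:
recovered = stripe_parity([blocks[0], blocks[2], parity])
print(recovered == blocks[1])  # True
```

Since the whole stripe is assembled in NVRAM-backed memory before it's written, there is no read-modify-write cycle to smooth out, hence less NVRAM is needed.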
NVLOG Shipping
A feature called NVLOG shipping is an integral part of sync and semi-sync SnapMirror. NVLOG shipping is simply a transfer of NVRAM writes from the primary to a secondary storage system. Writes on the primary cannot be transferred directly to the NVRAM of the secondary system, because in contrast to mirrored HA and MetroCluster, SnapMirror doesn't have any hardware implementation of NVRAM mirroring. That's why the stream of data is first written to special files on the destination volume's parent aggregate on the secondary system and then read into NVRAM.

Documents I found useful:
WP-7107: Optimizing Storage Performance and Cost with Intelligent Caching
TR-3326: 7-Mode SnapMirror Sync and SnapMirror Semi-Sync Overview and Design Considerations
TR-3548: Best Practices for MetroCluster Design and Implementation
United States Patent 7730153: Efficient use of NVRAM during takeover in a node cluster
Tags:battery-backed, block, cache, caching, cluster, Consistency Point, CP, data, file system, flash, flush, HA, high availability, journal, log, memory, MetroCluster, mirroring, NetApp, non-volatile, NVLOG, NVMEM, NVRAM, partner, PCIe, primary, secondary, semi-sync, shipping, snapmirror, snapshot, SSD, storage, sync
Posted in NetApp | 2 Comments »
July 1, 2013
NetApp DataFabric Manager has one annoying alert, which notifies you that the space utilization of a volume is increasing more quickly than expected. I used to receive dozens of these alerts each morning until I did the following:
> dfm eventtype modify -v information volume-growth-rate:abnormal
This command changes the default severity for this event from warning to informational. Since my notifications are configured to send everything with severity warning or higher, I no longer receive this alert.
There is also a “volume full” event, which triggers at 90% I believe, and that is enough for me.
Tags:alert, DataFabric Manager, event, full, growth rate, informational, NetApp, notification, OnCommand, Operations Manager, severity, trigger, Unified Manager, utilization, volume, warning
Posted in NetApp | Leave a Comment »
July 1, 2013
If you run lun resize command on NetApp you might run into the following error:
lun resize: New size exceeds this LUN’s initial geometry
The reason behind it is that each SAN LUN has a head/cylinder/sector geometry. It's not an actual physical mapping to the underlying disks and has no meaning these days; it's simply a SCSI protocol artifact. But it imposes a limit on the maximum LUN resize. The geometry is chosen at initial LUN creation and cannot be changed. Roughly, you can resize a LUN to about 10 times its size at creation time. For example, a 50GB LUN can be extended to a maximum of 502GB. See the table below for the maximum sizes:
Initial Size Maximum Size
< 50g 502g
51-100g 1004g
101-150g 1506g
151-200g 2008g
201-251g 2510g
252-301g 3012g
302-351g 3514g
352-401g 4016g
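Here is that table as a quick lookup, handy for checking headroom before you create or grow a LUN (sizes in GB, taken straight from the table above; anything outside the table's range returns nothing):

```python
# (upper bound of initial size, maximum size) in GB, from the table above
GEOMETRY_LIMITS = [
    (50, 502), (100, 1004), (150, 1506), (200, 2008),
    (251, 2510), (301, 3012), (351, 3514), (401, 4016),
]

def max_resize_gb(initial_gb):
    """Return the rough maximum a LUN can be grown to, given its
    size at creation time. Returns None above the table's range."""
    for upper, limit in GEOMETRY_LIMITS:
        if initial_gb <= upper:
            return limit
    return None

print(max_resize_gb(50))   # 502
print(max_resize_gb(120))  # 1506
```

The authoritative number for an existing LUN still comes from `lun geometry` in diag mode, as shown below; this lookup is only useful at planning time.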
To check the maximum size for particular LUN use the following commands:
> priv set diag
> lun geometry lun_path
> priv set
If you run into this issue, unfortunately you will need to create a new LUN, copy all the data (using robocopy, for example) and make a cutover, because features such as volume-level SnapMirror or ndmpcopy will recreate the LUN's geometry together with the data.
Tags:aggregate, cutover, Filer, geometry, limit, LUN, ndmpcopy, NetApp, resize, SAN, SCSI, snapmirror, volume
Posted in NetApp | Leave a Comment »
June 12, 2013
NetApp Virtual Storage Console is a plug-in for VMware vCenter which provides capabilities to perform instant backup/restore using NetApp snapshots. It uses several underlying NetApp features to accomplish its tasks, which I want to describe here.
Backup Process
When you configure a backup job in VSC, what VSC does is simply create a NetApp snapshot of the target volume on the NetApp filer. Interestingly, if you have two VMFS datastores inside one volume, then both LUNs will be snapshotted, since snapshots are done at the volume level. But during a datastore restore, the second datastore will be left intact. You would think that if VSC reverted the volume to the previously made snapshot, then both datastores would be affected, but that's not the case, because VSC uses Single File SnapRestore to restore the LUN (explained below). Creating several VMFS LUNs inside one volume is not a best practice, but it's good to know that VSC handles this case correctly.
The same goes for VMs: there is no sense in backing up a single VM in a datastore, because VSC will make a volume snapshot anyway. Back up the whole datastore in that case.
Datastore Restore
After a backup is done, you have three restore options. The first and least useful is a datastore restore. The only use case for such a restore that I can think of is disaster recovery, but usually disaster recovery procedures are separate from backups and are based on replication to a disaster recovery site.
VSC uses NetApp’s Single File SnapRestore (SFSR) feature to restore a datastore. In the case of a SAN implementation, SFSR reverts only the required LUN from the snapshot to its previous state, instead of the whole volume. My guess is that SnapRestore uses LUN clone/split functionality in the background to create a new LUN from the snapshot, then swaps the old with the new and deletes the old. But I haven't found a clear answer to that question.
For that functionality to work, you need a SnapRestore license. In fact, you can do the same trick manually by issuing a SnapRestore command:
> snap restore -t file -s nightly.0 /vol/vol_name/vmfs_lun_name
If you have only one LUN in the volume (as you should), then you can simply restore the whole volume with the same effect:
> snap restore -t vol -s nightly.0 /vol/vol_name
VM Restore
VM restore is also a somewhat controversial way of restoring data, because it completely removes the old VM; there is no way to keep the old .vmdks. You can choose another datastore for particular virtual hard drives to restore to, but even then the old .vmdks are not kept.
VSC uses another mechanism to perform a VM restore. It creates a LUN clone (not to be confused with FlexClone, which is a volume cloning feature) from a snapshot. A LUN clone doesn't use any additional space on the filer, because its data is mapped to the blocks that sit inside the snapshot. Then VSC maps the new LUN to the ESXi host which you specify in the restore job wizard. When the datastore is accessible to the ESXi host, VSC simply removes the old VMDKs and performs a Storage vMotion from the clone to the active datastore (or the one you specify in the job). Then the clone is removed as part of the cleanup process.
The equivalent cli command for that is:
> lun clone create /vol/clone_vol_name -o noreserve -b /vol/vol_name nightly.0
Backup Mount
Probably the most useful way of recovery. VSC allows you to mount the backup to a particular ESXi host and do whatever you want with the .vmdks. After the mount you can connect a virtual disk to the same or another virtual machine and recover the data you need.
If you want to connect the disk to the original VM, make sure you change the disk UUID first, otherwise the VM won't boot. Connect to the ESXi console and run:
# vmkfstools -J setuuid /vmfs/volumes/datastore/VM/vm.vmdk
Backup mount uses the same LUN cloning feature. LUN is cloned from a snapshot and is connected as a datastore. After an unmount LUN clone is destroyed.
Some Notes
VSC doesn't do a good cleanup after a restore: as part of the LUN mapping to the ESXi hosts, VSC creates new igroups on the NetApp filer, which it doesn't delete after the restore is completed.
What's more interesting, when you restore a VM, VSC deletes the .vmdks of the old VM but leaves all the other files (.vmx, .log, .nvram, etc.) in place. Instead of completely replacing the VM's folder, it creates a new folder vmname_1 and copies everything into it. So if you use VSC now and then, you will have these old folders left behind.
Tags:backup, clone, datastore, disaster recovery, disk, ESX, ESXi, Filer, FlexClone, igroup, job, license, LUN, mount, NetApp, restore, SFSR, Single File SnapRestore, snap restore, SnapRestore, snapshot, split, storage vMotion, unmount, UUID, vCenter, Virtual Storage Console, VMFS, vmkfstools, vMotion, vmware, volume, VSC
Posted in NetApp, VMware | 1 Comment »
May 31, 2013
There is one tricky thing about SSH connections to NetApp filers. If you use PuTTY or PuTTY Connection Manager and experience frequent timeouts of your SSH sessions, you might need to fiddle with the PuTTY configuration options. It seems there is some issue with how Data ONTAP implements SSH key exchange, which results in frequent annoying disconnections.
In order to fix that, on PuTTY Configuration screen go to Connection -> SSH -> Bugs and change “Handles SSH2 key re-exchange badly” to ‘On’. That should fix it.
Tags:bug, configuration, Connection Manager, Data ONTAP, disconnect, Filer, NetApp, PuTTY, session, ssh, timeout
Posted in NetApp | 3 Comments »