Posts Tagged ‘windows’

Troubleshooting vSphere Guest Operations API

October 4, 2019

What is vSphere Guest Operations

Recently I’ve been heavily utilizing the vSphere Guest Operations API for automating vCenter patching. vSphere Guest Operations (GuestOps) is an API that allows you to run commands on a virtual machine without connecting to it over the network. All you need are credentials for the vCenter managing the virtual machine and for the virtual machine itself.

GuestOps can be called using the Invoke-VMScript PowerCLI cmdlet in the following format:

> Invoke-VMScript -ScriptText "uname -a" -vm vc01 -GuestUser root -GuestPassword VMware1!

The cmdlet talks to vCenter, vCenter talks to the ESXi host, the ESXi host talks to VMware Tools and, eventually, VMware Tools runs the command on the guest OS.

It worked well for me when I was running commands on a VCSA 6.0 VM (managed by another vCenter), but after patching and upgrading this VM to VCSA 6.7 I encountered the following error:

Error occured while executing script on guest OS in VM 'vc01'. Could not locate "Powershell" script interpreter in any of the expected locations. Probably you do not have enough permissions to execute command within guest.

It’s obvious from the error message that the cmdlet is doing something wrong, since it’s supposed to use Bash on Linux, not PowerShell.

Enable Debugging in VMware Tools

To better understand what was going on, I logged in to the VCSA via SSH, enabled VMware Tools debugging (see KB1007873 for instructions on how to do that) and restarted Open VM Tools:

# systemctl restart vmtoolsd.service

After running the Invoke-VMScript cmdlet again, this is what I noticed in the vmsvc.log debug log:

[vix] VixTools_StartProgram: User: root args: progamPath: 'cmd.exe', arguments: '/C powershell -NonInteractive -EncodedCommand cABvAHcAZQByAHMAaABl…

So it wasn’t just a misleading PowerCLI error message: Invoke-VMScript was actually trying to call a PowerShell command using the Windows command interpreter on a Linux VM.

Solution

My guess is that since VMware changed the underlying operating system of the VCSA from SUSE Linux to Photon OS, Invoke-VMScript can no longer properly identify the guest OS and defaults to Windows.

A simple solution to this problem is to give the Invoke-VMScript cmdlet a helping hand and specify the interpreter explicitly using the -ScriptType Bash parameter. This is what the resulting debug log message looks like with the parameter in place:

[vix] VixToolsStartProgramImpl: started '"/bin/bash" -c "bash > /tmp/vmware-root/powerclivmware159 2>&1 -c \"uname -a\""', pid 7456
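For completeness, a corrected call would look something like the sketch below. It is the same command as in the example above, with only the script type added:

# Tell Invoke-VMScript which interpreter to use instead of letting it guess
Invoke-VMScript -ScriptText "uname -a" -VM vc01 `
    -GuestUser root -GuestPassword VMware1! `
    -ScriptType Bash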

Dell Compellent iSCSI Configuration

November 20, 2015

I haven’t seen too many blog posts on how to configure Compellent for iSCSI, and there seems to be some confusion about what the best practices for iSCSI are. I hope I can shed some light on it by sharing my experience.

In this post I want to talk specifically about the Windows scenario, such as when you want to use it for Hyper-V. I used Windows Server 2012 R2, but the process is similar for other Windows Server versions.

Design Considerations

All iSCSI design considerations revolve around the networking configuration. The two questions you need to ask yourself are what your switch topology is going to look like and how you are going to configure your subnets. It typically boils down to the two most common scenarios: two stacked switches with one subnet, or two standalone switches with two subnets. I could not find a specific recommendation from Dell on whether it should be one or two subnets, so I assume that both scenarios are supported.

It is worth mentioning that Compellent uses the concept of Fault Domains to group front-end ports connected to the same Ethernet network. This means you will have one fault domain in the one-subnet scenario and two fault domains in the two-subnet scenario.

For iSCSI target port discovery from the hosts, you need to configure a Control Port on the Compellent. A Control Port has its own IP address, and one Control Port is configured per Fault Domain. When a server targets the Control Port IP address, it automatically discovers all ports in the fault domain. In other words, instead of using the IPs configured on the Compellent iSCSI ports, you’ll need to use the Control Port IP for iSCSI target discovery.

Compellent iSCSI Configuration

In my case I had two stacked switches, so I chose to use one iSCSI subnet. This translates into one Fault Domain and one Control Port on the Compellent.

IP settings for iSCSI ports can be configured at Storage Management > System > Setup > Configure iSCSI IO Cards.

To create and assign Fault Domains, go to Storage Management > System > Setup > Configure Local Ports > Edit Fault Domains. From there, select your fault domain and click Edit Fault Domain. On the IP Settings tab you will find the iSCSI Control Port IP address settings.

Host MPIO Configuration

On the Windows Server side, start by installing the Multipath I/O feature. Then go to the MPIO Control Panel and add support for iSCSI devices. After a reboot you will see MSFT2005iSCSIBusType_0x9 in the list of supported devices. This step is important: if you skip it, then when you map a Compellent disk to the hosts you will see multiple copies of the same disk device in Device Manager (one per path) instead of a single disk.
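If you prefer to script these steps, a rough PowerShell equivalent (assuming Windows Server 2012 R2, as used in this post) would be:

# Install the Multipath I/O feature
Install-WindowsFeature -Name Multipath-IO

# Claim iSCSI-attached devices for MPIO; roughly what adding support for
# iSCSI devices in the MPIO Control Panel does
Enable-MSDSMAutomaticClaim -BusType iSCSI

# After the reboot, MSFT2005iSCSIBusType_0x9 should appear in this list
Get-MSDSMSupportedHW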

Host iSCSI Configuration

To connect the hosts to the storage array, open iSCSI Initiator Properties and add your Control Port IP to the iSCSI targets. In the list of discovered targets you should see four Compellent iSCSI ports.
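The discovery step can also be done in PowerShell. The Control Port IP below (10.10.10.10) is made up, so substitute your own:

# Make sure the Microsoft iSCSI Initiator service is running
Start-Service MSiSCSI

# Add the Control Port as a target portal, then list the discovered targets;
# the four Compellent ports in the fault domain should show up here
New-IscsiTargetPortal -TargetPortalAddress 10.10.10.10
Get-IscsiTarget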

The next step is to connect the initiators to the targets. This is where it is easy to make a mistake. In my scenario I have one iSCSI subnet, which means that each of the two host NICs can talk to all four array iSCSI ports. As a result I should have 2 host ports x 4 array ports = 8 paths. To accomplish that, on the Targets tab I have to connect each initiator IP to each target port by clicking the Connect button twice for each target and selecting one initiator IP the first time and the other IP the second time.
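Here is a sketch of the same logic in PowerShell, using the made-up Control Port IP from above and two hypothetical initiator IPs:

# Connect every discovered target once per initiator IP:
# 2 initiator IPs x 4 targets = 8 iSCSI sessions
$controlPort  = '10.10.10.10'
$initiatorIps = '10.10.10.11', '10.10.10.12'

foreach ($target in Get-IscsiTarget) {
    foreach ($ip in $initiatorIps) {
        Connect-IscsiTarget -NodeAddress $target.NodeAddress `
            -TargetPortalAddress $controlPort `
            -InitiatorPortalAddress $ip `
            -IsMultipathEnabled $true -IsPersistent $true
    }
}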

Compellent Volume Mapping

Once all hosts are logged in to the array, go back to Storage Manager and add the servers to the inventory by clicking Servers > Create Server. You should see the hosts’ iSCSI adapters in the list already. Make sure to assign the correct host type; I chose Windows 2012 Hyper-V.

It is also a best practice to create a Server Cluster container and add all hosts to it if you are deploying a Hyper-V or vSphere cluster. This guarantees consistent LUN IDs across all hosts when a LUN is mapped to the Server Cluster object.

From here you can create your volumes and map them to the Server Cluster.

Check iSCSI Paths

To make sure that multipathing is configured correctly, use the "mpclaim" utility to show the I/O paths. As you can see, even though we have 8 connections to the storage array, only 4 paths are reported for each LUN.
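For reference, the typical invocations look like this (the disk number is just an example):

# List all MPIO disks and their load-balance policies
mpclaim -s -d

# Show the individual paths for a specific MPIO disk
mpclaim -s -d 0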

Arrays such as EMC VNX and NetApp FAS use Asymmetric Logical Unit Access (ALUA), where a LUN is owned by only one controller but presented through both. Paths to the owning controller are marked Active/Optimized, and paths to the non-owning controller are marked Active/Non-Optimized and are used only if the owning controller fails.

Compellent is different. Instead of ALUA it uses iSCSI Redirection to move traffic to a surviving controller in a failover situation and does not need to present the LUN through both controllers. This is why you see 4 paths instead of 8, which would be the case if we used an ALUA array.

Windows MPIO with IBM storage

September 17, 2012

IBM mid-range storage systems (like the DS3950) work in active/passive mode. This means that access to each LUN goes through one controller, in contrast to active/active storage, where data can flow between the host and both controllers in round-robin fashion. The redundant path here is used only for failover. The software that provides this failover functionality is called Multipath I/O (MPIO) and has implementations for all operating systems. I’ll describe how to configure the Windows version of MPIO.

Installation

Prior to Windows Server 2008, Microsoft didn’t have its own MPIO implementation, and MPIO was distributed with the IBM DS Storage Manager product. Now you can install MPIO from the “Features” section of Windows Server 2008 Server Manager. After the installation is complete, you will find MPIO configuration options in Control Panel and in Administrative Tools.

IBM storage works well with the default Windows MPIO implementation; however, it’s recommended to install the IBM MPIO device-specific module (DSM) from the Storage Manager installation bundle. In my case the MPIO installation file was called SMIA-WSX64-01.03.0305.0608.

Enable multipathing

Initially you will see two hard drives for each LUN in Device Manager. You can enable MPIO for a particular hardware ID (in other words, per storage system) on the Discover Multi-Paths tab of the MPIO control panel; you can’t do that with per-LUN granularity. After you add the selected devices and reboot, you will see them on the “MPIO Devices” tab, and each LUN will appear as a single hard drive in Device Manager.
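The same can be scripted with mpclaim from an elevated prompt. The hardware ID below is only a placeholder; use the exact vendor/product string shown on the Discover Multi-Paths tab for your array:

# Add the array's hardware ID to the MPIO supported devices list and
# reboot (-r) to finish claiming; the ID string here is hypothetical
mpclaim -r -i -d "IBM     1814      FAStT "

# After the reboot, verify the supported hardware entry
mpclaim -h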

Configure preferred path

MPIO supports several load-balancing policies, which are configured on a per-LUN basis from the MPIO tab of a hard drive in Device Manager. As the Load Balance Policy, select Fail Over Only. Then for each path select which one is Active/Optimized and which is a Standby path. Also mark the active path as Preferred, so that after a failover the LUN fails back to it.
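If you prefer the command line, the policy itself can be set per disk with mpclaim (disk number 0 is just an example, and this applies to disks handled by the Microsoft DSM):

# 1 = Fail Over Only; get the disk number from "mpclaim -s -d"
mpclaim -l -d 0 1

# Confirm the policy and the state of each path for that disk
mpclaim -s -d 0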

Don’t be confused by iSCSI appearing in the figure; it’s the same for pure FC. The figure is just for reference.

Check configuration

When you configure active and passive paths, you assume that the first path listed goes to controller A and the second to controller B. In fact, there is no indication of that on the configuration page, and you can neither confirm nor deny it. The only IDs you see are adapter ports, but they don’t even map to the actual ports on the HBAs.

To check your configuration you need the IBM SMdevices utility, which comes with IBM DS Storage Manager. Run the DS Storage Manager installation, choose Custom Installation and select only the Utilities part. In the SMdevices output you can see which path is preferred for each LUN and whether it’s currently active (In Use):

C:\Program Files\IBM_DS\util>SMdevices
IBM System Storage DS Storage Manager Devices
. . .
\\.\PHYSICALDRIVE1 [Storage Subsystem ITSO5300, Logical Drive 1, LUN 0,
  Logical Drive ID <600a0b80002904de000005a246d82e17>,
  Preferred Path (Controller-A): In Use]

References

The best reference I found on this topic is the IBM Midrange System Storage Hardware Guide (SG24-7676-01), starting at p. 453: “DS5000 logical drive representation in Windows Server 2008”. Also see the Installing and Configuring MPIO guide from Microsoft.

Highly available Windows network infrastructure

February 27, 2012

When the number of computers in a company starts to grow and IT services become critical for company operation, every IT department starts to think about how to make the network infrastructure highly available. If it’s a Windows environment, the first step is usually an additional domain controller. Bringing a second DC up and running is rather simple: run dcpromo and follow the instructions given by the wizard. Then make the additional DC a Global Catalog, so that it can serve authentication requests, by going to Active Directory Sites and Services and checking the Global Catalog option on the General tab of NTDS Settings. Active Directory replication (and FRS for the SYSVOL share) will do the rest.

However, it’s usually not enough. Computers rely on the DNS service to resolve server names, and in case of a primary DC failure your network will be paralyzed. Dcpromo doesn’t automatically install and configure an additional DNS server; you need to do that manually. Moreover, if you use the DHCP service to provide network settings to client computers and it’s located on the same server, you will also have major issues. The problem here is that you can’t have two active DHCP servers giving out the same addresses. But this problem also has a solution.

In the case of DNS, go to Add or Remove Windows Components and find DNS under Networking Services. Install it as AD-integrated. Then on the primary DNS server, for all your forward and reverse lookup zones, add the secondary DNS server’s IP on the Name Servers tab of the zone properties. After that, DNS will automatically replicate all data. Also, don’t forget to add your secondary DNS server to the DHCP configuration, otherwise clients won’t know about it.

When it comes to DHCP, you have the option to use the so-called 80/20 rule to divide a scope between DHCP servers (if you are on the Windows Server 2008 platform, you can also build an HA DHCP cluster). Simply configure your first DHCP server to lease the first 80% of the network’s IP addresses and leave 20% to the second DHCP server. Then, in case of a first-server failure, most computers will already have their IP addresses and you will still have 20% left to hand out. In my case the network is quite small and I split the scope 50/50. Just make the configurations of the two servers identical (reservations, exclusions, scope options, etc.), but configure the scopes with non-overlapping ranges. If you use the 80/20 rule, you want your primary server to lease IP addresses under normal circumstances: if both servers hand out addresses with equal rights, you will quickly exhaust the 20% server, and in case of a primary server failure you won’t have enough addresses left to lease. To address that, tweak the Conflict detection attempts option.
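Below is a sketch of such a split done with netsh (run from PowerShell) for a hypothetical 192.168.1.0/24 scope with a pool of 200 addresses (.10-.209); the server names DHCP1 and DHCP2 are made up:

# Both servers get the same scope definition and full address range
netsh dhcp server \\DHCP1 add scope 192.168.1.0 255.255.255.0 "LAN"
netsh dhcp server \\DHCP1 scope 192.168.1.0 add iprange 192.168.1.10 192.168.1.209
netsh dhcp server \\DHCP2 add scope 192.168.1.0 255.255.255.0 "LAN"
netsh dhcp server \\DHCP2 scope 192.168.1.0 add iprange 192.168.1.10 192.168.1.209

# Each server excludes the other one's portion, so the ranges never overlap:
# DHCP1 keeps 80% (.10-.169), DHCP2 keeps 20% (.170-.209)
netsh dhcp server \\DHCP1 scope 192.168.1.0 add excluderange 192.168.1.170 192.168.1.209
netsh dhcp server \\DHCP2 scope 192.168.1.0 add excluderange 192.168.1.10 192.168.1.169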

Basically, this is it. Of course, you will still have many single points of failure, like the network switch, the UPS, etc. But that topic goes beyond this post.

Caching of roaming profiles

February 9, 2012

Windows has a feature that lets you work with files from a network share while disconnected from the LAN. When you initially mark files to be available offline, they are downloaded to your local computer. You can then work with them even without a network connection, and when the connection becomes available again the files are copied back to the network share.

If you are using roaming profiles, these two technologies can conflict with each other. The rule of thumb for roaming profiles is to always disable offline caching on the profile share; you can do this in the shared folder’s caching (Offline Settings) options.
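If the profiles live in a dedicated share, the same setting can be applied from the command line; the share name Profiles$ here is just an example:

# Disable offline caching on an existing (hypothetical) roaming profile share
net share 'Profiles$' /CACHE:None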

Moving Active Directory roaming profiles to another server

February 9, 2012

Relocating Active Directory roaming profiles can be a tricky task. You have many shared folders whose share-level permissions, unlike NTFS permissions, won’t move to another server if you simply copy the folders. On top of that, you need to change the profile paths in the Active Directory Users and Computers snap-in, and if you have hundreds of users that’s not something you will be happy to do by hand. With these two objectives in mind, let’s move on to the implementation.

Moving shares preserving permissions

I ran into several suggestions on how to do that, like using robocopy, xcopy, permcopy or other tools. I don’t know to what extent they might help; I’d like to suggest a simpler solution. Microsoft has the File Server Migration Toolkit (FSMT). It’s a very basic and limited tool, which means you will probably need to do some work by hand, but it solves the primary problem: copying shares along with their permissions.

FSMT has an additional feature of creating DFS links for you, but I didn’t use it. The GUI is rather intuitive, so there is not much to explain. The particular problem with FSMT is that it changes the target share and folder paths. Say you have a share named ~UNAME$ located on server CONTOSO_PDC. After the move you will end up with a share named ~UNAME$_contoso_pdc$, which is not what we want in our case. Same thing for the target folder: for example, if the source folder for the share is D:\Profiles\UNAME, after migration you’ll get D:\Profiles\contoso_pdc\~UNAME$. Apart from the additional folder in between, the last part of the source path is changed to the share name in the target path (~UNAME$ instead of UNAME).

In my case I had to revert all these changes back to the original naming. The trick here is to create the FSMT project, add the server and shares to it and then exit without performing the actual move. Then open the project’s .xml file and correct all paths with search/replace. Since I had complicated share names, I used the wildcard replace-with-substitution feature in MS Office Word. For example, to change a target path like D:\Shared\~USER\~UNAME$ to D:\Shared\~USER\UNAME, I used the following masks for search and replace:

D:\\Shared\\~USER\\\~(*)\$

D:/Shared/~USER/\1

Here the word processor searches for the first string and uses the text captured in parentheses as the substitution for the special sequence \1.
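If you’d rather not use Word, a rough PowerShell equivalent of the same substitution could look like this (the project file name and the paths are just the examples from above and will differ in your environment):

# Rewrite target paths in the FSMT project file: strip the leading "~"
# and the trailing "$" from the last path component
(Get-Content 'C:\FSMT\project.xml') `
    -replace 'D:\\Shared\\~USER\\~(\w+)\$', 'D:\Shared\~USER\$1' |
    Set-Content 'C:\FSMT\project.xml'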

Changing profile paths in Active Directory

Here you also have several ways to accomplish the task. You can use the ADModify tool, but I simply wrote a PowerShell script, which I share below as is. I believe it’s mostly self-explanatory.

# Bind to the root of the current domain
$ldapPath = "LDAP://ou=Users and Computers,dc=contoso,dc=com"
$objDomain = New-Object System.DirectoryServices.DirectoryEntry($ldapPath)

$objSearcher = New-Object System.DirectoryServices.DirectorySearcher
$objSearcher.SearchRoot = $objDomain
$objsearcher.Filter = ("(objectCategory=User)")
$colResult = $objSearcher.FindAll()

foreach($objResult in $colResult) {
	$user = $objResult.GetDirectoryEntry()
	write-host "For user" $user.cn ":"

	$profilePath = $user.ProfilePath
	$parts = $profilePath.ToString().Split("\")

	# Identify the profile type (XP = 0 or Windows 7 = 1); reset the value
	# each iteration so it is not carried over from the previous user
	$profType = -1
	if($parts.Length -eq 4) { $profType = 0 }
	elseif($parts.Length -eq 5) { $profType = 1 }

	# Constructing new profile paths
	if($profilePath) {
		if($profType -eq 0) {
			$newProfPath = "\\SERVERNAME\" + $parts[3];
			$newProfDirPath = "D:\~PROF\" + $user.sAMAccountName
			$newProfShareName = $parts[3]
		}
		# Windows 7 profiles do not have individual shares. There is
		# one share for all roaming profiles.
		elseif($profType -eq 1) {
			$newProfPath = "\\SERVERNAME\Profiles\" + $parts[4]
		}
	}

	# Constructing new home directory paths
	$homeDirectory = $user.homeDirectory
	$parts = $homeDirectory.ToString().Split("\")
	if($homeDirectory) {
		$newHomePath = "\\SERVERNAME\" + $parts[3];
		$newHomeDirPath = "D:\~USER\" + $user.sAMAccountName
		$newHomeShareName = $parts[3]

	}

	if($profilePath) {
		# Changing profile path
		write-host "Changing profile path from" `
			$user.ProfilePath "to" $newProfPath
		$user.ProfilePath = $newProfPath
	}
	if($homeDirectory) {
		# Changing home directory path
		write-host "Changing home directory path from" `
			$user.homeDirectory "to" $newHomePath
		$user.homeDirectory = $newHomePath
	}
	# Commit changes
	$user.setinfo()
}

VMware Tools update issue

September 20, 2011

Recently I decided to update VMware Tools on my VMs because most of them showed “Out of date” in the VI client. For some reason several Linux VMs didn’t update, even though the VI client showed no error. I tried to update from inside the VM by running /usr/sbin/vmware-tools-upgrade, and it reported that there was not enough space in /tmp. I enlarged /tmp from 128 to 512 MB and the update went fine this time.

Take into account that:

  1. A Windows VM will most likely be rebooted after the update.
  2. On Linux, VMware Tools may not start automatically after the update. If that’s the case, start it manually by calling /etc/init.d/vmware-tools start.
  3. Network interfaces in Linux may go down after the VMware Tools update. Bring them up manually.

Reducing OS boot time

January 8, 2011

Every time you turn on or reboot your PC, you sit in front of it and stupidly wait for it to load the OS. If the boot takes long, it becomes annoying, especially if you need to reboot several times. I measured the boot time of my Windows XP PC and it was 4m 13s from power-on to the moment the HDD LED stopped blinking like insane. My feeling is that 4 minutes is too much.

The first thing I did was replace the ancient Kaspersky Antivirus 6 with Kaspersky Crystal, which dropped the boot time to 3m 36s. Then I rebuilt my RAID 1, which had fallen apart some time ago; that further reduced the boot time to 2m 33s.

That’s about a 40% reduction from the baseline and seems good enough at this point.

Disable password prompt

December 3, 2010

If you have an empty password for your primary Windows XP account and for some reason still see the annoying authentication window each time you turn your PC on, run:

control userpasswords2

in cmd and uncheck the checkbox that says “Users must enter a user name and password to use this computer”.