
Posted by on Mar 8, 2015 in Compute, Infrastructure | 1 comment

vSphere 6 Upgrade Install Error NSX


<CONFLICTING_VIBS ERROR: Vibs on the host are conflicting with vibs in metadata. Remove the conflicting vibs or use Image Builder to create a custom ISO providing newer versions of the conflicting vibs. [‘VMware_bootbank_esx-dvfilter-switch-security_5.5.0-0.0.2107100’, ‘VMware_bootbank_esx-vsip_5.5.0-0.0.2107100’, ‘VMware_bootbank_esx-vxlan_5.5.0-0.0.2107100’]>

I spent the day upgrading all my hosts from vSphere 5.5 to vSphere 6.0.  It felt right, it felt like it was time.  However, I must admit – I was not successful at first.  Did you check out that wordy error above? Yikes!

What I could make out from the upgrade error is that something about the vSphere 6 upgrade wasn’t jibing with NSX 6.1.2.  It appeared that if I could “un-prep” this host, then I could be on my way.  Fortunately, it is way easy to remove the VIBs that NSX installs during a host prep.

Once your host is back online, SSH to it and issue the following commands:
~ # esxcli software vib remove -n esx-vxlan

Removal Result
Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
Reboot Required: true
VIBs Installed:
VIBs Removed: VMware_bootbank_esx-vxlan_5.5.0-0.0.2107100
VIBs Skipped:
~ #
~ # esxcli software vib remove -n esx-dvfilter-switch-security
Removal Result
Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
Reboot Required: true
VIBs Installed:
VIBs Removed: VMware_bootbank_esx-dvfilter-switch-security_5.5.0-0.0.2107100
VIBs Skipped:
~ # esxcli software vib remove -n esx-vsip
Removal Result
Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
Reboot Required: true
VIBs Installed:
VIBs Removed: VMware_bootbank_esx-vsip_5.5.0-0.0.2107100
VIBs Skipped:
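
Before you kick off the upgrade again, it’s worth a quick sanity check that the NSX VIBs are really gone; no output from the command below means all three were removed. And don’t forget the reboot each removal asked for:

~ # esxcli software vib list | grep -E 'esx-vxlan|esx-vsip|esx-dvfilter-switch-security'
~ #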

That’s it! Go ahead and start that upgrade again!

P.S. I’m sure VMware will have a better method of upgrading to vSphere 6 where NSX is installed.  However, we early adopters sometimes don’t get to ride on easy street.


Posted by on Dec 22, 2014 in Storage | 0 comments

Hard Drive Health Status of Permanent Disk Failure in VSAN


With all that spinning metal, it is only a matter of time before you experience a hard drive failure.  Fortunately for you, you’re using VSAN, and hard drive failures are easy peasy.  Here we go!

Browse to the vCenter Web Client and select your cluster > Manage > Disk Management.


Next, you’re going to want to remove that puked drive from the disk group.  It should be easy to spot the failed drive…

*spoiler alert*

Look for the icon with the red !

Select the disk group of the failed drive and select the Remove Disk icon.
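
If you’d rather do this from the shell, esxcli can pull the same failed drive out of the disk group; a rough sketch, where the naa identifier is a placeholder you’d first look up in the list output:

~ # esxcli vsan storage list
~ # esxcli vsan storage remove -d naa.xxxxxxxxxxxxxxxx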

 


Adding a drive is just as easy as removing one, depending on the scenario you got yourself into when you configured the I/O controller.  The two paths are:

1.  If the I/O controller is in pass-through mode, then it’s just plug and play.
2. If the I/O controller doesn’t support pass-through mode, then you will need to re-create each replaced drive as its own RAID 0 group.

Since I’m lazy, I selected an I/O card that does pass-through mode.  I’ve replaced the drive and will now need to add it back to the disk group.

Select the original disk group from your cluster > Manage > Disk Management, then select the Add Drive icon and the new drive.
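
Once the new drive is in, a quick list from the shell is a handy sanity check; the new naa device should show up under the original disk group with In CMMDS: true (field names may vary slightly by build):

~ # esxcli vsan storage list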


 

BAM! Within a few minutes, I’m fully recovered and healthy again.  Come on VSAN – I beg you, make things just a little tougher next time!  This is just too easy.


Posted by on Dec 21, 2014 in Storage | 0 comments

SSD Operational Status of Dead or Error in VSAN


Oh shucks, if you’re just noticing your available VSAN capacity decreased by an entire disk group, then you probably lost an SSD drive in your VSAN cluster.  By design, if an SSD drive dies, the entire disk group goes on a coffee break.  Let’s start off by verifying how clairvoyant I am; time for a heat check.

Browse to the vCenter Web Client and select your cluster > Manage > Disk Management.

If you see “Dead or Error” under Operational, then your SSD is hurting.


I know this part might be painful for most, but now you’re on the first step of recovery.  Just know, if you have at least 2 other VSAN hosts, then you’re safe.  I’m assuming all your VMs have a failure policy of at least 1 host.  Unless you’re the type that enjoys freebasing, none of your VMs were hanging around with a failure policy of 0.

From here, remove the disk group by selecting the Remove Disk Group icon under Disk Groups.
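
For the CLI-inclined, removing the dead SSD with esxcli tears down its whole disk group in one shot; a sketch, with the naa identifier as a placeholder for your actual SSD device:

~ # esxcli vsan storage list
~ # esxcli vsan storage remove -s naa.xxxxxxxxxxxxxxxx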


 

Now that you’ve removed the disk group, you can replace the SSD and recreate the disk group like nothing ever happened.  If you don’t have any SSD drives to replace it with, then you may want to take the time to see if the SSD is recoverable.  If the drive is still readable by the host, then you might be in luck.  The SSD drive may have just gone suicidal on you and blasted away its own partition table.
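
If you want to check whether the partition table survived, partedUtil can read it right on the host; the device name below is a placeholder, so list /dev/disks/ first to find the real one:

~ # ls /dev/disks/
~ # partedUtil getptbl /dev/disks/naa.xxxxxxxxxxxxxxxx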

I only wish I could yank the SSD drive out and bring it back to life by blowing on it like a Nintendo cartridge.  Bro, I don’t care what you say – Excitebike was the best.



Posted by on Nov 22, 2014 in Infrastructure | 0 comments

Stuck at the Loading module mptsas screen


So you’re booting vSphere up and you’re stuck at the “Loading module mptsas” screen.

Although this isn’t common, what do you do to get around it?

Here are some possible causes:

1.  Bad hard drive

2. I/O controller hardware issue

3. I/O controller firmware issue

I faced this issue recently and sure enough, as soon as I took one of the hard drives out, vSphere booted up just fine!  I ran a health check scan on the drive and the puppy was sick!


Posted by on Nov 3, 2014 in Infrastructure, Storage, Uncategorized | 0 comments

VSAN and Jumbo Frames


Error Message:

Call “FileManager.MakeDirectory” for object “FileManager” on vCenter Server “vCenter” failed. Failed to connect to component host 5419e4e5-daed-b7e8-3a07-00266cf65634. An unknown error has occurred. Call “FileManager.MakeDirectory” for object “FileManager” on vCenter Server “vCenter” failed. Failed to connect to component host 53229f42-40ff-aa65-8444-00266cf89748

 

Do you know the feeling when you experience a really awesome technology and you become so excited you just can’t hide it? Yeah… so for me, that is VMware’s VSAN.  I was an early adopter of VSAN and grabbed onto those beta bits as soon as they hit the wire.  During the beta it was advised to use Jumbo Frames (aka increasing the MTU size past 1500) for best performance.  I took the bait, and why not? It was easy enough to set up.

This past weekend, I migrated my VSAN cluster over to a Dell PowerConnect 6248 Layer 3 switch.  There are two places to configure Jumbo Frames on the Dell PowerConnect: the VLAN and the physical interface.

To recap, I have set:

  1. 9000 MTU on the VSAN VMkernel on the vSphere host.
  2. 9000 MTU on the VSAN VLAN on the Dell PowerConnect.
  3. 9000 MTU on the VSAN physical interface on the Dell PowerConnect.

Lastly, I tested VSAN out by writing a folder to the VSAN datastore, but I got back a nasty error which read:

Call “FileManager.MakeDirectory” for object “FileManager” on vCenter Server “vCenter” failed. Failed to connect to component host 5419e4e5-daed-b7e8-3a07-00266cf65634. An unknown error has occurred. Call “FileManager.MakeDirectory” for object “FileManager” on vCenter Server “vCenter” failed. Failed to connect to component host 53229f42-40ff-aa65-8444-00266cf89748

 

Great, now I can’t even write a folder to the VSAN datastore, so I figuratively yell into the sky: Come on, VSAN gods! Why!??

Once I got over the self-pity stage, I noticed Dell uses a different type of MTU logic on the physical switch interface.  In the rest of the world, when you configure an MTU of 9000, you mean it.  With Dell switches, however, the physical interface MTU needs to be set to (9 * 1024) = 9216 to actually pass 9000-byte packets.  Ugh, so now I have set:

  1. 9000 MTU on the VSAN VMkernel on the vSphere host.
  2. 9000 MTU on the VSAN VLAN on the Dell PowerConnect.
  3. 9216 MTU on the VSAN physical interface on the Dell PowerConnect (see the config sketch below).
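
For reference, the jumbo frame setting on a PowerConnect 62xx physical interface looks roughly like this (1/g1 is a placeholder for whichever ports uplink your VSAN hosts):

console# configure
console(config)# interface ethernet 1/g1
console(config-if-1/g1)# mtu 9216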

BAM! The VSAN datastore is rocking and I can now write to it.  However, this scenario prompted some reflection on how MTU actually impacts VSAN.  I did some further testing and came up with the following conclusions:

  1. If any host in a VSAN cluster has a mismatched MTU size, NOTHING can write to the VSAN datastore. Even one host with the wrong MTU will prevent VSAN from working.
  2. Even with mismatched MTUs, when you verify the Network Status (vCenter > Virtual SAN > General), it will show Normal. However, this doesn’t verify MTU, just IP connectivity. To test whether the MTU is correct, use vmkping to hit the other VSAN VMkernel interfaces in the cluster with jumbo-sized packets (see the sketch after this list for a concrete run and one size caveat).
  3. VSAN performs about the same with or without Jumbo Frames configured.
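
A concrete run of that test, with placeholder values (vmk2 as the VSAN VMkernel and 10.0.0.12 as a peer host’s VSAN VMkernel IP): the -d flag disables fragmentation so the ping only succeeds if the full jumbo frame fits, and the payload is 8972 rather than 9000 because the IP and ICMP headers eat 28 bytes:

~ # vmkping -I vmk2 -d -s 8972 10.0.0.12

If that comes back clean from every host to every other host, the jumbo path is good end to end.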

In conclusion, would I advise configuring Jumbo Frames with VSAN?  No.  Unless you’re the type who prefers all risk and no reward…

 

 
