
Posted on May 3, 2015 in Storage | 10 comments

Updating LSI Firmware VSAN

Just to catch you up on why I’m even discussing this, let me recap how I arrived at this point:

  • In March 2014, I decided to run VSAN 5.5 using Dell’s H310 I/O controller.  Under the hood, this controller is an LSI 2008 (B2).
  • Dell’s stock firmware throttles this controller down to a pathetic queue depth of 25, where an LSI 2008 should have a queue depth of 600.
  • After following Vladan Seget’s blog on How-to Flash Dell Perc H310 with IT Firmware To Change Queue Depth from 25 to 600, I used the LSI 9211-8i firmware to turn this card from the laughing stock of I/O controllers into something respectable.
  • In Feb 2015, I installed vSphere 6.0 and upgraded to VSAN 6.0.  Holy cow, it sure took a long time to boot up vSphere.  I could tell during boot-up that the famous “mptsas2” was to blame.
  • I ran across Duncan’s blog on Updating LSI firmware through the ESXi commandline, where he was upgrading his LSI 2308 card, and decided to try this on the Dell H310 I/O controllers running the LSI 9211-8i firmware.

1. Grab your files.  There are 3 files you will need to complete this upgrade.  Download the zip packages below and extract the following files (package name, with the file you need in parentheses):

  • 9211_8i_Package_P20_IR_IT_Firmware_BIOS_for_MSDOS_Windows (92118ir.bin)
  • Installer_P20_for_Vmware_ESX50 (mptsas2.rom)
  • Installer_P20_for_Vmware_ESX50 (vmware-esx-sas2flash.vib)

2. SCP those 3 files to the /tmp directory of the host you are upgrading.
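If you’re copying from a workstation with OpenSSH, all three files can go over in one shot.  A minimal sketch, assuming SSH is enabled on the host and “esxi01” stands in for your host’s name:

```shell
#!/bin/sh
# Copy all three files to the host's /tmp in one command.
# "root@esxi01" is a placeholder -- substitute your own host.
ESXI_HOST="root@esxi01"
FILES="92118ir.bin mptsas2.rom vmware-esx-sas2flash.vib"

for f in $FILES; do
  echo "staging $f for $ESXI_HOST:/tmp/"
done
# Actual transfer (run only when the host is reachable):
# scp $FILES "$ESXI_HOST":/tmp/
```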



3. Place the host in maintenance mode from vCenter or via the CLI:

esxcli system maintenanceMode set -e yes

4. Install the vib:

esxcli software vib install --force -v /tmp/vmware-esx-sas2flash.vib


5. Verify version 20 has been installed:

esxcli software vib list | grep sas2flash


6. Upgrade the LSI 9211-8i IR firmware:

/opt/lsi/bin/sas2flash -o -f /tmp/92118ir.bin -b /tmp/mptsas2.rom


7. Verify that version 20 was installed:

/opt/lsi/bin/sas2flash -list


8. Reboot the host:

esxcli system shutdown reboot -r "Update firmware for LSI"



You should now notice that the vSphere startup time has greatly improved!




Posted on Dec 22, 2014 in Storage | 0 comments

Hard Drive Health Status of Permanent Disk Failure in VSAN


With all that spinning metal, it is only a matter of time before you experience a hard drive failure.  Fortunately for you, you’re using VSAN, and hard drive failures are easy peasy.  Here we go!

Browse to the vCenter Web Client, Select your cluster > Manage > Disk Management


Next, you’re going to want to remove that puked drive from the disk group.  It should be easy to spot the failed drive…

*spoiler alert*

Look for the icon with the red !

Select the disk group of the failed drive and select the Remove Disk icon.



Adding a drive is just as easy as removing one, depending on the scenario you got yourself into when you configured the I/O controller.  The two paths are:

1. If the I/O controller is in pass-through mode, then it’s just plug and play.
2. If the I/O controller doesn’t support pass-through mode, then you will need to re-create each replaced drive as its own RAID 0 group.

Since I’m lazy, I selected an I/O card that supports pass-through mode.  I’ve replaced the drive and will now need to add it back to the disk group.

Select the original disk group from your cluster > Manage > Disk Management, select the Add Drive icon, and choose the new drive.
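If you’d rather stay in the host shell, the same add can be sketched with esxcli.  The device IDs below are placeholders, not real ones; on a live host you would pull them from esxcli storage core device list first:

```shell
#!/bin/sh
# Sketch: re-add a replaced magnetic disk to an existing disk group.
# Device IDs are placeholders -- list real ones with: esxcli storage core device list
SSD_DEV="naa.5000000000000001"   # the disk group's existing SSD
HDD_DEV="naa.5000000000000002"   # the replacement magnetic disk

if command -v esxcli >/dev/null 2>&1; then
  # Pairing the new HDD with the group's SSD puts it back into that disk group.
  esxcli vsan storage add -s "$SSD_DEV" -d "$HDD_DEV"
else
  echo "esxcli not found (not an ESXi shell); command shown for reference only"
fi
```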



BAM! Within a few minutes, I’m fully recovered and healthy again.  Come on VSAN – I beg you, make things just a little tougher next time!  This is just too easy.


Posted on Dec 21, 2014 in Storage | 0 comments

SSD is Operational Dead or Error in VSAN



Oh shucks, if you are just noticing that your available VSAN capacity decreased by an entire disk group, then you probably lost an SSD drive in your VSAN cluster.  By design, if an SSD drive dies, then the entire disk group goes for a coffee break.  Let’s start off by verifying how clairvoyant I am — time for a heat check.

Browse to the vCenter Web Client, Select your cluster > Manage > Disk Management 

If you see “Dead or Error” under Operational then your SSD is hurting.


I know this part might be painful for most, but now you’re on the first step of recovery.  Just know, if you have at least 2 other VSAN hosts, then you’re safe.  I’m assuming all your VMs have a failure policy of at least 1 host.  Unless you’re the type that enjoys freebasing, none of your VMs were hanging around with a failure policy of 0.

From here, remove the disk group by selecting the Remove Disk Group icon under Disk Groups.
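The same removal can also be done from the host shell by naming the disk group’s SSD.  A minimal sketch — the device ID is a placeholder, and on a real host you’d look it up with esxcli vsan storage list:

```shell
#!/bin/sh
# Sketch: remove a disk group from the ESXi shell by naming its SSD.
# The device ID is a placeholder -- list real ones with: esxcli vsan storage list
SSD_DEV="naa.5000000000000001"

if command -v esxcli >/dev/null 2>&1; then
  # Removing the SSD evicts the entire disk group it fronts.
  esxcli vsan storage remove -s "$SSD_DEV"
else
  echo "esxcli not found (not an ESXi shell); command shown for reference only"
fi
```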



Now that you’ve removed the disk group, you can replace the SSD and recreate the disk group like nothing ever happened.  If you don’t have any SSD drives to replace it with, then you may want to take the time to see if the SSD is recoverable.  If the drive is still readable by the host, then you might be in luck.  The SSD drive may just have gone suicidal on you and blasted away its own partition table.

I only wish I could yank the SSD drive out and bring it back to life by blowing on it like a Nintendo cartridge.  Bro, I don’t care what you say – Excitebike was the best.



Posted on Nov 3, 2014 in Infrastructure, Storage, Uncategorized | 0 comments

VSAN and Jumbo Frames


Error Message:

Call “FileManager.MakeDirectory” for object “FileManager” on vCenter Server “vCenter” failed. Failed to connect to component host 5419e4e5-daed-b7e8-3a07-00266cf65634. An unknown error has occurred. Call “FileManager.MakeDirectory” for object “FileManager” on vCenter Server “vCenter” failed. Failed to connect to component host 53229f42-40ff-aa65-8444-00266cf89748


Do you know the feeling when you experience a really awesome technology and you become so excited you just can’t hide it?  For me, that is VMware’s VSAN.  I was an early adopter of VSAN and grabbed those beta bits as soon as they hit the wire.  During the beta, it was advised to use Jumbo Frames (aka increasing the MTU size past 1500) for best performance.  I took the bait, and why not?  It was easy enough to set up.

This past weekend, I migrated my VSAN cluster over to a Dell PowerConnect 6248 Layer 3 switch.  There are two places to configure Jumbo Frames on the Dell Powerconnect: The VLAN and the physical interface.

To recap, I have set:

  1. 9000 MTU on the VSAN VMkernel on the vSphere host.
  2. 9000 MTU on the VSAN VLAN on the Dell PowerConnect
  3. 9000 MTU on the VSAN Physical Interface on the Dell PowerConnect

Lastly, I tested VSAN out by writing a folder to the VSAN datastore but got back a nasty error which read:

Call “FileManager.MakeDirectory” for object “FileManager” on vCenter Server “vCenter” failed. Failed to connect to component host 5419e4e5-daed-b7e8-3a07-00266cf65634. An unknown error has occurred. Call “FileManager.MakeDirectory” for object “FileManager” on vCenter Server “vCenter” failed. Failed to connect to component host 53229f42-40ff-aa65-8444-00266cf89748


Great, now I can’t even write a folder to the VSAN datastore, so I figuratively yell into the sky, “Come on, VSAN Gods! Why!?”

Once I got over the self-pity stage, I noticed Dell uses a different type of MTU logic on the physical switch interface.  In the rest of the world, when you configure an MTU of 9000, you mean it.  However, with Dell switches, an MTU of 9000 is actually (9 * 1024) = 9216.  Ugh, so now I have set:

  1. 9000 MTU on the VSAN VMkernel on the vSphere host.
  2. 9000 MTU on the VSAN VLAN on the Dell PowerConnect
  3. 9216 MTU on the VSAN Physical Interface on the Dell PowerConnect
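For reference, the vSphere side of that list can be set from the host shell; the interface and vSwitch names below are placeholders for your own VSAN VMkernel and vSwitch, and the PowerConnect syntax (shown as comments, since it runs on the switch) is approximate:

```shell
#!/bin/sh
# vSphere side -- vmk1 / vSwitch0 are placeholders for your VSAN VMkernel and vSwitch.
if command -v esxcli >/dev/null 2>&1; then
  esxcli network vswitch standard set -m 9000 -v vSwitch0
  esxcli network ip interface set -m 9000 -i vmk1
else
  echo "esxcli not found (not an ESXi shell); commands shown for reference only"
fi

# Dell's interface MTU is the full frame size, not the payload:
FRAME_MTU=$((9 * 1024))   # = 9216

# Dell PowerConnect side (switch CLI, approximate syntax, shown as comments):
#   console(config)# interface ethernet 1/g1   <- host-facing port, placeholder
#   console(config-if-1/g1)# mtu 9216          <- full frame size, not payload
```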

BAM! The VSAN datastore is rocking and I can now write to it.  However, this scenario prompted some reflection on how MTU actually impacts VSAN.  I did some further testing and came up with the following conclusions:

  1. If any host in a VSAN cluster has a mismatched MTU size, NOTHING can write to the VSAN datastore.  Even one host with the wrong MTU set will prevent VSAN from working.
  2. Even with mismatched MTUs, when one verifies the Network Status (vCenter > Virtual SAN > General), it will show Normal.  However, this doesn’t verify MTU, just IP connectivity.  To test whether the MTU is correct, use the VSAN VMkernel’s MTU size and issue a vmkping -s <VSANvmkernel_mtu_setting> <Other-VSANvmkernel-interfaces-in-cluster>
  3. VSAN performs about the same with or without Jumbo Frames configured.
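One detail worth adding to that vmkping test: -s takes the ICMP payload size, so to exercise a 9000-byte MTU you subtract the 28 bytes of IP and ICMP headers, and -d forbids fragmentation so an undersized MTU anywhere on the path fails loudly.  A sketch, with the target IP as a placeholder for another host’s VSAN VMkernel:

```shell
#!/bin/sh
# Validate jumbo frames end-to-end from an ESXi shell.
MTU=9000
PAYLOAD=$((MTU - 28))   # 28 bytes = 20 (IP header) + 8 (ICMP header)
TARGET="192.168.50.12"  # placeholder: another host's VSAN VMkernel IP

echo "testing MTU $MTU with a $PAYLOAD-byte payload"
if command -v vmkping >/dev/null 2>&1; then
  # -d sets "don't fragment"; success means jumbo frames work across the whole path
  vmkping -d -s "$PAYLOAD" "$TARGET"
else
  echo "vmkping not found (not an ESXi shell); command shown for reference only"
fi
```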

In conclusion, would I advise configuring Jumbo Frames with VSAN?  No.  Unless you’re the type who prefers all risk and no reward…




Posted on Oct 1, 2014 in Infrastructure, Storage, Uncategorized | 4 comments

Solved: The Case of the Missing Snapshot and the Failed VDP Backup

You’ve been working in the lab creating a plethora of virtual goodness and figure you’ll do a backup so future generations will be able to enjoy the fruits of your labor.  You download the vSphere Data Protection appliance, deploy it, schedule a backup job, and get mixed success.  Looking through your log of failed jobs, you see “VDP: Operation failed due to existing snapshot”.

VDP error message

You check the Snapshot Manager but find no snapshots.  You even click ‘Consolidate’ – which completes successfully – and relaunch the backup job, but it fails again.

“NOOOOO!!!!!” You cry out in unimaginable frustration.

Take a deep breath.  This is a quick and easy fix and you’ll soon be the virtualization hero again.  There are just some extra files in your VM folder that VDP doesn’t know what to do with.

  • Open your web or C# vCenter client and find the datastore your failing VM is attached to.
  • Open the datastore browser for that datastore and find your failing VM’s folder.
  • Leave the datastore browser window open, go back to the web or C# client, and perform a Storage vMotion on the failing VM to another datastore.
  • Upon completion, refresh your datastore browser.  You should see at least one file left over.  Delete the original VM folder.
  • If the folder fails to delete, shut down the VM and try again.
  • Once the original folder has been successfully deleted, kick off the backup job again.
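Before the Storage vMotion, you can spot the leftovers yourself by looking for orphaned snapshot artifacts in the VM’s folder.  A sketch — the datastore and VM names are placeholders, and the patterns cover the usual snapshot suspects:

```shell
#!/bin/sh
# Look for orphaned snapshot files in a VM folder on an ESXi host.
# The path is a placeholder -- substitute your own datastore and VM folder.
VM_DIR="/vmfs/volumes/datastore1/MyVM"
# Typical snapshot leftovers: delta disks, snapshot memory and metadata files
SNAP_PATTERN='(-delta\.vmdk|-sesparse\.vmdk|\.vmsn|\.vmsd)$'

if [ -d "$VM_DIR" ]; then
  ls "$VM_DIR" | grep -E "$SNAP_PATTERN"
else
  echo "no such folder on this machine; pattern shown for reference only"
fi
```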

VDP error


Congratulations!  You are a true virtualization ninja.




