
    ESXi recipe for HPE servers with different CNA/NIC cards. The versions listed are validated and mostly align with the HPE/VMware recommendations as of this writing. Please feel free to update via a GitHub pull request if you have any new hardware/cards that you have validated.

GitHub repository: HPE-ESXi-FW-Driver-Recipe

FYI:- Sample from ESXi-65U1.md (/github.com/in4sometech/HPE-ESXi-FW-Driver-Recipe/blob/master/ESXi-65U1.md)


FYI:- Previous blog post with an example of the older HPE recipe format: /in4sometech.com/2016/05/04/vmware-and-hpe-recipe-depot/

ESXi Bash script for network packet capture using the pktcap-uw utility

ESXi 6.0 and 6.5 have been showing many random network disconnection issues. It has been tough to run packet captures at the right time, because the issues are very sporadic in nature. The shell script below captures the traffic on the ethernet port of a VM; the commands can be commented out or updated as per the requirement. A bare-bones sketch of the same approach is also included after the GitHub link below.

Notes:-

  • Comment out a command with “#” if you don’t want that specific command to run.
  • Log in over SSH, go to “cd /tmp”, and create a new .sh file using vi or your favorite tool.
  • Run it with “sh script.sh” and follow the prompts.
  • “CTRL+C” stops the capture and saves it to the /tmp directory.

Script-V1

 

GitHub :- /github.com/in4sometech/in4sometech-coding/blob/master/bash-scripts/esxi-packet-capture.sh 
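
For reference, here is a minimal sketch of this kind of capture flow. This is not the actual Script-V1 from the repository; the prompts, the output path, and the choice of capturing by switchport ID are assumptions, so adjust to suit your environment.

#!/bin/sh
# Minimal sketch of an ESXi packet-capture helper built around pktcap-uw.
# Not the actual Script-V1; prompts, paths, and options are assumptions.

# List the running VMs so we can pick the one to capture
echo "VMs running on this host:"
esxcli network vm list

echo "Enter the World ID of the VM to capture:"
read WORLDID
esxcli network vm port list -w "$WORLDID"

echo "Enter the Port ID shown above:"
read PORTID

OUTFILE="/tmp/capture-port-${PORTID}-$(date +%Y%m%d-%H%M%S).pcap"

echo "Capturing on switchport $PORTID ... press CTRL+C to stop."
trap 'echo ""' INT   # let CTRL+C stop pktcap-uw without killing the script
pktcap-uw --switchport "$PORTID" -o "$OUTFILE"

echo "Capture saved to $OUTFILE"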


                 I never thought I would be writing a small blog on losing my data, which is kind of bad. As of 07/07, my two WD Red 5TB drives, which were in a RAID-1 configuration inside my two-bay QNAP TS-251 NAS, are dead, in the sense that they are not powering on, and with them went the data. I am usually good at backing up data to multiple places, which I did. But I lost some Tier-2/3 data (NFS shares, VMware images, ISOs, tools, software, backups, VMDKs) that I thought would be fine with the RAID-1. So how did I lose both drives at the same time through some funky mistake?
                 It started with me moving to a new place. I did a clean shutdown of my QNAP TS-251 and removed all power and ethernet cables. Here comes the mystery: I had three Intel NUCs (two Skull Canyons plus one 6i5SYH) connected to the same power strip coming from the UPS. I packed all three NUCs in one package and then tried to find their power cables/adapters. Since I had already removed all the cables, I used my due diligence (at least I tried) and found that two power adapters looked identical (Pic-1), so I moved them to the NUC package thinking they had to be from the Skull Canyons. Interestingly, I found the power adapters are not branded Intel but a third-party company, Delta, which is not so uncommon. It was now easy to find the adapter for the third NUC, the 6i5SYH: I found another adapter from the same manufacturer (Delta) and moved it to the NUC bag. The remaining power adapter went with the QNAP NAS.
                Fast forward a week: I unpacked my NAS in the new place, connected the power adapter, and powered it on. Boom, the NAS would not power on. I tried different ports, and after some troubleshooting I realized that the QNAP NAS power input is rated at 12V, but my adapter shows an output of 19V. This confused me, because I thought that was the only power adapter in the NAS package. I went back and found that the third power adapter in my NUC package was indeed 12V. Still confused, I quickly opened Google/YouTube, watched a couple of unboxing videos, and found my grave mistake: the third adapter I thought was the Intel NUC's was indeed QNAP's. Just for this NUC model (6i5SYH), Intel used a different manufacturer and labeling (Pic-2), but coincidentally QNAP uses the same power adapter manufacturer (Delta, Pic-3) for its NAS. I quickly swapped the power adapters, but the NAS still would not turn on. After removing the drives the NAS itself works fine, yet as soon as I pop the drives back in, the NAS shuts off. I attached the drives to two different desktops and confirmed the Red drives were fried by the extra voltage. Interestingly, even though the drives died without any warning, the NAS itself is still working fine.
                 The whole moral is that, even for home labs, assumption is really bad. I am positive that if this had happened to a desktop or an SMB NAS, the drives might have been protected by the power supply unit or the motherboard, but being a low-cost home NAS, the voltage is pushed directly to the drives. Anyway, my drives were under warranty and I even got certified disks back from WD. Now it's time to slowly start rebuilding the home lab.

 

FYI:-

1) Delta power adapter for the Intel Skull Canyon NUCs (Pic-1)


 

2) Power adapter for the Intel NUC 6i5SYH (Pic-2)


 

3) Delta power adapter for the QNAP NAS (Pic-3)


 

 


PowerCLI script to get HPE driver and firmware versions. It is hard-coded for the BL460c Gen8 and Gen9 models and can be changed as per requirement.

# Collect NIC driver/firmware and FC driver info for HPE BL460c Gen8/Gen9 hosts in a cluster
$vmhosts = Get-Cluster "ClusterName" | Get-VMHost

$report = @()

foreach ($ESXHost in $vmhosts) {

    $HWModel = Get-VMHost $ESXHost | Select Name, Model
    $esxcli  = Get-EsxCli -VMHost $ESXHost

    if ($HWModel.Model -eq "ProLiant BL460c Gen8") {

        # vmnic0 driver/firmware details via esxcli
        $info = $esxcli.network.nic.get("vmnic0").DriverInfo | Select Driver, FirmwareVersion, Version

        $ModuleName = "$($info.Driver)"
        $Firmware   = "$($info.FirmwareVersion)"
        $Driver     = "$($info.Version)"
        # Look up the lpfc FC driver VIB version on the Gen8 blades
        $lpfc = $esxcli.software.vib.list() | Where { $_.Name -eq "lpfc" }

        $report += $info | Select @{N="Hostname"; E={$ESXHost}},
                                  @{N="Hardware-Model"; E={$HWModel.Model}},
                                  @{N="Adapter-Firmware"; E={$Firmware}},
                                  @{N="Network-Driver"; E={$Driver}},
                                  @{N="FC-Driver"; E={$lpfc.Version.Substring(0,11)}}
    }
    elseif ($HWModel.Model -eq "ProLiant BL460c Gen9") {

        $info = $esxcli.network.nic.get("vmnic0").DriverInfo | Select Driver, FirmwareVersion, Version

        $ModuleName = "$($info.Driver)"
        $Firmware   = "$($info.FirmwareVersion)"
        $Driver     = "$($info.Version)"
        # Look up the scsi-bnx2fc FC driver VIB version on the Gen9 blades
        $bnx2fc = $esxcli.software.vib.list() | Where { $_.Name -eq "scsi-bnx2fc" }

        $report += $info | Select @{N="Hostname"; E={$ESXHost}},
                                  @{N="Hardware-Model"; E={$HWModel.Model}},
                                  @{N="Adapter-Firmware"; E={$Firmware.Substring(2,8)}},
                                  @{N="Network-Driver"; E={$Driver}},
                                  @{N="FC-Driver"; E={$bnx2fc.Version.Substring(0,14)}}
    }
}

$report | Out-GridView
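
If a grid view is not convenient, the same report can also be written to disk; a small optional addition (the output path is just an example):

# Optional: export the report to CSV instead of (or in addition to) the grid view
$report | Export-Csv "C:\temp\HPE-Driver-Firmware-Report.csv" -NoTypeInformation -UseCulture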

 

FYI:- Example output (hostnames masked).

 


The script below is from Vmwareadmins.com; thanks to Eric Sarakaitis for the original post.

I have tweaked the script with a few lines to also list the datastore name and naa.id, so that all the objects appear in a single pane of view. All credits to Eric Sarakaitis.

 

PowerCLI Script:

$initalTime = Get-Date
$date = Get-Date ($initalTime) -UFormat '%Y%m%d'
$time = Get-Date ($initalTime) -UFormat '%H%M'

Write-Host "`nStarting time of Script is $(Get-Date ($initalTime) -UFormat '%H:%M:%S') "

$AllHosts = Get-Cluster "Cluster-Name" | Get-VMHost

#$AllHosts = Get-VMHost "HostName" # Remove the line above and un-comment this one to run against a single host

$reportLunPathState = @()
$i = 0

Write-Host "Total No of Hosts Acquired $($AllHosts.Length)"

ForEach ($VMHost in $AllHosts) {

    $hss = Get-View $VMHost.ExtensionData.ConfigManager.StorageSystem
    $i++
    Write-Host "`n$(Get-Date -UFormat '%H:%M:%S') - $($i) of $($AllHosts.Length) - " -NoNewLine
    Write-Host "$($VMHost)`n" -ForegroundColor Yellow

    $VMHostScsiLuns = $VMHost | Get-ScsiLun -LunType disk
    ForEach ($VMHostScsiLun in $VMHostScsiLuns) {

        # Map the LUN's canonical name (naa.id) back to its datastore, if any
        $datastores = Get-VMHost -Name $VMHost | Get-Datastore |
            Where-Object { $_.ExtensionData.Info.Vmfs.Extent.DiskName -like $VMHostScsiLun.CanonicalName }

        Write-Host "Finding path information for id - " -NoNewline
        Write-Host "$VMHostScsiLun --> $($datastores.Name)" -ForegroundColor DarkGreen

        if ($hss.FileSystemVolumeInfo.MountInfo.Volume.Extent -eq "$VMHostScsiLun") {
            echo "equal"
        }
        $dsname = $hss.FileSystemVolumeInfo.MountInfo.Volume.Name

        $VMHostScsiLunPaths = $VMHostScsiLun | Get-ScsiLunPath

        # Summary row: path count plus datastore name and naa.id for this LUN
        $reportLunPathState += ($VMHostScsiLunPaths | Measure-Object) |
            Select @{N="Hostname"; E={$VMHost.Name}},
                   @{N="Number of Paths"; E={$_.Count}},
                   Name, State,
                   @{N="Datastore-Name"; E={$datastores.Name}},
                   @{N="Naaid"; E={$VMHostScsiLun.CanonicalName}}

        # Detail rows: one per path (only Name and State are populated here)
        $reportLunPathState += $VMHostScsiLunPaths |
            Select @{N="Hostname"; E={$VMHost.Name}}, "Number of Paths", Name, State, "Datastore-Name", "Naaid"
    }
}

$conclusionTime = Get-Date
$totalTime = New-TimeSpan $initalTime $conclusionTime

Write-Host "`nScript Ending Time is $(Get-Date ($conclusionTime) -UFormat '%H:%M:%S') " -NoNewLine
Write-Host " With total Time of $($totalTime.Hours):$($totalTime.Minutes):$($totalTime.Seconds)`n"

$reportLunPathState | Out-GridView

$CurrentDateTime = Get-Date -Format "ddMMMyyyy-HH-mm"
$Filename = "Esx-lun-path-" + $CurrentDateTime + ".csv"
$reportLunPathState | Export-Csv "$Filename" -NoTypeInformation -UseCulture

 

 

FYI:- With a few more changes, we can also run the same report against a selected list of naa.ids imported from a CSV file.
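
For example, here is a minimal sketch of that tweak, assuming a CSV file with a single "naaid" column (the file path and column header are assumptions):

# Restrict the report to a list of naa.ids read from a CSV file.
# Assumes C:\temp\naaids.csv has a header line "naaid" followed by one id per row.
$naaList = (Import-Csv "C:\temp\naaids.csv").naaid

ForEach ($VMHost in $AllHosts) {
    # Only keep the LUNs whose canonical name appears in the CSV list;
    # the rest of the inner loop stays the same as above.
    $VMHostScsiLuns = $VMHost | Get-ScsiLun -LunType disk |
        Where-Object { $naaList -contains $_.CanonicalName }
    # ...
}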

 

 

 

 

 

 

Patch – Update vCenter Server Appliance 6.0 with External PSCs – CLI

        The process to patch vCenter Server Appliances is pretty straightforward, and the procedure below shows the high-level steps involved in patching to 6.0 U3a using the CLI. For the GUI, we can log in directly to the :5480 management portal and use either the check-URL option (internet connectivity required; I would only recommend this for a home lab) or the check-CD-ROM option (the patch ISO has to be mounted).

Pre-Requisites:-

  1. Always the most important: validate the patch/upgrade against the VMware Product Interoperability Matrices for dependent products like vRA, vRO, vRNI, vRIN.
  2. Validate the last backups for the appliances and take new snapshots before the upgrade.
  3. Validate and download the patch ISO (in my case it is “VMware-vCenter-Server-Appliance-6.0.0.30100-5202501-patch-FP.iso”).
  4. Note the ESXi hosts on which the vCenter appliances are running (this is not required, but I feel it is a good approach).
  5. Always patch the PSCs first and then the vCenter Server.

 

GUI :-  /in4sometech.com/2016/04/22/vcenter-6-0-upgrade-to-6-0-u2-external-pscs/

  1. Log in to the :5480 management portal of each appliance (PSCs, then vCenter Server) and patch using the URL or CD-ROM method.

CLI-Procedure:-

  1. Download the required patch from /my.vmware.com/group/vmware/patch#search
  2. Log in to the ESXi host running the secondary PSC and mount the patch ISO to the appliance.
  3. Open an SSH session to the secondary PSC (VMware doesn't really say which external PSC to patch first, but based on community guidance it is best to start with the secondary PSC).
  4. Make sure you are running the appliance's native shell, which is not always the default (type “appliancesh” once in PuTTY to get the “Command>” prompt).
  5. Now run "software-packages stage --iso --acceptEulas"
  6. Once the above command completes successfully, run "software-packages install --staged" (a consolidated sketch of the command sequence follows this list).
  7. The install may take some time depending on the patch content, but should be quick for PSCs. After the installation completes, reboot the appliance if prompted. (Once the appliance has rebooted, the new appliance version shows up on the console and in the management portal on port 5480.)
  8. Once you have validated that the secondary PSC patched as expected, proceed with the primary PSC using the same steps 3 to 7.
  9. With both PSCs patched, online, and validated, proceed with steps 3-7 for the vCenter appliance. The appliance takes a little longer than the PSCs, and the progress is visible in the PuTTY/console session.
  10. Reboot and validate the vCenter appliance; log in to the services and make sure all services are healthy.
  11. Monitor the vCenter Server for the next 24-48 hours for any surprises, then delete the snapshots.
  12. One more caveat here: the patch ISO I downloaded only contains modules for the appliances. For VUM, in order to stay at the same patch level, we have to download the Windows modules ISO from the normal download page (Ex:- VMware vCenter Server 6.0 Update 3a and modules for Windows) and run the VUM executable, which detects the existing VUM installation and prompts for an upgrade. I find it really weird that two different ISOs are needed for a complete patch of all vCenter Server components.
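
For reference, the staged patch sequence from the SSH session on each appliance looks roughly like this. It is a sketch only: it assumes the patch ISO is already mounted to the appliance's virtual CD-ROM, and the prompts and output are abbreviated.

# From the default shell, switch to the appliance shell if it is not the default
appliancesh

# Stage the mounted ISO (accepting the EULAs), then install what was staged
Command> software-packages stage --iso --acceptEulas
Command> software-packages install --staged

# Reboot the appliance if the installer prompts for it, then verify the new
# build number on the console banner or the :5480 management portal.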

Screenshots:- PSCs and vCenter Appliance

For VUM:- 

One final interesting note: as per VMware, PSCs running the 6.0 U3 update should have a minimum of 4 GB RAM. New installations get this by default, but since this is an upgrade, it is recommended to manually increase the RAM on each PSC after the patch.