Thursday, January 19, 2012

PowerCLI to get a VM's hard disk info

PS C:\Users\Nutanix01> get-VM *PDS* -Location *Atlas* |Get-HardDisk
CapacityKB Persistence Filename
---------- ----------- --------
104857600 Persistent ...RED-DS-01] Win2003_PDS_RDM/Win2003_PDS_RDM.vmdk
104857600 Persistent ...D-DS-01] Win2003_PDS_RDM/Win2003_PDS_RDM_1.vmdk
104857600 Persistent ...D-DS-01] Win2003_PDS_RDM/Win2003_PDS_RDM_2.vmdk
209715200 Persistent ...D-DS-01] Win2003_PDS_RDM/Win2003_PDS_RDM_3.vmdk
209715200 Persistent ...D-DS-01] Win2003_PDS_RDM/Win2003_PDS_RDM_4.vmdk

PS C:\Users\Nutanix01> get-VM *PDS* -Location *Atlas* |Get-HardDisk |fl

DeviceName : vml.010000000077696e323030335f4f535f363430363365653764636663383966383665373338386631383263564449534b20
ScsiCanonicalName : t10.NUTANIX_win2003_OS_64063ee7dcfc89f86e7388f182c2fa112d6420c5
Persistence : Persistent
DiskType : RawVirtual
Filename : [ATLAS-SHARED-DS-01] Win2003_PDS_RDM/Win2003_PDS_RDM.vmdk
CapacityKB : 104857600
ParentId : VirtualMachine-vm-2922
Parent : Win2003_PDS_RDM
Uid : /VIServer=root@172.16.4.254:443/VirtualMachine=VirtualMachine-vm-2922/HardDisk=2000/
ConnectionState :
ExtensionData : VMware.Vim.VirtualDisk
Id : VirtualMachine-vm-2922/2000
Name : Hard disk 1
DeviceName : vml.010000000077696e323030335f4c4f47315f62363139353438366163613762613663323066323966306531564449534b20
ScsiCanonicalName : t10.NUTANIX_win2003_LOG1_b6195486aca7ba6c20f29f0e1b66cdc0a0838a84
Persistence : Persistent
DiskType : RawVirtual
Filename : [ATLAS-SHARED-DS-01] Win2003_PDS_RDM/Win2003_PDS_RDM_1.vmdk
CapacityKB : 104857600
ParentId : VirtualMachine-vm-2922
Parent : Win2003_PDS_RDM
Uid : /VIServer=root@172.16.4.254:443/VirtualMachine=VirtualMachine-vm-2922/HardDisk=2001/
ConnectionState :
ExtensionData : VMware.Vim.VirtualDisk
Id : VirtualMachine-vm-2922/2001
Name : Hard disk 2
DeviceName : vml.010000000077696e323030335f4c4f47325f38346333376565643634633135623538663565356266343463564449534b20
ScsiCanonicalName : t10.NUTANIX_win2003_LOG2_84c37eed64c15b58f5e5bf44c3de99ef3dd8ea28
Persistence : Persistent
DiskType : RawVirtual
Filename : [ATLAS-SHARED-DS-01] Win2003_PDS_RDM/Win2003_PDS_RDM_2.vmdk
CapacityKB : 104857600
ParentId : VirtualMachine-vm-2922
Parent : Win2003_PDS_RDM
Uid : /VIServer=root@172.16.4.254:443/VirtualMachine=VirtualMachine-vm-2922/HardDisk=2002/
ConnectionState :
ExtensionData : VMware.Vim.VirtualDisk
Id : VirtualMachine-vm-2922/2002
Name : Hard disk 3
DeviceName : vml.010000000077696e323030335f4442315f3731356236383061633339376363346162336434653034646364564449534b20
ScsiCanonicalName : t10.NUTANIX_win2003_DB1_715b680ac397cc4ab3d4e04dcd8013b49906f710
Persistence : Persistent
DiskType : RawVirtual
Filename : [ATLAS-SHARED-DS-01] Win2003_PDS_RDM/Win2003_PDS_RDM_3.vmdk
CapacityKB : 209715200
ParentId : VirtualMachine-vm-2922
Parent : Win2003_PDS_RDM
Uid : /VIServer=root@172.16.4.254:443/VirtualMachine=VirtualMachine-vm-2922/HardDisk=2003/
ConnectionState :
ExtensionData : VMware.Vim.VirtualDisk
Id : VirtualMachine-vm-2922/2003
Name : Hard disk 4
DeviceName : vml.010000000077696e323030335f4442325f3138633663663332613031633334633539303136343638313533564449534b20
ScsiCanonicalName : t10.NUTANIX_win2003_DB2_18c6cf32a01c34c5901646815349a59859006b77
Persistence : Persistent
DiskType : RawVirtual
Filename : [ATLAS-SHARED-DS-01] Win2003_PDS_RDM/Win2003_PDS_RDM_4.vmdk
CapacityKB : 209715200
ParentId : VirtualMachine-vm-2922
Parent : Win2003_PDS_RDM
Uid : /VIServer=root@172.16.4.254:443/VirtualMachine=VirtualMachine-vm-2922/HardDisk=2004/
ConnectionState :
ExtensionData : VMware.Vim.VirtualDisk
Id : VirtualMachine-vm-2922/2004
Name : Hard disk 5

Hard disks on the cloned VM:
PS C:\Users\Nutanix01> get-VM *PDS* -Location *Proteus* |Get-HardDisk|fl


StorageFormat : Thick
Persistence : Persistent
DiskType : Flat
Filename : [PDS_VMFS] New Virtual Machine/New Virtual Machine.vmdk
CapacityKB : 104857600
ParentId : VirtualMachine-vm-2926
Parent : Windows2003-PDS-cloned_from_Atlas
Uid : /VIServer=root@172.16.4.254:443/VirtualMachine=VirtualMachine-vm-2926/HardDisk=2000/
ConnectionState :
ExtensionData : VMware.Vim.VirtualDisk
Id : VirtualMachine-vm-2926/2000
Name : Hard disk 1
StorageFormat : Thick
Persistence : Persistent
DiskType : Flat
Filename : [PDS_VMFS] New Virtual Machine/New Virtual Machine_1.vmdk
CapacityKB : 104857600
ParentId : VirtualMachine-vm-2926
Parent : Windows2003-PDS-cloned_from_Atlas
Uid : /VIServer=root@172.16.4.254:443/VirtualMachine=VirtualMachine-vm-2926/HardDisk=2001/
ConnectionState :
ExtensionData : VMware.Vim.VirtualDisk
Id : VirtualMachine-vm-2926/2001
Name : Hard disk 2
StorageFormat : Thick
Persistence : Persistent
DiskType : Flat
Filename : [PDS_VMFS] New Virtual Machine/New Virtual Machine_2.vmdk
CapacityKB : 104857600
ParentId : VirtualMachine-vm-2926
Parent : Windows2003-PDS-cloned_from_Atlas
Uid : /VIServer=root@172.16.4.254:443/VirtualMachine=VirtualMachine-vm-2926/HardDisk=2002/
ConnectionState :
ExtensionData : VMware.Vim.VirtualDisk
Id : VirtualMachine-vm-2926/2002
Name : Hard disk 3
StorageFormat : Thick
Persistence : Persistent
DiskType : Flat
Filename : [PDS_VMFS] New Virtual Machine/New Virtual Machine_3.vmdk
CapacityKB : 209715200
ParentId : VirtualMachine-vm-2926
Parent : Windows2003-PDS-cloned_from_Atlas
Uid : /VIServer=root@172.16.4.254:443/VirtualMachine=VirtualMachine-vm-2926/HardDisk=2003/
ConnectionState :
ExtensionData : VMware.Vim.VirtualDisk
Id : VirtualMachine-vm-2926/2003
Name : Hard disk 4
StorageFormat : Thick
Persistence : Persistent
DiskType : Flat
Filename : [PDS_VMFS] New Virtual Machine/New Virtual Machine_4.vmdk
CapacityKB : 209715200
ParentId : VirtualMachine-vm-2926
Parent : Windows2003-PDS-cloned_from_Atlas
Uid : /VIServer=root@172.16.4.254:443/VirtualMachine=VirtualMachine-vm-2926/HardDisk=2004/
ConnectionState :
ExtensionData : VMware.Vim.VirtualDisk
Id : VirtualMachine-vm-2926/2004
Name : Hard disk 5
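
To put the source's RDM disks and the clone's flat disks side by side (rather than eyeballing two |fl dumps), something like the following works; the *PDS* name filter and the CSV path are just examples:

# Tabulate disk type and size per VM (CapacityKB / 1MB gives GB)
Get-VM *PDS* | Get-HardDisk |
Select-Object Parent, Name, DiskType, StorageFormat, @{Name="CapacityGB"; Expression={$_.CapacityKB / 1MB}} |
Format-Table -AutoSize

# Or dump the same fields to CSV for a later diff
Get-VM *PDS* | Get-HardDisk |
Select-Object Parent, Name, DiskType, Filename, CapacityKB |
Export-Csv C:\temp\pds_disks.csv -NoTypeInformation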

Saturday, January 14, 2012

PowerCLI to Create Clones

Got from Steve Poitras

add-pssnapin VMware.VimAutomation.Core

$global:vCenter = "vcenter_ip"
$global:vc_acct = "root"
$global:vc_pass = "password"
$global:location = "Cluster_Name"
$global:datastore = "datastore0"
$global:sourceVMs = @("ubuntu-gold-vmfs-thin", "windows-gold-vmfs-thick")
$global:testIterations = 75


#####################
##### Functions #####
#####################

#Connect VI Server
function connectVIServer {
Connect-VIServer $global:vCenter -User $global:vc_acct -Password $global:vc_pass
}

#Build menu function
function buildMenu ($title,$data) {
$increment = 0
write-host ""
write-host $title
$data | %{
$increment +=1
write-host "$increment." $_
}
$selection = $data[(read-host "Please select an option [Example: 1]")-1]
write-host "You selected: $selection"
return $selection
}

Function Wait-VMGuest
{
<#
.SYNOPSIS
Wait while the VM performs a power operation.
.DESCRIPTION
Wait while the VM performs a power operation. Useful when working with
VM guests. Uses VMware Tools to detect startup, and the power state to detect shutdown.
.PARAMETER VM
VM object to wait on
.PARAMETER VMGuest
VMGuest object to wait on
.PARAMETER Operation
Type of power operation to wait on. Valid values are 'Startup' and 'Shutdown'.
.EXAMPLE
Get-VM VM01 | Start-VM | Wait-VMGuest -Operation 'Startup' | Update-Tools
.EXAMPLE
Get-VM VM01 | Shutdown-VMGuest | Wait-VMGuest -Operation 'Shutdown' | Set-VM -NumCpu 2 | Start-VM
#>
[cmdletbinding(DefaultParameterSetName='VM')]
Param(
[parameter(Position=0
, ParameterSetName='VM'
)]
[parameter(Position=0
, ParameterSetName='Guest'
)]
[ValidateSet("Startup","Shutdown")]
[string]
$Operation
,
[parameter(Mandatory=$True
, ValueFromPipeline=$True
, HelpMessage='Virtual Machine object to wait on'
, ParameterSetName='VM'
)]
[VMware.VimAutomation.ViCore.Impl.V1.Inventory.VirtualMachineImpl]
$VM
,
[parameter(Mandatory=$True
, ValueFromPipeline=$True
, HelpMessage='The VM Guest object to wait on'
, ParameterSetName='Guest'
)]
[VMware.VimAutomation.ViCore.Impl.V1.VM.Guest.VMGuestImpl]
$VMGuest

)
Process {
IF ($PSCmdlet.ParameterSetName -eq 'Guest') {
$VM = $VMGuest.VM
}
Switch ($Operation)
{
"Startup"
{
while ($vm.ExtensionData.Guest.ToolsRunningStatus -eq "guestToolsNotRunning")
{
Start-Sleep -Seconds 1
$vm.ExtensionData.UpdateViewData("Guest")
}
# return a fresh VMObject
Write-Output (Get-VM $VM)
break;
}
"Shutdown"
{
# wait for the VM to be shutdown
while ($VM.ExtensionData.Runtime.PowerState -ne "poweredOff")
{
Start-Sleep -Seconds 1
$vm.ExtensionData.UpdateViewData("Runtime.PowerState")
}
# return a fresh VMObject
Write-Output (Get-VM $VM)
break;
}
}
}
}



#Clone from VM - Linked Clone
function cloneLinkedVM($sourceVM,$targetDS){

$vmView = $sourceVM | Get-View
$date = Get-Date -Format "yyyyMMddHHmmss"

$cloneSpec = new-object Vmware.Vim.VirtualMachineCloneSpec
$cloneSpec.Snapshot = $vmView.Snapshot.CurrentSnapshot
$cloneName = "$sourceVM-clone-$date"
$cloneFolder = $vmView.parent

$dsView = $sourceVM | Get-Datastore | Get-View

$cloneSpec.Location = new-object Vmware.Vim.VirtualMachineRelocateSpec
$cloneSpec.Location.DiskMoveType = [Vmware.Vim.VirtualMachineRelocateDiskMoveOptions]::createNewChildDiskBacking
$cloneSpec.Location.Datastore = $targetDS.Extensiondata.MoRef

write-host "Folder is: $cloneFolder"
write-host "Clone Name is: $CloneName"
write-host "Snapshot is: $cloneSpec.Snapshot"

write-host "Creating linked clone of $sourceVM"
$task = $vmView.CloneVM_Task( $cloneFolder, $cloneName, $cloneSpec )

Start-Sleep -Seconds 10
$cloneVM = Get-VM -Name $cloneName
return $cloneVM
}

#Clone a basic VM
function cloneHotVM($sourceVM,$format) {

$vmView = $sourceVM | Get-View
$date = Get-Date -Format "yyyyMMddHHmm"
$activity = "Clone VM - Hot to $format disks"

$cloneSpec = new-object Vmware.Vim.VirtualMachineCloneSpec
$cloneSpec.Snapshot = $vmView.Snapshot.CurrentSnapshot
$cloneName = "$sourceVM-clone-$date"
$cloneFolder = $vmView.parent

$dsView = $sourceVM | Get-Datastore | Get-View

$cloneSpec.Location = New-Object VMware.Vim.VirtualMachineRelocateSpec
$cloneSpec.Location.Datastore = $dsView.MoRef

if($format -match "Thin"){
write-host "Target disk format is Thin"
$cloneSpec.Location.Transform = [Vmware.Vim.VirtualMachineRelocateTransformation]::sparse
}else{
write-host "Target disk format is Thick"
$cloneSpec.Location.Transform = [Vmware.Vim.VirtualMachineRelocateTransformation]::flat
}

write-host "Folder is: $cloneFolder"
write-host "Clone Name is: $CloneName"
write-host "Snapshot is:" $cloneSpec.Snapshot

write-host "Creating clone of $sourcevM"
$task = $vmView.CloneVM_Task( $cloneFolder, $cloneName, $cloneSpec )

findTaskDetails $task $activity
write-host "Task is" $task

#Start new VM
$cloneVM = Get-VM -Name $cloneName
$cloneVM | Start-VM
$cloneView = $cloneVM | Get-View

if($cloneVM){
#VM Exists
#Wait until the clone is started
while($cloneVM.'PowerState' -ne "PoweredOn"){
Start-Sleep -Seconds 10
#$cloneView.ExtensionData.UpdateViewData('Runtime.PowerState')
write-host "$sourceVM state is " $cloneVM.'PowerState' " waiting..."
$cloneVM = Get-VM $cloneVM
}
}

#Cleanup the clone
$cloneVM | Stop-VM -Confirm:$false | Out-null
$cloneVM | Remove-VM -DeletePermanently -Confirm:$false
}

#Clone from VM
function cloneColdVM($sourceVM,$format){

$vmView = $sourceVM | Get-View
$date = Get-Date -Format "yyyyMMddHHmm"
$activity = "Clone VM - Cold to $format disks"

if($sourceVM.'PowerState' -eq "PoweredOn") {
if($vmView.guest.toolsstatus -eq "toolsOk"){
#Power off VM
write-host "$sourceVM is powered on, shutting down!"
Shutdown-VMGuest -VM $sourceVM -Confirm:$false | Out-null
}else{
#Hard power off vm
write-host "VMtools not installed performing hard power off"
Stop-VM -VM $sourceVM -Confirm:$false | Out-null
}
}

while ($sourceVM.'PowerState' -ne "PoweredOff") {
Start-Sleep -Seconds 10
#$vmView.ExtensionData.UpdateViewData('Runtime.PowerState')
write-host "$sourceVM state is " $sourceVM.'PowerState' " waiting..."
#Perform hard power off
$sourceVM = Get-VM $sourceVM
$sourceVM | Stop-VM -Confirm:$false
}

#Take Snapshot
write-host "Taking snapshot of $sourceVM"
New-Snapshot -VM $sourceVM -Name "$sourceVM-SNAP$date" | Out-null

$cloneSpec = new-object Vmware.Vim.VirtualMachineCloneSpec
$cloneSpec.Snapshot = $vmView.Snapshot.CurrentSnapshot
$cloneName = "$sourceVM-clone-$date"
$cloneFolder = $vmView.parent

$dsView = $sourceVM | Get-Datastore | Get-View

$cloneSpec.Location = New-Object VMware.Vim.VirtualMachineRelocateSpec
$cloneSpec.Location.Datastore = $dsView.MoRef

if($format -match "Thin"){
write-host "Target disk format is Thin"
$cloneSpec.Location.Transform = [Vmware.Vim.VirtualMachineRelocateTransformation]::sparse
}else{
write-host "Target disk format is Thick"
$cloneSpec.Location.Transform = [Vmware.Vim.VirtualMachineRelocateTransformation]::flat
}

write-host "Folder is: $cloneFolder"
write-host "Clone Name is: $CloneName"
write-host "Snapshot is: $cloneSpec.Snapshot"

write-host "Creating clone of $sourcevM"
$task = $vmView.CloneVM_Task( $cloneFolder, $cloneName, $cloneSpec )

findTaskDetails $task $activity
write-host "Task is" $task

if($sourceVM.'PowerState' -ne "PoweredOn") {
#Start source VM
$sourceVM | Start-VM -RunAsync
}

#Start new VM
$cloneVM = Get-VM -Name $cloneName
$cloneVM | Start-VM
$cloneView = $cloneVM | Get-View

#Wait until the clone is started
if($cloneVM){
#VM Exists
while($cloneVM.'PowerState' -ne "PoweredOn"){
Start-Sleep -Seconds 10
#$cloneView.ExtensionData.UpdateViewData('Runtime.PowerState')
write-host "$sourceVM state is " $cloneVM.'PowerState' " waiting..."
$cloneVM = Get-VM $cloneVM
}
}

#Cleanup the clone
#$cloneVM | Stop-VM -Confirm:$false | Out-null
#$cloneVM | Remove-VM -DeletePermanently -Confirm:$false
}

###END - THIS SECTION MUST BE RUN###

#Connect to VIServer and Perform Tests
connectVIServer

foreach ($sourceVMName in $global:sourceVMs) {

for($i=1; $i -le $global:testIterations; $i++) {
write-host "Provision Linked Clone from $sourceVMName "
$sourceVM = Get-VM -Name $sourceVMName -Location $global:location
if (!$sourceVM) {exit 1}
$cVM = cloneLinkedVM $sourceVM (Get-Datastore -Name $global:datastore)
if (!$cVM) { exit 1}
else {
$cVM | Start-VM
while($cVM.'PowerState' -ne "PoweredOn"){
Start-Sleep -Seconds 10
write-host "$cVM.'name' state is " $cVM.'PowerState' " waiting..."
$cVM = Get-VM $cVM
}

#$cVM | Stop-VM -Confirm:$false
#$cVM | Remove-VM -DeletePermanently -Confirm:$false
}
}
}

#Get-VM -Location $global:location | where {$_.name -match "ubuntu-gold-vmfs-thin-clone-" -And $_.'PowerState' -ne "PoweredOn"} | Remove-VM -DeletePermanently -Confirm:$false
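
Note that the buildMenu helper defined above is never called in the main flow; a minimal usage sketch (the prompt text and the interactive datastore pick are just examples) would be:

$dsNames = Get-Datastore | Select-Object -ExpandProperty Name
$targetDS = Get-Datastore -Name (buildMenu "Select a target datastore" $dsNames)
$cVM = cloneLinkedVM (Get-VM -Name $global:sourceVMs[0] -Location $global:location) $targetDS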

PowerCLI commands to move a VM

:\Program Files\VMware\Infrastructure\vSphere PowerCLI> Get-ResourcePool vcloud | Get-VM | Update-Tools -NoReboot

For Storage vMotion:
Move-VM -VM (Get-VM AlWinDows* | where {$_.'PowerState' -ne "PoweredOn"}) -Datastore (Get-Datastore "Atlas-NTNX-datastore4")

Get-Folder "" | get-vm | Move-Vm -Datastore -RunAsync


Get-VM "MyVM" | Move-VM -Datastore (Get-Datastore "MyDatastore")

The Move-VM cmdlet covers a multitude of sins; let's check some out. If you want vMotion to another host:

Get-VM -Name "MyVM" | Move-VM -Destination (Get-VMHost MyHost)

And, as you would expect, moving a VM to a new folder:

Move-VM -VM (Get-VM -Name MyVM) -Destination (Get-Folder -Name Production)

And moving a VM to a new resource pool. What a multifunctional cmdlet this is!

Move-VM -VM (Get-VM -Name MyVM) -Destination (Get-ResourcePool -Name "Important")
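
Move-VM also takes pipeline input, so a whole set of VMs can be relocated in one pass. A small sketch along the lines of the storage vMotion one-liner above (the VM name pattern and datastore name are placeholders):

# Storage vMotion every powered-off VM matching a pattern, asynchronously
Get-VM -Name MyVM* |
Where-Object { $_.PowerState -ne "PoweredOn" } |
Move-VM -Datastore (Get-Datastore -Name "MyDatastore") -RunAsync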

Get-VM -Location Proteus-Cluster -Name *stress*
PS C:\Users\Nutanix01> Get-VM |Get-vmhost


PS C:\Users\Nutanix01> Get-VM -Location Proteus-Cluster|Get-VMHOST

Name ConnectionState PowerState Id CpuUsage CpuTotal Memory Memory
Mhz Mhz UsageMB TotalMB
---- --------------- ---------- -- -------- -------- ------- -------
172.x.x.x Connected PoweredOn ...-107 17026 31992 35702 49142
172.x.x.x Connected PoweredOn ...t-65 9238 31992 40414 49142
172.x.x.x Connected PoweredOn ...t-84 4740 31992 25492 49142
172.x.x.x Connected PoweredOn ...st-9 4552 31992 18584 49142



________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________
PS C:\Users\Nutanix01> Get-VM -Location Proteus-Cluster -Name vMA_Proteus_5

Name PowerState Num CPUs Memory (MB)
---- ---------- -------- -----------
vMA_Proteus_5 PoweredOn 1 600



________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________
PS C:\Users\Nutanix01> Get-VM -Location Proteus-Cluster -Name vMA_Proteus_5 |get-vmhost

Name ConnectionState PowerState Id CpuUsage CpuTotal Memory Memory
Mhz Mhz UsageMB TotalMB
---- --------------- ---------- -- -------- -------- ------- -------
172.x.x.x Connected PoweredOn ...t-84 5470 31992 25492 49142



________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________
PS C:\Users\Nutanix01> Get-VM -Location Proteus-Cluster -Name vMA_Proteus_5 |Move-VM -Destination 172.x.x.x



PS C:\Users\Nutanix01> Get-VM -Location Proteus-Cluster -Name TestVMotion |Move-VM -Destination 172.x.x.x

Name PowerState Num CPUs Memory (MB)
---- ---------- -------- -----------
TestvMotion PoweredOff 1 2048



________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________
PS C:\Users\Nutanix01> Get-VM -Location Proteus-Cluster -Name TestVMotion |Get-VMHost

Name ConnectionState PowerState Id CpuUsage CpuTotal Memory Memory
Mhz Mhz UsageMB TotalMB
---- --------------- ---------- -- -------- -------- ------- -------
172.16.x.x Connected PoweredOn ...-107 20462 31992 35703 49142



________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________
PS C:\Users\Nutanix01> Get-VM -Location Proteus-Cluster -Name TestVMotion |Move-VM -Destination 172.y.y.y

Name PowerState Num CPUs Memory (MB)
---- ---------- -------- -----------
TestvMotion PoweredOff 1 2048



________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________
PS C:\Users\Nutanix01> Get-VM -Location Proteus-Cluster -Name TestVMotion |Get-VMHost

Name ConnectionState PowerState Id CpuUsage CpuTotal Memory Memory
Mhz Mhz UsageMB TotalMB
---- --------------- ---------- -- -------- -------- ------- -------
172.y.y.y Connected PoweredOn ...t-84 4573 31992 25491 49142



PS C:\Users\Nutanix01> Get-VM -Location Proteus-Cluster -Name TestVMotion |Get-Datastore

Name FreeSpaceMB CapacityMB
---- ----------- ----------
datastore0 45149 255744


________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________
PS C:\Users\Nutanix01> Get-VM -Location Proteus-Cluster -Name TestVMotion |Move-VM -datastore datastore1

Name PowerState Num CPUs Memory (MB)
---- ---------- -------- -----------
TestvMotion PoweredOff 1 2048







Monday, January 9, 2012

ESXi - get VM commands.

for id in `vim-cmd vmsvc/getallvms | awk 'NR>1 {print $1}'`
do
echo $id
vim-cmd vmsvc/device.getdevices $id | egrep -i "vmdk|compat"
done
336


esxcfg-scsidevs -m

esxcli vms vm (you can kill a VM from here; on ESXi 5.0 the equivalent namespace is esxcli vm process)
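
The same per-VM disk listing can also be pulled from vCenter with PowerCLI instead of looping vim-cmd on each host; a minimal sketch, assuming an existing Connect-VIServer session:

Get-VM | Get-HardDisk | Select-Object Parent, Name, DiskType, Filename | Format-Table -AutoSize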

Friday, January 6, 2012

ESXi 5.0 iSCSI patch and iSCSI commands

http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&docTypeID=DT_KB_1_1&externalId=2008018

The patch is in:
https://hostupdate.vmware.com/software/VUM/OFFLINE/release-318-20111025-965713/ESXi500-201111001.zip

/vmfs/volumes/4ec6bf3f-09bef104-280a-00259048b4e3 # ls ESXi500-201111001.zip
ESXi500-201111001.zip

cd /var/log/vmware
ln -s /vmfs/volumes/4ec6bf3f-09bef104-280a-00259048b4e3/ESXi500-201111001.zip

esxcli software vib install -d ESXi500-201111001.zip
reboot
# esxcli software vib list

~ # esxcli software vib list| grep 2011-12
esx-base 5.0.0-0.4.504890 VMware VMwareCertified 2011-12-07
vmware-fdm 5.0.0-455964 VMware VMwareCertified 2011-12-06
tools-light 5.0.0-0.3.474610 VMware VMwareCertified 2011-12-07
~ # vmware -v
VMware ESXi 5.0.0 build-504890

Comments

Jerome Joseph - Dec 21, 2011 10:47 AM
To restart the software iSCSI stack:
1. Disable the software iSCSI configuration with the command:

esxcfg-swiscsi -d

2. Terminate the software iSCSI processes with the command:

esxcfg-swiscsi -k

In some cases, the iSCSI stack is in an unresponsive state and does not terminate on this command. If this happens, you must find the process ID and issue a terminate command directly to the operating system for this process.

a. Obtain the process ID for the vmkiscsid processes with the command:

ps ax | grep vmkiscsid

b. Terminate the process with the command:

kill <process_id>

3. Enable the software iSCSI configuration with the command:

esxcfg-swiscsi -e

4. Perform a rescan of the software initiator with the command:

esxcfg-rescan vmhba37
Note: This procedure only applies if you are using the software iSCSI initiator.

Jerome Joseph - Dec 21, 2011 10:48 AM
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1004033

Jerome Joseph - Dec 21, 2011 10:57 AM
Also firewall rules:

~ # esxcli network firewall ruleset list |grep -i nfs
nfsClient true

Jerome Joseph - Jan 5, 2012 1:59 PM
esxcli iscsi adapter get -A vmhba37 | grep iqn
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1004033

ESXi networking

ESXi CDP information:
vim-cmd hostsvc/net/query_networkhint
esxcfg-info | more +/CDP\ Summary
esxcli iscsi adapter get -A vmhba37 (ESXi 5.0)

esxcfg-vswitch -l : to find the VLAN IDs, port groups, and uplink vmnics
~ # esxcfg-vswitch -l
Switch Name Num Ports Used Ports Configured Ports MTU Uplinks
vSwitch0 128 7 128 1500 vmnic3.p1
PortGroup Name VLAN ID Used Ports Uplinks
VM Network 0 2 vmnic3.p1
svm-iscsi-pg 1025 1 vmnic3.p1
vmk-iscsi-pg 1025 1 vmnic3.p1
Management Network 0 1 vmnic3.p1

In this example we are using only one uplink, at MTU 1500 and VLAN ID 1025.
~ # esxcli network neighbor list
Neighbor Mac Address vmknic Expiry(sec) State
10.50.80.14 00:50:56:9d:00:0e vmk0 843
10.50.80.50 9c:8e:99:16:a9:60 vmk0 1169
10.50.80.51 00:25:90:2e:f1:f4 vmk0 249
10.50.80.52 00:25:90:2e:f2:6e vmk0 418
10.50.80.53 00:25:90:2e:f2:3a lo0 4291308798
10.50.80.80 00:0c:29:7e:55:63 vmk0 1162
10.50.80.81 00:50:56:9d:00:00 vmk0 1121
10.50.80.244 00:21:5a:a8:d7:e8 vmk0 1168
10.50.80.253 00:23:47:f6:45:00 vmk0 325
192.168.5.2 00:50:56:9d:00:07 vmk1 1057

~ # esxcli network vswitch standard list
vSwitch0
Name: vSwitch0
Class: etherswitch
Num Ports: 128
Used Ports: 10
Configured Ports: 128
MTU: 1500
CDP Status: listen
Beacon Enabled: false
Beacon Interval: 1
Beacon Threshold: 3
Beacon Required By:
Uplinks: vmnic3.p1
Portgroups: svm-iscsi-pg, VM Network, vmk-nfs-pg, vmk-iscsi-pg, Management Network
vSwitch1
Name: vSwitch1
Class: etherswitch
Num Ports: 128
Used Ports: 3
Configured Ports: 128
MTU: 1500
CDP Status: listen
Beacon Enabled: false
Beacon Interval: 1
Beacon Threshold: 3
Beacon Required By:
Uplinks:
Portgroups: VM Network 2, VMkernel
~ # esxcli network vswitch standard policy failover get -v vSwitch0
Load Balancing: srcport
Network Failure Detection: link
Notify Switches: true
Failback: true
Active Adapters: vmnic3.p1
Standby Adapters:
Unused Adapters:
# esxcli network vswitch standard portgroup policy failover get -p "Management Network"
Load Balancing: srcport
Network Failure Detection: link
Notify Switches: true
Failback: true
Active Adapters: vmnic3.p1
Standby Adapters:
Unused Adapters:
Override Vswitch Load Balancing: true
Override Vswitch Network Failure Detection: true
Override Vswitch Notify Switches: true
Override Vswitch Failback: true
Override Vswitch Uplinks: false
~ # esxcli network vswitch standard portgroup policy failover get -p vmk-iscsi-pg
Load Balancing: srcport
Network Failure Detection: link
Notify Switches: true
Failback: true
Active Adapters: vmnic3.p1
Standby Adapters:
Unused Adapters:
Override Vswitch Load Balancing: false
Override Vswitch Network Failure Detection: false
Override Vswitch Notify Switches: false
Override Vswitch Failback: false
Override Vswitch Uplinks: false
~ # esxcli network vswitch standard portgroup policy failover get -p svm-iscsi-pg
Load Balancing: srcport
Network Failure Detection: link
Notify Switches: true
Failback: true
Active Adapters: vmnic3.p1
Standby Adapters:
Unused Adapters:
Override Vswitch Load Balancing: false
Override Vswitch Network Failure Detection: false
Override Vswitch Notify Switches: false
Override Vswitch Failback: false
Override Vswitch Uplinks: false
~ # esxcli network vswitch standard uplink add -u=vmnic0 -v=vSwitch0
~ # esxcli network vswitch standard uplink add -u=vmnic1 -v=vSwitch0
~ # esxcli network vswitch standard portgroup policy failover get -p svm-iscsi-pg
Load Balancing: srcport
Network Failure Detection: link
Notify Switches: true
Failback: true
Active Adapters: vmnic3.p1
Standby Adapters: vmnic0, vmnic1
Unused Adapters:
Override Vswitch Load Balancing: false
Override Vswitch Network Failure Detection: false
Override Vswitch Notify Switches: false
Override Vswitch Failback: false
Override Vswitch Uplinks: false


/sbin # esxcfg-nics -l : to find what other NICs are available.
Name PCI Driver Link Speed Duplex MAC Address MTU Description
vmnic0 0000:01:00.00 e1000e Up 1000Mbps Full 00:25:90:48:b4:ca 1500 Intel Corporation 82574L Gigabit Network Connection
vmnic1 0000:02:00.00 e1000e Down 0Mbps Half 00:25:90:48:b4:cb 1500 Intel Corporation 82574L Gigabit Network Connection
vmnic3.p1 0000:03:00.00 mlx4_en Up 10000Mbps Full 00:25:90:2e:98:85 1500 Mellanox Technologies MT26418 [ConnectX VPI - 10GigE / IB DDR, PCIe 2.0 5GT/s]

/sbin # esxcfg-vmknic -l : to see which IP addresses are in use (make sure only one IP is configured for iSCSI).
Interface Port Group/DVPort IP Family IP Address Netmask Broadcast MAC Address MTU TSO MSS Enabled Type
vmk0 Management Network IPv4 172.16.13.3 255.240.0.0 172.31.255.255 00:25:90:48:b4:ca 1500 65535 true STATIC
vmk1 vmk-iscsi-pg IPv4 192.168.5.1 255.255.255.0 192.168.5.255 00:50:56:7b:a3:50 1500 65535 true STATIC

Driver Version:
/sbin # ethtool -i vmnic3.p1
driver: mlx4_en
version: 1.6.1.2 (Aug-08-2011)
firmware-version: 2.9.1000
bus-info: 0000:03:00.0

/sbin # esxcfg-module -i mlx4_en : driver version and parameters, interrupt type and ring counts (QoS)
esxcfg-module module information
input file: /usr/lib/vmware/vmkmod/mlx4_en
License: Dual BSD/GPL
Version:
Name-space: com.mellanox.mlx4_en@9.2.0.0
Required name-spaces:
com.vmware.driverAPI@9.2.0.0
com.vmware.vmkapi@v2_0_0_0
Parameters:
heap_initial: int
Initial heap size allocated for the driver.
heap_max: int
Maximum attainable heap size for the driver.
low_mem_config: uint
If set will configure driver to prefer memory utilization over performance. Range 0|1. Default 0
msi_x: uint
Attempt to use MSI-X if nonzero. Range: 0|1. Default 1
net_q: uint
NetQueue support. Enables NetQueue support. Default 1
non_vep_mfunc: uint
Multi functional mode without VEP configuration. Default 0
num_lro: uint
Number of LRO sessions per ring or disabled (0). Default 16
num_lro_aggr: uint
Max number of LRO packets to aggregate. Default 16
port1_default_func: uint
VLAN steering mode only: default function on port1 for untagged traffic. Default 0
port2_default_func: uint
VLAN steering mode only: default function on port2 for untagged traffic. Default 1
rx_bw0: uint
RX BW for function 0 (in multi functional mode). Default 0
rx_bw1: uint
RX BW for function 1 (in multi functional mode). Default 0
rx_bw2: uint
RX BW for function 2 (in multi functional mode). Default 0
rx_bw3: uint
RX BW for function 3 (in multi functional mode). Default 0
rx_bw4: uint
RX BW for function 4 (in multi functional mode). Default 0
rx_bw5: uint
RX BW for function 5 (in multi functional mode). Default 0
rx_bw6: uint
RX BW for function 6 (in multi functional mode). Default 0
rx_bw7: uint
RX BW for function 7 (in multi functional mode). Default 0
rx_ring_num: uint
Number of RX rings. Default: 8, multifunction: 4. Max value is 8, must be power of 2
skb_mpool_initial: int
Driver's minimum private socket buffer memory pool size.
skb_mpool_max: int
Maximum attainable private socket buffer memory pool size for the driver.
tx_bw0: int
TX rate limiting for function0. Default 2500
tx_bw1: int
TX rate limiting for function1. Default 2500
tx_bw2: int
TX rate limiting for function2. Default 2500
tx_bw3: int
TX rate limiting for function3. Default 2500
tx_ring_num: uint
Number of TX rings. Default: 8. Max value is (1 + 8), power of 2

ethtool vmnic3.p1 : to check speed, duplex mode, and whether a link is detected.
Settings for vmnic3.p1:
Supported ports: [ TP ]
Supported link modes:
Supports auto-negotiation: No
Advertised link modes: Not reported
Advertised auto-negotiation: No
Speed: Unknown! (10000)
Duplex: Full
Port: Twisted Pair
PHYAD: 0
Transceiver: internal
Auto-negotiation: off
Supports Wake-on: d
Wake-on: d
Current message level: 0x00000014 (20)
Link detected: yes

/sbin # ethtool -S vmnic3.p1 (another nice command: watch ethtool -S vmnic3.p1 | egrep "NIC|errors|drops")
to watch for collisions, packet drops, and error statistics:
NIC statistics:
rx_packets: 477499775
tx_packets: 385171583
rx_bytes: 553575646465
tx_bytes: 387922909420
rx_errors: 0
tx_errors: 0
rx_dropped: 0
tx_dropped: 0
multicast: 1864891
collisions: 0
rx_length_errors: 0
rx_over_errors: 0
rx_crc_errors: 0
rx_frame_errors: 0
rx_fifo_errors: 0
rx_missed_errors: 0
tx_aborted_errors: 0
tx_carrier_errors: 0
tx_fifo_errors: 0
tx_heartbeat_errors: 0
tx_window_errors: 0
lro_aggregated: 0
lro_flushed: 0
lro_no_desc: 0
tso_packets: 12413800
queue_stopped: 0
wake_queue: 0
tx_timeout: 0
rx_alloc_failed: 0
rx_csum_good: 473208461
rx_csum_none: 4290768
tx_chksum_offload: 48962974
tx_queue_0_packets: 195868345
tx_queue_0_bytes: 268219905684
tx_queue_1_packets: 0
tx_queue_1_bytes: 0
tx_queue_2_packets: 22660
tx_queue_2_bytes: 2401715
tx_queue_3_packets: 19489818
tx_queue_3_bytes: 28984837136
tx_queue_4_packets: 28289
tx_queue_4_bytes: 3747808
tx_queue_5_packets: 24940
tx_queue_5_bytes: 16991750
tx_queue_6_packets: 197
tx_queue_6_bytes: 11928
tx_queue_7_packets: 63
tx_queue_7_bytes: 72446
tx_queue_8_packets: 157566697
tx_queue_8_bytes: 87314994857
tx_queue_9_packets: 0
tx_queue_9_bytes: 0
tx_queue_10_packets: 12438
tx_queue_10_bytes: 999073
tx_queue_11_packets: 12084208
tx_queue_11_bytes: 3358342992
tx_queue_12_packets: 26856
tx_queue_12_bytes: 2925516
tx_queue_13_packets: 47073
tx_queue_13_bytes: 17678621
tx_queue_14_packets: 0
tx_queue_14_bytes: 0
tx_queue_15_packets: 0
tx_queue_15_bytes: 0
rx_queue_0_packets: 68133576
rx_queue_0_bytes: 45139151905
rx_queue_1_packets: 409366202
rx_queue_1_bytes: 508436494878
rx_queue_2_packets: 0
rx_queue_2_bytes: 0
rx_queue_3_packets: 0
rx_queue_3_bytes: 0


You can set the speed and duplex mode as well:
ethtool -s|--change DEVNAME Change generic options
[ speed 10|100|1000 ]
[ duplex half|full ]
[ port tp|aui|bnc|mii|fibre ]
[ autoneg on|off ]
[ phyad %%d ]
[ xcvr internal|external ]
[ wol p|u|m|b|a|g|s|d... ]
[ sopass %%x:%%x:%%x:%%x:%%x:%%x ]
[ msglvl %%d ]

ESXi iSCSI adapter related commands:
# esxcli iscsi adapter list
Adapter Driver State UID Description
------- --------- ------ ------------- ----------------------
vmhba37 iscsi_vmk online iscsi.vmhba37 iSCSI Software Adapter
esxcli iscsi networkportal list --adapter=vmhba37
esxcli iscsi networkportal add --adapter=vmhba37 --nic=vmk1

vmkiscsi-tool -V vmhba37 : check the vNic/pNic and MAC address fields to see whether ESXi is seeing the MAC address correctly.
iSCSI Nic Properties:
- vNic name: vmk1
- pNic name: vmnic3.p1
- ipv4 address: 192.168.5.1
- ipv4 net mask: 255.255.255.0
- ipv6 addresses:
- mac address: 00:25:90:2e:98:e1
- mtu: 1500 (bytes)
- toe: off
- tso: on
- tcp checksum: off
- vlan: on
- vlanId: 1022
- ports reserved 63488~65536
- link status: connected
- ethernet speed: 10000 (Mbps)
- packets received: 3745729
- packets sent: 1746541
- NIC driver: mlx4_en
- driver version: 1.6.1.2 (Aug-08-2011)
- firmware version: 2.9.1000

esxcli iscsi adapter get -A vmhba37
vmhba37
Name: iqn.1998-01.com.vmware:proteus-1-6bf60b96
Alias:
Vendor: VMware
Model: iSCSI Software Adapter
Description: iSCSI Software Adapter
Serial Number:
Hardware Version:
Asic Version:
Firmware Version:
Option Rom Version:
Driver Name: iscsi_vmk
Driver Version:
TCP Protocol Supported: false
Bidirectional Transfers Supported: false
Maximum Cdb Length: 64
Can Be NIC: false
Is NIC: false
Is Initiator: true
Is Target: false
Using TCP Offload Engine: false
Using ISCSI Offload Engine: false

esxcli iscsi adapter discovery sendtarget list : to check whether the send target is configured correctly.
Adapter Sendtarget
------- ----------------
vmhba37 192.168.5.2:3261

esxcfg-mpath -L | grep active (or grep dead)
vmhba37:C0:T24:L0 state:active t10.NUTANIX_jenkins_var2_f03a81584ad2481e7a32dff1435cd2d6e0ac2be9 vmhba37 0 24 0 NMP active san iqn.1998-01.com.vmware:proteus-3-5f23b19a 00023d000001,iqn.2010-06.com.nutanix:jenkins-var2-3b2607f7,t,1
vmhba37:C0:T3:L0 state:active t10.NUTANIX_SPTest_snapshot_dec21ef19174250e0f1274fdc1d24fc9938c7181 vmhba37 0 3 0 NMP active san iqn.1998-01.com.vmware:proteus-3-5f23b19a 00023d000001,iqn.2010-06.com.nutanix:sptest.snapshot-bf16abd6,t,1
vmhba37:C0:T8:L0 state:active t10.NUTANIX_shared_datastore1_a118bcd634d6150d5816d20eeeba58b8f7188938 vmhba37 0 8 0 NMP active san iqn.1998-01.com.vmware:proteus-3-5f23b19a 00023d000001,iqn.2010-06.com.nutanix:shared_datastore1-08acdfb0,t,1
vmhba37:C0:T13:L0 state:active t10.NUTANIX_NFS_rdm_root_aeaed9fdd5c0a5f0e31b3b27d25597ea2fafff1f vmhba37 0 13 0 NMP active san iqn.1998-01.com.vmware:proteus-3-5f23b19a 00023d000001,iqn.2010-06.com.nutanix:nfs-rdm-root-86ef8ab8,t,1
vmhba37:C0:T18:L0 state:active t10.NUTANIX_laura_ubuntu_b567124e6a1c671c951ac540d5b4d066bf02490d vmhba37 0 18 0 NMP active san iqn.1998-01.com.vmware:proteus-3-5f23b19a 00023d000001,iqn.2010-06.com.nutanix:laura-ubuntu-2e5c9d9e,t,1
vmhba32:C0:T0:L0 state:active t10.ATA_____ST91000640NS________________________________________9XG066JH vmhba32 0 0 0 NMP active local sata.vmhba32 sata.0:0
vmhba37:C0:T23:L0 state:active t10.NUTANIX_NFS_laura2_19a05f0389ea48c8bf8ab5f2bd09ba64292fdf47 vmhba37 0 23 0 NMP active san iqn.1998-01.com.vmware:proteus-3-5f23b19a 00023d000001,iqn.2010-06.com.nutanix:nfs-laura2-501a06ad,t,1
vmhba37:C0:T2:L0 state:active t10.NUTANIX_vdisk_jenkins_var_4adb3cd05414b6867515da94aadd8b60bbd6fa74 vmhba37 0 2 0 NMP active san iqn.1998-01.com.vmware:proteus-3-5f23b19a 00023d000001,iqn.2010-06.com.nutanix:vdisk-jenkins-var-ffa83c61,t,1
vmhba35:C0:T0:L0 state:active t10.ATA_____ST91000640NS________________________________________9XG04XGA vmhba35 0 0 0 NMP active local sata.vmhba35 sata.0:0
vmhba37:C0:T7:L0 state:active t10.NUTANIX_shared_datastore3_a761c9791f24e9a3192d13c5429d8bdb8a764a21 vmhba37 0 7 0 NMP active san iqn.1998-01.com.vmware:proteus-3-5f23b19a 00023d000001,iqn.2010-06.com.nutanix:shared_datastore3-4b9418d4,t,1
vmhba37:C0:T12:L0 state:active t10.NUTANIX_NFS_rdm_disk_4_8a16d5a2fed9c0cd98f522a5972cca17232b5778 vmhba37 0 12 0 NMP active san iqn.1998-01.com.vmware:proteus-3-5f23b19a 00023d000001,iqn.2010-06.com.nutanix:nfs-rdm-disk-4-806fe744,t,1
vmhba37:C0:T17:L0 state:active t10.NUTANIX_VS5TEST_SHARED_DS_02_818ab185dacbba01268ee757057dc3ec3d2efda3 vmhba37 0 17 0 NMP active san iqn.1998-01.com.vmware:proteus-3-5f23b19a 00023d000001,iqn.2010-06.com.nutanix:vs5test-shared-ds-02-714ceddd,t,1
vmhba0:C0:T0:L0 state:active t10.ATA_____INTEL_SSDSA2CW300G3_____________________CVPR130205U3300EGN__ vmhba0 0 0 0 NMP active local sata.vmhba0 sata.0:0
vmhba37:C0:T22:L0 state:active t10.NUTANIX_jenkins_home2_f16535bb09f50bd389321753707c4089aca13c00 vmhba37 0 22 0 NMP active san iqn.1998-01.com.vmware:proteus-3-5f23b19a 00023d000001,iqn.2010-06.com.nutanix:jenkins-home2-b09052f4,t,1
vmhba37:C0:T1:L0 state:active t10.NUTANIX_vdisk_jenkins_home_ee4dc08f8546629f1b8de8ff9c17d9f0f6b494ae vmhba37 0 1 0 NMP active san iqn.1998-01.com.vmware:proteus-3-5f23b19a 00023d000001,iqn.2010-06.com.nutanix:vdisk-jenkins-home-60425f0d,t,1
vmhba37:C0:T6:L0 state:active t10.NUTANIX_shared_datastore2_5056dde856bda2e950dc640701b2d15232696b53 vmhba37 0 6 0 NMP active san iqn.1998-01.com.vmware:proteus-3-5f23b19a 00023d000001,iqn.2010-06.com.nutanix:shared_datastore2-368ea95a,t,1
vmhba37:C0:T11:L0 state:active t10.NUTANIX_NFS_rdm_disk_2_ba28e141962617a511a7fc2d7adba28e03c4daf2 vmhba37 0 11 0 NMP active san iqn.1998-01.com.vmware:proteus-3-5f23b19a 00023d000001,iqn.2010-06.com.nutanix:nfs-rdm-disk-2-ce363c96,t,1
vmhba37:C0:T16:L0 state:active t10.NUTANIX_VS5TEST_SHARED_DS_03_29186254060a4edc0b07938b2459918024d724dd vmhba37 0 16 0 NMP active san iqn.1998-01.com.vmware:proteus-3-5f23b19a 00023d000001,iqn.2010-06.com.nutanix:vs5test-shared-ds-03-fa925a96,t,1
vmhba34:C0:T0:L0 state:active t10.ATA_____ST91000640NS________________________________________9XG06694 vmhba34 0 0 0 NMP active local sata.vmhba34 sata.0:0
vmhba37:C0:T21:L0 state:active t10.NUTANIX_NFS_laura_ad64ca1339b7bbbe97b9f6588c41053b89cea280 vmhba37 0 21 0 NMP active san iqn.1998-01.com.vmware:proteus-3-5f23b19a 00023d000001,iqn.2010-06.com.nutanix:nfs-laura-2f3765a2,t,1
vmhba37:C0:T0:L0 state:active t10.NUTANIX_shared_data7_9f17562a583d94c93d8e3e1cf1918142a15d23c2 vmhba37 0 0 0 NMP active san iqn.1998-01.com.vmware:proteus-3-5f23b19a 00023d000001,iqn.2010-06.com.nutanix:shared_data7-402424f8,t,1
vmhba37:C0:T5:L0 state:active t10.NUTANIX_shared_datastore0_4ff793687221b5a3d62646b157c50a3fffc61d32 vmhba37 0 5 0 NMP active san iqn.1998-01.com.vmware:proteus-3-5f23b19a 00023d000001,iqn.2010-06.com.nutanix:shared_datastore0-aa69a53e,t,1
vmhba37:C0:T10:L0 state:active t10.NUTANIX_NFS_rdm_disk_3_2557408b6a76d3c6604ac4bf33338a22f8aedee4 vmhba37 0 10 0 NMP active san iqn.1998-01.com.vmware:proteus-3-5f23b19a 00023d000001,iqn.2010-06.com.nutanix:nfs-rdm-disk-3-e7fbd2a5,t,1
vmhba37:C0:T15:L0 state:active t10.NUTANIX_VS5TEST_SHARED_DS_01_068e745713f034b9fff264fa113427b09788679d vmhba37 0 15 0 NMP active san iqn.1998-01.com.vmware:proteus-3-5f23b19a 00023d000001,iqn.2010-06.com.nutanix:vs5test-shared-ds-01-190baf86,t,1
vmhba37:C0:T20:L0 state:active t10.NUTANIX_ubu_stress_test_0_vdisk_4eea1dcd_0_162a26ba3f0c483191756d9fb12db120950498b9 vmhba37 0 20 0 NMP active san iqn.1998-01.com.vmware:proteus-3-5f23b19a 00023d000001,iqn.2010-06.com.nutanix:ubu-stress-test-0-vdisk-4eea1dcd-0-8657ebdb,t,1
vmhba37:C0:T4:L0 state:active t10.NUTANIX_jenkins_dogfood_f691c6bdbba98a9ad38ca1f228fb4fd8bd7a7d59 vmhba37 0 4 0 NMP active san iqn.1998-01.com.vmware:proteus-3-5f23b19a 00023d000001,iqn.2010-06.com.nutanix:jenkins-dogfood-5eac420e,t,1
vmhba37:C0:T9:L0 state:active t10.NUTANIX_PDS_TEST_e20efc76f560c6a67217815f275d5b39eed87735 vmhba37 0 9 0 NMP active san iqn.1998-01.com.vmware:proteus-3-5f23b19a 00023d000001,iqn.2010-06.com.nutanix:pds-test-2b88fcd0,t,1
vmhba33:C0:T0:L0 state:active t10.ATA_____ST91000640NS________________________________________9XG066CE vmhba33 0 0 0 NMP active local sata.vmhba33 sata.0:0
vmhba37:C0:T14:L0 state:active t10.NUTANIX_VS5TEST_SHARED_DS_04_dc7b523a436286d0b0c0f2fa37b507f8868d9761 vmhba37 0 14 0 NMP active san iqn.1998-01.com.vmware:proteus-3-5f23b19a 00023d000001,iqn.2010-06.com.nutanix:vs5test-shared-ds-04-fa503db2,t,1
vmhba37:C0:T19:L0 state:active t10.NUTANIX_laura_ubuntu_clone_9d528216d97af798327a7a31123e3b5096515f9d vmhba37 0 19 0 NMP active san iqn.1998-01.com.vmware:proteus-3-5f23b19a 00023d000001,iqn.2010-06.com.nutanix:laura-ubuntu-clone-7da678d7,t,1
vmhba36:C0:T0:L0 state:active t10.ATA_____ST91000640NS________________________________________9XG066SX vmhba36 0 0 0 NMP active local sata.vmhba36 sata.0:0


vmkiscsi-tool -D -l vmhba37 (if you don't see any iSCSI targets from Nutanix, or only a partial list, check the ESX firewall rules on all the ESX nodes)

=========Discovery Properties for Adapter vmhba37=========
iSnsDiscoverySettable : 0
iSnsDiscoveryEnabled : 0
iSnsDiscoveryMethod : 0
iSnsHost.ipAddress : ::
staticDiscoverySettable : 0
staticDiscoveryEnabled : 1
sendTargetsDiscoverySettable : 0
sendTargetsDiscoveryEnabled : 1
slpDiscoverySettable : 0
DISCOVERY ADDRESS : 192.168.5.2
STATIC DISCOVERY TARGET
NAME : iqn.2010-06.com.nutanix:shared_data2-f32ba819
ADDRESS : 192.168.5.2:62002
BOOT : No
LAST ERR : LOGIN: No Errors
STATIC DISCOVERY TARGET
NAME : iqn.2010-06.com.nutanix:shared_data3-14038b98
ADDRESS : 192.168.5.2:62003
BOOT : No
LAST ERR : LOGIN: No Errors
STATIC DISCOVERY TARGET
NAME : iqn.2010-06.com.nutanix:shared_data0-ed7fb8e6
ADDRESS : 192.168.5.2:62000
BOOT : No
LAST ERR : LOGIN: No Errors
STATIC DISCOVERY TARGET
NAME : iqn.2010-06.com.nutanix:shared_data1-665353cd
ADDRESS : 192.168.5.2:62001
BOOT : No
LAST ERR : LOGIN: No Errors
STATIC DISCOVERY TARGET
NAME : iqn.2010-06.com.nutanix:shared_data7-402424f8
ADDRESS : 192.168.5.2:62007
BOOT : No
LAST ERR : LOGIN: No Errors
STATIC DISCOVERY TARGET
NAME : iqn.2010-06.com.nutanix:shared_data5-0ceb3267
ADDRESS : 192.168.5.2:62009
BOOT : No
LAST ERR : LOGIN: No Errors
STATIC DISCOVERY TARGET
NAME : iqn.2010-06.com.nutanix:shared_data4-d6de24ba
ADDRESS : 192.168.5.2:62008
BOOT : No
LAST ERR : LOGIN: No Errors
STATIC DISCOVERY TARGET
NAME : iqn.2010-06.com.nutanix:shared_data6-bfaa6e4d
ADDRESS : 192.168.5.2:62010
BOOT : No
LAST ERR : LOGIN: No Errors
STATIC DISCOVERY TARGET
NAME : iqn.2010-06.com.nutanix:vdisk-jenkins-home-60425f0d
ADDRESS : 192.168.5.2:3261
BOOT : No
LAST ERR : LOGIN: No Errors
STATIC DISCOVERY TARGET
NAME : iqn.2010-06.com.nutanix:vdisk-jenkins-var-ffa83c61
ADDRESS : 192.168.5.2:3261
BOOT : No
LAST ERR : LOGIN: No Errors
STATIC DISCOVERY TARGET
NAME : iqn.2010-06.com.nutanix:shared-lun10-54a5bcd7
ADDRESS : 192.168.5.2:62011
BOOT : No
LAST ERR : LOGIN: No Errors
STATIC DISCOVERY TARGET
NAME : iqn.2010-06.com.nutanix:vdisk-test-657ac94b
ADDRESS : 192.168.5.2:3261
BOOT : No
LAST ERR : LOGIN: No Errors
STATIC DISCOVERY TARGET
NAME : iqn.2010-06.com.nutanix:shared_data11_10g-dbd2670f
ADDRESS : 192.168.5.2:62015
BOOT : No
LAST ERR : LOGIN: No Errors
STATIC DISCOVERY TARGET
NAME : iqn.2010-06.com.nutanix:shared_data9_10g-5c4b59f2
ADDRESS : 192.168.5.2:62013
BOOT : No
LAST ERR : LOGIN: No Errors
STATIC DISCOVERY TARGET
NAME : iqn.2010-06.com.nutanix:shared_data8_10g-893d36a6
ADDRESS : 192.168.5.2:62012
BOOT : No
LAST ERR : LOGIN: No Errors
STATIC DISCOVERY TARGET
NAME : iqn.2010-06.com.nutanix:shared_data10_10g-a012f8f6
ADDRESS : 192.168.5.2:62014
BOOT : No
LAST ERR : LOGIN: No Errors
STATIC DISCOVERY TARGET
NAME : iqn.2010-06.com.nutanix:shared_data100_100g-6b99d975
ADDRESS : 192.168.5.2:62004
BOOT : No
LAST ERR : LOGIN: No Errors
STATIC DISCOVERY TARGET
NAME : iqn.2010-06.com.nutanix:shared_data104_100g-818afa13
ADDRESS : 192.168.5.2:62016
BOOT : No
LAST ERR : LOGIN: No Errors
STATIC DISCOVERY TARGET
NAME : iqn.2010-06.com.nutanix:uvm-vdisk-4ed018a2-1
ADDRESS : 192.168.5.2:3261
BOOT : No
LAST ERR : LOGIN: No Errors
STATIC DISCOVERY TARGET
NAME : iqn.2010-06.com.nutanix:shared_data103_100g-1549ee10
ADDRESS : 192.168.5.2:62006
BOOT : No
LAST ERR : LOGIN: No Errors
STATIC DISCOVERY TARGET
NAME : iqn.2010-06.com.nutanix:uvm-vdisk-4ed018a2-0
ADDRESS : 192.168.5.2:3261
BOOT : No
LAST ERR : LOGIN: No Errors
STATIC DISCOVERY TARGET
NAME : iqn.2010-06.com.nutanix:shared_data101_100g-9bcc23b1
ADDRESS : 192.168.5.2:62005
BOOT : No
LAST ERR : LOGIN: No Errors
STATIC DISCOVERY TARGET
NAME : iqn.2010-06.com.nutanix:sptest-798711b3
ADDRESS : 192.168.5.2:3261
BOOT : No
LAST ERR : LOGIN: No Errors

esxcli iscsi networkportal list
vmhba37
Adapter: vmhba37
Vmknic: vmk1
MAC Address: 00:25:90:2e:98:e1
MAC Address Valid: true
IPv4: 192.168.5.1
IPv4 Subnet Mask: 255.255.255.0
IPv6:
MTU: 1500
Vlan Supported: true
Vlan ID: 1022
Reserved Ports: 63488~65536
TOE: false
TSO: true
TCP Checksum: false
Link Up: true
Current Speed: 10000
Rx Packets: 44604
Tx Packets: 48326
NIC Driver: mlx4_en
NIC Driver Version: 1.6.1.2 (Aug-08-2011)
NIC Firmware Version: 2.9.1000
Compliant Status: compliant
NonCompliant Message:
NonCompliant Remedy:
Vswitch: vSwitch0
PortGroup: vmk-iscsi-pg
VswitchUuid:
PortGroupKey:
PortKey:
Duplex:
Path Status: unused
~ # esxcli network firewall ruleset list |grep nfs
nfsClient true
~ # esxcli network firewall ruleset rule list| grep -i nfs
nfsClient Outbound TCP Dst 0 65535


For the 1G Ethernet module (most of the above commands can be used):
/sbin # ethtool -i vmnic1
driver: e1000e
version: 1.1.2-NAPI
firmware-version: 1.9-0
bus-info: 0000:02:00.0



esxcfg-module -i e1000e
esxcfg-module module information
input file: /usr/lib/vmware/vmkmod/e1000e
License: GPL
Version:
Name-space:
Required name-spaces:
com.vmware.driverAPI@9.2.0.0
com.vmware.vmkapi@v2_0_0_0
Parameters:
CrcStripping: array of int
Enable CRC Stripping, disable if your BMC needs the CRC
IntMode: array of int
Interrupt Mode
InterruptThrottleRate: array of int
Interrupt Throttling Rate
KumeranLockLoss: array of int
Enable Kumeran lock loss workaround
RxAbsIntDelay: array of int
Receive Absolute Interrupt Delay
RxIntDelay: array of int
Receive Interrupt Delay
SmartPowerDownEnable: array of int
Enable PHY smart power down
TxAbsIntDelay: array of int
Transmit Absolute Interrupt Delay
TxIntDelay: array of int
Transmit Interrupt Delay
WriteProtectNVM: array of int
Write-protect NVM [WARNING: disabling this can lead to corrupted NVM]
copybreak: uint
Maximum size of packet that is copied to a new buffer on receive
heap_initial: int
Initial heap size allocated for the driver.
heap_max: int
Maximum attainable heap size for the driver.
skb_mpool_initial: int
Driver's minimum private socket buffer memory pool size.
skb_mpool_max: int
Maximum attainable private socket buffer memory pool size for the driver.

On the CVM:

ifconfig -a
eth0 Link encap:Ethernet HWaddr 00:50:56:9a:c5:bc
inet addr:172.16.13.23 Bcast:172.16.15.255 Mask:255.255.240.0
inet6 addr: fe80::250:56ff:fe9a:c5bc/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:470546787 errors:0 dropped:0 overruns:0 frame:0
TX packets:371359181 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:552056214185 (552.0 GB) TX bytes:369607646409 (369.6 GB)
eth1 Link encap:Ethernet HWaddr 00:50:56:9a:c5:bd
inet addr:192.168.5.2 Bcast:192.168.5.255 Mask:255.255.255.0
inet6 addr: fe80::250:56ff:fe9a:c5bd/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:90194295 errors:0 dropped:0 overruns:0 frame:0
TX packets:60650228 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:104324081442 (104.3 GB) TX bytes:68556431588 (68.5 GB)
# modinfo e1000
filename: /lib/modules/2.6.32-25-server/kernel/drivers/net/e1000/e1000.ko
version: 7.3.21-k5-NAPI
license: GPL
description: Intel(R) PRO/1000 Network Driver
author: Intel Corporation,
srcversion: 3895921F9A653A8C699A770
alias: pci:v00008086d000010B5sv*sd*bc*sc*i*
alias: pci:v00008086d00001099sv*sd*bc*sc*i*
alias: pci:v00008086d0000108Asv*sd*bc*sc*i*
alias: pci:v00008086d0000107Csv*sd*bc*sc*i*
alias: pci:v00008086d0000107Bsv*sd*bc*sc*i*
alias: pci:v00008086d0000107Asv*sd*bc*sc*i*
alias: pci:v00008086d00001079sv*sd*bc*sc*i*
alias: pci:v00008086d00001078sv*sd*bc*sc*i*
alias: pci:v00008086d00001077sv*sd*bc*sc*i*
alias: pci:v00008086d00001076sv*sd*bc*sc*i*
alias: pci:v00008086d00001075sv*sd*bc*sc*i*
alias: pci:v00008086d00001028sv*sd*bc*sc*i*
alias: pci:v00008086d00001027sv*sd*bc*sc*i*
alias: pci:v00008086d00001026sv*sd*bc*sc*i*
alias: pci:v00008086d0000101Esv*sd*bc*sc*i*
alias: pci:v00008086d0000101Dsv*sd*bc*sc*i*
alias: pci:v00008086d0000101Asv*sd*bc*sc*i*
alias: pci:v00008086d00001019sv*sd*bc*sc*i*
alias: pci:v00008086d00001018sv*sd*bc*sc*i*
alias: pci:v00008086d00001017sv*sd*bc*sc*i*
alias: pci:v00008086d00001016sv*sd*bc*sc*i*
alias: pci:v00008086d00001015sv*sd*bc*sc*i*
alias: pci:v00008086d00001014sv*sd*bc*sc*i*
alias: pci:v00008086d00001013sv*sd*bc*sc*i*
alias: pci:v00008086d00001012sv*sd*bc*sc*i*
alias: pci:v00008086d00001011sv*sd*bc*sc*i*
alias: pci:v00008086d00001010sv*sd*bc*sc*i*
alias: pci:v00008086d0000100Fsv*sd*bc*sc*i*
alias: pci:v00008086d0000100Esv*sd*bc*sc*i*
alias: pci:v00008086d0000100Dsv*sd*bc*sc*i*
alias: pci:v00008086d0000100Csv*sd*bc*sc*i*
alias: pci:v00008086d00001009sv*sd*bc*sc*i*
alias: pci:v00008086d00001008sv*sd*bc*sc*i*
alias: pci:v00008086d00001004sv*sd*bc*sc*i*
alias: pci:v00008086d00001001sv*sd*bc*sc*i*
alias: pci:v00008086d00001000sv*sd*bc*sc*i*
depends:
vermagic: 2.6.32-25-server SMP mod_unload modversions
parm: TxDescriptors:Number of transmit descriptors (array of int)
parm: RxDescriptors:Number of receive descriptors (array of int)
parm: Speed:Speed setting (array of int)
parm: Duplex:Duplex setting (array of int)
parm: AutoNeg:Advertised auto-negotiation setting (array of int)
parm: FlowControl:Flow Control setting (array of int)
parm: XsumRX:Disable or enable Receive Checksum offload (array of int)
parm: TxIntDelay:Transmit Interrupt Delay (array of int)
parm: TxAbsIntDelay:Transmit Absolute Interrupt Delay (array of int)
parm: RxIntDelay:Receive Interrupt Delay (array of int)
parm: RxAbsIntDelay:Receive Absolute Interrupt Delay (array of int)
parm: InterruptThrottleRate:Interrupt Throttling Rate (array of int)
parm: SmartPowerDownEnable:Enable PHY smart power down (array of int)
parm: KumeranLockLoss:Enable Kumeran lock loss workaround (array of int)
parm: copybreak:Maximum size of packet that is copied to a new buffer on receive (uint)
parm: debug:Debug level (0=none,...,16=all) (int)

ethtool -i eth0
driver: e1000
version: 7.3.21-k5-NAPI
firmware-version: N/A
bus-info: 0000:02:01.0

watch ethtool -S eth1

NIC statistics:
rx_packets: 470902536
tx_packets: 201854613
rx_bytes: 554346183392
tx_bytes: 358609480869
rx_broadcast: 0
tx_broadcast: 0
rx_multicast: 0
tx_multicast: 0
rx_errors: 0
tx_errors: 0
tx_dropped: 0
multicast: 0
collisions: 0
rx_length_errors: 0
rx_over_errors: 0
rx_crc_errors: 0
rx_frame_errors: 0


root@NTNX-Ctrl-VM-3-proteus# netstat -i
Kernel Interface table
Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg
eth0 1500 0 470597929 0 0 0 371404119 0 0 0 BMRU
eth1 1500 0 90195114 0 0 0 60650904 0 0 0 BMRU
lo 16436 0 32978986 0 0 0 32978986 0 0 0 LRU

netstat -s eth1 (netstat -s eth1 > /tmp/1; sleep 10; netstat -s eth1 > /tmp/2; diff /tmp/1 /tmp/2)
Ip:
590234026 total packets received
8429 with invalid headers
120254162 forwarded
0 incoming packets discarded
469498722 incoming packets delivered
287603881 requests sent out
Icmp:
270 ICMP messages received
0 input ICMP message failed.
ICMP input histogram:
echo requests: 270
421 ICMP messages sent
0 ICMP messages failed
ICMP output histogram:
destination unreachable: 6
time exceeded: 145
echo replies: 270
IcmpMsg:
InType8: 270
OutType0: 270
OutType3: 6
OutType11: 145
Tcp:
50236 active connections openings
64298 passive connection openings
5200 failed connection attempts
9690 connection resets received
345 connections established
469074746 segments received
167035196 segments send out
98317 segments retransmited
0 bad segments received.
50829 resets sent
Udp:
283709 packets received
28 packets to unknown port received.
0 packet receive errors
284394 packets sent
UdpLite:
TcpExt:
19224 invalid SYN cookies received
24 resets received for embryonic SYN_RECV sockets
34387 TCP sockets finished time wait in fast timer
692 time wait sockets recycled by time stamp
3666701 delayed acks sent
1372 delayed acks further delayed because of locked socket
Quick ack mode was activated 10872 times
21709599 packets directly queued to recvmsg prequeue.
3261500288 bytes directly in process context from backlog
1163896018 bytes directly received in process context from prequeue
337684597 packet headers predicted
32888878 packets header predicted and directly queued to user
14751616 acknowledgments not containing data payload received
80604811 predicted acknowledgments
2411 times recovered from packet loss by selective acknowledgements
Detected reordering 62 times using FACK
Detected reordering 789 times using SACK
Detected reordering 63 times using time stamp
49 congestion windows fully recovered without slow start
989 congestion windows partially recovered using Hoe heuristic
265 congestion windows recovered without slow start by DSACK
24 congestion windows recovered without slow start after partial ack
35294 TCP data loss events
TCPLostRetransmit: 682
48 timeouts after SACK recovery
89178 fast retransmits
4722 forward retransmits
3834 retransmits in slow start
239 other TCP timeouts
206 SACK retransmits failed
1 times receiver scheduled too late for direct processing
10879 DSACKs sent for old packets
7812 DSACKs received
20342 connections reset due to unexpected data
4167 connections reset due to early user close
8 connections aborted due to timeout
TCPDSACKIgnoredOld: 5654
TCPDSACKIgnoredNoUndo: 368
TCPSackShifted: 230706
TCPSackMerged: 182577
TCPSackShiftFallback: 19457
IpExt:
InMcastPkts: 12889
InBcastPkts: 195690
InOctets: 1795087596
OutOctets: -358547569
InMcastOctets: 493576
InBcastOctets: 39597687


iSCSI on the CVM:
root@NTNX-Ctrl-VM-3-proteus:~/serviceability/config/nagios3# lsmod|grep iscsi
iscsi_tcp 10188 0
libiscsi_tcp 16788 1 iscsi_tcp
libiscsi 44398 3 ib_iser,iscsi_tcp,libiscsi_tcp
scsi_transport_iscsi 38402 4 ib_iser,iscsi_tcp,libiscsi
root@NTNX-Ctrl-VM-3-proteus:~/serviceability/config/nagios3# modinfo libiscsi
filename: /lib/modules/2.6.32-25-server/kernel/drivers/scsi/libiscsi.ko
license: GPL
description: iSCSI library functions
author: Mike Christie
srcversion: 4D6F40D9FEBD420A25F5089
depends: scsi_transport_iscsi
vermagic: 2.6.32-25-server SMP mod_unload modversions
parm: debug_libiscsi_conn:Turn on debugging for connections in libiscsi module. Set to 1 to turn on, and zero to turn off. Default is off. (int)
parm: debug_libiscsi_session:Turn on debugging for sessions in libiscsi module. Set to 1 to turn on, and zero to turn off. Default is off. (int)
parm: debug_libiscsi_eh:Turn on debugging for error handling in libiscsi module. Set to 1 to turn on, and zero to turn off. Default i

iscsiadm -m discovery -t sendtargets -p 127.0.0.1:3261 (if you don't see the vdisk that was created, check zeus_config_printer
to see whether the disk is marked for deletion or there is a vzone config).
192.168.5.2:62015,1 iqn.2010-06.com.nutanix:shared_data11_10g-dbd2670f
192.168.5.2:3261,1 iqn.2010-06.com.nutanix:vdisk-jenkins-home-60425f0d
192.168.5.2:3261,1 iqn.2010-06.com.nutanix:vdisk-jenkins-var-ffa83c61
192.168.5.2:62008,1 iqn.2010-06.com.nutanix:shared_data4-d6de24ba
192.168.5.2:62011,1 iqn.2010-06.com.nutanix:shared-lun10-54a5bcd7
192.168.5.2:62012,1 iqn.2010-06.com.nutanix:shared_data8_10g-893d36a6
192.168.5.2:62002,1 iqn.2010-06.com.nutanix:shared_data2-f32ba819
192.168.5.2:62003,1 iqn.2010-06.com.nutanix:shared_data3-14038b98
192.168.5.2:62010,1 iqn.2010-06.com.nutanix:shared_data6-bfaa6e4d
192.168.5.2:62000,1 iqn.2010-06.com.nutanix:shared_data0-ed7fb8e6
192.168.5.2:3261,1 iqn.2010-06.com.nutanix:vdisk-test-657ac94b
192.168.5.2:62014,1 iqn.2010-06.com.nutanix:shared_data10_10g-a012f8f6
192.168.5.2:62001,1 iqn.2010-06.com.nutanix:shared_data1-665353cd
192.168.5.2:62009,1 iqn.2010-06.com.nutanix:shared_data5-0ceb3267
192.168.5.2:62013,1 iqn.2010-06.com.nutanix:shared_data9_10g-5c4b59f2
192.168.5.2:62007,1 iqn.2010-06.com.nutanix:shared_data7-402424f8

root@NTNX-Ctrl-VM-3-proteus# iscsiadm -m node -T iqn.2010-06.com.nutanix:shared_data10_10g-a012f8f6 : to find the iSCSI parameters for a target.
# BEGIN RECORD 2.0-871
node.name = iqn.2010-06.com.nutanix:shared_data10_10g-a012f8f6
node.tpgt = 1
node.startup = manual
iface.hwaddress =
iface.ipaddress =
iface.iscsi_ifacename = default
iface.net_ifacename =
iface.transport_name = tcp
iface.initiatorname =
node.discovery_address = 127.0.0.1
node.discovery_port = 3261
node.discovery_type = send_targets
node.session.initial_cmdsn = 0
node.session.initial_login_retry_max = 8
node.session.xmit_thread_priority = -20
node.session.cmds_max = 128
node.session.queue_depth = 32
node.session.auth.authmethod = None
node.session.auth.username =
node.session.auth.password =
node.session.auth.username_in =
node.session.auth.password_in =
node.session.timeo.replacement_timeout = 120
node.session.err_timeo.abort_timeout = 15
node.session.err_timeo.lu_reset_timeout = 20
node.session.err_timeo.host_reset_timeout = 60
node.session.iscsi.FastAbort = Yes
node.session.iscsi.InitialR2T = No
node.session.iscsi.ImmediateData = Yes
node.session.iscsi.FirstBurstLength = 262144
node.session.iscsi.MaxBurstLength = 16776192
node.session.iscsi.DefaultTime2Retain = 0
node.session.iscsi.DefaultTime2Wait = 2
node.session.iscsi.MaxConnections = 1
node.session.iscsi.MaxOutstandingR2T = 1
node.session.iscsi.ERL = 0
node.conn[0].address = 192.168.5.2
node.conn[0].port = 62014
node.conn[0].startup = manual
node.conn[0].tcp.window_size = 524288
node.conn[0].tcp.type_of_service = 0
node.conn[0].timeo.logout_timeout = 15
node.conn[0].timeo.login_timeout = 15
node.conn[0].timeo.auth_timeout = 45
node.conn[0].timeo.noop_out_interval = 5
node.conn[0].timeo.noop_out_timeout = 5
node.conn[0].iscsi.MaxRecvDataSegmentLength = 262144
node.conn[0].iscsi.HeaderDigest = None
node.conn[0].iscsi.DataDigest = None
node.conn[0].iscsi.IFMarker = No
node.conn[0].iscsi.OFMarker = No
# END RECORD
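
Any of the recorded parameters above can be changed per target with iscsiadm's update mode (a sketch; the replacement_timeout value here is only an example, pick whatever your environment needs):

# lower the SCSI replacement timeout for this target, then re-read the record to confirm
iscsiadm -m node -T iqn.2010-06.com.nutanix:shared_data10_10g-a012f8f6 -o update -n node.session.timeo.replacement_timeout -v 60
iscsiadm -m node -T iqn.2010-06.com.nutanix:shared_data10_10g-a012f8f6 | grep replacement_timeout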

root@NTNX-Ctrl-VM-3-proteus:~/serviceability/config/nagios3# iscsiadm -m node -T iqn.2010-06.com.nutanix:shared_data10_10g-a012f8f6 --login
Logging in to [iface: default, target: iqn.2010-06.com.nutanix:shared_data10_10g-a012f8f6, portal: 192.168.5.2,62014]
Login to [iface: default, target: iqn.2010-06.com.nutanix:shared_data10_10g-a012f8f6, portal: 192.168.5.2,62014]: successful
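
To confirm the session is actually established and find the block device it exposed (a quick check with standard open-iscsi tooling; the by-path naming assumes a stock udev setup):

# show session state, negotiated parameters and attached SCSI devices
iscsiadm -m session -P 3
# the LUN should also appear under /dev/disk/by-path with an "iscsi" component
ls -l /dev/disk/by-path/ | grep iscsi
# log back out when done
iscsiadm -m node -T iqn.2010-06.com.nutanix:shared_data10_10g-a012f8f6 --logout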

root@NTNX-Ctrl-VM-3-proteus:~/serviceability/config/nagios3# sudo iptables -S -t nat |grep 62014
-A PREROUTING -d 192.168.5.2/32 -p tcp -m tcp --dport 62014 -m comment --comment "stargate" -j DNAT --to-destination 172.16.13.24:3261
-A OUTPUT -d 192.168.5.2/32 -p tcp -m tcp --dport 62014 -m comment --comment "stargate" -j DNAT --to-destination 172.16.13.24:3261
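
The two rules above DNAT the per-target port on 192.168.5.2 to stargate's iSCSI portal on another CVM (172.16.13.24:3261). A quick sanity check that both ends are alive (netcat may not be present on every build):

# on the CVM that owns 172.16.13.24, confirm stargate's iSCSI portal is listening
netstat -tlnp | grep 3261
# from this CVM, confirm the NATed portal answers
nc -zv 192.168.5.2 62014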


Comments

Jerome Joseph - Dec 4, 2011 2:17 PM
Increase iSCSI verbosity:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1013283
---

Dec 4 11:08:02 iscsid: in VmIscsiNetlink_RecvPduData
Dec 4 11:08:02 iscsid: read 48 bytes of PDU header
Dec 4 11:08:02 iscsid: read 48 PDU header bytes, opcode 0x20, dlength 0, data 0x52895424, max 8192
Dec 4 11:08:02 iscsid: in VmIscsiNetlink_RecvPduEnd
Dec 4 11:08:02 iscsid: recv PDU finished for pdu handle 0x0x52815080
Dec 4 11:08:02 iscsid: thread 528978a4 delete: state 2
Dec 4 11:08:02 iscsid: thread 528978cc delete: state 0
Dec 4 11:08:02 iscsid: deleting a scheduled/waiting thread!
Dec 4 11:08:02 iscsid: conn noop out timer 0x528978a4 stopped
Dec 4 11:08:02 iscsid: thread 528978a4 schedule: delay 60 state 2
Dec 4 11:08:02 iscsid: thread removed
Dec 4 11:08:02 iscsid: exec thread 08fa4c3c callback
Dec 4 11:08:02 iscsid: event_type: 1 removing from the head: 0x0x8f45698:0x0x8f456b4:0x0x8f456b4:0x0x8f49698 elem 0x0x8f45698 length 28
Dec 4 11:08:02 iscsid: in VmIscsiNetlink_RecvPduBegin
Dec 4 11:08:02 iscsid: recv PDU began, pdu handle 0x0x523b20a8
Dec 4 11:08:02 iscsid: in VmIscsiNetlink_RecvPduData
Dec 4 11:08:02 iscsid: read 48 bytes of PDU header
Dec 4 11:08:02 iscsid: read 48 PDU header bytes, opcode 0x20, dlength 0, data 0x8f9f86c, max 8192
Dec 4 11:08:02 iscsid: in VmIscsiNetlink_RecvPduEnd
Dec 4 11:08:02 iscsid: recv PDU finished for pdu handle 0x0x523b20a8
Dec 4 11:08:02 iscsid: thread 08fa1cec delete: state 2
Dec 4 11:08:02 iscsid: thread 08fa1d14 delete: state 0
Dec 4 11:08:02 iscsid: deleting a scheduled/waiting thread!
Dec 4 11:08:02 iscsid: conn noop out timer 0x8fa1cec stopped
Dec 4 11:08:02 iscsid: thread 08fa1cec schedule: delay 60 state 2
Dec 4 11:08:02 iscsid: thread removed
Dec 4 11:08:02 iscsid: exec thread 52597bfc callback
Dec 4 11:08:02 iscsid: event_type: 1 removing from the head: 0x0x525dfde0:0x0x525dfdfc:0x0x525dfdfc:0x0x525e3de0 elem 0x0x525dfde0 length 28
Dec 4 11:08:02 iscsid: in VmIscsiNetlink_RecvPduBegin
Dec 4 11:08:02 iscsid: recv PDU began, pdu handle 0x0x5283c108
Dec 4 11:08:02 iscsid: in VmIscsiNetlink_RecvPduData
Dec 4 11:08:02 iscsid: read 48 bytes of PDU header
Dec 4 11:08:02 iscsid: read 48 PDU header bytes, opcode 0x20, dlength 0, data 0x5259282c, max 8192
Dec 4 11:08:02 iscsid: in VmIscsiNetlink_RecvPduEnd
Dec 4 11:08:02 iscsid: recv PDU finished for pdu handle 0x0x5283c108
Dec 4 11:08:02 iscsid: thread 52594cac delete: state 2
Dec 4 11:08:02 iscsid: thread 52594cd4 delete: state 0

Jerome Joseph - Dec 11, 2011 7:23 PM
Quick vSphere 5.0 commands

Enable software iSCSI on the ESXi host
~ # esxcli iscsi software set --enabled=true

Add a portgroup to my standard vswitch for iSCSI #1
~ # esxcli network vswitch standard portgroup add -p iSCSI-1 -v vSwitch0

Now add a VMkernel NIC (vmk1) to my portgroup
~ # esxcli network ip interface add -i vmk1 -p iSCSI-1

Repeat for iSCSI #2
~ # esxcli network vswitch standard portgroup add -p iSCSI-2 -v vSwitch0
~ # esxcli network ip interface add -i vmk2 -p iSCSI-2

Set the VLAN for both my iSCSI VMkernel port groups - in my case VLAN 140
~ # esxcli network vswitch standard portgroup set -p iSCSI-1 -v 140
~ # esxcli network vswitch standard portgroup set -p iSCSI-2 -v 140

Set the static IP addresses on both VMkernel NICs as part of the iSCSI network
~ # esxcli network ip interface ipv4 set -i vmk1 -I 10.190.201.62 -N 255.255.255.0 -t static
~ # esxcli network ip interface ipv4 set -i vmk2 -I 10.190.201.63 -N 255.255.255.0 -t static

Set a manual override failover policy so each iSCSI VMkernel portgroup has one active physical vmnic
~ # esxcli network vswitch standard portgroup policy failover set -p iSCSI-1 -a vmnic0
~ # esxcli network vswitch standard portgroup policy failover set -p iSCSI-2 -a vmnic3

Bond each of the VMkernel NICs to the software iSCSI HBA
~ # esxcli iscsi networkportal add -A vmhba33 -n vmk1
~ # esxcli iscsi networkportal add -A vmhba33 -n vmk2

Add the IP address of your iSCSI array or SAN as a dynamic discovery sendtarget
~ # esxcli iscsi adapter discovery sendtarget add -A vmhba33 -a 10.190.201.102

Re-scan your software iSCSI HBA to discover volumes and VMFS datastores
~ # esxcli storage core adapter rescan --adapter vmhba33
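
A few follow-up checks to confirm the binding and discovery worked (a sketch; verify the exact sub-commands and output columns on your build):

~ # esxcli iscsi adapter list
~ # esxcli iscsi networkportal list -A vmhba33
~ # esxcli iscsi session list
~ # esxcli storage vmfs extent list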

Jerome Joseph - Dec 14, 2011 5:57 PM
vsish -e set /config/VMFS3/intOpts/EnableDataMovement 0
vsish -e set /config/DataMover/intOpts/HardwareAcceleratedInit 0
vsish -e set /config/DataMover/intOpts/HardwareAcceleratedMove 0
vsish -e set /config/VMFS3/intOpts/ResvLostRetries 10
vsish -e set /config/Cpu/intOpts/TrackMigrations 0
vsish -e set /config/FSS/intOpts/FSSEnableDataMovement 0
vsish -e set /config/Migrate/intOpts/VMotionStreamHelpers 32
vsish -e set /config/Migrate/intOpts/DiskOpsMaxRetries 50
vsish -e set /config/Migrate/intOpts/NetTimeout 300
vsish -e set /config/Migrate/intOpts/MigrateCpuMinPct1G 30
vsish -e set /config/Migrate/intOpts/MigrateCpuMinPct10G 50
vsish -e set /config/Migrate/intOpts/MigrateCpuSharesRegular 10000
vsish -e set /config/Migrate/intOpts/MigrateCpuSharesHighPriority 20000
vsish -e set /config/Migrate/intOpts/NetResPoolsSched 0
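
The same nodes can be read back with vsish -e get, and most map to advanced config options that can also be queried with esxcfg-advcfg (a sketch; the mapping from /config/<group>/intOpts/<name> to /<group>/<name> is the usual pattern, so double-check the names on your build):

vsish -e get /config/DataMover/intOpts/HardwareAcceleratedMove
esxcfg-advcfg -g /DataMover/HardwareAcceleratedMove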

vmkfstools

1. To create an RDM to pass through to the SVM:
cd /vmfs/devices/disks/
ls t10.ATA_____ST91000640NS________________________________________9XG066WP
mkdir /vmfs/volumes/localdatastore/rdms; cd rdms
vmkfstools -z /vmfs/devices/disks/t10.ATA_____ST91000640NS________________________________________9XG066WP t10.ATA_____ST91000640NS________________________________________9XG066WP.vmdk
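
vmkfstools -z creates a physical-compatibility (pass-through) RDM pointer. If a virtual-compatibility RDM is wanted instead (ESXi keeps intercepting SCSI commands, so snapshots remain possible), -r takes the same arguments (the output file name here is just an example):

vmkfstools -r /vmfs/devices/disks/t10.ATA_____ST91000640NS________________________________________9XG066WP t10.ATA_____ST91000640NS________________________________________9XG066WP_virtual.vmdk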

2. vmkfstools -i ../ServiceVM/master/ServiceVM-1.12_Ubuntu/ServiceVM-1.12_Ubuntu.vmdk ServiceVM-1.12_Ubuntu.vmdk

This clones the source .vmdk, including all of its extent files, into the destination directory. Create the destination first
(mkdir /vmfs/volumes/localdatastore/ServiceVM; cd ServiceVM), then run the command above; a variant with an explicit disk format is sketched below.
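
A variant of the same clone with the destination disk format made explicit (thin here is just an example; zeroedthick and eagerzeroedthick are the other common choices):

vmkfstools -i ../ServiceVM/master/ServiceVM-1.12_Ubuntu/ServiceVM-1.12_Ubuntu.vmdk ServiceVM-1.12_Ubuntu.vmdk -d thin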

3. When a .vmdk cannot be removed, inspect its lock with vmkfstools -D:
/vmfs/volumes/4ecc5264-e9e3dc78-671b-0025902e98e1/Windows7_VMFS3_Nutanix # rm -rf Windows7_VMFS3_Nutanix-flat.vmdk
rm: cannot remove 'Windows7_VMFS3_Nutanix-flat.vmdk': No such file or directory
/vmfs/volumes/4ecc5264-e9e3dc78-671b-0025902e98e1/Windows7_VMFS3_Nutanix # vmkfstools -D Windows7_VMFS3_Nutanix-flat.vmdk
Lock [type 10c00001 offset 24301568 v 49, hb offset 4141056
gen 229, mode 0, owner 00000000-00000000-0000-000000000000 mtime 9443 nHld 0 nOvf 0]
Addr <4, 18, 26>, gen 29, links 1, type reg, flags 0, uid 0, gid 0, mode 600
len 190827200512, nb 2999 tbz 0, cow 0, newSinceEpoch 0, zla 3, bs 1048576
vmkfstools --activehosts /vmfs/volumes/4ecc5264-e9e3dc78-671b-0025902e98e1/Windows7_VMFS3_Nutanix
Found 2 actively heartbeating hosts on volume '/vmfs/volumes/4ecc5264-e9e3dc78-671b-0025902e98e1/Windows7_VMFS3_Nutanix'
(1): MAC address 00:25:90:48:b4:6c
(2): MAC address 00:25:90:48:b4:68
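
To map those MAC addresses back to hosts, compare them against each candidate host's NICs (a sketch; the heartbeat MAC normally matches one of the host's vmkernel or physical NICs, but verify on your build):

# run on each suspect ESXi host and compare with the MACs reported above
esxcfg-vmknic -l
esxcli network nic list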
http://www.virtuallyghetto.com/2010/07/vsphere-41-is-gift-that-keeps-on-giving.html
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1003397
http://spininfo.homelinux.com/news/Virtual_Machine_and_Guest_OS/2011/04/15/Can_t_delete_snapshots_-_even_unmounted_disks

vmkfstools -P /vmfs/volumes/NTNX_shared_datastore1
VMFS-3.54 file system spanning 1 partitions.
File system label (if any): NTNX_shared_datastore1
Mode: public
Capacity 268167020544 (255744 file blocks * 1048576), 267565137920 (255170 blocks) avail
UUID: 4ee6e755-68d3f64c-1040-00259048b4e2
Partitions spanned (on "lvm"):
t10.NUTANIX_shared_datastore1_a118bcd634d6150d5816d20eeeba58b8f7188938:1
Is Native Snapshot Capable: NO

-L --lock [reserve|release|lunreset|targetreset|busreset|readkeys|readresv] /vmfs/devices/disks/...
-B --breaklock /vmfs/devices/disks/...
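
Example usage of the lock options above (the device name below is hypothetical; substitute the real /vmfs/devices/disks entry, and note that releasing reservations or breaking locks is disruptive to any host still using the LUN):

# read any persistent reservation keys on the LUN
vmkfstools -L readkeys /vmfs/devices/disks/naa.60000000000000000000000000000000
# release a SCSI reservation held by this host
vmkfstools -L release /vmfs/devices/disks/naa.60000000000000000000000000000000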