Now, you are not supposed to find a VM's IP address from the host. But if you really want to, here are the steps.
We talked about the use of "ft-stats" and "vmware-vimdump", so we can use those two commands together to find the guest IP address we are looking for, as sketched after the steps below.
1. Let's find the VM's name by using "ft-stats -lv" (let's use BCS-VC40-1)
2. Now we can use "vmware-vimdump | less" and search for hostname (/hostname)
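Putting the two together, a quick sketch (the grep pattern and the -A 2 context are just a starting point; the exact property names in the vimdump output may differ on your build):
ft-stats -lv                              # note the name/vmid of the VM you care about (BCS-VC40-1 here)
vmware-vimdump | grep -i -A 2 "hostname"  # pull the hostname and nearby guest IP entries from the dump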
Not many people are running FT VMs, and even fewer know about the ft-stats command. Now, if you don't use FT VMs, what can this command give you?
Answer: a fast report on the vmid and path to your VMs
"ft-stats -lv"
Now, why is the state set to "Not Configured"? Because none of the VMs was configured as an FT VM.
vSphere changed the way the console OS is stored. It now runs from a VMDK file; it's a real virtual machine. There are times you may want to know where the root VMDK file for your console OS is. Here are two fast ways to locate it.
1. From /proc/vmware: "rootFsVMDKPath" under /proc/vmware will give you the location of your root VMDK
2. Use of the "vsd" command: you can use "vsd -g" to list your root VMDK file (see the example below).
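For example (the exact /proc node name is taken from the entry above; treat the full path as an assumption and confirm it with ls /proc/vmware first):
cat /proc/vmware/rootFsVMDKPath   # prints the location of the COS root VMDK
vsd -g                            # lists the root VMDK file as well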
Many of us believe "vmware -v" will show you the current version of your ESX server (with all the patches applied). This is actually not true. This command only shows the version of a single component.
In ESX 3.x "vmware -v" will show you the version number of "VMware-esx-vmx"
And in ESX 4.x "vmware -v" will show you the version of "vmware-esx-vmware-release"
In addition, vCenter will show the version of your "vmware-hostd"
Please keep in mind those numbers can all be different.
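A quick way to see the difference for yourself (a sketch; the rpm query and package name pattern below are assumptions and vary between releases):
vmware -v                      # version of a single component only
rpm -qa | grep -i vmware-esx   # versions of the individual ESX packages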
You always want to know more about your servers, your network settings, and your storage options. "esxcfg-info" is the best place to dig things up. Want to know more about your storage? Type:
"esxcfg-info -s | less -RS"
You can check your VMFS alignment here: when the starting sector is set to 128, you know your VMFS is aligned correctly (VMFS aligned on a 64KB boundary).
You can also map the LUN back to your service console device. It will show you the type of file system on the LUN as well; in this case, fb is VMFS.
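If you just want those two pieces without paging through the whole dump, a couple of greps will do (the field names below are assumptions; adjust them to whatever your output shows):
esxcfg-info -s | grep -i "starting sector"   # quick look at the VMFS partition alignment
esxcfg-info -s | grep -i "console device"    # map the LUN back to its service console device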
There are times we want to find out what drivers are configured to load when the VMkernel boots up.
You can find this out by typing:
"esxcfg-module -q"
You can also check what drivers are currently loaded:
"esxcfg-module -l"
Every time you create a VMFS datastore, a copy of the VMFS metadata is written to your LUN with the following information:
- Block size
- Number of extents
- Volume capacity
- VMFS version
- Label
- VMFS UUID
You can use "vmkfstools -P -h /vmfs/volumes/LUN_Label" to query the file system information.
Note: information based on Infrastructure 3 DSA Manual
Normally, when you hit a VMotion issue, or a 64-bit VM cannot be powered on from your 64-bit ESX server, you might be asked to reboot the host and check in the BIOS whether HV (Hardware Virtualization) is enabled. You can check this without rebooting your server:
esxcfg-info | grep -i "hv support"
It will return a number between 0 and 3:
- 0 is Not present
- 1 is Not supported
- 2 is Disabled
- 3 is Enabled
So in my case, my server supports HV, but it is disabled in the BIOS.
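If you check this often, a tiny helper can translate the number for you (a rough sketch; the way the digit is pulled out of the esxcfg-info line is an assumption and may need adjusting to your output format):
hv=$(esxcfg-info | grep -i "hv support" | head -1 | sed 's/[^0-9]//g')
case "$hv" in
  0) echo "HV not present" ;;
  1) echo "HV not supported" ;;
  2) echo "HV supported but disabled - check the BIOS" ;;
  3) echo "HV enabled" ;;
  *) echo "Could not read the HV Support value" ;;
esac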
Sometimes we get a hardware device that just won't work on an ESX server. There are many different reasons for that. We'd like to first find out whether the device is supported and the right device driver is loaded. You can check the HCL online, but what if you are in front of the ESX server and it does not have an Internet connection?
You can check the device against the vmware-devices.map file. This file is located in the /etc/vmware directory on your ESX server.
Let's use a NIC on my server as an example
esxcfg-nics -l
We can see the device is Broadcom Corporation NetXtreme II 5706 Gigabit Ethernet and the driver is bnx2.
Now we will check this against the vmware-devices.map file:
grep -i "bnx2" vmware-devices.map
We can see the line "device,0x14e4,0x164a,nic,NetXtreme II 5706 Gigabit Ethernet,bnx2.o". We know the device is supported, and the loaded driver is also correct. Now we need to look elsewhere for the problem.
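You can also go the other way and search the map by PCI vendor/device ID instead of driver name (the lspci step below is a sketch; the 0x14e4/0x164a IDs are the ones from the map line above):
lspci -n | grep -i 14e4                                  # confirm the Broadcom vendor/device IDs on the PCI bus
grep -i "0x14e4,0x164a" /etc/vmware/vmware-devices.map   # look those IDs up in the map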
In vSphere, the new vDS can make your life much easier. However, if anything goes wrong during configuration, you can lose all your networking (that includes your management network as well). If you still have iLO or KVM access, you can follow these steps to get access back.
Step 1: Log on to the ESX host.
Step 2: Create a new temporary vSS (tmpSwitch) and Port Group (vswifPg)
esxcfg-vswitch -a tmpSwitch
esxcfg-vswitch -A vswifPg tmpSwitch
Step 3: Move uplink from vDS to vSS
esxcfg-vswitch -l (to get DVSwitch, DVPort, and vmnic names)
esxcfg-vswitch -Q vmnic0 -V <dvPort id> <dvSwitch name> (unlink vmnic0 from the vDS)
esxcfg-vswitch -L vmnic0 tmpSwitch (link to vswitch)
Step 4: Move vswif from vDS to vSS (a worked example follows these steps)
esxcfg-vswif -l (get vswif IP address, netmask, dvPort id, etc.)
esxcfg-vswif -d vswif0
esxcfg-vswif -a vswif0 -i <IP address> -n <netmask> -p vswifPg
Check or edit the default gateway address by editing /etc/sysconfig/network, or add the default gateway with:
route add default gw <gateway IP>
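A worked example of Step 4 with made-up addresses (192.168.1.10/24 and gateway 192.168.1.1 are placeholders; use the values you noted from "esxcfg-vswif -l"):
esxcfg-vswif -a vswif0 -i 192.168.1.10 -n 255.255.255.0 -p vswifPg
route add default gw 192.168.1.1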
If you are on an ESX host service console (ESX 3.x or ESX 4.0) and want a quick summary of the datastores this host can see, but you either don't have access to the VI Client or are simply too lazy to launch it, you can get that by typing:
vmware-vim-cmd hostsvc/datastore/listsummary
You can get more detail about a specific datastore with:
vmware-vim-cmd hostsvc/datastore/info datastore_name
Anyone who works with ESX server knows that sometimes you need to restart the hostd service on the ESX server to "refresh" information on ESX. Normally we do this by typing "service mgmt-vmware restart".
However, sometimes this is not good enough. Every time you restart hostd, it maps to 4 files under /var/lib/vmware/hostd/stats.
If any of those files gets corrupted for any reason, restarting hostd will not help. Every time a hostd restart is called, the service checks whether those files exist; if not, it creates them before starting the service. So we can remove those files to make sure the hostd service starts from scratch: "rm -rf /var/lib/vmware/hostd/stats/*". Make sure you are in the right directory when you run the rm command, especially with the -rf switch.
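A minimal sequence, assuming the stats files live where described above (double-check the path and your working directory before running rm -rf):
service mgmt-vmware stop                 # stop hostd first
rm -rf /var/lib/vmware/hostd/stats/*     # remove the (possibly corrupted) stats files
service mgmt-vmware start                # hostd recreates the files when it starts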
When connected to an active/active array, esxcfg-mpath will not show you which HBA is currently in use for I/O operations. You can check this in ESX 4.0 by using:
esxcli nmp device list | grep "Working Paths"
You can also find out how many LUNs are connected through a specific HBA:
esxcli nmp device list | grep "Working Paths" | grep -c vmhba#
Note: (Tip was provided by a BCS customer)
In ESX 4.0, the failover component is also a plug-in, called the PSP (Path Selection Plugin).
To list them you can run:
esxcli nmp psp list
Most Recently Used (MRU) — Selects the first working path discovered at system boot time. If this path becomes unavailable, the ESX host switches to an alternative path and continues to use the new path while it is available.
Fixed — Uses the designated preferred path, if it has been configured. Otherwise, it uses the first working path discovered at system boot time. If the ESX host cannot use the preferred path, it selects a random alternative available path. The ESX host automatically reverts back to the preferred path as soon as the path becomes available.
Round Robin (RR) – Uses an automatic path selection algorithm, rotating through all available paths and enabling load balancing across the paths.
Note: Information based on Cormac Hogan's storage training slides.
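If you want to switch a device over to one of these policies in ESX 4.0, the nmp namespace also has a setpolicy command (the naa ID below is a made-up placeholder; double-check the exact syntax on your host before relying on it):
esxcli nmp device setpolicy --device naa.60060160a0b1220041fa2a7a12345678 --psp VMW_PSP_RR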
In ESX 4.0, the array type component is also a plug-in, called the SATP (Storage Array Type Plugin). You can use the default one shipped with ESX server (it works for most arrays), or use one from your SAN vendor if they have one available. To check what plug-ins are installed on the host:
esxcli nmp satp list
You can really dig into the details by using "esxcli nmp satp listrules".
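For example, to see only the rules that apply to a particular array vendor (EMC here is just an illustrative vendor string):
esxcli nmp satp listrules | grep -i emc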
ESX 4.0 uses unique LUN identifiers, typically the NAA (Network Addressing Authority) ID. We can find out from the ESX host side whether a LUN is unique:
esxcli nmp device list
A device that starts with "naa" uses a Network Addressing Authority ID, and it is unique. One that starts with "mpx" does not have a unique ID; in our case this is a local disk. It would be a good idea to make sure we have unique IDs for the disks when the SAN team presents them to ESX servers.
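A quick way to pull out just the identifiers and spot any mpx devices (this assumes the device IDs start at the beginning of a line in the list output; adjust the pattern if yours differ):
esxcli nmp device list | grep -E "^(naa|mpx)"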