Johnny Zhang
Not everyone will find this useful, but it's just cool to know that we can do this!
Yes, you can use a VNC client to connect to your VM without installing a VNC server on it. (Please note: it seems not every VNC client will connect, but I tested TightVNC and it works: http://www.tightvnc.com/download.html.) All you need to do is add some lines to your .vmx file:

RemoteDisplay.vnc.enabled = "True"
RemoteDisplay.vnc.port = "7001"
RemoteDisplay.vnc.password = "test"

You can use any port and password.
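
Keep in mind that the VNC console is served by the ESX host itself, so if you enable this on more than one VM on the same host, give each VM its own port. A second VM could use something like this (port and password here are just illustrative):

RemoteDisplay.vnc.enabled = "True"
RemoteDisplay.vnc.port = "7002"
RemoteDisplay.vnc.password = "test2"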

When ready, you need to connect to your ESX server using "esx_server_name_or_ip:port".
The port is the one you put into your .vmx file; in my case it is 7001.
Click on "Connect" and it will ask for the password. In my case this is "test".

Click on "
Ok"
You are now connected to the VM
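
For example, if the host's management address were 192.168.10.50 (a made-up address) and the port from the .vmx above, you would type 192.168.10.50:7001 into the TightVNC viewer and enter test when prompted. Some VNC clients expect the host::port form (double colon) for a non-default port, so try that if the single-colon form does not connect.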
Johnny Zhang
In vSphere the new vDS makes your life much easier. However, if anything goes wrong during configuration you can lose all your networking (and that includes your management network as well). If you still have iLO or KVM access, you can follow these steps to get access back.

Step 1: Logon to ESX host.

Step 2: Create a new temporary vSS (tmpSwitch) and Port Group (vswifPg)
esxcfg-vswitch -a tmpSwitch
esxcfg-vswitch -A vswifPg tmpSwitch

Step 3: Move uplink from vDS to vSS
esxcfg-vswitch -l (to get DVSwitch, DVPort, and vmnic names)
esxcfg-vswitch -Q vmnic0 -V <dvPort id of vmnic0> <dvSwitch name> (unlink vmnic0 from the vDS)
esxcfg-vswitch -L vmnic0 tmpSwitch (link to vswitch)

Step 4: Move vswif from vDS to vSS
esxcfg-vswif -l (get vswif IP address, netmask, dvPort id, etc.)
esxcfg-vswif -d vswif0
esxcfg-vswif -a vswif0 -i <IP address> -n <netmask> -p vswifPg

Check or edit the default gateway address by editing /etc/sysconfig/network, or add the default gateway with:
route add default gw <gateway IP address>
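
As a quick worked example, suppose vswif0 was 192.168.1.10 with netmask 255.255.255.0, the default gateway was 192.168.1.1, and the uplink was vmnic0 on dvPort 100 of a vDS called dvSwitch0 (all of these values are made up; substitute what esxcfg-vswitch -l and esxcfg-vswif -l report on your host). The whole recovery would then look like:

esxcfg-vswitch -a tmpSwitch
esxcfg-vswitch -A vswifPg tmpSwitch
esxcfg-vswitch -Q vmnic0 -V 100 dvSwitch0
esxcfg-vswitch -L vmnic0 tmpSwitch
esxcfg-vswif -d vswif0
esxcfg-vswif -a vswif0 -i 192.168.1.10 -n 255.255.255.0 -p vswifPg
route add default gw 192.168.1.1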
Johnny Zhang
In ESX 3.5, the CPU scheduler logically partitions a host's physical CPUs into cells; by default there are 4 cores per cell, for scalability reasons. The scheduler can make decisions locally within a cell without affecting the other cells. However, with the introduction of 6-core CPUs, this may lead to some cells spanning sockets, and you might see performance issues when VMs run on those cells. If you are using such CPUs, you can change the cell size from both the VI Client and the command line.
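
For example, a host with two 6-core sockets has 12 cores; with the default cell size of 4, the scheduler builds three 4-core cells, so at least one cell has to take cores from both sockets. Bumping the cell size to 6 lines each cell up with a single socket again.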

Using the VI Client:

  1. Select the Configuration tab in the VI client.
  2. Select Advanced Settings.
  3. Select VMkernel.
  4. In the right pane, locate VMkernel.Boot.cpuCellSize.
  5. Change the value to 6.
This will take effect the next time the ESX host is rebooted.

From the command line interface (on classic ESX):
1. Enter:
esxcfg-advcfg --set-kernel 6 cpuCellSize
2. Reboot the ESX host.

From the remote command line interface (on ESXi):
1. Enter:

vicfg-advcfg --set-kernel 6 cpuCellSize
2. Reboot the ESX host.
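
Either way, after the reboot you can verify the value from the VI Client as described above. esxcfg-advcfg/vicfg-advcfg should also have a get counterpart to --set-kernel (check the tool's --help for the exact option name; it may be --get-kernel), which would let you run something like esxcfg-advcfg --get-kernel cpuCellSize to confirm the change.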

Note: ESX 4.0 uses per-pCPU locks instead of the cell-based scheduler, so you will not see this socket-spanning performance issue on ESX 4.0.