Friday, July 16. 2010
Vyatta has released a free VMware Network Virtualization training course
Watch the free video now at: http://www.vyatta.com/promo/virtualfirewallcourse.php
This free training course is just one of over 20 courses in the Vyatta University curriculum. For a full course catalog visit: http://www.vyatta.com/services/training.php
Thursday, July 15. 2010
The new 5-day vSphere 4.1 ICM training course
This hands-on training course explores installation, configuration, and management of VMware vSphere 4.1, which consists of VMware ESX™/ESXi and VMware vCenter™ Server. The course is based on ESX/ESXi 4.1 and vCenter Server 4.1. Upon completion of this course, you can take the examination to qualify as a VMware Certified Professional (VCP4).
Students who complete this course may enroll in any of several advanced vSphere courses.
After August 23rd, all sessions of the old 4-day course will run as a 5-day class. For a detailed comparison, see the course datasheet.
Scott Vessey has identified the differences between the old and the new course. The new content in the 5-day ICM includes:
• vCenter Linked Mode
• Host Profiles
• Distributed Power Management
• Deeper content on High Availability
• Fault Tolerance
Wednesday, July 14. 2010
vSphere 4.1 - Virtual Serial Port Concentrator
Many admins rely on serial port console connections to manage physical servers. These connections provide a non-graphical and hence low-bandwidth remote console for administering physical servers, and administrators use physical serial port concentrators to multiplex connections to several hosts. vSphere 4.1 adds support for virtual serial port concentrators to provide similar functionality for virtual machines. The feature allows you to redirect a virtual machine's serial ports over a standard network link using telnet or ssh. This enables solutions such as third-party virtual serial port concentrators, like the new virtual appliance-based Avocent Cyclades ACS 6000, for virtual machine serial console management or monitoring. Providing a suitable way to access a VM's serial port(s) remotely over a network connection, and supporting a "virtual serial port concentrator" utility, thus gives VI administrators first-class support for this traditional server management approach. Furthermore, these console connections are also considered more secure for virtual machines, since the traffic stays on the management network. The virtual machine settings user interface has been modified to allow this serial port configuration.
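As a rough illustration, a network-backed serial port ends up in the VM's .vmx file along these lines. This is a minimal sketch only: the hostname and port are placeholders, and the exact key names may differ between builds, so verify against your own configuration before relying on it.

    serial0.present = "TRUE"
    serial0.fileType = "network"
    serial0.fileName = "telnet://vspc.example.com:13370"
    serial0.network.endPoint = "client"

Here fileType = "network" replaces the usual pipe or file backing, fileName carries the telnet URI of the concentrator, and endPoint = "client" makes the VM initiate the connection to the concentrator rather than listen for an incoming one.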
Implementing the Avocent ACS6000 Virtual Serial Port Concentrator
Using a Proxy with vSphere Virtual Serial Ports
Tuesday, July 13. 2010
Steve Herrod, VMware's CTO, Introduces VMware vSphere 4.1
vSphere 4.1 Memory Enhancements - Compression
Finally, with Transparent Memory Compression, vSphere 4.1 compresses memory on the fly to increase the amount of memory that appears to be available to a given VM. Transparent Memory Compression is most interesting for workloads where memory, rather than CPU cycles, is the bottleneck. ESX/ESXi provides a memory compression cache to improve virtual machine performance under memory over-commitment. Memory compression is enabled by default; when a host's memory becomes overcommitted, ESX/ESXi compresses virtual pages and stores them in memory.
Since accessing compressed memory is faster than accessing memory swapped to disk, memory compression in ESX/ESXi allows memory over-commitment without significantly hindering performance. When a virtual page needs to be swapped, ESX/ESXi first attempts to compress the page. Pages that can be compressed to 2 KB or smaller are stored in the virtual machine's compression cache, increasing the effective capacity of the host. You can set the maximum size of the compression cache and disable memory compression using the Advanced Settings dialog box in the vSphere Client.
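The same two knobs are exposed as host advanced settings, so they can also be changed from the service console. A minimal sketch, assuming the Mem.MemZipEnable and Mem.MemZipMaxPct option names (check them against your build before applying):

    # Cap the per-VM compression cache at 10% of the VM's memory size (the default)
    esxcfg-advcfg -s 10 /Mem/MemZipMaxPct

    # Turn memory compression off entirely (1 = enabled, 0 = disabled)
    esxcfg-advcfg -s 0 /Mem/MemZipEnable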
vSphere 4.1 Network Traffic Management - Emergence of 10 GigE
The diagram at left should be familiar to most. When using 1GigE NICs, ESX hosts are typically deployed with NICs dedicated to particular traffic types. For example, you may dedicate four 1GigE NICs to VM traffic, one NIC to iSCSI, another to vMotion, and another to the service console. Each traffic type gets dedicated bandwidth by virtue of the physical NIC allocation.
Moving to the diagram at right: ESX hosts with 10GigE NICs are likely to be deployed (for the time being) with only two 10GigE interfaces, with multiple traffic types converged over those two interfaces. So long as the load offered to a 10GigE interface is less than 10 Gbps, everything is fine: the NIC can service the offered load. But what happens when the offered load from the various traffic types exceeds the capacity of the interface? What happens when you offer, say, 11 Gbps to a 10GigE interface? Something has to suffer. This is where Network IO Control steps in: it addresses oversubscription by allowing you to set the relative importance of predetermined traffic types.
NetIOC is controlled with two parameters: limits and shares.
Limits, as the name suggests, set a cap for a given traffic type (e.g., VM traffic) across the NIC team. The value is specified in absolute terms in Mbps. When set, that traffic type will not exceed that limit outbound (egress) from the host.
Shares specify the relative importance of a traffic type when traffic types compete for a particular vmnic (physical NIC). Shares are specified in abstract units between 1 and 100 and indicate the relative importance of that traffic type. For example, if iSCSI has a shares value of 50 and FT logging has a shares value of 100, then FT traffic will get twice the bandwidth of iSCSI when they compete, as the worked example below shows. If both were set at 50, or both at 100, they would get the same level of service (bandwidth).
There are a number of preset values for shares ranging from low to high. You can also set custom values. Note that the limits and shares apply to output or egress from the ESX host, not input.
Remember that shares apply to the vmnics; limits apply across a team.
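To make the arithmetic concrete, here is a small Python sketch. It is purely illustrative, not a VMware tool, and it simplifies the real scheduler (for one thing, it ignores how NetIOC redistributes bandwidth left unused by a capped traffic type): it splits a vmnic's egress capacity proportionally by shares, then applies any per-type limit.

    def netioc_egress(link_mbps, traffic):
        """traffic maps a name to (shares, limit_mbps or None); returns Mbps per type."""
        total_shares = sum(shares for shares, _ in traffic.values())
        alloc = {}
        for name, (shares, limit) in traffic.items():
            fair = link_mbps * shares / total_shares   # proportional split by shares
            alloc[name] = fair if limit is None else min(fair, limit)
        return alloc

    # iSCSI at 50 shares vs. FT logging at 100 shares competing on one 10GigE vmnic:
    print(netioc_egress(10000, {"iSCSI": (50, None), "FT": (100, None)}))
    # {'iSCSI': 3333.33, 'FT': 6666.67} -- FT gets twice the bandwidth of iSCSI

With both types at 50 shares, or both at 100, the split comes out even, matching the description above.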