HP today announced breakthrough networking, storage and server technologies that reduce costs, increase bandwidth flexibility and improve overall performance of virtual server environments. The HP Virtual Connect Flex-10 Ethernet module, a direct connect storage bundle for HP BladeSystem, and the HP ProLiant DL385 G5p server are among HP’s offerings that are helping customers efficiently deploy their virtualized infrastructures.

While a growing number of companies deploy server virtualization to gain operational savings within their technology infrastructures, the cost of networking virtual servers continues to climb – for example, a typical server that hosts virtual machines requires six network connections.(1) To reap the benefits of their virtualized environment, companies are finding it necessary to invest in additional networking equipment, including network expansion cards, switches and cables. As an example, customers must purchase expensive network switches in either one Gigabit (Gb) or 10Gb increments to meet the increased bandwidth required for additional virtual server workloads.

HP’s new Virtual Connect Flex-10 Ethernet module is the industry’s first interconnect technology that can allocate the bandwidth of a 10Gb Ethernet network port across four network interface card (NIC) connections. This increase in bandwidth flexibility eliminates the need for additional network hardware equipment. As a result, customers deploying virtual machines and utilizing Virtual Connect Flex-10 can realize savings of up to 55 percent in network equipment costs.(2) Virtual Connect Flex-10 can save 240 watts of power per HP BladeSystem enclosure – or 3,150 kilowatt hours per year – compared to existing networking technologies.(3)

“Customers looking to eliminate the common obstacles of networking costs and bandwidth flexibility should look no further than HP,” said Mark Potter, vice president and general manager, BladeSystem, HP. “These technologies break down the barriers of virtualized networks, giving customers the greatest return on their investments.”

Industry-leading cost benefits and four-to-one network consolidation

HP Virtual Connect Flex-10 distributes the capacity of a 10Gb Ethernet port into four connections, and enables customers to assign different bandwidth requirements to each connection. Optimizing bandwidth based on application workload requirements enables customers to leverage their 10Gb investments across multiple connections, supporting virtual machine environments and other network intensive applications. This reduces overall network costs and power usage by provisioning network bandwidth more efficiently. The recently announced HP ProLiant BL495c virtualization blade includes built-in Virtual Connect Flex-10 functionality that enables it to support up to 24 NIC connections. With increased network bandwidth and memory capacity, the BL495c can accommodate more virtual servers than other competitive blade server offerings on the market.(4) Existing HP ProLiant c-Class blade customers can upgrade to Virtual Connect Flex-10 with the new HP NC532m Flex-10 expansion card.

Simple, cost-effective storage expansion for HP BladeSystem customers

HP’s new direct connect storage bundle for HP BladeSystem includes two HP StorageWorks 3Gb serial attached SCSI (SAS) BL switches and an MSA2000sa storage array.
Traditionally, BladeSystem server administrators have had limited direct-attach or shared storage options and have had to rely on personnel with specialized knowledge to build a storage area network (SAN) based solution. This new low-cost, reliable storage option allows server administrators to easily deploy scalable shared SAS storage without the costs and complexity SANs require. By simply purchasing additional MSA2000sa arrays, customers can deploy up to 192 terabytes of external shared storage directly connected to an HP BladeSystem enclosure. The combination of the HP ProLiant BL495c virtualization blade server, Virtual Connect Flex-10 modules and the shared SAS storage bundle reduces the cost per virtual machine by more than 50 percent when compared to competitive solutions.(5)

HP has enhanced its Virtual Connect 4Gb Fibre Channel module to allocate storage resources on a per virtual machine basis. This further simplifies storage and virtualization deployments for Fibre Channel storage customers. Customers can assign up to 128 separate SAN volumes per server blade for greater performance and flexibility.

Innovative server design removes bottlenecks to virtual server performance
The new HP ProLiant DL385 G5p is a rack-based server optimized for virtualization. It offers up to 6 terabytes of internal storage as well as double the memory and a 67-percent improvement in energy efficiency when compared to previous generations.(6) Based on the new AMD Opteron™ 2300 Series Quad-Core processor, the DL385 G5p improves application performance and expands support for virtual machines.
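As a rough illustration of the Flex-10 bandwidth partitioning described above, the short Python sketch below models carving a single 10Gb port into up to four connections with per-connection bandwidth assignments. The class name, function name and the example allocation are hypothetical illustrations chosen for this sketch; they are not HP's actual management interface.

# Hypothetical sketch of the Flex-10 concept: one 10Gb Ethernet port
# partitioned into at most four logical NICs with per-NIC bandwidth caps.
from dataclasses import dataclass

PORT_CAPACITY_GBPS = 10.0  # one 10Gb Ethernet port

@dataclass
class FlexNic:
    name: str
    bandwidth_gbps: float  # the share of the port assigned to this connection

def partition_port(allocations):
    """Check that up to four per-connection allocations fit within one 10Gb port."""
    nics = [FlexNic(name, gbps) for name, gbps in allocations.items()]
    if len(nics) > 4:
        raise ValueError("the port exposes at most four connections")
    if sum(n.bandwidth_gbps for n in nics) > PORT_CAPACITY_GBPS:
        raise ValueError("allocations exceed the 10Gb port capacity")
    return nics

if __name__ == "__main__":
    # Illustrative example: weight the VM traffic heavily, keep management small.
    for nic in partition_port({"vm_traffic": 6.0, "migration": 2.0,
                               "storage": 1.5, "management": 0.5}):
        print(f"{nic.name:<12} {nic.bandwidth_gbps:.1f} Gb")

The point of the sketch is simply that one physical 10Gb uplink can stand in for several lower-bandwidth NICs sized to the workload, which is where the press release's claimed hardware and power savings come from.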
I recently reviewed Scalent's V/OE, or Virtual Operating Environment, which is software that automates the provisioning of both storage and networking for server operating system images and virtual machines. V/OE also automatically deploys images of any guest OS to a VMware partition. The result is an essentially liquid datacenter, in which server images are completely portable. You could move a server OS from a physical machine to a virtual machine, or from one virtual machine to another, and even back to a physical machine, all without ever having to copy files or change any settings manually. Each server needs to have a lightweight agent installed, but this has minimal impact on the system. The Scalent system automatically changes the VLAN settings on the appropriate switch, the network settings on the server instance, the LUN masking and other storage settings, the VMware partitioning, and the virtual name of the HBA for the server instance.

[ See Logan Harbaugh's review of Scalent V/OE and other high-availability and disaster recovery solutions for virtualization environments, including DataCore SANmelody, Marathon Technologies' everRun VM, Stratus Technologies' Avance, and Vizioncore vRanger Pro. ]

Because storage replication features can be used to keep the boot-from-SAN images up to date in a backup datacenter, switchover time is very quick, limited to the time it takes the OS to boot from the new SAN image in the new location. This is typically faster than booting from a local disk. The level of flexibility that Scalent offers, in terms of being able to run an OS instance either on hardware or in a virtual machine, is remarkable -- and especially useful in disaster recovery. Often with other failover systems, the two servers need to be identical to ensure that VMware drivers, CPU type, motherboard type, VMware partitioning, network settings, and capacity all match between the two. With Scalent, the backup server doesn't have to match the original.

Partnerships between storage vendors (such as NetApp, EMC, DataCore, FalconStor, and 3Par) and virtualization vendors (VMware, Citrix, Parallels, and Microsoft) promise similarly easy provisioning of guest OS images and storage. Other storage vendors are integrating backup and disaster recovery tools with VMware and other virtualization platforms to automate backups of VMDK files and the equivalents for other virtualization platforms. These partnerships signify not only that virtualization has become a checkbox for many sales calls, but also that more critical applications are being virtualized. An increasing number of virtualization customers are feeling the need for rapid provisioning and disaster recovery and backup options.

VMware has announced that storage resource management capabilities will be included in its forthcoming Virtual Datacenter OS, due next year. VDC-OS will enable on-demand provisioning of storage in addition to memory and CPU resources. It will also support thin provisioning of VMFS volumes, which won't use the full size of a partition if there is nothing installed in it. Nearly all of the major storage vendors have announced partnerships with one or more of the virtualization vendors. These partnerships can be a wonderful thing, but only if your storage vendor happens to have a deal with your virtualization vendor. The sooner these partnerships are no longer necessary, the better for customers.
But to get to where any storage will work with any virtualization platform, a Storage Networking Industry Association (SNIA) standard will be necessary. That will take a few years. In the meantime, you'll need to keep an eye on the alliances and proceed carefully. Story copyright © 2007 InfoWorld Media Group. All rights reserved. Reference : http://www.pcworld.com/article/152821/virtualization_storage.html?tk=rss_news
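As a purely illustrative sketch of the re-provisioning steps described for Scalent V/OE above (switch VLAN, server network settings, LUN masking and the virtual HBA name, with no files copied), here is a small Python model. Every name and field in it is a hypothetical placeholder for this sketch, not Scalent's actual API.

def move_server_image(image, target_host, switch, array):
    """Re-home a boot-from-SAN server image onto a new host, physical or virtual."""
    switch["ports"][target_host["switch_port"]] = image["vlan_id"]   # switch VLAN
    target_host["ip_config"] = image["ip_config"]                    # server network settings
    array["lun_masking"][image["boot_lun"]] = {target_host["wwpn"]}  # LUN masking
    target_host["virtual_wwpn"] = image["virtual_wwpn"]              # portable HBA identity
    target_host["booted_from"] = image["boot_lun"]                   # boot the same SAN image

if __name__ == "__main__":
    image = {"vlan_id": 120, "ip_config": "10.0.120.15/24",
             "boot_lun": "LUN-7", "virtual_wwpn": "50:01:43:80:aa:bb:cc:01"}
    virtual_host = {"switch_port": 9, "wwpn": "50:01:43:80:55:66:77:88"}
    switch = {"ports": {}}
    array = {"lun_masking": {}}
    move_server_image(image, virtual_host, switch, array)   # e.g. a physical-to-virtual move
    print(virtual_host["booted_from"], switch["ports"][9])  # same image, new location

The same call with a physical host as the target models the move back onto hardware, which is the flexibility the review highlights: the image never changes, only the plumbing around it does.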
Gartner released its annual "Top 10 Strategic Technologies for 2009" last week, and pride of place goes to virtualization, put right at the top of the list. More surprising, perhaps, is the fact that Gartner placed Cloud Computing directly below virtualization in the second spot. You've probably seen coverage of the list and feel you've gotten the gist of it from the commentary. That's what I thought, too -- until I read it, which I recommend you do as well. Gartner's discussion of each of the trends is illuminating, both for what they say -- and what they don't.

In discussing virtualization, Gartner notes that while server consolidation has been a huge growth area for the technology, 2009 will see storage and client virtualization become strong trends as well. Gartner rolls data de-duplication in under storage virtualization as well, although I must say that I view de-duplication as separate from virtualization -- de-duplication is an initiative that makes sense and should be undertaken (possibly) in partnership with storage virtualization, but is not a prerequisite for it. It might be more accurate to say that storage virtualization makes de-duplication possible, as the various copies of data that formerly resided on physically separate machines, with no way to identify duplicates across them, can now be mapped and reduced to one copy.

Turning to client virtualization, Gartner uses some curious language to discuss the phenomenon: "instead of the motherboard function being located in the data center as hardware [i.e., as individual blades], it is located there as a virtual machine bubble." I'm not sure that using the term bubble really clarifies what client virtualization is: the move from putting an end user operating environment on a dedicated piece of hardware, whether a local PC or a data center-based blade, to putting an end user operating environment into a virtual machine that resides on, and co-exists with other virtual client machines on, shared hardware. Or, to put it more in Gartner's terms, multiple client virtual operating environments cooperatively existing on a single motherboard.

Gartner goes on to deemphasize the client virtualization trend, stating that "despite ambitious deployment plans from many organizations, deployments of hosted virtual desktop capabilities will be adopted by fewer than 40 percent of target users by 2010." Without trying to criticize Gartner, it is an enormous disservice to the importance of this trend to characterize it with language that appears to downplay its strength. Simply stated, the move to client virtualization is, from an organizational impact perspective, far greater than server consolidation. Server consolidation is a back-room technology primarily of importance to IT operations. In the terms of Clayton Christensen (author of The Innovator's Dilemma), server virtualization is a sustaining innovation, in that it improves an existing product. Client virtualization, by contrast, dramatically changes the entire end-user delivery value chain. Many discussions of client virtualization focus on the fact that, when done right, the end user sees no difference in his or her screen whether it is delivered via a traditional "thick" client or through a virtualized environment. That is all to the good, and, frankly, if client virtualization imposed a significant difference from the traditional thick client, it would most likely be a non-starter.
However, the method by which that identical screen is delivered to the end user is significantly different in a client virtualization scenario. This means that the processes and operations of delivering the client environment must change, and change a lot, to achieve the benefits of client virtualization (more on that in a bit).

Client Virtualization

To begin with, new hardware must be placed in the data center to run the virtualized machines. So the cost of creating the operating environment is a necessary investment to put client virtualization in place. Second, individual virtualized machines must be created and made available on the new data center-located machines. In other words, there must be a migration of the existing physical machines into new virtual machines. Third, the organization's network capability with regard to capacity and latency must be tested and, if necessary, upgraded to support the flow of data between the client devices and the data center. A network that was previously very capable of carrying the data traffic between thick clients and server-based applications may not be robust enough to carry the increased traffic characteristic of client virtualization. Fourth, the established processes the organization uses to manage client machines will need to be modified.

Simply put, a lot of the work formerly necessary to keep client machines up and running goes away with client virtualization. No more worrying about whether the antivirus software is up to date. No more having to make "truck rolls" (i.e., in-person visits) to figure out what's wrong with the machine. The client environment is created on-the-fly back in the data center and served up fresh each time the user logs in. Some (but not, crucially, all) of this work is displaced back to the data center, which needs to have people manage administration of user environments, updating the images from which new virtual machines are created, and so on.

So, it's easy to see that there is a lot of churn in moving to client virtualization, which is why Gartner's statement should have been "as much as 40% of companies will undertake client virtualization by 2010." To my mind, the fact that four out of ten companies will take on the work I outlined above in order to implement client virtualization indicates that it must offer a significant, nay remarkable, payoff to make that 40% ready to undergo that burden. So what is that payoff? Why is client virtualization a big deal?

Number one, depending on how it's implemented, client virtualization can operate on lower spec hardware at the end user location, which offers hardware savings for new machines as well as the opportunity to stretch out the useful lives of already-existing client machines. So right off the bat, there's some capital expenditure avoidance possible with client virtualization. While the savings on each machine may not be huge, when applied over hundreds or thousands of end users, the money can add up fast. Naturally, some of those savings must be applied to the additional hardware necessary in the data center, but net-net client virtualization should offer savings in this arena. Number two, remember what I said about some, but not all, of the cost savings from less client-side work being transferred to additional work in the data center? It's true that some of the savings are spent, but the rest of the avoided IT operations costs aren't spent.
It's hard to estimate what that percentage will be, but considering the amount of money spent on help desks, personal visits on-site to deal with software problems, and so on, it could come to a pretty penny, indeed. Finally, and perhaps most important, there is the money saved through end users who are no longer stuck sitting doing nothing when their PC gets hosed. Every time someone has to stop working because their machine breaks represents lost productivity. This lost labor cost far outweighs the cost of hardware and software devoted to employees, so using client virtualization to keep client machines up and running can provide enormous financial returns.

Given the financial benefits client virtualization offers, why doesn't everyone take advantage of it at once? As I mentioned earlier, server consolidation has taken off because it offers a sustaining innovation: it can be applied with very little change in behavior or processes. By contrast, realizing the benefits of client virtualization requires significant change in those areas, and behavior and process change is always more difficult than technology change. Furthermore, the financial benefits of client virtualization don't really kick in when only a portion of the infrastructure is migrated, because you continue carrying the costs of the help desk, the ability to do on-site work, and so on; in fact, a partial client virtualization implementation actually adds to your costs. It's only when the majority of the client machines are migrated that the cost savings start to accrue.

So I'm actually impressed with Gartner's prediction that 40 percent of organizations will make the move to client virtualization by 2010. For that percentage of organizations to do so demonstrates the magnitude of the financial rewards client virtualization provides, given the organizational challenge presented by the necessary behavior and process changes. Gartner actually may be optimistic in their forecast, but only in the timescale, not in the ultimate adoption. Reference : http://www.pcworld.com/article/152720/client_virtualization.html?tk=rss_news
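To make the break-even argument above concrete, here is a minimal back-of-the-envelope model in Python. Every dollar figure below is an invented placeholder, not a number from the article or from Gartner; only the shape of the curve matters: partial migration carries both the old fixed support costs and the new data center costs, so total cost rises before it falls.

# Toy model of the client virtualization break-even argument.
# All figures below are made-up assumptions, not data from the article.
def annual_cost(total_clients, migrated,
                thick_support=400,        # assumed per-machine/year support for a thick client
                vdi_support=300,          # assumed per-machine/year support for a virtual client
                datacenter_fixed=120_000, # assumed fixed yearly cost of the VDI back end
                helpdesk_fixed=100_000):  # assumed fixed cost of desk-side support capability
    thick = total_clients - migrated
    cost = thick * thick_support + migrated * vdi_support
    if migrated > 0:
        cost += datacenter_fixed  # the back end is needed as soon as anyone migrates
    if thick > 0:
        cost += helpdesk_fixed    # desk-side support stays until the last thick client is gone
    return cost

if __name__ == "__main__":
    for migrated in (0, 250, 500, 750, 1000):
        print(f"{migrated:>4} of 1000 clients migrated -> ${annual_cost(1000, migrated):,} per year")

With these made-up numbers, migrating a quarter of the fleet costs more per year than migrating nothing, and the savings only show up once the whole fleet has moved, which is exactly the point the column makes.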
On Tuesday, Microsoft released to manufacturing System Center Virtual Machine Manager 2008. The final code will be shipped on Nov. 1. The company bills the software as a one-stop solution, allowing administrators to set up and deploy new virtual machines and manage hosts and other virtual infrastructure elements from one console. The 2008 version of SCVMM introduces a wide scope of virtualization platform support, performance and resource optimization, and enhanced support for high-availability host clusters, among other improvements. In testing RTM escrow code in my lab over the past month or so, I found SCVMM to be a competent and convenient place to manage virtual machines.

An Overview
You'll notice immediately upon launching SCVMM that the interface is familiar -- it sports the three-pane approach common to the System Center family of products. What's especially nice is that the integration between SCVMM and the rest of System Center isn't just skin-deep; the product plays nicely in particular with System Center Operations Manager and Configuration Manager. It even goes to the level of being able to manage workloads running on discrete operating systems within virtual machines on a VMware host. SCVMM supports all Windows products -- back to Windows 98 SE -- and some Linux. Centralized virtual machine deployment and management are provided across a range of popular virtual hosting and systems management software, including the suite of Microsoft and VMware products -- Virtual Server, Hyper-V, VMware Server, VMware ESX and VMware GSX. Xen is a possible later addition.

Many of the features of the previous version of SCVMM continue to do their jobs. This includes intelligent placement, which analyzes a set of virtual machine hosts that you have identified and, based on the resource needs of the individual virtual machine you are working with, intelligently selects the VM host whose available resources best support your desired configuration. Another previous feature is superb physical-to-virtual (P2V) and virtual-to-virtual (V2V) conversions that make it very simple to reconfigure your infrastructure as you need.

Also of note: One of the most powerful building blocks for SCVMM is its reliance on PowerShell. Every UI function in the system is built on PowerShell commands, and management of any virtual machine is fully scriptable using a very well-documented set of cmdlets for PowerShell. For instance, everything from creating a VM to performing a P2V or V2V conversion -- or even starting a VMware VMotion live migration (more on that in a bit) -- can be done either from the GUI or from a PowerShell cmdlet. That is powerful.

Finally, SCVMM 2008 leverages some of the improvements made to Windows Server 2008 in the clustering department. As you may know, Windows Server 2008 has made clustering something even the least experienced of administrators can handle. SCVMM automates the addition of a host cluster to support Hyper-V-based virtual machines and automatically detects when nodes are added to and removed from that cluster. And the intelligent placement algorithm works with a cluster too: SCVMM won't create VMs in a way that would leave the cluster overcommitted.

Architecture and Interfaces

SCVMM was designed by Microsoft to be the premier front end for all of your virtual machine management needs, and in that sense, it supports managing VMs running on Microsoft's Hyper-V or Virtual Server 2005 R2, or VMware-hosted virtual machines as well. Indeed, among the smoothest features of SCVMM is its support for live-migrating a machine using VMware's VMotion technology from one host to another. Interestingly, SCVMM doesn't support live migration among Hyper-V servers because that feature is not yet available in Windows Server 2008. (Live migration is the no-downtime way to switch a virtual machine between compatible hosts.) But in the meantime, SCVMM can and does support VMware's version of this feature, called VMotion. When Windows Server 2008 R2 ships, Microsoft's native live migration will be available and supported, and SCVMM will work seamlessly with it out of the box.
The architecture of SCVMM relies on a single server-side installation of the product and the various interfaces to it. The administrator console, previously mentioned, and the PowerShell integration sit atop the server product. Underneath the server product are the various system management interfaces to the individual virtual machine hosts, a SCVMM library server that allows you to store VM images, scripts to turn ISO images into readily available VMs, a pre-filled template for a virtual machine and other "quick-start" shortcuts. These virtual machines, templates and other elements can be stored on disks within the hosts, or you can leverage your existing SAN environment and its various data-transmission methods to move these artifacts around.
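Before moving on, here is a simplified Python sketch of the intelligent placement idea described earlier in this review: score each identified host against the virtual machine's resource needs and pick the one with the most headroom. It is only an illustration of the concept under assumed data structures; it is not Microsoft's actual placement algorithm or any SCVMM API.

# Simplified sketch of "intelligent placement": among hosts with enough free
# capacity, pick the one that keeps the most headroom after placing the VM.
def pick_host(vm, hosts):
    """Return the name of the host best able to run `vm`, or None if none fits."""
    candidates = []
    for host in hosts:
        free_cpu = host["cpu_cores"] - host["cpu_used"]
        free_mem = host["mem_gb"] - host["mem_used"]
        if free_cpu >= vm["cpu_cores"] and free_mem >= vm["mem_gb"]:
            # Score by the fraction of capacity still free after placement.
            headroom = min((free_cpu - vm["cpu_cores"]) / host["cpu_cores"],
                           (free_mem - vm["mem_gb"]) / host["mem_gb"])
            candidates.append((headroom, host["name"]))
    return max(candidates)[1] if candidates else None

if __name__ == "__main__":
    hosts = [
        {"name": "hyperv-01", "cpu_cores": 16, "cpu_used": 12, "mem_gb": 64, "mem_used": 56},
        {"name": "hyperv-02", "cpu_cores": 16, "cpu_used": 4,  "mem_gb": 64, "mem_used": 16},
        {"name": "esx-01",    "cpu_cores": 8,  "cpu_used": 2,  "mem_gb": 32, "mem_used": 8},
    ]
    print(pick_host({"cpu_cores": 2, "mem_gb": 8}, hosts))  # hyperv-02 has the most headroom

The same scoring idea, applied at cluster level, is what keeps placement from overcommitting a host cluster, as noted above.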
Easing the Administrative Burden

New to SCVMM 2008 is the integrated performance and resource optimization (PRO) feature. PRO allows you to set up -- either alone or with System Center Operations Manager -- alerts and policies that notify and govern certain performance scenarios. For example, if problems like disk space, processor usage or memory consumption degrade the health of a virtual machine, PRO can be configured to trigger an alert. (These alerts are extensible.) You can then instruct SCVMM to automatically act on an alert, if you wish, and perform remediating actions which shorten the time between problem and solution. These alerts can also be logged to be addressed later, in a manual fashion, by an administrator. PRO is a neat feature that continues to ensure your virtual machine stable is performing in the best way possible and takes away some of the tedious parts of managing a set of virtual machines.

Also of note is the self-service portal available for users to create their own virtual machines that can then be managed by SCVMM. Administrators can set up a policy that allows users to create machines that require different numbers of "quota points" based on their hardware footprint or overall resource profile, and those points are aggregated into an overall total that can be restricted based on the group in which the user is a member. This allows users to service their needs themselves, within reason, without either bothering the administrator with tedious tasks or overburdening the system with VM creep.
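As a hedged sketch of the self-service quota idea just described: each request costs "quota points" derived from the VM's footprint, and a group's requests are capped by a point budget. The costing rule and all of the numbers below are invented for illustration; they are not SCVMM's actual accounting.

# Toy model of quota-point-limited self-service VM creation.
def vm_points(cpu_cores, mem_gb):
    # Hypothetical costing rule: one point per core plus one point per 2 GB of RAM.
    return cpu_cores + mem_gb // 2

def request_vm(group, cpu_cores, mem_gb, budgets, usage):
    """Approve the request only if the group's running total stays within its budget."""
    cost = vm_points(cpu_cores, mem_gb)
    if usage.get(group, 0) + cost > budgets[group]:
        return False, f"denied: {cost} points would exceed {group}'s budget"
    usage[group] = usage.get(group, 0) + cost
    return True, f"approved: {cost} points charged to {group}"

if __name__ == "__main__":
    budgets = {"dev-team": 20, "qa-team": 10}
    usage = {}
    print(request_vm("dev-team", 4, 8, budgets, usage))   # 8 points, approved
    print(request_vm("dev-team", 8, 16, budgets, usage))  # 16 points, denied (24 > 20)
    print(request_vm("qa-team", 2, 4, budgets, usage))    # 4 points, approved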
The Last Word

Overall, I find SCVMM to be a compelling solution for managing virtual machines across your enterprise. I was impressed by the scope of management available, and in some ways, having SCVMM manage VMware hosts and virtual machines seems superior to using VMware's own tools because the UI is easier to use than the VMware interface and SCVMM plugs in better with the rest of your Microsoft infrastructure. And the extensibility of the management capabilities via PowerShell is second to none. The integration with Operations Manager and the PRO performance optimization features seal the deal. If you find yourself looking to integrate VM administration in your enterprise beyond native tools, SCVMM is absolutely worth your time to evaluate. As of this writing, pricing hadn't yet been finalized.

Jonathan Hassell is an author, consultant and speaker on a variety of IT topics. His published works include RADIUS, Hardening Windows, Using Windows Small Business Server 2003 and Learning Windows Server 2003. His work appears regularly in such periodicals as Windows IT Pro magazine, PC Pro and TechNet Magazine. He also speaks worldwide on topics ranging from networking and security to Windows administration. He is currently an editor at Apress Inc., a publishing company specializing in books for programmers and IT professionals. Reference : http://www.pcworld.com/article/152716/ms_virtual_machine.html?tk=rss_news
Three months after being made VMware's CEO, Paul Maritz has announced changes at the company that are designed to help it ride out the economic storm while at the same time changing from a fast-growing startup into a mature software company. The changes include implementing a hiring freeze -- or "hiring pause," as Maritz put it -- that started during the third quarter and will probably last into 2009. He will also divide VMware into separate business units to handle different areas of product development and appoint new senior executives to manage those divisions.

The changes were announced after VMware reported solid third-quarter financial results Tuesday, including profits that were ahead of expectations. But they also come as the IT industry braces for an expected slowdown in customer spending, and as VMware in particular exits a heady period of rapid growth. Revenue at the virtualization software company grew 37 percent in the third quarter, to US$472 million. That compared with growth of 54 percent in the second quarter, 69 percent in the first, and 80 percent in the fourth quarter of last year.

"VMware is coming off a period of very rapid growth, so it's a healthy thing in any case to take stock and make sure we have people focused on the right areas," Maritz said during a conference call Tuesday, in reference to the hiring freeze. "But it's also too soon to say what will happen in 2009" in terms of customer spending, he said. "We'll suspend new hiring except for important and strategic hires," Maritz said. "We'll continue this into the fourth quarter, and frankly into 2009 as well." The company joins SAP and Microsoft, among others, in its decision to limit hiring amid the economic slowdown.

Maritz said dividing the company into separate product divisions will help it to execute on its plans while it continues to expand. He didn't say how those divisions would break down. They each will have a separate research and development group but will share a common sales and marketing force, he said. "We're still working our way through the details; our intent is to have them ready and implemented as we go into 2009," Maritz said. The company will hire or promote a senior executive to run each division, he said.

The move comes as VMware expands its technology roadmap into new areas. At VMworld in September, the company said it would build a "virtual datacenter operating system," including new products for virtualizing not only servers, but also network and storage gear. It is also developing products to let companies link their data centers to those of cloud computing service providers.

The executive changes may also help address turmoil that hit VMware's upper ranks. Diane Greene, its former CEO and co-founder, was ousted earlier this year. She was followed soon after by Chief Scientist Mendel Rosenblum, who is Greene's husband, and Richard Sarwal, who led research and development. Maritz addressed those departures on the call. "My first order of business was to make sure the transition from Diane to myself went as smoothly as possible," he said. "As with any transition, we've had our challenges there, but I can report that we are making our way through them and moving ahead."

VMware will also divide its sales division into geographic regions, each with its own profit-and-loss responsibility and its own senior manager, Maritz said. The company thinks it can expand significantly in Japan, Korea, Brazil, Russia, India and China, among others, he said.
Maritz also discussed competition from Microsoft. Some customers delayed purchases in the quarter to do side-by-side comparisons of the companies' products, he said. "By and large those worked in our favor," according to Maritz. "We did not see any major losses to Microsoft." He argued that Microsoft is still behind VMware with its virtualization technology. "We don't see them catching up to us until the next 12 to 24 months, by which time we will have moved on," he said. Executives were cautiously optimistic on the call but acknowledged that VMware's growth will continue to slow as it becomes bigger and the virtualization market matures. The company will start to follow the seasonal trends typical of the industry, which means its first-quarter revenue will likely decline compared to the fourth, said CFO Mark Peek. The economic climate prompted some customers to avoid long-term enterprise license agreements during the quarter and buy short-term licenses instead, Peek said. Meeting financial targets in the quarter was "certainly very challenging," Maritz said. "As we went into September, we saw uncertainty set in in a big way." Reference : http://www.pcworld.com/article/152592/.html?tk=rss_news
VMware reported better than expected financial results on Tuesday and said it was standing by its forecast for the rest of the year, albeit at the lower end of its guidance. Revenue for the third quarter, which ended Sept. 30, was US$472 million, up 32 percent from the same period a year ago. That's slower growth than VMware has reported in the past, but still ahead of the $463 million that financial analysts had been expecting, according to a poll by Thomson Reuters. Net income was $83 million, or $0.21 per share, up from $65 million, or $0.18 per share, in the third quarter last year. That too was ahead of the analyst forecast, which called for earnings of $0.20 per share. VMware CEO Paul Maritz called the figures "solid" in the face of a "challenging economic environment." The results are evidence that customers are willing to spend money on technologies that can help them reduce costs, he said in a statement. VMware makes virtualization software that's used by companies to consolidate industry-standard servers onto a smaller number of machines. It more or less created the market and became its technology leader, but it now faces new competition from Microsoft, Oracle and others. VMware maintained its forecast for 2008 revenue growth of 42 percent to 45 percent, but it cautioned that the economic uncertainty makes it difficult to predict demand for its products. It said there was "an increased likelihood that 2008 revenue will be at the lower end of the guidance range." The growth estimate is ahead of the analyst forecast, but it continues a trend toward slower growth for VMware as its software becomes more widely used. Its revenue increased 54 percent in the second quarter and 69 percent the quarter before that. Shares in VMware leapt by as much as 24 percent after the results were reported, trading at $23.30 at the time of this report. They ended the normal day's trading at $18.73, which was 9 percent down from Monday's close. Reference : http://www.pcworld.com/article/152586/.html?tk=rss_news
Microsoft is pleased to announce the release to manufacturing (RTM) of System Center Virtual Machine Manager 2008 – the next generation of Microsoft’s solution for managing the virtualized infrastructure. A key member of System Center – a centralized, enterprise-class suite of data center management products – Virtual Machine Manager (VMM) 2008 enables customers to configure and deploy new virtual machines and centrally manage physical and virtual infrastructure from one console. New to this version of VMM is multi-vendor virtualization platform support, Performance and Resource Optimization (PRO) and enhanced support of “high availability” host clusters, among other new features. SCVMM 2008 provides a management solution for the virtualized data center that helps enable centralized management of IT infrastructure, increased server utilization, and dynamic resource optimization across multiple virtualization platforms.

Highlights of System Center Virtual Machine Manager 2008:
- Support for VMs Running on Windows Server 2008
- Multi-Vendor Virtualization Platform Support
- Performance and Resource Optimization (PRO)
- Host Cluster Support for “High Availability” Virtual Machines

Download: SCVMM 2008 – Evaluation
Full story at http://hypervoria.com/hyper-v/system-center-virtual-machine-manager-2008-released.aspx
VMware CTO Stephen Herrod drew a cheer at the VMworld conference Wednesday by announcing plans to bring the next version of VMware’s VirtualCenter management software to Linux and the iPhone. In a speech opening day two of the VMworld show in Las Vegas, Herrod also described improvements to VMware’s core virtual machine technology that should allow businesses to run larger, more demanding applications on virtualized servers.

VirtualCenter Management Server, the control node for VirtualCenter, today runs only on versions of Microsoft’s Windows Server OS. VCenter, an updated and renamed version planned for next year, will also be available as a “virtual appliance” that runs on Linux, Herrod said. The company is also working to bring the VirtualCenter client, which currently runs on Windows PCs, to Linux, the Mac OS and also devices like Apple’s iPhone. Herrod showed only a slide photo of the iPhone interface, but it was enough to get him some applause.

VMware has been emphasizing application performance and availability throughout the show. “The focus for VMware is to make sure we can run any application at all, no matter how much performance it demands,” Herrod said. To that end VMware will increase the compute capacity its virtual machines can address next year to eight CPUs and 256G bytes of RAM, from four CPUs and 64G bytes of RAM today. I/O throughput will increase to 9G bytes per second, from 300K bps today. IT staff will be able to put up to 64 server nodes in a virtual resource pool cluster -- the pool of computers available for use in a virtual environment.

Herrod walked through VMware’s plan to deliver next year a “virtual data center OS,” a set of technologies for aggregating all resources in a data center, including storage and networking, and for moving virtual machines between them more easily with their policies attached. He demonstrated VMware Fault Tolerance, which was previewed at VMworld last year and is also expected in 2009. It uses what VMware calls vLockstep technology to make a constantly updated copy of a virtual machine on a different physical server. Herrod demonstrated the technology running a one-arm bandit application (the slot machine being endemic to Las Vegas). He showed how if the primary server goes down, because someone kicks a cable or switches it off by accident, the workload switches to the remote server and the application keeps running without interruption, with the same data available to it. Reference : http://www.macworld.com/article/135600/2008/09/vmware.html?lsrc=rss_main
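As a toy illustration of the fault-tolerance behavior described above, the Python sketch below mirrors every state change of a primary virtual machine to a shadow copy and promotes the shadow when the primary host fails. It models only the general idea of a constantly updated copy taking over with the same data; it is not VMware's vLockstep protocol and the class is invented for this sketch.

# Toy model of a primary VM kept in step with a shadow copy on another host.
class MirroredVM:
    def __init__(self):
        self.primary_state = {}
        self.shadow_state = {}
        self.primary_alive = True

    def apply(self, key, value):
        """Apply a state change on the primary and mirror it to the shadow."""
        if self.primary_alive:
            self.primary_state[key] = value
        self.shadow_state[key] = value   # kept in step with the primary

    def fail_primary(self):
        """Simulate someone kicking the cable: promote the shadow copy."""
        self.primary_alive = False
        self.primary_state = self.shadow_state  # workload continues with the same data

if __name__ == "__main__":
    vm = MirroredVM()
    vm.apply("jackpot_counter", 41)
    vm.apply("jackpot_counter", 42)
    vm.fail_primary()
    print(vm.primary_state["jackpot_counter"])  # 42: nothing lost across the failover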
VMware customers are getting a bit more freedom in the way they can transfer virtual machines from one Intel-based server to another, but they shouldn't hold their breath waiting for a bridge between Intel and AMD-based systems, an Intel executive said Tuesday. With its line of Xeon 7400 processors released this week, Intel is enabling customers using VMware's vMotion technology to move virtual machines between two servers even when they are based on different families of Intel chips. VMotion is VMware's technology for moving running virtual machines onto a different physical server. It's used by some customers for load balancing or for building fault tolerance into applications.

Before the 7400 series, also known as Dunnington, the two servers had to use the same family of Intel chips for vMotion to work, said Doug Fisher, vice president with Intel's Software Solutions group, at the VMworld conference in Las Vegas. With the 7400 and future chip families, that restriction is lifted. VMware CEO Paul Maritz mentioned the development in his speech at the start of VMworld Tuesday. "Now you'll be able to buy hardware essentially independent of your vMotion strategy," he said. The compatibility goes back only to the previous processor family, the 7300 "Tigerton" series, and will extend to the next generation, known as Nehalem. "We'll always give at least three generations of compatibility," Fisher said.

Intel made a big deal about the news, but AMD said its Opteron processors have had a similar capability for years. AMD doesn't change the microarchitecture of its processors as frequently as Intel, so compatibility between different Opteron lines is not an issue, said Margaret Lewis, AMD director of commercial solutions. Customers looking to move virtual workloads between AMD- and Intel-based servers are out of luck, however, at least for the foreseeable future, according to Fisher. "It's not going to happen," he said on the sidelines after his speech. The companies' chip architectures, while both x86, are too different and change too frequently to be made compatible. "We'd have to slow the pace of innovation to make it happen," he said. Lewis suggested it was only Intel, not AMD, that changes its architecture frequently. "We'd need to sit down with Intel and VMware and discuss how to make it happen, and we would welcome that discussion," she said. AMD would stand to gain the most from such compatibility, since it would give companies one less reason to buy Intel-based servers.

Dunnington is a six-core processor with a larger, 16M byte Level 3 cache to boost performance. VMware CTO Steve Herrod said VMware will keep its per-socket pricing the same for Dunnington, "so customers can get more virtual machines per processor" without paying more in licenses. It was one of several ways Fisher said Intel is working with silicon to usher in a "second wave" of virtualization. The first wave was using the technology for server consolidation and building virtual environments for software testing, and the second is to use it for load balancing, high availability and disaster recovery. Citing IDC figures, he said that in 2007 about 12 percent of all servers in production were using virtualization, up from 8 percent in 2006 and 4 percent the year before. Virtualized servers run at 52 percent capacity on average, he said, compared to 10 percent to 15 percent for non-virtualized systems. VMworld continues through Thursday. Reference : http://www.pcworld.com/article/151163/.html?tk=rss_news
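The compatibility rule described above can be pictured with a tiny sketch: live migration is allowed between hosts whose processors fall within a rolling window of generations. The generation list, its ordering and the window size below are assumptions made for illustration only, not Intel's or VMware's actual compatibility matrix.

# Illustrative check of live-migration compatibility across CPU generations.
GENERATIONS = ["7300 Tigerton", "7400 Dunnington", "Nehalem"]  # assumed ordering
COMPATIBILITY_WINDOW = 3  # "at least three generations of compatibility"

def can_migrate(src_gen, dst_gen):
    """True if the two hosts' CPU generations are close enough for live migration."""
    i, j = GENERATIONS.index(src_gen), GENERATIONS.index(dst_gen)
    return abs(i - j) < COMPATIBILITY_WINDOW

if __name__ == "__main__":
    print(can_migrate("7300 Tigerton", "Nehalem"))  # True under this illustrative window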
VMware's CEO made his pitch on Tuesday for a new type of operating system for the data center, and in the process assigned the "traditional OS" to the dustbin of history. Speaking at the start of the VMworld conference in Las Vegas, CEO and President Paul Maritz described VMware's plans to offer a "virtual data center OS" for managing server applications more flexibly and efficiently. The VDC OS is an attempt to extend the use of virtualization beyond the server, where it is widely used today, and apply the same principles to all the other hardware in a data center, including network switches and storage. By creating this virtual environment, Maritz said, IT departments will be able to move application workloads to new hardware easily when extra capacity is needed, and set up new environments for running applications more quickly. It will create an "internal cloud computing environment" for the data center. VMware announced the VDC OS on Monday, and Maritz's job Tuesday was to sell it to a cavernous hall packed with VMware customers. Most of the products that will make up the VDC OS don't exist today; VMware says it will roll out the new software throughout 2009, including products such as vNetworks and vStorage, for managing virtual pools of switches and storage equipment. Maritz barely mentioned Microsoft in his hour-long speech, but his implication was that Microsoft, which is emerging as VMware's biggest competitive threat, will have no advantage from bundling its own Hyper-V virtualization software with its Windows OS. "The traditional operating system has all but disappeared," Maritz said, making his first public speech since taking charge at VMware in July. It will be "deconstructed" and "reassembled" to make it more useful for data center environments. Asked at a question-and-answer session later if VMware is building its own OS, Maritz replied, "Yes and no, it depends what you mean by an operating system." "It is an operating system in the following sense," he said. "It abstracts away application loads from the underlying infrastructure, like traditional operating systems do, but the application loads it handles are different. This is drawing a line at a different point in the hierarchy." "It has many parallels with an OS, in the sense that it has APIs and services," he said, "but it is not a traditional OS. What we expect is that people will increasingly use the services of the virtual data center OS to construct new types of application loads that will fulfill the capabilities that you see in traditional operating systems." The company "agonized long and hard" about whether to describe the VDC OS as an operating system, and also considered "virtual infrastructure" and "meta operating system." "The reason we chose virtual data center OS is because in our interactions with customers we'd try to outline what we were doing, and they would say 'You are building an OS.'" "We could grow the hypervisor into a traditional OS, that's still an option for us down the road," he added. "But that's not what we've currently decided to do." Maritz acknowledged that VMware is embarking on a "big endeavor" and said it will depend on working closely with partners. He was asked what will be the biggest challenges for VMware in the year ahead. "As with all things there's the small matter of execution," he said. "We'll have to mature as an organization in several ways to do that." 
They include learning to manage "multiple internal technical endeavors" and meeting deadlines for delivering the new products, he said. The company has gone from being "the only game in town" in virtualization to facing new competitors such as Microsoft. He joked that Microsoft, where Maritz once worked, lacks critical virtualization features that "won't ship until Windows 3000, or whenever." "But clearly you can't count Microsoft out," he said. Reference : http://www.pcworld.com/article/151147/.html?tk=rss_news
Over the past two years, running Windows and Windows apps virtually on Apple hardware has become a popular way for consumers to dump their PCs in favor of Mac gear. Microsoft’s liberal attitude, while hurting hardware partners such as HP and Dell, has also enabled the spread of Windows to Apple’s previously inaccessible hardware. In contrast, Apple has only grudgingly allowed Mac OS X to be run on virtual machines. The regular client version of Leopard cannot be run virtually, whether on Apple’s hardware or not. Only the server version of Mac OS X 10.5 Leopard can be turned into a virtual machine, or guest. That must be on Mac hardware, though desktops, laptops or servers are all allowed. The VMs must also run on top of the base Leopard server OS.

The implications of these limitations on price are huge. It costs a minimum of $499—the retail price for Apple’s smallest 10-pack of OS X Server Leopard licenses—to run Leopard virtually today. Meanwhile, a 5-pack of regular Leopard licenses retails for $129. Pete Kazanjy, marketing manager for VMware’s Fusion (read First Look: VMware Fusion 2.0 Beta 1) Mac-Windows virtualization software, says that from a technical standpoint, there’s “no difference” between the client and server versions of Leopard.

Those barriers have not stopped users from circumventing Apple’s license. A small vendor, DiscCloud, released software last month that it claims can be used to legally enable non-Apple PC servers to host Leopard-client virtual machines. “It’s on a lot of people’s minds,” said Kazanjy. “Apple has built its business model of pairing really wonderful hardware with their wonderful software. They are really leery of letting things slide in there.” “We’ve heard requests from our customers” to virtualize the Leopard client, said Ray Chew, senior product manager at Parallels, which earlier this summer released the first software to enable Leopard Server to be virtualized. “We have to tell them you can’t do anything against Apple’s EULA [End User License Agreement].”

An independent technology analyst, Laura DiDio, recently completed a survey of 700 businesses and found 23 percent were virtualizing Windows on at least some of their Macs. She said she heard from several respondents who were interested in virtualizing the Mac OS X client, mostly for software testing purposes. She said one respondent wasn’t letting the higher cost of virtualizing Leopard Server stop its plans of streaming out Leopard virtual desktops from Mac servers to 4,000 Mac client computers.

Kazanjy is hopeful that as customer demand builds for virtualizing the Mac OS, Apple will relent. “Apple is a very reasonable company. If they see the market opportunity, they will open up,” he said. Especially if it involved “cementing” the Leopard client to Apple hardware, as the server version is, Kazanjy added. “We have our fingers crossed,” he said. “If it happens, we will be all over it, as we have a bunch of very sharp engineers rarin’ to go.”

Others think the demand won’t ever be large. “This is going to be a minor, minor scenario,” said Brian Madden, an independent desktop virtualization analyst. The main reason users want to virtualize Windows is to run Windows apps that are unavailable on the Mac. There are very few Mac apps, especially in the business area, that aren’t also available on Windows, he said.
The ones that are unique to the Mac tend to be big, weighty design and animation apps that are so resource-intensive that they aren’t good candidates for virtualization, especially Virtual Desktop Infrastructure (VDI), which involves streaming VMs over a network from a server to a client machine, Madden said. He argues that Macs continue to lack the management software that would make virtualizing Macs attractive to enterprises. “A lot needs to happen first,” Madden said. Parallels’ Chew thinks that Apple’s licensing now only makes virtualizing Leopard attractive to software developers. To encourage use of Mac OS X for VDI, he suggests a possible compromise: to let Leopard clients be virtualized but require users to buy a license for every individual piece of hardware that would receive a VDI stream, which is what Microsoft does. “We’re working very closely with Apple to see if we can expand the scope of virtualization,” Chew said. “But this is something that customers need to take to Apple.” Reference : http://www.macworld.com/article/135569/2008/09/osx_virtualization.html?lsrc=rss_main
Virtualization specialist VMware on Tuesday released VMware Fusion 2.0, the company's application that allows Mac users to run Windows on their Intel-based Macs. According to VMware, the new version adds over 100 new features and enhancements. Among the changes is AutoProtect, a new feature VMware described as being like Time Machine for your virtual machine. Fusion 2.0 also comes with an enhanced Unity. Unity 2.0 allows you to run Windows without seeing the Windows Desktop -- it's a seamless way to access the Windows applications and still be in the Mac environment. In addition to allowing application sharing between the Mac and your virtual machines, Unity 2.0 also features mirrored folders. The folders on your Mac (Desktop, Documents, Music, and Pictures) will match the corresponding folders on Windows (Desktop, My Documents, My Music, My Pictures). Improved graphics and support for Mac OS X Leopard Server as a virtual machine have also been added to Fusion 2.0. Fusion 2.0 is a free update for all registered users of Fusion 1.x. For new users the application costs $79.99. Reference : http://www.pcworld.com/article/151120/.html?tk=rss_news
VMware, facing increased pressure from rivals Microsoft and Citrix Systems, will announce new products this week intended to let customers extend their use of virtualization beyond servers and into all corners of the data center, including storage and network equipment. The new products, to be described at the company's VMworld conference in Las Vegas this week, are scheduled for release in 2009 and are an effort to build what VMware calls a "virtual data center operating system." VDC OS is not a product itself but a set of capabilities that will appear in updated releases of VMware's Infrastructure 3 software and other products. "The VDC OS aggregates all hardware elements -- servers, storage and networking -- into a unified single resource. You take piece parts of the data center and let them act as a single big computer that can be allocated on demand to any application that needs the resources," said Bogomil Balkansky, VMware senior director of product marketing. VMware thinks customers can use virtualization to transform their data centers into more flexible cloud-computing environments like those offered by Amazon and Google. Among the new software to be announced this week is vCloud, which will allow customers to export virtual environments -- including virtual machines and their attached policy information -- onto the servers of third-party cloud providers. It's an ambitious plan that analysts say VMware needs to pursue to maintain a technology lead over rivals. VMware built an early lead in server virtualization but has been under pressure since Microsoft rolled out its own hypervisor earlier this year, and with Citrix expected to soon update its competing XenServer product. Many questions are likely to go unanswered this week, including how the products will be priced and packaged and a timetable for delivery beyond simply "next year." Paul Maritz, VMware's new CEO, is due to unveil the new products and direction in a speech at VMworld Tuesday morning. The new products can be broken roughly into two categories: software that works at the virtual machine level for improving application performance and availability and infrastructure products for managing the wider data center. On the infrastructure side is vNetwork, which Balkansky said will allow customers to configure a single "virtual switch" for a pool of virtualized servers, instead of having to configure individual switches for each host computer. VMware will announce a product jointly developed with Cisco Systems to let network administrators configure the virtual switch from within Cisco's network management tools. Also planned for next year is vStorage, with "thin provisioning" for allocating storage to virtual machines more efficiently. When IT staff set up virtual machines today they assign to them a certain volume of storage, even though all that storage isn't used right away. Thin provisioning lets the administrator assign a smaller volume of physical storage and then sends an alert when more needs to be added. The alerts will appear in vCenter, an updated version of VMware's Virtual Center management suite also planned for 2009. VMware will release an API (application programming interface) that storage vendors can use to give visibility into vStorage from their own management tools, Balkansky said. VCenter will also gain new modules including CapacityIQ, ConfigControl and Orchestrator. Chris Wolf, a senior analyst with Burton Group, said vNetwork could heal a divide between server and network administrators. 
Virtualization has "built a wall between server admins and network admins," he said. "The network guys were never really comfortable with the virtualization guys having this hidden, virtual network that they didn't have visibility into. This changes that and lets the network guys manage a virtual network like any other." VMware is opening its architecture more to other vendors, Wolf said. "One of the things that has been going well for Citrix with their XenServer product is that its architecture is probably the most open in the industry. I think this is a good start for VMware, though further opening their storage architecture would help as well," he said. VCloud is a set of technologies that let hosting providers like BT and T Mobile turn their data centers into cloud environments, Balkansky said. It will also allow customers to connect their data centers to those clouds, so they can move virtual environments off their own premises if they want them hosted by a third party. "We'll build a set of APIs that will allow customers to extend a virtual machine from their on-premise infrastructure out into the cloud. It's like Vmotion for moving a virtual machine from an internal to an external data center and back again, while still having those policies for availability and security attached." Vmotion is VMware's existing technology for moving running virtual machines from one physical server to another. Use of vCloud will start with "baby steps," Balkansky acknowledged. "We see interest from large companies that want to be able to rent some of their overflow capacity to others, it will probably start there, and it will start with the kind of noncritical workloads you would be comfortable delegating to a third party." VMware wants to be seen as less of a pure infrastructure provider and more as a company that helps businesses deliver applications more reliably to end users, Wolf said. VDC OS "gives them a message they can use to combat Microsoft, because Microsoft has been building a strong story around the end user and the application and how that relates to the virtual infrastructure." The products for improving application performance will include VMware Fault Tolerance, for ensuring transactions continue in the event of a server failure, and VMware Data Recovery, a basic backup and recovery tool. To help applications scale better the company will provide the ability to add new CPUs and memory to a virtual machine without having to restart it, and it will increase the amount of CPUs and memory a virtual machine can access to eight CPUs and 256G bytes of RAM, from 4 CPUs and 64G bytes today, Balkansky said. Also planned is vApp, a development tool that will let ISVs (independent software vendors) and large enterprises create applications that are prepackaged with multiple virtual machines, along with their policy and configuration requirements. VApp will be based on the Open Virtual Machine Format, a specification that Citrix is also supporting, which is supposed to let the applications be deployed on any OVF-compliant hypervisor. Finally, VMware will update its strategy around desktop virtualization and introduce a new brand, vClient. The company is developing a new "client virtualization layer" for laptop and desktop PCs, and eventually also for smartphones. Customers will be able to run guest operating systems on this virtualization layer without needing a host OS underneath, potentially reducing OS license costs. 
VMworld starts Monday evening at the Venetian Hotel in Las Vegas, and runs until Thursday. The company expects 14,000 people to attend, up from 11,000 last year. Reference : http://www.pcworld.com/article/151067/vmware_expand.html?tk=rss_news
HP today announced new products, services and solutions to help customers simplify their virtualized environments to realize business benefits from the data center to the desktop. Announced at VMworld 2008, HP’s new offerings provide support for VMware technologies in four key areas: management software, virtualization services, virtual desktop infrastructure (VDI), and server and storage infrastructures.

“Customers look to HP to implement virtualization projects and strategies that lower operational costs and reduce pressure on data center real estate,” said Mark Linesch, vice president, Infrastructure Software, HP. “As a leading VMware partner, HP is advancing the state of virtualization to help our joint customers drive continued growth, manage operations and reduce enterprise-wide risk.”

Brian Byun, vice president, Global Partners and Solutions, VMware, said, “VMware and HP have a long history of collaboration to help support customers’ data center needs, spanning servers, desktops, storage and services. We are pleased to see the addition of HP products and services optimized for VMware virtualization, further expanding HP’s comprehensive IT portfolio.”

Information faster – zero-impact information backup and instant recovery of VMware

Protecting virtual machines and their application data with frequent backups and minimal impact to the virtual infrastructure has become one of the biggest information management challenges for technology organizations to overcome. HP Data Protector software simplifies and centralizes automated data protection and recovery operations. This includes increased availability of critical applications with Zero Downtime Backup and Instant Recovery capabilities. Now available for VMware environments, HP Data Protector Zero Downtime Backup and Instant Recovery tightly integrate with HP StorageWorks Enterprise Virtual Arrays, giving customers zero-impact backup of mission-critical application data residing on virtual machines. This integration also provides the recovery of both the virtual machine and critical data in seconds or minutes instead of hours.

“We are thrilled with HP Data Protector and VMware solutions to manage our data protection needs,” said Peter Molbæk, IT operations manager, VIA University College, the third-largest educational institution in Denmark. “You cannot ask for a better solution than what HP has provided.”

Planning and provisioning physical and virtual resources together

HP Insight Dynamics – VSE is the industry’s first integrated solution to visualize, plan and change physical and virtual resources in the same way, improving data center efficiency and lowering cost. Combined with VMware VirtualCenter, the solution provides for cost-effective, high-availability and simplified provisioning of resources across the data center. HP Insight Dynamics – VSE with VMware VirtualCenter can pre-emptively move virtual machines to a different hardware platform before any downtime occurs. The real-time capacity planning capabilities of Insight Dynamics – VSE let customers continuously analyze and optimize their server resources, both physical and virtual. With energy awareness built into the tool, users can optimize server configurations to lower power utilization.
“Our company’s success depends on delivering quality business technology solutions to our hospitality industry customers,” said Pete Simpson, vice president, Business Technology – Europe, Middle East and Africa, Micros Fidelio, a leading supplier of information systems to the hospitality industry. “HP Insight Dynamics – VSE is working to consolidate and virtualize our entire data center stack, resulting in greater flexibility and significant operational cost savings to give us a real competitive advantage.” New HP services support VMware virtualization of Microsoft® Exchange 2007 New HP Virtual Exchange Infrastructure (VEI) Services supporting VMware help customers better understand and maximize their VMware and Microsoft Exchange 2007 investments. HP planning, quick-start and implementation services lead customers through the consolidation and migration of existing messaging systems to Microsoft Exchange 2007 on VMware Infrastructure without affecting day-to-day operations. This improves data protection by moving the message stores to the data center from remote locations. HP VDI for VMware virtualization – new services and thin client support New HP VDI Services supporting VMware virtualization allow customers to better utilize their HP VDI and VMware technologies. HP VDI is a desktop replacement solution that provides security for data and applications on a desktop and lowers the cost of desktop life cycle management, while providing users with the experience of a standard desktop. These planning, quick-start and implementation services help customers understand their VDI options, develop a roadmap that matches their business strategies, assess the business value of VDI and implement the right solution. The results include better control over – and more cost-effective management of – desktops, installations, upgrades, patches and backups. HP’s industry-leading thin client portfolio has been certified for VMware Virtual Desktop Manager, VMware’s next-generation connection broker. This certification ensures HP customers receive a superior out-of-the-box experience with smooth and easy deployment of VMware VDI with any HP thin client. Designed for global enterprise desktop customers, VMware Virtual Desktop Manager is a key component of VMware VDI, connecting HP thin clients to centralized virtual desktops that can be accessed securely from nearly any location. Using Virtual Desktop Manager, IT administrators can quickly automate the process of assigning virtual desktop resources to end users. Email stations at VMworld 2008 feature HP VDI technology. Consisting of HP ProLiant servers, HP Thin Clients and VMware VDI software, VMworld attendees encounter a real-life VDI session when they use the email stations to browse the Web or access the VMworld portal to build their event schedules. HP offers first virtualization blade, enhances servers with new Intel® chipset In addition to the recently announced HP ProLiant BL495c virtualization blade, HP has enhanced the HP ProLiant BL680c G5 server blade and HP ProLiant DL580 G5 rack-based server with the first-ever six-core processor for x86 platforms --- the Intel Xeon® processor 7400 series. Ideal for deploying many virtual machines, these servers deliver the performance and expansion capabilities that virtualized environments require. With the launch of new models of the HP ProLiant BL680c with Intel Xeon E7450 six-core processors, HP now has both the top-performing blade server for VMware and the top-performing server overall for VMware. 
Using the VMmark® benchmark, the HP ProLiant BL680c grabbed the top blade server spot with a VMmark score of 16.05 @ 12 tiles (using 24 cores). For overall VMmark performance, the 8-processor HP ProLiant DL785 holds the top position, scoring 21.88 @ 16 tiles.(1) Automated disaster recovery for virtualized environments HP and VMware have worked together to develop an integrated, simple and automated disaster recovery solution for virtual environments. This offering combines VMware Site Recovery Manager, HP StorageWorks Enterprise Virtual Arrays (EVA) and HP Continuous Access Replication EVA Software. Whether due to complexity or cost, research by Info-Tech Research Group indicates 60 percent of North American businesses do not have a basic plan to mediate the effects of a natural disaster or unplanned downtime. HP’s support for VMware Site Recovery Manager provides customers of all sizes with cost-effective, reliable disaster recovery technology to protect their business-critical applications at all times. The HP EVA’s dual-redundant hardware architecture with replication software also ensures maximum uptime by eliminating any single points of failure. The HP StorageWorks 2000 Modular Smart Array (MSA2000fc and MSA2000i) also supports VMware VMotion; the MSA2000sa is expected to begin supporting the solution in October। When combined with array-based snapshots, this allows small to midsize businesses to have a robust, entry-level SAS, iSCSI or Fibre Channel storage solution for their virtualization deployments. Reference : http://www.hp.com/hpinfo/newsroom/press/2008/080915xb.html?mtxs=rss-corp-news
VMware, facing increased pressure from rivals Microsoft and Citrix Systems, will announce new products this week intended to let customers extend their use of virtualization beyond servers and into all corners of the data center, including storage and network equipment. The new products, to be described at the company's VMworld conference in Las Vegas this week, are scheduled for release in 2009 and are an effort to build what VMware calls a "virtual data center operating system." VDC OS is not a product itself but a set of capabilities that will appear in updated releases of VMware's Infrastructure 3 software and other products. "The VDC OS aggregates all hardware elements -- servers, storage and networking -- into a unified single resource. You take piece parts of the data center and let them act as a single big computer that can be allocated on demand to any application that needs the resources," said Bogomil Balkansky, VMware senior director of product marketing. VMware thinks customers can use virtualization to transform their data centers into more flexible cloud-computing environments like those offered by Amazon and Google. Among the new software to be announced this week is vCloud, which will allow customers to export virtual environments -- including virtual machines and their attached policy information -- onto the servers of third-party cloud providers. It's an ambitious plan that analysts say VMware needs to pursue to maintain a technology lead over rivals. VMware built an early lead in server virtualization but has been under pressure since Microsoft rolled out its own hypervisor earlier this year, and with Citrix expected to soon update its competing XenServer product. Many questions are likely to go unanswered this week, including how the products will be priced and packaged and a timetable for delivery beyond simply "next year." Paul Maritz, VMware's new CEO, is due to unveil the new products and direction in a speech at VMworld Tuesday morning. The new products can be broken roughly into two categories: software that works at the virtual machine level for improving application performance and availability and infrastructure products for managing the wider data center. On the infrastructure side is vNetwork, which Balkansky said will allow customers to configure a single "virtual switch" for a pool of virtualized servers, instead of having to configure individual switches for each host computer. VMware will announce a product jointly developed with Cisco Systems to let network administrators configure the virtual switch from within Cisco's network management tools. Also planned for next year is vStorage, with "thin provisioning" for allocating storage to virtual machines more efficiently. When IT staff set up virtual machines today they assign to them a certain volume of storage, even though all that storage isn't used right away. Thin provisioning lets the administrator assign a smaller volume of physical storage and then sends an alert when more needs to be added. The alerts will appear in vCenter, an updated version of VMware's Virtual Center management suite also planned for 2009. VMware will release an API (application programming interface) that storage vendors can use to give visibility into vStorage from their own management tools, Balkansky said. VCenter will also gain new modules including CapacityIQ, ConfigControl and Orchestrator. Chris Wolf, a senior analyst with Burton Group, said vNetwork could heal a divide between server and network administrators. 
Virtualization has "built a wall between server admins and network admins," he said. "The network guys were never really comfortable with the virtualization guys having this hidden, virtual network that they didn't have visibility into. This changes that and lets the network guys manage a virtual network like any other." VMware is opening its architecture more to other vendors, Wolf said. "One of the things that has been going well for Citrix with their XenServer product is that its architecture is probably the most open in the industry. I think this is a good start for VMware, though further opening their storage architecture would help as well," he said. VCloud is a set of technologies that let hosting providers like BT and T Mobile turn their data centers into cloud environments, Balkansky said. It will also allow customers to connect their data centers to those clouds, so they can move virtual environments off their own premises if they want them hosted by a third party. "We'll build a set of APIs that will allow customers to extend a virtual machine from their on-premise infrastructure out into the cloud. It's like Vmotion for moving a virtual machine from an internal to an external data center and back again, while still having those policies for availability and security attached." Vmotion is VMware's existing technology for moving running virtual machines from one physical server to another. Use of vCloud will start with "baby steps," Balkansky acknowledged. "We see interest from large companies that want to be able to rent some of their overflow capacity to others, it will probably start there, and it will start with the kind of noncritical workloads you would be comfortable delegating to a third party." VMware wants to be seen as less of a pure infrastructure provider and more as a company that helps businesses deliver applications more reliably to end users, Wolf said. VDC OS "gives them a message they can use to combat Microsoft, because Microsoft has been building a strong story around the end user and the application and how that relates to the virtual infrastructure." The products for improving application performance will include VMware Fault Tolerance, for ensuring transactions continue in the event of a server failure, and VMware Data Recovery, a basic backup and recovery tool. To help applications scale better the company will provide the ability to add new CPUs and memory to a virtual machine without having to restart it, and it will increase the amount of CPUs and memory a virtual machine can access to eight CPUs and 256G bytes of RAM, from 4 CPUs and 64G bytes today, Balkansky said. Also planned is vApp, a development tool that will let ISVs (independent software vendors) and large enterprises create applications that are prepackaged with multiple virtual machines, along with their policy and configuration requirements. VApp will be based on the Open Virtual Machine Format, a specification that Citrix is also supporting, which is supposed to let the applications be deployed on any OVF-compliant hypervisor. Finally, VMware will update its strategy around desktop virtualization and introduce a new brand, vClient. The company is developing a new "client virtualization layer" for laptop and desktop PCs, and eventually also for smartphones. Customers will be able to run guest operating systems on this virtualization layer without needing a host OS underneath, potentially reducing OS license costs. 
VMworld starts Monday evening at the Venetian Hotel in Las Vegas, and runs until Thursday। The company expects 14,000 people to attend, up from 11,000 last year. Reference : http://www.pcworld.com/article/151067/.html?tk=rss_news
Dell is continuing to push its image as a provider of simple-to-use IT products, but it also may be trying to move upmarket, with a number of announcements designed around virtualization. On Wednesday, Dell introduced two blade servers, support for more capacity in its storage products, and new partnerships with companies that offer virtualization management products and services. The news is tied to a Monday announcement of support for Microsoft's Hyper-V virtualization software. The PowerEdge M805 and M905 blades were designed from scratch with virtualization in mind, said Sally Stevens, director of server platform marketing at Dell, speaking on a conference call last week to discuss the announcements. The M805 is a two-socket AMD blade with 16 DIMMs (dual in-line memory modules); the M905 is a four-socket AMD blade with 24 DIMMs. The blades are power-efficient and offer more DIMMs than do comparable servers from Hewlett-Packard, Stevens said. The M805 will cost US$1,699, and the M905 will start at $4,999. Dell also announced it will support Citrix Xenserver out of the box on its EqualLogic PS series of storage arrays. "When you get your Xen license and hook it up to an array, it's ready to go," said Praveen Asthana, director of worldwide storage marketing for Dell. IT departments will be able to buy a new storage array from Dell that supports more data. The new PS5500E, also being introduced on Wednesday, can handle 576 T bytes using a single management interface. Customers getting into virtualization for the first time, and current virtualization users who want to better manage the process, will also see expanded help from Dell. The company is partnering with Vizioncore to offer backup and restore capabilities tuned for virtualized environments, and is also teaming up with PlateSpin to offer optimization and lifecycle management services. Dell will also offer a version of its Auto-Snapshot Manager that is compatible with VMware. The manager is designed to help protect virtual machines and let users do things such as restore individual virtual machines, rather than having to restore them all even if only one is needed. The announcements appear in line with Dell's attempts to make it easier for companies to use virtualization. "Dell is still very suited for that customer looking to open up the box when it comes in the door and have virtualization," said Mark Bowker, an analyst with Enterprise Strategy Group. Simplifying installation and use may particularly appeal to small or medium-size businesses, which tend to be the budget-conscious organizations that are attracted to Dell products, he said. By comparison, Dell competitors HP and IBM have the reputation of offering a broader selection of products and services for large data centers, he said. Dell has generally had a reputation as the cheap option, agreed Michael Cote, an analyst with Redmonk. But over the past couple of years, Dell has been working toward dispelling that cheap image. "Paying more attention to things like virtualization and offering more than just a box would probably help them out along those lines," he said. In addition, the new blades and storage array have more capabilities and so could appeal to larger companies. Dell joins others, such as HP, Microsoft and BMC Software, making virtualization announcements in the run-up to VMware's annual VMworld conference, kicking off Sept। 15. Reference : http://www.pcworld.com/article/151012/.html?tk=rss_news
A two-horse race. That's how the market for general purpose desktop virtualization packages is shaping up, at least for the foreseeable future. With Microsoft all but abandoning Virtual PC (no updates in more than a year), and with everyone else focusing on the datacenter (including Microsoft), the field now consists of just VMware Workstation and Sun Microsystems' xVM VirtualBox. And in keeping with many such situations -- where a single product dominates the high end and everyone else tries to find a viable niche -- the two players couldn't be more dissimilar. In Lane One you have VMware Workstation, the pedigreed blue-blood of desktop virtualization solutions. If there is a bell or whistle VMware missed, I can't spot it. It truly is the pinnacle of "kitchen sink" engineering. In Lane Two you find Sun xVM VirtualBox, a product Sun acquired from tiny innotek earlier this year. VirtualBox's primary claim to fame is that it's free (both as a closed-source downloadable and a more limited open-source exploitable), and this has made it the choice of anti-establishment types who balk at Workstation's retail price tag. So, the stadium is set। The track is prepared. It's the muscular thoroughbred vs. the scrappy Ol' Paint. And with Sun pouring its vast engineering resources into VirtualBox (for example, it just gained 64-bit guest OS support), the real race may be to see whether VMware can continue to differentiate Workstation at the high-end while VirtualBox slowly eats its lunch among less discriminating customers. It should be an interesting race. And they're off! Legendary Thoroughbred
What is there left to say about VMware Workstation? Few products have spent as much time at the top of the heap. But as I mentioned in my preview of the Workstation 6.5 Beta earlier this year, the company simply refuses to sit on its laurels. With each new major release, VMware raises the bar for would-be competitors. And not just by a few inches -- in the case of version 6.5, think several feet. The change log is that impressive. But where to begin? I suppose I could talk about my favorite new feature, Easy Install. Simply create a new VM, point it to the installation media for the desired Windows OS edition (client or server), and grab a cup of coffee. By the time you return, VMware has installed the OS (including specifying product keys and default user accounts), slipstreamed its own VMware Tools suite, and basically left you with a fully-baked guest OS image that's ready for work. If you spend a lot of time building and tearing down VMs like I do, you will instantly fall in love with Easy Install. Direct3D acceleration is another great feature. When enabled, it allows applications in the guest OS to render Direct3D objects with nearly native performance, allowing even demanding programs like DirectX-based games to run within a VM. I've personally used this feature to resurrect some of my old favorites -- games such as Starfleet Command 3 -- that refuse to run natively on Windows Vista. And, of course, any line-of-business applications that use Direct3D will also reap benefits. Of course, the biggest changes involve Workstation's support for VMware's ACE technology. Whereas in the past you had to run a separate version of Workstation -- the ACE Edition -- to edit and apply ACE policies, version 6.5 incorporates these features seamlessly into the base Workstation UI. You can now enable/disable ACE functionality for a VM with a single click, and given the depth and breadth of options available, one click may be all you need to securely lock down and manage a wayward VM. In fact, it seems clear that VMware intends for Workstation 6.5 to be your primary entry point into its ACE management environment, with similar one-click tools for creating ACE packages, including the popular Pocket ACE for USB sticks. Together, the Easy Install wizard and ACE integration features truly take the drudgery out of VM creation, configuration, and management. I tested VMware Workstation 6.5 under Windows Vista (64-bit) on a 4GB Dell XPS M1710. Installation was a breeze, as with previous editions, and the new Easy Install option made provisioning and configuring new VMs nearly effortless. During preliminary benchmark testing using a Release Candidate build (and with the pre-release debugging features disabled), I achieved OfficeBench throughput levels slightly better (11 percent) than version 6.0 but nowhere near native machine performance. It's worth noting that Workstation 6.5 now allows you to manually override the underlying virtualization model, making it possible to force it to use one of three different modes (Binary Translation, Intel VT-x/AMD-V, Intel VT-x with EPT/AMD-V with RVI) or an Automatic option that selects the best mode based on your underlying hardware and OS configuration. I used the Automatic option during benchmarking. Overall, VMware Workstation 6।5 is a worthwhile upgrade, especially for customers seeking to leverage VMware's ACE management features. But even without ACE, Workstation 6.5 is compelling. 
Most users will be sold on Easy Install alone; it's a feature that will make support professionals and developers instantly more productive. And although it's hard to put all of version 6.5's improvements into words, suffice to say that the old thoroughbred has never looked better. The Dark Horse Proud. Scrappy. Spoiling for a fight. These are some of the descriptors that come to mind as I look back over the history of VirtualBox. When I first reviewed version 1.3 nearly two years ago, I found a promising product from a small-time player (innotek) that was still a bit rough around the edges. Four major releases later, and VirtualBox has undergone some major architectural changes. These include support for 64-bit hosts (including Mac OS X) and 64-bit guests, and a more modular/programmable architecture. VirtualBox has also picked up some new tricks including USB device support. And it has of course found a new home via innotek's acquisition by Sun Microsystems. In short, VirtualBox has generally matured into a stable, viable alternative to VMware Workstation, at least for casual usage scenarios. And, of course, it's free -- both to download and to re-use as open-source software. In fact, Sun has gone out of its way to promote VirtualBox as the ultimate generic virtualization solution, an everyman's VM tool for bridging the gaps among Unix, Linux, and Windows. So far, the strategy is paying off. VirtualBox is now everywhere, but it's particularly strong in the Linux community where it provides a relatively full-featured alternative to the free VMware Server or commercial VMware Workstation offerings. And with features like real snapshot support, broad host and guest OS compatibility, and the aforementioned support for 64-bit guests, it's easy to see why. Thanks to a growing user base, VirtualBox is quickly cementing its position as the lowest common denominator for the budget-minded VM enthusiast. Just check out how many VirtualBox disk images are floating around the BitTorrent sites. Of course, popularity doesn't always equate with quality. Despite major gains in stability and robustness (thanks, no doubt, to an infusion of engineering knowhow from Sun), VirtualBox is still nowhere near capable enough to challenge VMware Workstation on its home turf -- namely, enterprise support and development teams managing large-scale projects that actually matter. For these users, features like integration with the Visual Studio and Eclipse IDEs, Easy Install, full VM recorder/playback functionality, and support for deployment and manageability controls (VMware ACE) are basic requirements. Needless to say, you'll find none of these advanced tools in the down-market, freebie lane occupied by VirtualBox. Basically, VirtualBox 2.0 is where VMware Workstation was three to five years ago: a maturing, relatively stable tool for running multiple guest operating systems on a host PC. Still, for many casual users this is all they really need. To them, VirtualBox fills a void between the full-featured Workstation and VMware's free Player application, the latter of which places Workstation's powerful runtime engine in a frustratingly restrictive straightjacket with minimal configurability. 
So while VirtualBox may not be able to compete with VMware on features (it doesn't have all that many to speak of) or performance (it's at least 30 percent slower in OfficeBench tests on the aforementioned Dell XPS M1710), Sun has managed to carve out a niche where its newly acquired product can thrive while growing stronger and occasionally nipping at the heels of its more capable competitor. Calling VMware Workstation 6.5 versus Sun xVM VirtualBox 2.0 a two-horse race might have been misleading. With Workstation's expansive feature set and top-notch performance, it really isn't much of a competition. Still, VirtualBox delivers a combination of features that you simply cannot find outside of VMware, including USB device integration and 64-bit guest OS support. Add to this the killer price (free) and you have the makings of a cult classic. And though VirtualBox doesn't measure up to VMware Workstation today, don't count Sun out. As one of the preeminent engineering powerhouses, the company has the talent and resources to make a serious run at anyone it targets. VMware had better not let its guard down anytime soon. Reference : http://www.pcworld.com/article/150972/vmware_vs_sun_vxm.html?tk=rss_news
VMware reports that its Fusion software for Macintoshes is being used by scientists at CERN, the European Organization for Nuclear Research, creators of the Large Hadron Collider. VMware Fusion enables Intel-based Macs to run non-Mac OS X based operating systems without having to reboot first. CERN scientists are using Fusion to share Linux-based computer code on Fusion "virtual machines" running on Macs. The software links the computers to the LHC Computing Grid -- a network of about 40,000 CPUs. Located underground in Geneva, Switzerland, the Large Hadron Collider (LHC) is the world's largest particle accelerator, and physicists will use the LHC to study the origin of matter by colliding protons together. Scientists hope to produce subatomic particles that will help prove or disprove current theories about the birth of the Universe. Some groups have claimed that the LHC will destroy the planet by creating a black hole; CERN scientists dismiss such claims। The LHC went online with its first proton beam Wednesday, with high-energy collisions expected to start on October 21, 2008. Reference : http://www.pcworld.com/article/150895/.html?tk=rss_news
At a virtualization product launch today, Microsoft give a long-delayed demo of Hyper-V live migration, but then went on to slate the feature's eventual release for the next edition of Windows Server. In showing the upcoming capability to a crowd of customers in Bellevue, WA, Bob Muglia, senior VP of Microsoft's server and tools business, suggested that in the Windows Server product which follows Windows Server 2008, users will be able to instantly migrate virtualized software deployments from one server to the next, for consolidation on the fly. But the live migration demo may actually come as bad news to some data center admins, who have been looking forward to Microsoft adopting some form of live migration since 2006. Microsoft's first delay of this feature was announced 16 months ago, after the company had promised it for "Longhorn," which became Windows Server 2008. The feature was cut, said product managers at the time, in order that Hyper-V could meet its launch window; but then that window was later scooted to 90 days after Windows Server 2008's own launch. A little over one year later, the feature may now be waiting for a launch date two years from now, at the earliest. Without mentioning a specific time today, Microsoft stated live migration would be ready for Windows Server 2008 R2, which it touted as the very next version of the server operating system. Yet this product roadmap updated last month clearly marks the R2 version as "scheduled for release during 2010." Live migration may actually be a necessity for some data centers, especially for systems that use failover clustering. If a virtual machine is running on a system that fails, conceivably a former state of that machine could be restored from a backup, but that would take time. With live migration, data centers can relocate running instances of critical servers between physical processors, with zero downtime. Full Story At Source Reference : http://www.betanews.com/article/Microsoft_postpones_live_VM_migration_for_HyperV_two_more_years/1220915014
Microsoft is the new competitor in the virtualization market, but executives outlined some of the reasons they think the company can dominate it during a Microsoft virtualization event in Bellevue, Washington, on Monday. While VMware is by far the server virtualization market leader, Microsoft hopes it can compete on price, features and the strength of its other products, the executives said. "VMware is ridiculously expensive," said Bob Kelly, corporate vice president of infrastructure server marketing for Microsoft. Microsoft's Hyper-V should cost users about a third of what VMware would, said Kevin Turner, chief operating officer of Microsoft, speaking during a keynote presentation at the event and using VMware prices listed on public Web sites. Microsoft has also worked hard to allow customers to manage both VMware and Hyper-V within Microsoft's System Center management software, the executives said. "So we think customers will deploy us side by side with VMware, and then, because of the price, you'll see customers move to us," Kelly said. Some customers are saying that the cost difference is indeed a factor for them. Matt Lavellee, director of technology for the MLS Property Information Network in Shrewsbury, Massachusetts, said that since the real estate information firm already uses Windows Server to run its Web server farm, the cost savings of using the included Hyper-V instead of VMware proved overwhelming. "Our analysis was that to use VMware would have meant 30 percent of our potential infrastructure expenses would have been just for VMware," he said. VMware-trained IT staffers were also 10 percent to 20 percent more expensive than Microsoft-trained ones, he said. "Cost is such a driver that unless Hyper-V didn't work, we weren't going to look at VMware." Microsoft executives played up advantages the company has for selling a wide array of products and services that customers may already use. "Virtualization is only one part of the solution. You need a complete platform," said Bob Muglia, Microsoft's senior vice president of Microsoft's server and tools business. That idea is a plus for Microsoft. "If their software works as well or nearly as well as VMware, it becomes a challenge for VMware because of the sheer weight of Microsoft," said Michael Cote, an analyst with Redmonk. Still, a lot of companies are waiting to hear from the initial Hyper V users before deciding between Microsoft and VMware, Cote said. One feature they're looking at closely is management capabilities. "With virtualization, you used to have 200 boxes, but now you have that and 500 virtual boxes," he said. "There's gains, but if you're not careful then you end up with more problems." Customers could gravitate toward Hyper-V if it had features that helped them better manage virtualization, he said. However, so far Hyper-V lacks some features that VMware has, and Microsoft might be focusing on capabilities that users don't really care about all that much. On Monday, Muglia demonstrated for the first time a feature that will let IT administrators migrate an application from one server to another without disrupting use of the application. This live migration capability will be available in the next release of Hyper-V along with Windows Server 2008. VMware's VMotion feature already enables live migration. Plus, that capability might not be very important to customers. MLS' Lavellee said live migration is not an important feature to him. Microsoft also thinks it's getting into the market at a good time. 
Even though companies have been talking about virtualization for many years, just about 12 percent of servers being sold today are being used for virtualization, according to Microsoft's Kelly. Microsoft also says timing could work in its favor in terms of the economic downturn. "We typically see very rapid adoption of technologies that help save money and deliver agility in down markets. Customers have to find cost savings somewhere, and so technologies that help them are pretty critical," Kelly said. Microsoft is itself quickly adopting virtualization, the executives said. All new servers brought into the company's data centers must be virtualized, Turner said. Currently, a "substantive percentage" of the Microsoft.com Web site, including Technet and MSDN, runs on Hyper-V, Muglia said. The servers running those portions of the site are getting more than 50 percent utilization, which compares to the industry standard of about 15 percent or less, he said. Microsoft is among other companies, including Hewlett-Packard and Dell, making announcements in the run-up to VMware's annual conference starting next week। Reference : http://www.pcworld.com/article/150816/.html?tk=rss_news
|