Wednesday, 27 August 2008

Nikon D90 Digital SLR Can Record Video, Too

Nikon's latest digital SLR delivers a range of improvements over its predecessor, the D80, the most intriguing of which is video capture.
Nikon has been busy this summer. First came the D700--a full-frame, professional/enthusiast digital single-lens reflex camera that neatly fills the gap between the company's D300 and D3 models. Now, with the D90, Nikon has unleashed another digital SLR refresh--this time, to its midrange SLR.
The Nikon D90, which replaces the D80, provides similar functionality to the D300 but at a lower price ($1000 for the body only, or $1300 with the 18-105mm Vibration Reduction kit lens, as compared with the D300's $1800 body-only price). The resolution is the same (12.3 megapixels), as are the sensor cleaning system, the picture control system, and the sensor size (DX format).
What's new: Nikon has improved its Expeed image processor to provide better noise reduction at ISO levels of up to 6400. The scene recognition system adds face detection and an improved metering system. And Live View mode gets its own dedicated button and lets you focus on a specific point in the frame. The camera also offers HDMI output (unusual in an SLR), a continuous shooting speed of 4.5 frames per second, and an 11-point autofocus system that improves on the D80's.
What makes me eager to take this model for a spin, though, is the D90's video recording capability--a huge boon, and a move that enables the last desirable point-and-shoot-only feature to migrate to the SLR realm. Nikon achieves its video capture, called D-Movie, by recording motion JPEG .AVI movies at 24 frames per second. The camera has three movie-mode resolutions--1280 by 720 (for 720p high-definition videos that have a 16:9 aspect ratio and can run for up to 5 minutes); 640 by 480; and 320 by 216 (the last two modes carry a 20-minute time limit).
Another nifty feature is support for geo-tagging via an optional GP-1 GPS unit (due in November; pricing to be announced).
The prospect of using a digital SLR to capture video clips is especially enticing in view of the range of lenses at your disposal. I'm already salivating at some of the creative possibilities that the D90 opens up, though I worry about the challenge of holding the camera steady for the duration of a video several minutes--or more--long.

JavaScript 2’s new direction

Standardization efforts for the next version of JavaScript have taken a sharp turn this month, with some key changes in the Web scripting technology’s direction. JavaScript creator Brendan Eich, CTO of Mozilla, has helped forge a consensus on the direction for JavaScript’s improvements. “JavaScript was sitting still. It was stagnant,” he says.
The fundamental reason to update JavaScript—whose standard hasn’t changed since 1999—is to handle the heavy demands being placed on it. Although the language certainly has caught on for Web application development, it was not envisioned for the workloads now demanded of it by developers, Eich says. “They’re using it at a scale that it wasn’t designed for.”
The biggest change in JavaScript 2’s direction is that the ECMAScript 4 project has been dropped. That change resolves a long-simmering debate as to whether ECMAScript 3.1 or ECMAScript 4 should be the basis of JavaScript 2. (ECMAScript is the formal name for the standard, vendor-neutral version of JavaScript.)
This decision at the ECMA International standards group overseeing the JavaScript standard unites the ECMA International Technical Committee 39, including Eich, with Google and Microsoft around the “Harmony” road map. (The committee and Eich favored a major revision to the ECMAScript standard, while Microsoft and Google opposed such grand plans, Eich says. “Microsoft [in particular] started working on a much smaller improvement to the last version of the standard,” an effort that is now the core of the ECMAScript 3.1 plan, he says.)
First up: a rationalized ECMAScript 3.1
The “Harmony” road map starts with an effort to finalize ECMAScript 3.1, essentially a rationalization of the current version, and produce two interoperable implementations by spring 2009. “I think you could characterize 3.1 as a maintenance release,” says John Neumann, chair of the technical committee. The ECMAScript 3.1 effort will formalize bug fixes but also standardize across all implementations some of the improvements made in the field, Neumann says. That’s key, so applications written for one browser will work in another.
After the ECMAScript 3.1 effort, work will then proceed on a more significant ECMAScript successor dubbed Harmony.
The result, Eich says, is that the standards effort “wasn’t to be the big, scary fourth edition that Microsoft and others objected to.” But the decision also means no more stalling on JavaScript 2, as well as agreement to continue to refine ECMAScript 3 after the 3.1 effort is done. Developers likely will have to wait until 2010 for the Harmony standard, though, Eich says.
In essence, the JavaScript 2 effort will no longer depend on ECMAScript 4 being finalized, and instead will proceed from an improved ECMAScript 3.
Next in line, a less ambitious “Harmony” version
The new strategy means that some ideas planned for ECMAScript 4 have been dropped, after being deemed unworkable for the Web. Several ECMAScript 4 ideas “have been deemed unsound for the Web and are off the table for good: packages, namespaces and early binding,” Eich wrote in his blog. But other ECMAScript 4 ideas remain in the mix, though with some changes to make them palatable to the entire technical committee, such as the notion of classes based on ECMAScript 3 concepts combined with proposed ECMAScript 3.1 extensions, he says.
Harmony could feature classes as a stronger way of making objects and generators to enable powerful programming patterns. Generators already have been featured in JavaScript 1.7, ahead of the official specification.
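Generators, as JavaScript 1.7 shipped them ahead of the official specification, let a function suspend at a yield and resume later, producing values lazily. A minimal sketch, written in TypeScript syntax with the now-standard function* form rather than the 1.7-era notation:

```typescript
// A generator yields values one at a time, suspending between yields.
// This Fibonacci generator describes an unbounded lazy sequence.
function* fibonacci(): Generator<number> {
  let [a, b] = [0, 1];
  while (true) {
    yield a;
    [a, b] = [b, a + b];
  }
}

// Pull only as many values as needed; the generator body runs lazily.
function take<T>(gen: Generator<T>, n: number): T[] {
  const out: T[] = [];
  for (const value of gen) {
    out.push(value);
    if (out.length === n) break;
  }
  return out;
}

const firstEight = take(fibonacci(), 8); // [0, 1, 1, 2, 3, 5, 8, 13]
```

Because the body only runs between yields, the infinite loop is harmless: callers pull exactly as many values as they need, which is the "powerful programming pattern" the committee has in mind.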
But beyond agreeing that there will be a Harmony effort, the technical committee has yet to figure out what will actually go into the Harmony standard.
Addressing JavaScript security holes
Plans for both ECMAScript 3.1 and Harmony call for providing tools to help developers more easily implement security. That plan will require the technical committee to codify security practices; the committee plans to meet this week to discuss security. “I think a secure ECMAScript will be based on some future revision of ECMAScript,” beyond version 3.1, Neumann says.
Currently, JavaScript is at risk for cross-site scripting attacks in which any application can request executable inclusion in an existing application on a Web page, Neumann says. “The intent is to solve that problem.”


Windows Live Hotmail Wave 3 Coming Soon!

Updated Windows Live Hotmail Coming Soon!
Even Faster, Even Better!
Faster than ever. It'll be up to 70 percent faster to sign in and see your e-mail. Of course, along with more speed, you'll get powerful technology that deflects spam and helps protect you against viruses and scams.
Simpler, cleaner design. We're combining the classic and full versions of Hotmail, so you get access to everything Hotmail has to offer. The reading pane will let you check out your e-mail without having to open it up.
Put more you in your e-mail. New themes and colors will let you design the look of your inbox, so your personality can really shine through.
Closer to your contacts. Just start typing in the "To" line and you'll get a choice of e-mail contacts that most closely match what you've typed. Plus, it'll be even easier to e-mail groups of people.
Cool stuff coming soon. We've got even more great updates to Hotmail for you to look forward to, like ever-increasing storage, the ability to IM right from Hotmail, and new calendar features that make it easier to share your calendar with family and friends.

Epson announces new MovieMate 55 projector

Epson today announced a new multimedia projector, the MovieMate 55. The device includes a projector, CD/DVD player, and speakers in a portable case.
The MovieMate 55 offers 16:9 viewing and can project a 60-inch image from 6 feet away from the screen. The projector uses 3LCD technology, and has a white light and color lamp output of 1,200 ANSI lumens.
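For anyone comparing projectors, the 60-inch-at-6-feet figure implies a throw ratio, which follows from 16:9 geometry. A rough sketch, assuming the 60 inches is the usual diagonal measurement:

```typescript
// For a 16:9 image, width = diagonal * 16 / sqrt(16^2 + 9^2).
const widthFromDiagonal = (diagonalIn: number): number =>
  diagonalIn * 16 / Math.hypot(16, 9);

const imageWidth = widthFromDiagonal(60); // about 52.3 inches wide
const throwRatio = 72 / imageWidth;       // 6 ft = 72 in; roughly 1.4
```

A throw ratio near 1.4 is typical of short living-room setups, which fits the portable, casual-use positioning of the MovieMate line.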
A built-in progressive-scan DVD player allows you to play DVD movies. You can also connect an iPod, using a dock cable and the MovieMate 55's USB port, to play iPod slide shows.
The MovieMate 55 has a pair of 8-watt speakers. It also supports 5.1-channel Dolby Digital and DTS audio.
The projector measures 12.6 by 9.1 by 5 inches and weighs 8.4 pounds. A built-in handle makes the projector easy to carry, and Epson includes a cushioned case. The MovieMate 55 costs $700 and comes with a two-year limited warranty and Epson's ExtraCare Home Service.

Xbox 360 Fall Update Coming Out In November

RPG-TV got a briefing last week on the Xbox Live Experience and was told that the Fall dashboard update would be hitting in November, i.e. this fall. Makes sense to me. Before I start sounding too glib: while the fall update has routinely landed in November, an update of this size could theoretically have been pushed back to later in the year.

AT&T announces two new international data plans for iPhone

For U.S.-based travelers who spend a lot of time abroad, the iPhone can be a useful device to carry—but getting online around the world is an expensive proposition. AT&T is now rolling out a pair of new international data plans to join its existing offerings. The two new plans will be available beginning Wednesday.
Whether these new packages will actually ease the strain on globetrotters’ wallets remains to be seen, as they’re a bit on the pricey side. The lower of the two runs $120 per month for 100MB of international data, while the higher goes for $200 per month for 200MB—and that’s on top of your existing domestic plan. On the upside, those with commitment issues need not fret: you can add or remove the international data package from your plan at any time.
There are some additional restrictions, however. For example, the plans apply only to 67 countries. In other places, you’ll need to pay between $0.010 and $0.0195 per kilobyte in order to use data. A full list of countries is available on AT&T's website.

Canon launches new SLR, consumer cameras

Canon on Tuesday unveiled four consumer cameras and a 15.1-megapixel digital SLR camera for advanced amateurs.
The EOS 50D Digital SLR builds on the EOS 40D -- which will remain in Canon's product line. The 50D features Canon’s new DIGIC 4 image processor, improved noise reduction, and in-camera photo editing features. Targeted to the advanced amateur, the 50D also adds a new Creative Auto Mode, which gives users the ability to make image setting adjustments.
The EOS 50D provides ISO speeds from ISO 100 up to ISO 3200 in 1/3-stop increments, along with two high-speed settings -- H1 and H2 -- of ISO 6400 and ISO 12800, respectively.
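The 1/3-stop increments follow from the definition of a stop: each full stop doubles the sensitivity, so each 1/3-stop step multiplies it by the cube root of 2. A sketch of how such a ladder is generated (raw computed values; cameras label intermediate steps with the conventional rounded series, 125, 160, and so on):

```typescript
// ISO sensitivity doubles with each full stop; a 1/3-stop step
// therefore multiplies the value by 2^(1/3).
const isoLadder = (base: number, stops: number): number[] => {
  const steps = stops * 3; // three 1/3-stop increments per full stop
  return Array.from({ length: steps + 1 }, (_, i) =>
    Math.round(base * 2 ** (i / 3))
  );
};

// ISO 100 through ISO 3200 spans five full stops: 16 selectable values.
const ladder = isoLadder(100, 5);
```

The H1 and H2 expansion settings then simply continue the doubling: 3200 doubled is 6400, and doubled again is 12800.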

The Canon 50D digital SLR

To assist with lighting conditions, the new camera does peripheral illumination correction, which automatically evens brightness across the image field. This is a task normally done in applications like Photoshop after the images were uploaded to the computer.
The EOS 50D also comes with a 3.0-inch Clear View LCD screen and an HDMI output.
The camera will be available in October in a body-only configuration for $1,399 or with a lens for $1,599.
In addition to the higher-end 50D, Canon also released new cameras for the teen and consumer markets.
Targeted at teens, the PowerShot E1 comes in three new colors -- white, blue, and pink. It offers 10-megapixel resolution, a 4x Optical Zoom lens with Optical Image Stabilization, and Canon's Face Detection Technology, along with 17 shooting modes, including an Easy Mode that limits options to the very basics.
The E1 will be available in mid-September for $199.99.
Canon's A-Series gets two new additions, and one new camera joins the SX-series of ultra-zoom cameras.
The PowerShot A1000 IS comes with 10-megapixel resolution, a 4x Optical Zoom lens, and a DIGIC III Image Processor, and is available in four two-toned colors. The PowerShot A2000 IS -- the second camera in the A-series -- also has 10-megapixel resolution and a DIGIC III Image Processor, and is equipped with a 6x Optical Zoom lens and a 3-inch LCD screen.
The PowerShot SX110 IS features 9-megapixel resolution, a DIGIC III Image Processor, a 3-inch LCD screen, and a 10x Optical Zoom lens.
Like the E1, these three cameras come with Easy Mode to simplify the process of setting up the cameras to take pictures.
Available in September, the PowerShot A1000 IS and the PowerShot A2000 IS digital cameras will cost $199.99 and $249.99, respectively. The PowerShot SX110 IS will be available at the end of August and will cost $299.99.

Google drops Bluetooth, GTalkService APIs from Android 1.0

Google dropped Bluetooth and the GTalkService instant messaging APIs (application program interfaces) from the set of tools for the first version of the mobile phone OS, Android 1.0, according to the Android Developers Blog.
But the company made clear that handsets using the Android OS will work with other Bluetooth devices such as headsets, for example.
Dropping the Bluetooth API means software developers won't be able to create applications that utilize Bluetooth for the Android OS. Bluetooth is a short-range radio technology that allows devices to work and communicate together wirelessly. An API is a set of tools and protocols designed to help programmers build new software applications.
The company opted to drop the Bluetooth API because "we plain ran out of time," said Nick Pelly, one of the Android engineers responsible for the Bluetooth API, in the blog posting.
"The Android Bluetooth API was pretty far along, but needs some clean-up before we can commit to it for the SDK (software developer's kit)," he added.
Google promised to support a Bluetooth API in a future release of Android, "although we don't know exactly when that will be."
The API for GTalkService, an instant messaging system on mobile devices that connects people to friends with Android-based handsets or Google Talk on computers, was removed because of security flaws.
GTalkService in its original form might have revealed more details about a person than they might want to let out, such as their real name and e-mail address, according to Rich Canning, a security researcher working on Android.
The feature also posed the risk of giving control of a person's Android-based handset to a Google Talk friend, or could have allowed bad applications on one device to send a message to a good application on another device, hurting the good application.
"Although we would have loved to ship this service, in the end, the Android team decided to pull the API instead of exposing users to risk and breaking compatibility with a future, more secure version of the feature," said Dan Morrill, developer advocate on the Android OS project.

Road to Mac OS X 10.6 Snow Leopard: 64-Bits

Next year's 10.6 reference release of Mac OS X promises to deliver technology updates throughout the system without focusing on the customer-facing marketing features that typically sell a new operating system. Here's a look at what those behind-the-scenes enhancements will mean to you, starting with new 64-bit support.

The move toward 64-bit computing is often generalized behind the assumption that "more bits must be better," but that's not always true. In some cases, expanding support for more bits of memory addressing only results in requiring more RAM and computing overhead to do the same thing. However, Apple's progressive expansion of 64-bit support in Snow Leopard will bring performance enhancements across the board for users of new 64-bit Intel Macs. Here's a look at why, along with how it is that every version of Mac OS X since Tiger has advertised "64-bit support" as a key feature.

The march toward 64-bit

Through the 1980s, personal computers rapidly moved from 8-bit to 16-bit to 32-bit architectures, with each advance enabling the operating system and its applications to address more memory and more efficiently handle the memory available to them. The 8-bit computers of the early 80s could only directly address 64K, the upper limit of their 16-bit memory addressing; early Apple II systems switched between two banks providing 128K. DOS 8086 PCs with 20-bit addressing could handle a whopping 1MB of RAM, but overhead effectively limited them to using 640K of it. These early machines also highlight the fact that a CPU's architecture, memory address bus, and its data registers (used to load and store instructions) may all have different bit widths.

Similarly, the 1984 Macintosh jumped to using a 32-bit 68000 processor with 24-bit addressing, allowing the theoretical use of "only" 16MB, although at the time that was far more RAM than anyone could afford. That seemingly high limit eventually became a problem for memory-hungry applications, particularly with the increased demands required by graphical computing and multitasking.
By the end of the 80s, Apple had delivered full 32-bit hardware with the Mac II's 68020 processor and the "32-bit clean" Mac System 7 software, which together enabled applications and the system to theoretically use as much as 4GB of directly addressable memory. By 1995, Microsoft was shipping its own 32-bit Windows API with WinNT and Win95 to take advantage of Intel's 32-bit 80386 and 486 CPUs.
More bits here and there

A decade later, the 4GB limit of 32-bit memory addressing would begin to pinch even home computers. To accommodate that inevitability, Apple began its migration to PowerPC in 1994 to make progress toward 64-bit computing and break from the limitations of the Motorola 680x0 processors it had been using. PowerPC offered a scaled-down version of IBM's modern 64-bit POWER architecture, with 32 individual 32-bit general-purpose registers; Intel's 32-bit x86 was a scaled-up version of a 16-bit processor, and offered only eight 32-bit GPRs. The lack of registers on x86 served as a significant constraint on potential performance and complicated development.

In order to attack the RAM limitation problem in advance of moving to 64-bit CPUs, Intel added support for "Physical Address Extension," or PAE, to its 32-bit x86 chips, which provided a form of 36-bit memory addressing, raising the RAM limit from 4GB to 64GB. Using PAE, each application can still only address 4GB, but an operating system can map each app's limited allocation to the physical RAM installed in the computer. Being able to use more than 4GB of RAM on a 32-bit PC requires support for PAE in the OS kernel. Microsoft has only supported this extra RAM in its Enterprise, Datacenter, and 64-bit versions of Windows; the standard 32-bit versions of Windows XP, Vista, and Windows Server are all still constrained to using 4GB of physical RAM, and they can't provide full access to more than about 3.5GB of it, making the limit an increasingly serious problem for desktop Windows PC users.

In the late 90s, Windows NT was ported to 64-bit architectures such as Digital's Alpha, MIPS, PowerPC, and Intel's ill-fated Itanium, but this also only benefited high-end workstation users. Apple's own mid-90s PowerPC transition prepared the Mac platform for an easier transition to 64-bit computing, but it wasn't until 2003 that the PowerMac G5 introduced real 64-bit hardware.
The G5 processor delivered 32 individual 64-bit GPRs and a 42-bit MMU (memory management unit) for directly addressing 4TB of RAM, although the PowerMac G5 hardware was limited to 8GB.

The mainstream PC remained stuck at 32-bit conventions until AMD released its 2003 Opteron CPU using an "AMD64" architecture that turned out to be a more practical alternative for upgrading into the world of 64 bits than Intel's entirely new Itanium IA-64 design. The new 64-bit PC, also called x86-64 and x64, largely caught up to PowerPC by supplying 16 64-bit GPRs, and potentially a 64-bit memory bus to address 16EB (16 million TB) of RAM. AMD's x64 processors can theoretically address 48 bits, or 256TB, in hardware. In practice, no PC operating system currently supports more than 44 bits, or 16TB, of virtual memory, and of course considerably less physical RAM.
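These bit-width figures are simply powers of two, using the binary interpretation of a terabyte (2^40 bytes). A quick check of the 48-bit and 44-bit claims, using BigInt to keep the arithmetic exact:

```typescript
// Bytes addressable with a given number of address bits.
const bytes = (bits: bigint): bigint => 2n ** bits;

const TB = 2n ** 40n; // one binary terabyte in bytes

const hardwareLimit = bytes(48n) / TB; // 256 TB: AMD64's 48-bit hardware limit
const osLimit       = bytes(44n) / TB; // 16 TB: the 44-bit OS ceiling cited above
```

The same function gives the 42-bit G5 MMU figure: 2^42 bytes is 4TB, matching the number quoted for the G5's directly addressable RAM.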

The challenge of moving to 64-bits

There's currently no immediate need for such vast amounts of RAM among home users, but consumers are running into the 4GB barrier of 32-bit PCs, while facing additional problems that prevent mass migration to x64. The core issue is that the potential of the hardware has to be exposed by operating system software, and there are two problems to solve: the first is simply addressing more than 4GB of total RAM for the entire system, and the second is allowing RAM-hungry applications to individually access large amounts of RAM.

Even with the 64-bit Power Mac G5 hardware, there were still software limitations in 2003's Mac OS X Panther; the 32-bit OS allowed the system to support more than 4GB of memory but still corralled each application into its own 32-bit, 4GB space. With 2005's Mac OS X Tiger, Apple enabled desktop apps to spin off processes and servers that could handle enormous memory addressing of their own: up to a theoretical 16EB of 64-bit virtual memory and a conceptual 42 bits, or 4TB, of physical RAM, although shipping Macs still could only support 8GB of RAM.

To enable this, Tiger supplied a 64-bit version of libsystem, the system library handling most of its Unix APIs. This followed the LP64 model to allow broad compatibility with 64-bit versions of Linux and commercial Unix. It also delivered a 64-bit PowerPC ABI (application binary interface) for accommodating native 64-bit apps on the G5.
Tiger still used a 32-bit kernel (although it was not limited to 32-bit memory addressing, so it could actually make use of the 8GB of RAM installed in G5s), and it was also still missing a 64-bit version of the Cocoa or Carbon APIs, which meant apps with a user interface had to be 32-bit.

However, a 32-bit graphical app on Tiger could spin off a faceless 64-bit background process to perform number crunching on a vast data set requiring a 64-bit memory space, which could then communicate the results back to the 32-bit foreground app running in parallel. Apple also delivered a mechanism for deploying applications using a bundle of both 64-bit and 32-bit code, allowing the system to automatically run the appropriate version for the Mac hardware in use. Tiger itself also supplied both 32- and 64-bit underpinnings, allowing one OS to run on any Mac. This has made it easier for Apple to rapidly migrate Mac users toward 64-bit hardware.
Windows and 64-Bits

In contrast, a separate 64-bit version of Windows is required to run 64-bit Windows apps on 64-bit x86 PCs, and any 32-bit apps have to run in a special compatibility environment (below). There is no slick mechanism for deploying bundles of mixed code that "just work" on both architectures, and 64-bit Windows itself lacks the ability to run on either type of PC. This has had a chilling effect on the popularity of, and the momentum behind, 64-bit Windows that parallels the problems with Vista.

This is particularly unfortunate because the advances delivered in the x64 PC are more desperately needed by PC users to gain the same benefits that Mac users and developers gained from the move to PowerPC over a decade earlier. The 32-bit PC is particularly hampered by a lack of GPRs and the 4GB RAM limit imposed by the desktop versions of 32-bit Windows. In addition, 32-bit Windows itself eats into that 4GB to leave only 3.5GB of RAM or less for apps and the system to use, and typically limits individual apps to a tiny 2GB address space. Software compatibility, a lack of drivers, and other problems have also complicated the move to 64-bit Windows, leaving mainstream Windows users stuck at 32 bits. Windows 7 was initially supposed to move users to 64 bits in perhaps 2010, but reports indicate that it too will be delivered in separate 32- and 64-bit versions.
One step back, two steps forward

When Apple began migrating to Intel in 2006, it actually had to take a step backward, as it initially supported only 32-bit Intel systems with the Core Solo and Core Duo CPUs. Apple had to cope with the same 32-bit PC limitations Microsoft had been dealing with. In the Intel transition, Mac developers lost the features supplied by PowerPC, including its liberal supply of registers. However, Intel's new 32-bit Core Duo was fast enough in other areas to skirt around the problem, particularly in laptops, where the aging G4 was holding Macs back. By the end of the year, Apple had widened support to include the 64-bit x64 PC architecture in the new Mac Pro and Xserve, and subsequent desktop Macs using the Core 2 Duo also delivered 64-bit hardware support. With updates to Tiger, Apple delivered the same level of 64-bit support for x64 Intel processors as it had for the PowerPC G5.

Within the course of one year, Apple had not only adroitly moved its entire Mac product line to Intel but also paved the way forward to rapidly push its users to 64 bits, narrowly escaping the disaster of being left the last member of the desktop PowerPC party. In its spare time, the company also threw the iPhone together while also working to develop its next jump in 64-bit operating system software.

The 64-bit GUI in Leopard

In Leopard, Apple expanded 64-bit support further, adding 64-bit support in the higher levels of Carbon and Cocoa. Apple delivered its own Xcode app in Leopard with support for both PowerPC and Intel in both 32-bit and 64-bit versions, all within the same application bundle. The entire OS is now a Universal Binary as well; it automatically runs on whatever hardware it is installed on. Incidentally, one of the biggest issues in getting Mac OS X to run on generic PC hardware is the need to turn off PAE in the kernel for older CPUs that don't support it.

While all of Cocoa is now 64-bit, Apple chose not to deliver full 64-bit support in Carbon's user interface APIs (including legacy parts of QuickTime), forcing developers to migrate their apps to use the modern equivalents in Cocoa in order to deliver full 64-bit applications with a user interface. Carbon can still be used to build faceless 64-bit background apps that interact with a 64-bit Cocoa front end, similar to how Tiger supported 64-bit background apps. Earlier, Apple had added transitional support for mixing Cocoa into Carbon apps to make this move easier.

Apple's decision to withhold the development of 64-bit Carbon caused Adobe to announce this spring that its upcoming Creative Suite 4 would only be delivered as a 64-bit app on Windows. Because CS4's legacy code is based on Carbon, Adobe said it wouldn't be able to deliver a 64-bit version of its Mac apps until at least CS5, because it will require porting the interface code of Photoshop and its companion apps to Cocoa in the model of Photoshop Lightroom. Most desktop apps don't necessarily demand 64-bit support, but Photoshop's use of extremely large image files makes it a good candidate for porting.

Currently, Mac OS X Leopard hosts both 32-bit and 64-bit apps on top of a 32-bit kernel (below).
Using PAE, the 32-bit kernel can address 32GB of RAM in the Mac Pro and Xserve; Apple's consumer machines only support 4GB of RAM, but unlike 32-bit operating systems they can use the entire 4GB (with appropriate hardware support). Leopard's 32-bit kernel enabled Apple to ship 64-bit development tools to give coders the ability to build applications that can work with huge data sets in a 64-bit virtual memory space (and port over existing 64-bit code), without also requiring an immediate upgrade to all of Mac OS X's drivers and other kernel-level extensions. That transition will happen with Snow Leopard.

How big of a deal is the move to 64-bit apps? As Apple's developer documentation points out, "To put the difference between 32-bit and 64-bit computing into perspective, imagine that you are working with a dataset in which the road area of the Golden Gate bridge can be represented in a 32-bit address space. With 64 bits of address space, you have the ability to model the entire surface of the Earth at the same resolution."
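Apple's Golden Gate analogy reflects simple arithmetic: adding 32 more address bits multiplies the addressable space by 2^32, over four billion times. A quick check, using BigInt to keep the numbers exact:

```typescript
// Sizes of 32- and 64-bit address spaces, computed exactly with BigInt.
const span = (bits: bigint): bigint => 2n ** bits;

// How many times larger a 64-bit address space is than a 32-bit one.
const growthFactor = span(64n) / span(32n); // 2^32, about 4.3 billion

// The full 64-bit space expressed in binary exabytes (2^60 bytes each),
// matching the 16EB figure cited for x64 earlier in the article.
const exabytes = span(64n) / 2n ** 60n;
```

The same doubling explains why the jump feels qualitative rather than incremental: each added bit doubles the space, and 32 extra doublings dwarf any RAM a desktop machine will hold for years.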
The 64-bit Kernel in Snow Leopard

Apple is expanding its 64-bit support in Snow Leopard down into the kernel. This will enable Mac systems to accommodate more than the 32GB of RAM currently available via 32-bit PAE. With kernel support for full 64-bit memory addressing, Apple can add as much RAM as users can afford. Of course, if you're buying RAM from Apple, upgrading a Mac Pro to 32GB of RAM currently costs $9,100, so it might be some time before home users decide they need more RAM than that.

While Leopard's 32-bit kernel can run both 32- and 64-bit apps, a 64-bit app cannot load 32-bit plugins or shared libraries, and vice versa. The 64-bit kernel similarly requires 64-bit kernel extensions and drivers, as it can't mix 32- and 64-bit code either. The move to a 64-bit kernel will therefore require an across-the-board upgrade for all kernel drivers in Snow Leopard.

Snow Leopard will also require developers who write any plugins for Mac OS X apps to recompile their code to 64-bit. This includes everything from System Preferences panes to web plugins. The reason for the massive upgrade is that Apple will also deliver the entire system compiled as both 32- and 64-bit, from the Finder to iTunes to Safari. On 32-bit Macs, Snow Leopard will run normally, but on x64 Macs everything will get a significant boost, as every app on the system will benefit from the advantages of x64, particularly the extra registers supplied by x64 and missing from the 32-bit PC. That advantage will outweigh the additional overhead caused by moving to 64 bits and the resulting use of larger data items.

In contrast, there would be no real advantage in recompiling Snow Leopard and its apps for 64-bit PowerPC G5s, as the G5 is not currently constrained by the register problem of 32-bit x86; the 64-bit G5 has the same number of registers as the G4, because the G4 already had plenty. The G5 actually runs 64-bit apps slightly slower because of the increased overhead imposed by 64-bit addressing. For that reason, Snow Leopard will apparently be Intel-only.
More information on Snow Leopard appears on AppleInsider's Mac OS X 10.6 page.


HP Completes $13.9 Billion Acquisition of EDS

HP today announced that it has completed its acquisition of Electronic Data Systems Corporation (EDS), creating a leading force in technology services.
With this acquisition, initially announced on May 13 and valued at an enterprise value of approximately $13.9 billion, HP has one of the technology industry's broadest portfolios of products, services and end-to-end solutions. The combined offerings are focused on helping clients accelerate growth, mitigate risks and lower costs.
The acquisition is, by value, the largest in the IT services sector and the second largest in the technology industry, following HP's acquisition of Compaq, which closed in 2002. The companies' collective services businesses, as of the end of each company's 2007 fiscal year, had annual revenues of more than $38 billion and 210,000 employees, operating in more than 80 countries.
"This is a historic day for HP and EDS and for the clients we serve," said Mark Hurd, HP chairman and chief executive officer. "Independently, each company is a respected industry leader. Together, we are a global leader, with the capability to serve our clients - whatever their size, location or sector - with one of the most comprehensive and competitive portfolios in the industry."
As a business group, EDS, an HP company, will be one of the market's leading outsourcing services providers - with the ability to provide complete lifecycle capabilities in health care, government, manufacturing, financial services, energy, transportation, consumer & retail, communications, and media & entertainment. As previously announced, the group is led by Ron Rittenmeyer, president and chief executive officer, who had served as EDS' chairman, president and CEO. It remains based in Plano, Texas.
"Today marks the beginning of an exciting new era," said Rittenmeyer. "Clients will benefit from the breadth and depth of our solutions, our commitment to unsurpassed quality and our ability to provide truly global service delivery. With the resources of HP's renowned R&D and world-class technologies, we have an opportunity to truly redefine the technology services market."
HP's Technology Solutions Group (TSG) will shift to EDS its outsourcing services operations, as well as portions of its consulting and integration activities. TSG will focus on servers, storage, software and technology services, such as installing, maintaining and designing technology systems for customers, as well as certain consulting and integration services.
"Clients will benefit from the combined scale and strength of our companies as they transform their technology environments," said Ann Livermore, executive vice president, TSG, HP. "This is an important step forward in our ability to help them solve their challenges through practical innovations that deliver valuable business outcomes."
New EDS leadership team
Rittenmeyer announced his leadership team for the new business group, representing a mixture of existing EDS direct reports, as well as new appointments from within EDS and HP. His direct reports are:
Michael Coomer, 55, senior vice president, Asia Pacific & Japan, who held a similar role at EDS.
Joe Eazor, 46, senior vice president, Transformation. He was previously responsible at EDS for corporate strategy and business development.
Bobby Grisham, 54, senior vice president, Global Sales, who held a similar role at EDS.
Jeff Kelly, 52, senior vice president, Americas, who held a similar role at EDS.
Mike Koehler, 41, senior vice president, Infrastructure Technology Outsourcing (ITO) & Business Process Outsourcing (BPO), who held a similar role at EDS.
Andy Mattes, 47, senior vice president, Applications Services. He was previously senior vice president, HP Outsourcing Services.
Maureen McCaffrey, 45, vice president, Worldwide Marketing, who held a similar role at EDS.
Dennis Stolkey, 60, senior vice president, U.S. Public Sector, who held a similar role at EDS.
Bill Thomas, 48, senior vice president, Europe, Middle East & Africa, who held a similar role at EDS.
In addition, functional support will be provided by the following individuals, who will report into global functions at HP, consistent with the company's organizational model. They are:
Craig Flower, 46, senior vice president of IT, reporting to Randy Mott, executive vice president and chief information officer at HP. Flower was previously HP's senior vice president for eBusiness, customer and sales operations.
Tom Haubenstricker, 46, vice president, Finance, reporting to Cathie Lesjak, executive vice president and chief financial officer at HP. Haubenstricker was previously vice president and chief financial officer for EDS' EMEA region.
Deborah Kerr, 36, vice president and chief technology officer, reporting to Shane Robison, executive vice president and chief strategy and technology officer at HP. Kerr was previously HP's vice president and chief technology officer for services.
Mike Paolucci, 48, vice president, Human Resources, reporting to Marcela Perez de Alonso, executive vice president of Human Resources at HP. Paolucci was previously EDS' vice president of Global Compensation and Benefits/HR Business Development.
Sylvia Steinheiser, 43, vice president, Legal, reporting to Mike Holston, executive vice president, general counsel and secretary at HP. Steinheiser was previously HP's vice president, Legal, for the Americas.
Securities analyst meeting
HP will hold a live video webcast of its upcoming Sept. 15 Securities Analyst Meeting, at which Mark Hurd and other executive members will discuss HP's opportunities in the enterprise market, including EDS.
The webcast will be available online.

Tuesday, 26 August 2008

New BlackBerry suffering same 3G connection drops as iPhone

Cellular access woes initially pinned on the iPhone 3G's particular hardware now appear likely to be thwarting the BlackBerry Bold's debut with AT&T, according to a new report.
Citigroup investment research analyst Jim Suva's early testing of the Bold, which uses the same 3G network standard as current iPhones, finds the device with just as unstable a connection as that reported in the US and elsewhere for Apple's handset, with data sometimes dropping to the slower EDGE network or even cutting out entirely.
"We had a few occasional 3G signal dropping troubles at some locations," Suva writes, "especially on high-rises building streets on our 34th floor... which may be why AT&T has yet to launch the product."
And while Rogers Wireless in Canada has already launched Research in Motion's new smartphone, the researcher suggests that an American launch may hinge on either a patch for the Bold's firmware or straightening out network issues with AT&T, which will be the phone's sole carrier in the US. Tellingly, the Bold uses a component of its Marvell processor as its 3G modem, where the iPhone 3G uses a separate Infineon chipset, ruling out identical hardware as the issue.
AT&T has yet to commit to an actual release date for the new BlackBerry despite announcing its plans in May, and hasn't publicly explained the delay.
The interpretation isn't a comprehensive study, but it comes just as Wired has finished an international study that points to US-based iPhone owners as suffering the largest number of failed data speed tests, particularly in dense urban areas where 3G towers are more likely to be overwhelmed.

Olympus intros five cameras for fall

Olympus unveiled five new digital cameras Monday that will begin rolling out this month and continue into the fall.
Highlighting the parade of changes to the camera maker’s consumer line is the introduction of the SP-565 Ultra Zoom. Olympus bills the 10-megapixel camera as a smaller and lighter version of its SP-570 advanced point-and-shoot camera. The SP-565 retains that model’s 20x wide-angle telephoto zoom lens. The camera’s f2.8-4.5 lens provides the equivalent of 26-520mm focal length with 100x total seamless zoom, Olympus says.
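The zoom figures Olympus quotes can be sanity-checked with a little arithmetic. In this Python sketch, the split between optical and digital zoom within the "100x total" figure is an inference, not an Olympus statement:

```python
# Check the SP-565 UZ zoom arithmetic from the 35mm-equivalent focal
# lengths quoted in the article.

wide_mm, tele_mm = 26, 520

optical_zoom = tele_mm / wide_mm          # ratio of longest to widest focal length
print(f"Optical zoom: {optical_zoom:.0f}x")

total_zoom = 100                          # "100x total seamless zoom" per Olympus
digital_factor = total_zoom / optical_zoom
print(f"Implied digital multiplier: {digital_factor:.0f}x")
```

The 26-520mm range works out to exactly the 20x optical zoom claimed, with the remaining factor of 5 presumably supplied digitally.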
Other features include Face Detection capable of tracking up to 16 faces within the frame to automatically focus and optimize exposure and the ability to capture images at 13.5 frames per second. The SP-565 also boasts dual-image stabilization, shadow adjustment, and a preview mode that lets users select various effects on a live, multi-window screen before taking a shot.
Olympus says the SP-565 UZ will ship in October at an estimated street price of $400.
Olympus also added two 8-megapixel point-and-shoot cameras to its FE Series. The $150 FE-360 features a 0.7-inch-thick body and a 3x optical zoom. The $200 FE-370 is slightly thicker at 0.9 inches, but it also offers a 5x optical zoom, dual-image stabilization, and an Intelligent Auto Mode that picks the appropriate shooting mode and optimal settings based on where the camera is aimed.

Saturday, 23 August 2008

Intel's Core 2 Extreme Mobile Chips: A New Speed King

Intel's newest chips take "Extreme" to the extreme, with game-friendly features and superior power.
How do you define "Extreme"? How about as a high-velocity, quad-core processor packed into a mobile platform? That's what Intel announced this afternoon at the Intel Developer Forum. Heretofore known as Core 2 Extreme, the cat (or rather, the chips) is now officially out of the bag.
In July, the first Core 2 Extreme Mobile X9100--a Penryn dual-core CPU--to show up at our labs debuted inside Micro Express's JFL9290 laptop. The PC World Test Center is still putting that machine through its paces (you can check out our assessment of its little brother, the Micro Express JFL9226, in the meantime), but the initial numbers are impressive. It dominated our WorldBench 6 tests, notching a score of 115 and posting decent frame rates in Doom 3 (47 frames per second at 1024 by 768 resolution, with antialiasing) courtesy of a 256MB nVidia GeForce 9600M GT GPU. The real speed king, though, is the QX9300 (a Penryn quad-core)--and it's now out the door, launching this week.
Here's the breakdown on what they offer. The X9100 has a 3.06-GHz frequency, two cores, and a 6MB cache running at 44 watts. The QX9300 has four cores running at 2.53 GHz, with a 12MB cache at 45 watts.
The new chip's focus on gaming capability shows up in many ways, starting with the way it emphasizes design choices for dual discrete graphics cards in the system. Another example is the chip's automatic overclocking of RAM (and DDR3 memory). And don't forget Intel's claims of improved I/O read times with the upcoming X18-M and X-25M SATA Mainstream SSDs.
Of course, being "Extreme" means doing extreme things like building overclocking into the BIOS. Good luck if you're foolhardy enough to try and reach the 4-GHz threshold. (One notebook on display at the IDF show, from Flextronics, managed to hit that number, but only thanks to a specially crafted cooling docking station created by CoolIt Systems.) Still, being able to crank your 3.06-GHz CPU up to 3.59 GHz is feasible with the easy-to-use Intel Extreme Tuning Utility, which works inside of the OS. Just make sure to park your tweaked-out laptop on an ice cube or something to keep it cool. (Disclaimer: Overclock at your own risk! Besides voiding warranties, such fate-tempting behavior puts you at risk of corrupting data, burning out the CPU, or worse.)
Intel clearly takes its thermals very seriously. Utilities are available that constantly monitor your hardware...and the chip maker emphasizes that special options such as CoolIt's MTEC Docking Station are all but essential for hitting the performance ceiling without going splat.
How likely are you to buy that extra-hardcore docking station? And how much will it cost you? Those are good questions, and they should be answered when the base launches in January of 2009--just in time for you to see an even beefier Core i7-based Extreme Edition notebook next year.

Next-gen MacBook Air CPU; Apple's SoHo neighbors complain

Intel's Developer Forum has revealed the processors likely to underpin the first refresh of the MacBook Air ultraportable. At the same time, residents near Apple's SoHo retail store in New York City allege that its frequent concerts are ruining the neighborhood.
Slipping underneath the radar amidst talk of Nehalem and other next-generation technology, Intel at the San Francisco edition of its Developer Forum this week announced its first regularly available processor based on the same, very small chip packaging that made the MacBook Air possible.
Nicknamed the Core 2 Duo S, the 1.6GHz and 1.86GHz parts share the same basic architecture as chips released in July but consume about 60 percent less surface area, through both a smaller main processor and smaller bridge chips used to interface with memory and peripherals. Although they run at nearly the same clock speeds as the processors in Apple's 13.3-inch ultraportables, they should be faster thanks to a 1.06GHz system bus (up from 800MHz) and a larger 6MB Level 2 onboard memory cache. They also consume less power, at just 17W compared with the 20W of Apple's custom-ordered chip.
As the only processors that would fit into the extremely tight confines of the Air's chassis, the two Core 2 Duo S chips are a likely direct clue as to Apple's direction for its first update to the lightweight MacBook.
Apple's SoHo neighbors file complaints with NYC officials
As much as some tout Apple's flagship store in the SoHo district of Manhattan for its secondary role as a concert venue, local residents and offices are reporting a very different experience.
The neighborhood's SoHo Alliance organization has submitted a letter to Manhattan borough president Scott Stringer, complaining that the frequent concerts are not only excessively loud and block the streets with fans, but may also violate local laws, including occupancy rules and mandates for public assembly.
An August 12th performance by the Jonas Brothers is described as the event that pushed locals past the breaking point.
"This concert attracted thousands of young teenage girls who SCREAMED INCESSANTLY on the street for hours for their idols, blocking traffic, injuring one resident in the crush, and inconveniencing scores of other people and businesses," the SoHo Alliance writes. "This concert for the Jonas Brothers was like the Beatles at Shea Stadium. The screaming was that loud. However, residential Greene Street is not Shea Stadium."
Construction at inappropriate times of the night has also been one of Apple's more serious offenses, the group says. The Mac maker is further accused of lying to the Alliance and to the borough president about night work permits it didn't have.
City officials have yet to take action, and Apple hasn't commented on the matter.

Microsoft's OOXML Wins ISO Approval

Office 2007's XML-based document formats will soon be published as a formal ISO standard. Now what?

It looks as if Microsoft's OOXML office document file format will be published as an open standard after all. The International Organization for Standardization (ISO) today rejected four appeals from subsidiary national standards bodies that claimed ballot irregularities during the standardization process. Had these appeals been upheld, an OOXML standard could have been delayed indefinitely, despite Microsoft's best efforts to fast-track the process.
Barring any further hold-ups, ISO is expected to publish the full text of the standard within the next few weeks. But as the dust clears, many IT managers and office software users will likely be left scratching their heads: What does an open standard office file format from Microsoft actually get us?
A competing set of file formats, called ODF (Open Document Format), was accepted as an ISO standard more than two years ago. ODF is already in use in a number of competing office software products, including AbiWord and IBM's Lotus Symphony. Its success in the face of Microsoft's protracted effort to produce its own standard even recently prompted Microsoft employee Stuart McKee to remark, "ODF has clearly won [the standards battle]."
Indeed, Microsoft's failure to participate in the ODF standardization process has caused some to interpret the software giant's efforts to pursue its own, competing standard as little more than an attempt to undermine ODF. For its part, Microsoft has recently stated that it will include ODF support in a future update of Office 2007 -- but, interestingly, it will not actually include support for the ISO standardized versions of its own file formats until some future release of the suite.
According to Andy Updegrove of technology law firm Gesmer Updegrove, the rejection of the appeals against OOXML standardization is business as usual for the ISO process. "Today's announcement is not unexpected. It will be significant to learn, however, what the actual votes may have been," he says.
If there were many votes cast in support of the appeals, it may be evidence that ISO's processes may be skewed in favor of the interests of large corporations, such as Microsoft, rather than those of its member countries. "The greater the support, the more urgent it will be for ISO and IEC to reform their processes in order to remain credible and relevant to the IT marketplace," Updegrove says.
Just how much impact an ISO-approved OOXML will actually have on the IT marketplace -- or on users of office software -- remains to be seen. On the plus side, an approved standard should make it easier for competitors, including open source software projects, to interoperate with Microsoft Office, which has been difficult in the past. On the minus side, the proliferation of overlapping standards could serve to further muddy the marketplace, making bewildered customers much more likely to stick to the status quo.
Do you see Microsoft's move toward standardized file formats as positive or negative for your business? Sound off in the PC World Community Forums.
Reference :'s_ooxml_wins_iso_approval.html?tk=rss_news

Microsoft Sends Up Trial Balloons for Windows 7

While Vista takes a beating in the press, Microsoft seems increasingly willing to disclose details of its forthcoming OS.

Windows Vista hasn't fared so well since its debut. Its generally low reputation among customers has led one Forrester analyst to dub Microsoft's latest OS "the New Coke of tech," while some studies have suggested that nearly a third of customers who buy a PC with Vista pre-installed may actually be downgrading those machines to XP.
Still other customers seem to wish the whole thing would just go away. They don't want to hear about Vista at all -- they'd rather hear about Windows 7, the upcoming OS from Microsoft that will be Vista's successor. And given the dismal consumer reaction to its latest attempts to market Vista, Microsoft seems willing to oblige. The sketchy early reports of Windows 7 have lately grown into a steady trickle of hints and rumors. The catch is, not all of it sounds particularly encouraging.
Perhaps because of the beatings it so often receives from the press, Microsoft seems to want you to get your Windows 7 news from the horse's mouth as much as possible. To that end, the Windows team has launched a new blog to chronicle the Windows 7 engineering efforts in detail. Senior Windows 7 product managers Jon DeVaan and Steven Sinofsky promise to "post, comment, and participate" regularly.
Among the factoids revealed in the blog so far: The workforce tasked with assembling the forthcoming OS is immense, and it's dense with middle managers. As many as 2,000 developers may be involved, according to reports. That sounds like a truly Herculean project-management undertaking -- and indeed, if the figures quoted in the Windows 7 blog are to be believed, Microsoft has staffed up with one manager for every four developers. It's enough to make one wonder how Windows 7 will avoid the implementation failures and missed deadlines that plagued Vista's launch.
The engineering blog isn't the only evidence of Microsoft's recent lip-loosening, either. Elsewhere this week we learned even more interesting information. We've known for a while now that Windows 7 is expected to build on the Vista code base, rather than reinventing any substantial portion of the Windows kernel. As it turns out, however, the next version of Windows may be even closer to the current one than we expect.
According to Microsoft spokespeople, the server version of Windows 7 will be considered a minor update, rather than a high-profile new product. In fact, it's expected to ship under the name Windows Server 2008 R2 -- a designation that suggests it will offer few features that aren't already available in the current shipping version of Microsoft's server OS.
As tantalizing as these tidbits of information may be, however, hard facts about Windows 7 remain scarce. At this stage, any talk about the forthcoming product counts as little more than free marketing: the more we all keep talking about Windows in one form or another, the less likely we are to jump ship to Mac OS X or (heaven forbid) Linux.
According to Microsoft, however, developers can expect to get their first in-depth look at the new OS at the Professional Developers Conference (PDC) in October, with further information to be revealed at the Windows Hardware Engineering Conference (WinHEC) the following week. Until then, expect the rumor mill to remain in full force.

VMware ESX Bug Causes Outage

A bug in the enterprise virtualization software prevents VMware ESX 3.5 U2 customers from starting new virtual machines, beginning today.

Users are rightfully annoyed when services like Gmail experience unexpected outages. We've come to expect that our e-mail should be available whenever we need it -- even when the service is provided for free. Imagine your frustration, then, if you found out that software you had bought and paid for had suddenly stopped functioning on a certain date.
This is exactly the problem faced by customers of VMware ESX, VMware's enterprise-class virtualization engine. As of today, due to a bug in VMware's license management software, no new virtual machine instances will launch for customers running VMware ESX 3.5 U2. And so far, there's no fix.
Virtualization software allows customers to split their PCs and servers into separate virtual machines. Each of these artificial partitions acts as if it were a separate computer, complete with its own set of hardware, peripherals, and OS. Because it offers ease of use and increased security, virtualization has become an increasingly popular method of provisioning and managing systems in business datacenters.
Demand for virtualization has grown so much in recent years, in fact, that the market for virtualization software has become highly competitive. Open source solutions have appeared that offer features comparable to commercial products. As a result, many commercial vendors offer basic versions of their products free of charge -- VMware included.
ESX, however, is not that free product. ESX is VMware's flagship offering, and customers often shell out tens of thousands to deploy it on mission-critical servers. And today it stopped working. Ironically, the very system that checks to ensure that only paying customers can run the software is preventing those same customers from using it.
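VMware hasn't published the internal logic of the flaw, but a date-gated check of this general shape shows how paid software can stop working for everyone on a fixed calendar date. Everything below, including the cutoff date and function names, is a hypothetical sketch, not VMware's code:

```python
from datetime import date

# Hypothetical sketch of a licensing "time bomb": the cutoff constant is
# valid when the build ships, but once the calendar passes it, the check
# fails for every customer at once. NOT VMware's actual implementation.

BUILD_LICENSE_VALID_UNTIL = date(2008, 8, 12)   # invented hard-coded cutoff

def can_power_on_vm(today):
    """Return True if the (flawed) license check permits starting a new VM."""
    return today <= BUILD_LICENSE_VALID_UNTIL

print(can_power_on_vm(date(2008, 8, 1)))    # True before the cutoff...
print(can_power_on_vm(date(2008, 8, 13)))   # ...False for everyone afterward
```

The fix for this class of bug is to remove or extend the hard-coded date and ship a patched build, which is essentially what VMware's express patches do.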
As I write this, VMware is scrambling to release a fix for the product. If you are yourself a VMware customer, you can find more information on the bug and potential work-arounds on VMware's message boards. The company has also posted an advisory in its online knowledge base.
I actually feel a little sorry for VMware. Judging by some of the traffic on message boards, the fallout from this one is going to sting. Here's hoping that the problem can be resolved quickly and that the company can find a way to save face with its customers.
UPDATE: VMware has now made express patches available to fix the bug, and updated versions of the U2 update are available, linked to the article on the VMware Knowledge Base.

Cloud Computing: Is Midori Sour or Sweet?

A future Microsoft operating system may be entirely Web-based. Here are the pros and cons of Midori and the "cloud computing" concept.

A recent report that Microsoft is preparing a new operating system that would move applications and data from our desktops to the Internet quickly drew the ire of many PC World readers. The idea of relinquishing control of one's apps and data to a server farm owned by a large company like Microsoft or Google--a concept called "cloud computing"--seems to have hit a nerve.
To understand this reaction, you have to look back to the late seventies and early eighties, to the dawn of the microcomputer, or "personal computer." It was the beginning of a huge shift away from the old mainframe/dumb-terminal epoch, in which all data was managed and doled out to the lowly terminals by a central monolith on a "need-to-know" basis. The personal computer era moved computing power, data, and applications from that central mainframe onto our desktops. It was the democratization of data, and it put us, the users, in control. PC World was founded on that concept.
And now it seems we're talking about a new operating system--Midori--that would push us back toward the centralized server idea again. Only this time, our data would be hosted not on giant mainframes, but on huge server farms, such as the one Google is building on a 30-acre plot near The Dalles, Oregon. These data centers will serve up our apps, and host and protect our data. Later on, they'll even provide the computing power we need.
So this cloud computing idea is far more than a technical shift; it is also a major cultural shift in tech. It's one that will take some getting used to, if it happens at all.
Our readers' comments on our Midori stories provide a clear snapshot of consumer anxieties about cloud computing. Take this (rather sarcastic) one from PC World forum poster raife1:
"...I can't wait until my system is nothing more than a Microsoft Services delivery-device. I cannot wait to hand over complete control of my property and livelihood... and literally be at the mercy of every communications company, ISP, backbone provider, software provider, or government agency..."
The loss of control feared by raife1 is perhaps the main objection to cloud computing. But there are others, which we paraphrase below--and provide answers that cloud computing proponents might give in response.
1. Server outages could severely impact a user's experience.
The perfect example here is Twitter's all-too-common Fail Whale. Of course, the quality and reliability of the service is totally dependent on the quality of the provider. Twitter is not considered a "critical" application, so the data supported by that service isn't backed up by the huge, redundant server farms you'll find at eBay and Amazon.
One of the main tenets of cloud computing is that your data is hosted on at least two servers, so that if one fails, the second takes over; then yet another server is deployed to provide backup for the new primary server. I would argue that in the life span of applications delivered from host servers (the cloud), fewer server outages have occurred and less data has been lost than in the other paradigm, where individuals or companies host, secure, and back up their own data on their own servers.
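That redundancy scheme boils down to serving from a primary and promoting the backup when the primary fails. This Python sketch illustrates the idea; the class and server names are invented:

```python
# Minimal sketch of primary/backup failover: requests go to the primary
# while it is healthy, and to the backup once it is not. A real system
# would also provision a fresh backup after a failover.

class Server:
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy

def serve(primary, backup):
    """Return the server that should handle requests right now."""
    if primary.healthy:
        return primary
    return backup            # primary is down: fail over

primary, backup = Server("db-1"), Server("db-2")
print(serve(primary, backup).name)   # db-1 while healthy

primary.healthy = False
print(serve(primary, backup).name)   # db-2 after the primary fails
```

The user-visible promise of the scheme is that a single machine failure never interrupts service, which is the tenet the paragraph above describes.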
2. Users would need a fast, always-on Web connection to access and work with apps and data.
This sounds like a legitimate concern, but you have to look at it in context. One of the main reasons cloud computing for consumers is being taken seriously today is that broadband connections--wired and wireless--are becoming faster and far more ubiquitous. So, yes, we do not live in an always-connected world today, but we are rapidly headed in that direction. And for those times when you can't connect, new tech like Google Gears will provide a way for us to keep working with online apps when we're not connected to the cloud.
Midori aside, Microsoft is currently taking a slightly different approach to the "offline problem''--offering a sort of hybrid where much of the service is delivered via the cloud, but where users also employ self-contained, desktop-based programs (such as Word or Excel) for working while offline. Then when the connection to the cloud is restored, users can sync up to servers, share their files, and collaborate with other users.
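That hybrid model amounts to queue-locally, replay-on-reconnect. This Python sketch illustrates the idea; all class and method names are invented:

```python
# Minimal sketch of hybrid offline/online editing: edits made while
# disconnected are queued locally, then replayed to the server copy
# when the connection to the cloud is restored.

class HybridClient:
    def __init__(self):
        self.online = False
        self.pending = []    # edits made while offline
        self.server = []     # stand-in for the cloud-hosted copy

    def edit(self, change):
        if self.online:
            self.server.append(change)   # connected: write straight through
        else:
            self.pending.append(change)  # offline: queue locally

    def reconnect(self):
        self.online = True
        self.server.extend(self.pending) # replay queued offline edits
        self.pending.clear()

client = HybridClient()
client.edit("draft paragraph 1")   # made offline, queued
client.reconnect()
client.edit("draft paragraph 2")   # made online, written directly
print(client.server)               # both edits reach the cloud copy
```

A real implementation also has to resolve conflicts when two clients edit the same file offline, which is the hard part this sketch deliberately omits.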
3. Not "owning" your own data is risky: A security breach could open up your personal info and files if they're hosted elsewhere.
To claim that such security breaches are impossible would be foolish. They happen, and will happen in the future, just as breaches of financial institutions' data systems happen and will continue to happen. But again, for the amount of data that we, as consumers, have already entrusted to the cloud, the losses in real dollars have been small. Where the consumer is concerned, the argument could be made that a large hosting facility like Google can do a far better job of backing up your data than you can. (By the way, when did you last back up the data on your home PC?)
Some very legitimate reasons exist for moving toward cloud computing. Applications can be built and delivered to millions of users far faster. Applications need no longer run on the desktop, where they have a tendency to interfere with other apps or system hardware. When the processing power itself is hosted in the cloud, PCs will have to do far less, and, conceivably, will cost far less. They would be a lot simpler too, so they wouldn't break down or need to be upgraded as much.
Cloud Computing, Front and Center
Clearly, the news that Microsoft is embracing the cloud, or at the very least having a good, close look at it--as well as the recent boom in Web-based applications like Gmail--has suddenly brought the cloud computing concept front and center in consumer technology. Until recently, the cloud computing idea, otherwise known as Software as a Service (SaaS), has mainly been the province of the business world. Businesses have been using hosted services for years, whether those services are hosted internally on a large corporate network or externally on large servers operated by a third party.
Daryl Plummer, who is Gartner's chief of research for advanced IT, says that a shift to Web-based applications is an evolutionary--and a necessary--step for Microsoft.
"Microsoft is in more danger today than they have ever been because their basic models for delivering value through software are being challenged," wrote Plummer in an e-mail interview with PC World. "Midori makes sense as a research project today and may make imminent sense as an offering tomorrow once we know what it really is."
"But one thing is for sure, it will be challenged on all fronts. Some will say it is not as good as Windows. Some will say the OS is no longer important. Some will say the cloud is too risky. I say change happens, and this would be supportive of a continued evolution to a service-oriented world."
I suspect that the cloud computing concept will move into the consumer computing world very slowly, one application at a time--just as it did, and continues to do, in business IT. The idea that Microsoft's next OS will suddenly be "in the cloud" and that it is "giving up on Windows" seems a little far-fetched. More likely, Microsoft will slowly begin building in hybrid hosted/desktop services and apps into its OS in a way that is transparent to consumers, and at a rate that won't send old-school home PC enthusiasts into full-on revolt.
We'd like to know what you think about Microsoft's announcement, as well as the recent spike in online apps. Do you feel uneasy about having your data served and stored elsewhere? Would you welcome such a drastic shift from desktop applications to online-only apps? Let us know your thoughts in the Comments section below.

IBM Challenges Microsoft for the SMB Desktop

For most small to midsized businesses, software means Microsoft. For almost any category of business software -- from word processing to spreadsheets, presentations to communication and collaboration -- Microsoft is the de facto vendor of choice. Alternatives do exist, but who wants to be the first one to rock the boat? Microsoft has grown so cocky about its position that it even bragged that it would soon steal five million users away from IBM's Lotus Notes, a competitor to its own Outlook and Exchange.
That's not the kind of threat that IBM takes lying down. On the contrary; it's digging in. Big Blue claims that it is redoubling its efforts to win customers away from Microsoft, beginning with a big win in Asia and new partnerships with major Linux vendors.
First, says IBM, just because Microsoft enjoys seemingly unshakeable dominance of the U.S. business software market doesn't mean that has to be the case everywhere. Big Blue sees a big opportunity for its own software in Asia and other emerging markets, and it's backing up that speculation with real numbers. Just last week it announced a single deal with an as-yet-unnamed Asian company that it says will add 300,000 new seats to its Lotus Notes business.
One major selling point of Notes over Outlook is that while every PC that runs Outlook also requires a Windows license, Notes runs on Linux as well. Per-seat licensing for commercial Linux distributions is typically lower than that of Windows, and community-maintained Linux distributions can be downloaded and installed for free. IBM is hoping that the low total cost of a PC running Notes on Linux will make such systems attractive to cost-conscious customers in emerging markets.
To further up the ante, IBM announced on Tuesday that it has forged partnerships with major hardware and Linux vendors to ensure that installing IBM business software on Linux systems is as painless as possible. Soon, Linux users will be able to obtain versions of IBM's Lotus Foundations software that have been specially packaged for installation on Novell Suse, Red Hat, or Ubuntu Linux.
Lotus Foundations is a software bundle that includes not just Notes, but also the Sametime enterprise instant messaging system and Symphony, IBM's competitor to the Microsoft Office productivity suite. By prepackaging it for the top three desktop Linux distributions, IBM stands to make Foundations a one-click install for the majority of business Linux customers.
This is certainly encouraging news for anyone who is seeking an alternative to the Microsoft-dominated business software market, particularly in emerging markets such as Asia. Whether any of this momentum will translate into increased sales for IBM's software over Microsoft's in the U.S., however, remains to be seen.
What do you think? Are you itching to break Microsoft's grip on your business? Will easier access to IBM's alternative software make you more likely to switch to the Lotus platform, or are you more interested in Web-based software such as Google Apps? Or, on the other hand, do you feel that there simply isn't any genuine competition for Microsoft's business software? Sound off in the PC World Community Forums.

Macs, Aperture a Big Hit at the Beijing Olympics

Along with the top athletes from around the world, the 2008 Beijing Olympics has attracted thousands of reporters and photographers, working around the clock to file stories and images from the biggest event in all of sports. And there's a good chance that the photos you've seen from the games were imported, edited, and transmitted on a Mac.
In the digital photo editing area of the Kodak Photographer's Center--a massive workroom located in the main press center at the Olympic park--hundreds of photographers at a time assemble to file their images using high-end workstations and tech support supplied by Apple (the same was true at the 2006 winter games in Turin, Italy). Meanwhile, rows of Lenovo computers sit idle.
Headed by Joe Schorr, Senior Product Manager of Photo Applications, Apple set up fifty broadband-connected Mac Pro workstations, complete with 30-inch Cinema Displays and a set of essential photographic tools including photography workflow software Aperture, Adobe Photoshop, FTP software, and PhotoMechanic (a popular tool with many sports photographers).
"We know from experience that photojournalists love Macs, and that our software is used by a huge segment of this industry," says Schorr, scanning the media room for photographers who need help.
"Our goal is to get them to fall in love with Aperture, but we're perfectly happy to have them using other tools as well," says Schorr, who adds that Apple is also providing general Mac support, since "these guys are working under incredibly tight deadlines, and there are situations where they're panicking." Tech support has included fixing a faulty DVD drive, providing missing cables, helping with network issues, and even providing feedback on photographers' editing choices.
Aperture made its Olympic debut in Turin, where many photographers caught their first glimpse of the company's post-processing tool. Many of those photographers have returned to Beijing as much more polished users.
The Kodak Photographer's Center, and in fact the whole main press center, is a 24-hour operation--the facility is complete with a McDonald's, a shipping center, a general store, a cafeteria, and more. Waves of photographers flood the press center as events conclude and photographers rush to meet deadlines.
For sports photographers, speed is key--something Apple focused heavily on as part of the 2.0 version of Aperture. "We can really say we've made significant inroads in this segment," says Schorr, "particularly in delivering a quick-editing workflow that can truly keep up with these guys."
In the downtime between deadlines, many photographers have asked Apple's support staff for demonstrations or system purchase recommendations. In more than one case, photographers arrived at the games as PC users and have since purchased or ordered a new Mac. (The presence of an official Apple store in Beijing has been helpful.)
Schorr says he's also taking notes as he talks to photographers, gathering feedback for future versions of Aperture. "This is a pressure cooker," he says. "You see what these guys really need and what they go through and as a product manager it's been amazing to spend that much time absorbing how people respond to our current product. It's worth more than a big stack of market research papers just to be able to work with these photographers."
With the Vancouver winter games only two years away, the team has already been discussing support possibilities for the next Olympic venue, where photographers will be able to work with a future version of the application developed around the lessons learned at Beijing.
[David Schloss is a photographer and author, whose Aperture Users Network has been providing support alongside Apple to photographers at the Olympics.]

Real Time Drives Database Virtualization

Databases are evolving faster than ever, becoming more fluid to keep pace with an online world that's becoming virtualized at every level.
In many ways, the database as we know it is disappearing into a virtualization fabric of its own. In this emerging paradigm, data will not physically reside anywhere in particular. Instead, it will be transparently persisted, in a growing range of physical and logical formats, to an abstract, seamless grid of interconnected memory and disk resources; and delivered with subsecond delay to consuming applications.
[ Stay up to date on the latest virtualization developments with InfoWorld's Virtualization Report blog and newsletter. ]
Real-time is the most exciting new frontier in business intelligence, and virtualization will facilitate low-latency analytics more powerfully than traditional approaches. Database virtualization will enable real-time business intelligence through a policy-driven, latency-agile, distributed-caching memory grid that permeates an infrastructure at all levels.
As this new approach takes hold, it will provide a convergence architecture for diverse approaches to real-time business intelligence, such as trickle-feed extract transform load (ETL), changed-data capture (CDC), event-stream processing and data federation. Traditionally deployed as stovepipe infrastructures, these approaches will become alternative integration patterns in a virtualized information fabric for real-time business intelligence.
The convergence of real-time business-intelligence approaches onto a unified, in-memory, distributed-caching infrastructure may take more than a decade to come to fruition because of the immaturity of the technology; lack of multivendor standards; and spotty, fragmented implementation of its enabling technologies among today's business-intelligence and data-warehouse vendors. However, all signs point to its inevitability.
Case in point: Microsoft , though not necessarily the most visionary vendor of real-time solutions, has recently ramped up its support for real-time business intelligence in its SQL Server product platform. Even more important, it has begun to discuss plans to make in-memory distributed caching, often known as "information fabric," the centerpiece middleware approach of its evolving business-intelligence and data-warehouse strategy.
For starters, Microsoft recently released its long-awaited SQL Server 2008 to manufacturing. Among this release's many enhancements is a new CDC module and proactive caching in its online analytical processing (OLAP) engine. CDC is a best practice for traditional real-time business intelligence, because, by enabling continuous loading of database updates from transaction redo logs, it minimizes the performance impact on source platforms' transactional workloads. Proactive caching is an important capability in the front-end data mart because it speeds response on user queries against aggregate data.
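To make the trickle-feed idea concrete, here is a minimal, vendor-neutral sketch of a CDC apply loop (the types and function names are ours, not SQL Server's): changes are read in log order from the transaction log and only those past a recorded high-water mark are applied, so the source system never has to serve a full-table extract.

```python
# Illustrative CDC sketch (hypothetical names, not Microsoft's API):
# apply only the log records newer than the last-seen sequence number.
from dataclasses import dataclass
from typing import Iterable, Optional

@dataclass
class Change:
    lsn: int            # log sequence number; gives a total order
    op: str             # "insert", "update", or "delete"
    key: int
    row: Optional[dict] # latest row image; None for deletes

def apply_changes(warehouse: dict, log: Iterable[Change], last_lsn: int) -> int:
    """Apply every change past last_lsn; return the new high-water mark."""
    for change in sorted(log, key=lambda c: c.lsn):
        if change.lsn <= last_lsn:
            continue  # already applied on a previous poll
        if change.op == "delete":
            warehouse.pop(change.key, None)
        else:  # insert and update both upsert the latest row image
            warehouse[change.key] = change.row
        last_lsn = change.lsn
    return last_lsn

# One polling cycle: the warehouse sees only the delta, never a full scan.
warehouse: dict = {}
log = [Change(1, "insert", 7, {"qty": 2}), Change(2, "update", 7, {"qty": 5})]
mark = apply_changes(warehouse, log, last_lsn=0)
print(warehouse[7]["qty"], mark)  # 5 2
```

Because each poll ships only the delta since the last log sequence number, the transactional workload on the source platform stays largely untouched, which is the whole point of CDC as a real-time loading practice.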
Also, Microsoft recently went public with plans to develop next-generation, in-memory distributed-caching middleware code-named "Project Velocity." Though the vendor hasn't indicated when or how this new technology will find its way into shipping products, it's almost certain it will be integrated into future versions of SQL Server. With Project Velocity, Microsoft is playing a bit of competitor catch-up, considering that Oracle already has a well-developed in-memory, distributed-caching technology called Coherence, which it acquired more than a year ago from Tangosol. Likewise, pure plays such as GigaSpaces, Gemstone Systems, and ScaleOut Software have similar data-virtualization offerings.

Microsoft's (Likely) Road Map
Furthermore, Microsoft recently announced plans to acquire data-warehouse-appliance pure-play DATAllegro and to move that grid-enabled solution over to a pure Microsoft data-warehouse stack that includes SQL Server, its query optimization tools and data-integration middleware. Though Microsoft cannot discuss any road-map details until after the deal closes, it's highly likely it will leverage DATAllegro's sophisticated massively parallel processing, dynamic task-brokering and federated deployment features in future releases of its databases, including the on-demand version of SQL Server. In addition, it doesn't take much imagination to see a big role for in-memory distributed caching, à la Project Velocity in Microsoft's future road map for appliance-based business-intelligence/data-warehouse solutions. Going even further, it's not inconceivable that, while plugging SQL Server into DATAllegro's platform (and removing the current Ingres open source database), Microsoft may tweak the underlying storage engine to support more business-intelligence-optimized logical and physical schemas.
Microsoft, however, isn't saying much about its platform road map for real-time business-intelligence/data-warehousing, because it probably hasn't worked out a coherent plan that combines these diverse elements. To be fair, neither has Oracle -- or, indeed, any other business-intelligence/data-warehouse vendor that has strong real-time features or plans. No vendor in the business-intelligence/data-warehouse arena has defined a coherent road map yet that converges its diverse real-time middleware approaches into a unified in-memory, distributed-caching approach.
Likewise, no vendor has clearly spelled out its approach for supporting the full range of physical and logical data-persistence models across its real-time information fabrics. Nevertheless, it's quite clear that the business-intelligence/data-warehouse industry is moving toward a new paradigm wherein the optimal data-persistence model will be provisioned automatically to each node based on its deployment role -- and in which data will be written to whatever blend of virtualized memory and disk best suits applications' real-time requirements.
For example, dimensional and column-based approaches are optimized to the front-end OLAP tier of data marts, where they support high-performance queries against large, aggregate tables. By contrast, relational and row-based approaches are suited best to the mid-tier of enterprise data-warehouse hubs, where they facilitate the speedy administration of complex hierarchies across multiple subject-area domains. Other persistence approaches -- such as inverted indexing -- may be suited to back-end staging nodes, where they can support efficient ETL, profiling and storage of complex data types before they are loaded into enterprise data-warehouse hubs.
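A toy example illustrates why the column-based layout wins at the front-end OLAP tier (the data and layout here are purely illustrative, not any vendor's storage engine): summing one measure over a column store touches a single dense array, while a row store must walk every field of every record.

```python
# Row-oriented layout: each record's fields are stored together.
rows = [
    {"region": "east", "units": 10, "price": 2.0},
    {"region": "west", "units": 4,  "price": 3.5},
    {"region": "east", "units": 6,  "price": 2.0},
]

# Column-oriented layout: one list per attribute, same data pivoted.
cols = {
    "region": [r["region"] for r in rows],
    "units":  [r["units"] for r in rows],
    "price":  [r["price"] for r in rows],
}

# The row store drags every record (all fields) through the scan...
total_row = sum(r["units"] for r in rows)
# ...while the column store reads only the one column the query needs.
total_col = sum(cols["units"])
print(total_row, total_col)  # 20 20
```

On real hardware the difference is bandwidth: an aggregate over a column store reads only the bytes of the queried column, which is why dimensional and column-based persistence suits data marts serving large aggregate queries.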
For sure, all this virtualized data infrastructure will live in the "cloud," in a managed-service environment and within organizations' existing, premises-based business-intelligence/data-warehouse environments. It would be ridiculous, however, to imagine this evolution will take place overnight. Even if solution vendors suddenly converged on a common information-fabric framework -- which is highly doubtful -- enterprises have too much invested in their current data environments to justify migrating them to a virtualized architecture overnight.
Old data-warehouse platforms linger on generation after generation, solid and trusty, albeit increasingly crusty and musty. They won't get virtualized out of existence anytime soon, even as the new generation steals their oxygen. Old databases will expire only when someone migrates their precious data to a new environment, then physically pulls the plug, putting them out of their misery.

IBM Pumps $300M into Business Continuity Centers

IBM today announced that it is spending $300 million to expand its business continuity and disaster recovery business, adding 13 facilities around the world to address what it described as a surge in demand from businesses and governments.
The company said the investment is the largest of its kind in IBM's 40-year history in the business continuity and resiliency industry.
Stan Clanton, vice president of infrastructure at InfoUSA Services Group, said he's excited about the prospects of IBM expanding its overseas business continuity and resiliency services.
"Part of our direction is to expand internationally," he said. "The value of having an international partner will make it more convenient for me to deal with potential recoveries internationally."
InfoUSA is a $750 million direct-mail services and consumer database information company in Omaha. Clanton said he uses only IBM's recovery service for his mainframe environment and has an internal recovery architecture for InfoUSA's 1,400 servers and various storage arrays.
IBM said it will build new "Business Resilience Centers" in cities including Hong Kong, Beijing, Shanghai, Tokyo, Paris, London, Warsaw and New York, as well as Izmir, Turkey; Milan, Italy; and Cologne, Germany. They will open this year and will house IBM's latest remote data management and information-protection capabilities, including the storage, replication and recovery of data and business applications for the first time from a cloud-computing-based environment.
In addition, IBM said it is accelerating the build-out of its Information Protection Services business to deliver cloud-based computing services to support business continuity. Those services use technology gained from IBM's acquisition of Arsenal Digital Solutions Worldwide Inc. earlier this year and combine IBM hardware with storage management software in a fully configured, rack-mounted storage appliance known as a data-protection "vault."
IBM has more than 150 business-resilience centers worldwide.

Intel's Nehalem Chips Looking at Long Rollout

Can't wait to get your hands on a system running Intel Corp.'s upcoming Nehalem processors?
Well, you better settle in and put your feet up, because this is going to take a while.
At this week's Intel Developer Forum, the company gave out some more details about its upcoming chip family -- codenamed Nehalem. The first Nehalem chips, which will be quad-core server chips, are expected to ship this fall. After that, the rest of the Nehalem family -- desktop chips, dual-core, more quad-core and eight-core chips -- are slated to be released over the course of next year.
Jim McGregor, an analyst at In-Stat, said at IDF that Intel lightly touched on the fact that the Nehalem rollout will span four or five quarters.
"There's probably more complexity on the chip than they expected," said McGregor from the annual developers' forum, which is being held in San Francisco. "Generally, whenever you have an extended schedule, it usually means there's some challenges in the design or with the supporting chipsets. AMD learned that lesson with Barcelona. When you're trying to put four cores or more on a chip, you tend to run into some kind of trouble."
Intel has not disclosed any problems with Nehalem's design.
Company executives did show off the first 8-core Nehalem chip at the conference. And yesterday Intel said its six-core Dunnington processor will ship next month. Moving beyond the quad-core processors that to date have been the high-water mark in the semiconductor industry is a major step, and one that keeps Intel well ahead of rival Advanced Micro Devices Inc.
AMD, which is slated to ship its upcoming six-core Istanbul server processor in the second half of 2009, could use some breathing room in its one-upmanship battle with Intel.
Dan Olds, an analyst at Gabriel Consulting Group, said AMD will gladly take a long rollout from Intel if it means not having to deal with an 8-core chip in the next several months. He added that while a long rollout offers a little breathing room, it's not much, since AMD is still about a year behind Intel in multicore offerings.
Another Nehalem feature drawing some attention is the built-in Turbo Boost. The technology is designed to shut down unused cores so the remaining cores can be used more efficiently. This energy-saving technique has been used in other processors, according to McGregor, but it's a good addition here.

Microsoft Investigating Power Pack 1 File Conflict Errors

A number of Windows Home Server users on WGS and Microsoft's WHS forums are reporting a File Conflict error occurring on their WHS systems with PP1 installed.
When this occurs, it has proven difficult--and in some cases impossible--to fix the conflict by deleting the files identified as the cause. The error also seems to be affecting the backup database, with a "Backup service is not running" error being reported as well.
Could this be a bug introduced by PP1?
I have experienced these issues myself and found that a server reinstall has been the only way to "fix" it; however, the error reoccurs. The other option appears to be deleting the Backup database and starting over, which I've been reluctant to do as it currently holds more than three months of backups.
If you are experiencing these issues, Lara Jones of Microsoft's Windows Home Server team has asked users to log a bug report via Connect under the WHS beta program. This is the sticky post on Microsoft's Windows Home Server forums:
I would like to take this opportunity to point everyone towards the release notes for Windows Home Server Power Pack 1. This document provides helpful information regarding improvements in PP1 and known issues. In addition, WHS with PP1 is now more active in monitoring the health of your server, specifically storage. This may result in more messages from the server:
Continue At Source

Unappreciated New VMware Feature Boosts Compatibility

One of the best things about virtual infrastructures is their ability to minimize the inevitable differences among the servers in a typical server farm. Rather than having to buy X number of servers every quarter with identical configurations, component and firmware versions, virtualization allows data-center managers a little more freedom and a little more confidence that almost-identical hardware will perform almost identically.
There's the same amount of failure in "almost" compatible as there is in "not even close," though, so Intel, AMD and VMware have all been working on closing the almost gap.
VMware's most recent contribution is in the deservedly maligned ESX 3.5.0 Update 2, which managed to annoy a huge chunk of the VMware user base by mistakenly deciding its licenses were out of date.
(VMProfessional posted a surprisingly unbitter LOLcat on the bug that makes me think virtualization has really arrived, on the assumption that you're not really well known in your field until people can use tired Internet memes to make fun of you without having to explain either you or the meme.)
Update 2 also contained Enhanced VMotion Compatibility (EVC), which takes advantage of complementary features in recent Intel and AMD server chips that mask the differences between similar hardware. (Here's a link to a VMware white paper that describes EVC; it's a PDF, so you'll have to scroll to page 6.)
VMware announced EVC at last year's VMworld, with what appears to have been insufficient fanfare. It got little attention from the press or in VMware user blogs at the time, and has been discussed relatively little ever since. VMware slipped it into Update 2 with little or no additional notice, though plenty of users have been looking for it.
EVC uses Intel's Flex Migration and AMD's AMD-V Extended Migration to hide more advanced features of the newest chips and dumb down all the processors in a cluster to a single, lowest-common-denominator level. It does that by modifying the semantics of the CPUID instruction code so that neither the virtualization software nor the OS nor the applications will cause problems with a function call that's present on one physical server but not another.
The feature isn't eliminated, so there's no damage to the firmware or the servers; the VM software and apps just think the advanced feature isn't present, so they don't ask for it. Apps that are written specifically to take advantage of a particular feature can still get to it, if you set things up right, and put the app on the right server.
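Conceptually, the baseline EVC enforces is just the intersection of every host's feature set. The hypothetical sketch below (host names and feature labels are ours, not VMware's or Intel's) shows how masking a CPUID-style query down to that baseline keeps a migrating VM from ever seeing a feature it could lose after VMotion.

```python
# Hypothetical EVC-style masking: guests see only the features that
# EVERY host in the cluster supports (the lowest common denominator).
HOST_FEATURES = {
    "esx-01": {"sse2", "sse3", "ssse3", "sse4_1"},
    "esx-02": {"sse2", "sse3", "ssse3"},            # older stepping
    "esx-03": {"sse2", "sse3", "ssse3", "sse4_1"},
}

def cluster_baseline(hosts: dict) -> set:
    """Intersect all hosts' feature sets to get the cluster baseline."""
    feature_sets = iter(hosts.values())
    baseline = set(next(feature_sets))
    for fs in feature_sets:
        baseline &= fs
    return baseline

def masked_cpuid(host: str, baseline: set) -> set:
    """What a guest's CPUID-style query reports on this host: only the
    baseline, even though the silicon may support more."""
    return HOST_FEATURES[host] & baseline

baseline = cluster_baseline(HOST_FEATURES)
print(sorted(masked_cpuid("esx-01", baseline)))  # ['sse2', 'sse3', 'ssse3']
```

Note that esx-01's SSE4.1 support is still physically present; the mask only keeps guests from asking for it, which mirrors the point above that the hardware itself is untouched.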
Masking doesn't fix the potential for complications from almost-compatible chips, and doesn't eliminate the need to do a close comparison among the chips in the servers you are buying.
It emphasizes the need to compare minor feature enhancements in different version numbers of the processors and chipsets in your servers as well as the BIOS and other firmware, in fact.
So in that facet it's actually swapping one headache for another: it reduces the potential consequences of having two groups of servers that are almost compatible, while not really eliminating the due diligence you'd need to avoid the problem in the first place.
Assuming Update 2 believes your licenses are up to date, though, EVC can further narrow the distance between "almost" compatible and "really" compatible.
It doesn't eliminate that particular headache. But used correctly, it should reduce the number of Motrins involved in getting it fixed.

Nasser Hajloo
a Persian graphic designer, web designer, and web developer
