Thursday 4 September 2008

Intel Developer Forum Introduces Your Next Mac

First Apple drops "Computer" from its name to expand its focus to music, phones, and set-top boxes, then Intel devotes six keynote-speech hours at last week's Intel Developer Forum to consumer electronics, Internet TV, futuristic human-machine interaction, and 3-D movies. There's a paradigm shift underway, and Apple and its primary chip supplier are shifting right along with it.
But make no mistake. Even though Intel's message to the 6,000-plus international übergeeks that filled San Francisco's Moscone Center West was clearly that it planned to move into consumer electronics in a big way, the company still had plenty to say about the nuts and bolts that hold together the entire world of computing. And much of it figures to influence the future of your Mac.
Of course, nobody would say as much at last week's conference. Apple is, as ever, tight-lipped about future product plans, and Intel's not about to prematurely spill any beans. Still, that doesn't mean we can't take a look at what Intel talked about at its developer forum and consider what may or may not find its way into your next desktop or laptop.
Your next Mac's microprocessor
Not only is the future going to be centered on both consumer electronics and the Internet, it'll also be parallel-processed, thanks to the introduction later this year of a new multi-core, multi-threading microprocessor architecture that Intel's marketing department now calls Core i7, but which the geeks at IDF still referred to by its codename, Nehalem. Interestingly, all the printed schedules, hand-outs, and plasma-screen announcements at IDF identified this breakthrough architecture as Nehalem and not Core i7--perhaps there's a broken e-mail link between Intel marketing and engineering. Being a wannabe geek myself, I'll use the term Nehalem.
Nehalem's general outline has been known for some time. Rumors began to surface late last year, then Intel released a white paper covering its main attributes this April. With the proverbial cat already out of the proverbial bag, Intel used this year's IDF to elaborate on a few of Nehalem's capabilities and update its delivery timeline: late 2008 for high-end desktops and servers and by the third quarter of 2009 for mainstream desktop and mobile platforms.
Before I dig into Nehalem's goodness, a bit of clarification: The term Nehalem refers to a new microprocessor architecture--the chip's inner workings--and not to a specific microprocessor itself. Intel will release many differently configured Nehalem microprocessors during the architecture's lifetime; the configuration discussed at IDF was designed for high-end desktops (HEDTs) and servers. Each member of the upcoming Nehalem line will have identical cores (where the actual number crunching gets done), thus making it far easier for software developers to standardize their development efforts. The differences will be in the number of cores, cache sizes, graphics capabilities, and the like.
Nehalem microprocessors will be built using the same chip-making technology, identified by its process, as are the microprocessors in today's Macs. Processes are defined by their transistor-to-transistor distances; the current process is 45 nanometers (nm). That may sound insanely small--Oprah's hair is about 3,000 times thicker, and Paris Hilton's (unless she dyes it) about 1,000--but since processes shrank to below 130nm, microprocessors' transistors have leaked power like a sieve. Power leakage is bad: it wastes energy and generates heat. Fortunately, Nehalem uses the same leakage-busting silicon technology, dubbed Power Gate, as does its 45nm older brother, Penryn. (The Penryn chips currently power all Mac laptops except for the MacBook Air, but expect Apple's lightweight laptop to jump on the bandwagon soon, since Intel announced a mobile form-factor Penryn processor at IDF.)
Nehalem, however, introduces new on-chip power-management circuitry called the Power Control Unit (PCU), which watches the processing areas of the chip in fine detail, turning power on and off to sections of it as needed. Crunching data? You get power. Lollygagging? No soup for you! The extent of the PCU's capabilities can be deduced from the fact that it contains a full one million transistors--compare that to the 29,000 transistors in the Intel 8086 introduced 30 years ago. What's more, the power levels of Nehalem's processing cores and its per-core data and instruction caches are decoupled, since caches need higher power levels to keep their contents error-free. And when the cores heat up above spec, the PCU can step each core's power down in smaller increments than Penryn chips can, making power corrections less drastic.
Another marquee Nehalem feature is Turbo Mode. Complex in execution, Turbo Mode is simple in concept: Say that a Nehalem processor has four processing cores, as the first ones to be introduced later this year will. And say that the application it's running is making use of only two of those cores. Not only are the other two cores standing idle, the power they would otherwise be using is going to waste. With the help of the aforementioned PCU, Nehalem's Turbo Mode technology senses that there's power to spare and boosts the clock rates of the cores that are hard at work. Those two cores speed up and work faster, but the total amount of power that the microprocessor as a whole consumes--and the heat it generates--remains the same as if all four cores were active. Turbo Mode will therefore improve the performance of apps that haven't been efficiently optimized for multicore processors--and, sadly, there are far too many of those littering the Mac ecosystem.
Turbo Mode can adjust to any number of cores being active or inactive at any given time, and it switches core speeds without wasting even a single clock cycle to pull off that feat. In the first Nehalem processors, the uptick in clock rate will be 133MHz per inactive core. Oh, and speaking of clock rates, hobbyists will be happy to learn that overclocking protection has been eliminated in Nehalem--may a thousand case mods bloom!
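The boost logic is easy to model in miniature. Here's a minimal Python sketch, using the 133MHz-per-idle-core figure cited at IDF and a purely hypothetical base clock--the real PCU, of course, does all of this in hardware, with live power and thermal feedback:

```python
# Toy model of Nehalem's Turbo Mode: each busy core gets a clock-rate
# uptick for every core that sits idle. The 133MHz step comes from IDF;
# the base clock below is hypothetical.

TOTAL_CORES = 4
BASE_CLOCK_MHZ = 2660   # hypothetical base clock
STEP_MHZ = 133          # uptick per inactive core, per IDF

def turbo_clock(active_cores: int) -> int:
    """Clock rate, in MHz, for each active core."""
    if not 1 <= active_cores <= TOTAL_CORES:
        raise ValueError("active_cores must be between 1 and TOTAL_CORES")
    idle_cores = TOTAL_CORES - active_cores
    return BASE_CLOCK_MHZ + idle_cores * STEP_MHZ

for n in range(1, TOTAL_CORES + 1):
    print(f"{n} active core(s): {turbo_clock(n)}MHz each")
```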
Another radical new feature of the Nehalem architecture is that its memory controller--the circuitry that moves data in and out of RAM--is now included on the microprocessor itself, and not in a separate memory controller chip. While you're to be forgiven for thinking "Who cares?", remember that in previous designs, data flying back and forth to and from memory had to share space on the front side bus (FSB) with traffic from hard drives, graphics cards, USB devices, and so on. Now it has its own private channel--and it's a fast one: over 33GB per second. Your data will get to each core faster, so your Mac will process it faster.
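If you're wondering where a number like that comes from, it's straightforward arithmetic. Here's a back-of-the-envelope sketch assuming three channels of DDR3-1333 memory--my assumption, since Intel didn't tie the quoted figure to a specific DIMM speed:

```python
# Rough peak-bandwidth arithmetic for an on-die, triple-channel memory
# controller. DDR3-1333 is an assumed speed grade.
channels = 3
transfers_per_second = 1333e6   # DDR3-1333: 1,333 million transfers/sec
bytes_per_transfer = 8          # each channel is 64 bits wide

peak = channels * transfers_per_second * bytes_per_transfer / 1e9
print(f"Theoretical peak: {peak:.0f}GB/sec")   # ~32GB/sec, near Intel's figure
```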
Some future Nehalem processors will also have a graphics controller on-chip. Don't expect blazing performance from those controllers, though. Instead, their advantage will be in lower chip counts and lower power requirements. Think cheaper laptop systems with longer battery life. Future Nehalem-based systems will also allow you to choose either the on-chip graphics controller, useful when your laptop is operating on battery power, or a discrete, third-party graphics controller, useful when you need top graphics performance.
Nehalem will eschew the FB-DIMMs in current Macs--it appears that FB-DIMMs are following floppies, LocalTalk, and Zip drives to obsolescenceville. Nehalem will support DDR-3 DIMMs, with three channels of DDR-3 per socket and up to three DIMMs per channel. DDR-3 uses less power than does DDR-2, and requires no buffer power as do DDR-2 FB-DIMMs. There seems to be a theme here: faster, less power required. The performance of Nehalem microprocessors will be highest when they're coupled with matched pairs of DIMMs, although unmatched pairs will still work; the HEDT version discussed at IDF will support 24GB of memory.
Now that memory traffic is no longer flowing over the FSB, Intel decided to dump it and replace it with what it calls the QuickPath Interconnect (QPI). I could talk for an hour about the glories of this fine technology--Intel's Bob Maddox, who presented the QPI session, certainly had no difficulty in doing so--but I'll simply sum up by saying that QPI is a new way for processors to talk with each other and with the rest of the computer. And QPI is fast. Very fast: around 25.6GB per second--that's more than twice the speed of the Mac Pro's 1600MHz FSB. Bottom line: You guessed it--a Nehalem-based Mac will be fast. Very fast.
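For the gearheads, that 25.6GB-per-second figure falls right out of the link parameters. A quick sketch, assuming the top-speed 6.4 gigatransfer-per-second QPI link (slower variants will also exist):

```python
# Where QPI's 25.6GB/sec headline number comes from: 2 bytes per transfer
# in each direction, with traffic flowing both ways at once.
gigatransfers_per_second = 6.4
bytes_per_transfer = 2   # 16 data bits per direction
directions = 2           # QPI links are bidirectional

aggregate = gigatransfers_per_second * bytes_per_transfer * directions
print(f"Aggregate QPI bandwidth: {aggregate}GB/sec")   # 25.6GB/sec
```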
Your next Mac's hard drive
Well, that intro may be a bit misleading: Your Mac's next hard drive may not be a hard drive at all, but instead a chunk of silicon called a solid-state drive (SSD). At IDF, Intel announced its second generation of SSDs, and this time it appears that it's poised to move these rugged, power-miserly, low-heat, silent, highly reliable, and fast storage devices into the mainstream--so much so, in fact, that the company named one of its two new lines of SSDs "Mainstream."
The new Intel SSDs are all SATA-based, so incorporating them into existing Macs would be a simple matter of plugging them in and watching them go. Current operating systems such as Mac OS X and that other one from Microsoft won't require any new commands to use SSDs, although both operating systems would benefit from optimization to remove some commands (such as ones instructing the system to wait for a hard drive to spin up) that would unnecessarily cramp SSD performance.
Intel's new SSDs come in two flavors: Mainstream and Extreme. The former are designed for people like you and me: users of laptops and desktops; the latter are designed for high-end, hard-working servers in data centers. There are two Mainstream form factors, 1.8-inch (X18-M) and 2.5-inch (X25-M), but the Extreme units are limited to 2.5-inch (X25-E) models. The SSDs will start small, with 80GB Mainstream models shipping in the next 30 days and 160GB units appearing in the first three months of 2009. The Extreme SSDs will be even smaller, starting at 32GB in 90 days and doubling to 64GB in early 2009.
The specs for the Mainstream SSDs are impressive: up to 250MB/sec read performance and 70MB/sec write, a 1.2-million-hour average life (mean time between failures, or MTBF), and a minuscule 150mW power draw at a typical workload. The Extreme SSDs are even more impressive, with up to 250MB/sec read and 170MB/sec write, and a 2-million-hour MTBF.
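To put those throughput numbers in human terms, here's a quick calculation of how long it would take to stream a hypothetical 80GB Mainstream drive from end to end at the quoted peak rates--best-case sequential transfers, so real-world workloads will differ:

```python
# Time to read or write a hypothetical 80GB Mainstream SSD end to end
# at Intel's quoted peak rates (best-case sequential transfers).
capacity_mb = 80 * 1000
read_mb_per_sec, write_mb_per_sec = 250, 70

print(f"Full read:  {capacity_mb / read_mb_per_sec / 60:.1f} minutes")   # ~5.3
print(f"Full write: {capacity_mb / write_mb_per_sec / 60:.1f} minutes")  # ~19.0
```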
Performance appears to be equally impressive. One Intel demo showed a laptop with a Mainstream SSD running a suite of straightforward tasks between four and five times as fast as an identically configured laptop with a standard 5,400-rpm hard drive. Intel also said that although final tuning has not yet been completed, in its labs the battery life of an SSD-equipped laptop is over a half-hour longer than that of an identical hard-drive-equipped sibling.
Mainstream SSDs are produced using a 50nm multi-level cell (MLC) technology, while Extreme SSDs use 50nm single-level cells (SLC). SLC not only has faster write rates than MLC, but it's also more robust: SLC SSDs are projected to have 10 times the life of the MLC units--which are no slouches themselves, having a claimed ability to write 100GB per day for five years. Do you write that much data every day? I don't--our SSDs should therefore last even longer than projected.
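In case you're wondering what 100GB per day for five years adds up to, the arithmetic is quick:

```python
# Total write volume implied by the claimed MLC endurance:
# 100GB written per day, every day, for five years.
gb_per_day = 100
total_gb = gb_per_day * 365 * 5
print(f"{total_gb:,}GB, or about {total_gb / 1000:.1f}TB")   # 182,500GB (~182.5TB)
```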
The question, of course, remains: How much will SSDs cost? In answer to that question, an Intel rep told us to wait until the first SSDs ship in 30 days. One tiny bit of pricing information did slip, however, during a demonstration of a Mainstream SSD beating the pants off a high-speed RAID 0 setup. The RAID contained two 300GB Western Digital Velociraptor drives spinning at 10,000 rpm; the capacity of the SSD was not given, but considering that the 160GB Mainstream SSDs aren't even scheduled for sampling until the end of this year, 80GB might be a reasonable guess. During the demo, an Intel rep mentioned that the two storage systems were "about equal" in pricing--and considering that 300GB Velociraptors retail for $300 each, it doesn't appear that SSDs are going to be cheap when first released.
But Intel is deeply committed to SSDs, and projects that by 2010 the market will be filled with "billions of gigabytes" of SSD drives. With that amount of product appearing in desktops and servers worldwide, prices are certain to drop dramatically. For example, remember that Apple's first LCD display, the 15-inch Studio Display, cost a cool $2,000 when it was released in 1998. Today you can't even find 15-inch LCD displays, and 19-inch models start at under $150. Technology marches on; prices march downward.
Your next Mac's wireless connection
In today's wacky wireless world there's a welter of acronyms battling for a spot on your next laptop's logic board or ExpressCard slot. The hottest and heaviest action is in long-range wireless broadband, technologies that'll give you Wi-Fi or better speeds even when you're many miles from a transmitter. When this technology becomes ubiquitous in the next few years, you can bet your 'Book that Apple will follow Intel's lead, seeing as how Intel is planning to release its first long-range wireless broadband chipsets in the next few months.
Of the competing long-range acronyms, Intel is putting its money on WiMAX. However, during the IDF's WiMAX session the presenter--an Intel engineer with the highly impressive name of Tolis Papathanassiou--admitted that its main competitor, LTE, has a lot of things going for it as well.
WiMAX (Worldwide Interoperability for Microwave Access) is the marketing-friendly name for the IEEE 802.16 standard. WiMAX has been around since 2004, but it was in late 2005, when the 802.16e version was introduced, that it began to be taken seriously as a mobile competitor--it's the version now in wide use in some areas of the world (notably, of all places, in Pakistan). This version is properly known as Mobile WiMAX, but you may also see it called WiMAX Mobile or (and I'm not making this up) WiBro.
WiMAX 1.0, as the current version is called, maxes out at 60-plus Mbps in ideal conditions--"ideal conditions" in this case meaning when you and your Mac are substantially closer to a WiMAX transmission tower than the standard's maximum 50km (31-mile) range. That's a comfortable broadband speed, but WiMAX is set to take off in the next couple of years, with version 1.5 (802.16e Rev2) reaching 125-plus Mbps in late 2009 and version 2.0 (802.16m) exceeding 300 Mbps in late 2010 or early 2011.
WiMAX--and, for that matter, its prime competitor, LTE (Long-Term Evolution)--will reach these speeds using a technology with the mind-numbing acronym of OFDMA + MIMO (Orthogonal Frequency-Division Multiple Access + Multiple-Input and Multiple-Output). Fear not: I won't delve into a gearheaded explanation of OFDMA + MIMO (today, at least...); just know that this wireless system breaks a signal up into multiple parts and then sends them into the ether over multiple transmitters and antennas. If you're studying specs for a wireless-router purchase in a year or two, look for that string of magic letters.
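To make the "multiple parts, multiple paths" idea concrete, here's a toy Python sketch that deals one payload out across several parallel streams and reassembles it on the other end. Real OFDMA subcarrier mapping and MIMO coding are vastly more sophisticated--this shows only the parallelism:

```python
# Toy illustration of splitting one signal across multiple transmit paths,
# as OFDMA + MIMO systems do (in a vastly more sophisticated way).

def split(payload: bytes, streams: int) -> list:
    """Deal the payload out round-robin across `streams` sub-streams."""
    return [payload[i::streams] for i in range(streams)]

def reassemble(parts: list) -> bytes:
    """Interleave the sub-streams back into the original payload."""
    out = bytearray()
    for i in range(max(len(p) for p in parts)):
        for part in parts:
            if i < len(part):
                out.append(part[i])
    return bytes(out)

message = b"WiMAX and LTE both build on OFDMA + MIMO"
parts = split(message, 4)            # four parallel "transmit chains"
assert reassemble(parts) == message  # the receiver recovers the original
```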
What Papathanassiou repeatedly emphasized about WiMAX and LTE is that because they're based on OFDMA + MIMO (and, for that matter, IP-based at their cores), they're "revolutionary, not evolutionary." Meaning that although current competing wireless systems such as EVDO (Evolution-Data Optimized) and HSPA (High-Speed Packet Access) and all their various and sundry flavors may currently be as fast as WiMAX and LTE, those weaklings are mere evolutionary steps up from mobile-phone technology. WiMAX and LTE, on the other hand, are revolutionary, built from the ground up as wireless data-broadband technologies. Both scale better than their ex-phone competitors, according to Papathanassiou, so both will provide faster, more robust performance in the future.
And it's the near future in which Papathanassiou claims WiMAX has LTE beaten. As noted above, WiMAX is scheduled to reach 300-plus Mbps in late 2010; LTE should reach that pinnacle a year or two later. Will this head start give WiMAX an uncatchable lead? Intel's market forecasters don't think so; they claim that by 2015, WiMAX and LTE should have comparable worldwide subscriber bases of approximately 100 million each.
So why is Intel banking so heavily on WiMAX and not hedging its bets by following both paths? After all, later this year Intel is scheduled to release both logic-board chips and add-in cards that support both Wi-Fi and WiMAX, even though Papathanassiou admitted that "[LTE] is better in some aspects than WiMAX and worse in other aspects."
I was about to ask Papathanassiou exactly which aspects he was referring to when our session ended and I was shunted outside to join the throngs of hungry Intelophiles swarming around the free Mediterranean chicken wraps and cans of ice cold Mountain Dew.
But wait, there's more!
Pat Gelsinger, Intel's senior vice president and general manager of the Digital Enterprise Group, shows off wafers during his presentation at the Intel Developer Forum.
If you've read this far, you may understandably fear that I'm going to recount every moment of the 170-plus hours of technical instruction provided at IDF. Fear not--it's time, instead, to wrap up a few details and hint towards future in-depth articles.
Rather than dig deep into each and every one of the following technologies, I'll just give you a quick peek. If you want to learn more, either search Intel.com or drop a note into the comments below.
• Larrabee: The session on Intel's upcoming multi-core cross between a traditional multipurpose microprocessor and a hard-wired GPU (Graphics Processing Unit) was the only one I saw that was turning attendees away from a packed auditorium 15 minutes before it opened--and it had to be repeated later in the Forum.
First off, know that Larrabee, like Nehalem, is an architecture, not a chip; when they're released in 2009 or 2010, Larrabee chips will each have their own individual names. They will also each be multi-core, although those cores will be simpler than those in a traditional microprocessor. How many cores? Intel's not saying--but the test-result slides that ex-ATI-and-now-Intel engineer Larry Seiler projected in the crowded session room included results of tests that used up to 64 cores.
Like GPU architectures, Larrabee is designed for throughput-oriented workloads such as graphics and media, and not for general-purpose computing. Unlike GPUs, Larrabee chips will be highly programmable using the familiar IA (Intel Architecture) command set that has been the basis of PC software since the introduction of the aforementioned 8086 processor. By programming Larrabee's multiple cores with this tried-and-true command set, developers will be able to tailor their graphics code to exactly what's needed at any particular moment, rather than watch it get trapped in hardwired on-chip routines that don't apply to that particular image-rendering task. As Seiler put it, "The more complex the [graphics task], the better Larrabee does."
So the answer to the question one attendee asked--"Is Larrabee a CPU or a GPU?"--is yes. Think floor wax and dessert topping. Yum.
One final note on Larrabee: Seiler specifically pointed out that its highly parallel architecture will greatly benefit the OpenCL language that Apple plans to release next year in Mac OS X 10.6, aka Snow Leopard. Hang on to your hats, gamers.
• Mobile Computing Enhancements: In the next fiscal quarter or two, the sales of full-powered notebook computers will pass those of desktop computers for the first time in history. I find it necessary to include the qualifier "full-powered" because Intel also discussed another, lower-powered class of laptops cutely named netbooks. These smaller, lighter, and less-capable units will be powered by Intel's Atom processor and its successors, and will--according to Intel--sell in the millions as either entry-level units (think developing countries), Junior's first device, or a traveling exec's lightweight companion. Apple, of course, is a prime contender for the top of the elegant-netbook heap.
With all these portable units becoming most users' primary or secondary-but-still-important computers, security is becoming a matter of greater concern. During his "Where Will 'On-the-Go' Go?" keynote, Dadi Perlmutter demonstrated a security system that, when notified that your laptop has been stolen, will remotely encrypt the files on that laptop's drive, take a photo of the miscreant with the laptop's built-in webcam (or iSight, of course), track the location of the stolen laptop by means of its built-in GPS, and then allow you to decrypt the files after your laptop has been recovered and returned to you. All Apple needs to do is add a GPS chip to its 'Books and this peace of mind can be yours.
And in conclusion...
From the proverbial 30,000-foot view, last week's IDF presented a future in which consumer electronics, computing, and the Internet meld into a human/machine interface with which you will, to quote Pat Gelsinger of Intel's Digital Enterprise Group, interact "24/7 in every modality of your life." Down in the tech trenches, however, the engineers who will make this possible are wrestling with the complexities of Nehalem, SSDs, WiMAX, Larrabee, and more--much, much more.
Whether or not you actually want 24/7 Internet involvement is, of course, an entirely separate question. But make no mistake: the opportunity to link up big-time, all the time, is coming; the Internet will be in your car, your keychain, your home security system, and ... oh yes ... in your Mac.
One side effect of this Internetization of everything, as Gelsinger pointed out, is that we're rapidly reaching the limit of the number of Internet addresses that are possible using the current IPv4 Internet-addressing system; it'll max out at a paltry 4,294,967,296 addresses. He suggests that the world get off its collective cyber-duff and rapidly embrace the more-powerful IPv6 system, which can manage a cool 340,282,366,920,938,463,463,374,607,431,768,211,456 addresses.
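For the curious, both of those counts are simple powers of two--IPv4 addresses are 32 bits wide, IPv6 addresses 128--and Python's arbitrary-precision integers will happily confirm the math:

```python
# IPv4 uses 32-bit addresses; IPv6 uses 128-bit addresses.
print(f"IPv4: {2 ** 32:,} addresses")    # 4,294,967,296
print(f"IPv6: {2 ** 128:,} addresses")   # 340,282,366,920,938,463,...
```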
That should be enough to hold us for a while.
Mac OS X, by the way, is already IPv6-capable. Ah, Apple ... always one step ahead of the pack.
Rik Myslewski has been writing about the Mac since 1989. He has been editor in chief of MacAddict (now MacLife), executive editor of MacUser and director of MacUser Labs, and executive producer of Macworld Live. His blog can be found on Myslewski.com.
Reference: http://www.pcworld.com/article/150398/.html?tk=rss_news
