Moore's Law: Fast, Cheap Computing and What It Means for the Manager

Moore's Law, named for Intel co-founder Gordon Moore, describes the expected pace of advances in computing power over time: roughly speaking, chip performance per dollar doubles about every eighteen months to two years. In reality, it implies much more, with similar trends showing up in data storage and network capacity. Read this chapter and attempt the exercises to gain a broader understanding of the importance and costs associated with Information Systems.
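As a rough illustration (the calculation below is not from the chapter; it simply assumes the eighteen-month doubling period cited above), compounding makes the effect dramatic:

    # Illustrative sketch only: assumes performance per dollar doubles every
    # eighteen months, the rule of thumb used for Moore's Law in this chapter.
    DOUBLING_PERIOD_YEARS = 1.5

    def relative_performance(years: float) -> float:
        """How many times more computing power a dollar buys after `years`."""
        return 2 ** (years / DOUBLING_PERIOD_YEARS)

    for years in (3, 6, 10, 15):
        print(f"After {years:>2} years: about {relative_performance(years):,.0f}x per dollar")

    # After fifteen years the same dollar buys roughly a thousand times the
    # computing power, which is why managers can plan around capabilities that
    # look unaffordable or impossible today.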

The Death of Moore’s Law?

Buying Time

One way to buy time against the heat and power problems that threaten to end Moore's Law is with multicore microprocessors, made by putting two or more lower-power processor cores (think of a core as the calculating part of a microprocessor) on a single chip. Philip Emma, IBM's Manager of Systems Technology and Microarchitecture, offers an analogy. Think of the traditional fast, hot, single-core processors as a three-hundred-pound lineman, and a dual-core processor as two 160-pound guys. Says Emma, "A 300-pound lineman can generate a lot of power, but two 160-pound guys can do the same work with less overall effort." For many applications, multicore chips will outperform a single speedy chip, while running cooler and drawing less power. Multicore processors are now mainstream.

Today, most PCs and laptops sold have at least a two-core (dual-core) processor. The Microsoft Xbox 360 has three cores. The PlayStation 3 includes the so-called Cell processor, developed by Sony, IBM, and Toshiba, which runs nine cores. By 2010, Intel began shipping PC processors with eight cores, while AMD introduced a twelve-core chip. Intel has even demonstrated chips with upwards of fifty cores.

Multicore processors can run older software written for single-brain chips. But they usually do this by using only one core at a time. To reuse the metaphor above, this is like having one of our 160-pound workers lift away, while the other one stands around watching. Multicore operating systems can help achieve some performance gains. Versions of Windows or the Mac OS that are aware of multicore processors can assign one program to run on one core, while a second application is assigned to the next core. But in order to take full advantage of multicore chips, applications need to be rewritten to split up tasks so that smaller portions of a problem are executed simultaneously inside each core.
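To make that rewrite concrete, here is a minimal sketch (not from the chapter; the workload and chunk sizes are invented for illustration) of splitting one CPU-heavy job into pieces that the operating system can schedule onto separate cores, using Python's standard multiprocessing module:

    # Minimal sketch: divide one job into chunks so each chunk can run on its
    # own core. multiprocessing.Pool hands each chunk to a separate worker
    # process, which the operating system may schedule onto a different core.
    from multiprocessing import Pool

    def count_primes(bounds):
        """Count primes in the half-open range [start, stop) -- deliberately CPU-heavy."""
        start, stop = bounds
        count = 0
        for n in range(max(start, 2), stop):
            if all(n % d for d in range(2, int(n ** 0.5) + 1)):
                count += 1
        return count

    if __name__ == "__main__":
        # Split 0..200,000 into four chunks, one per worker process.
        chunks = [(i * 50_000, (i + 1) * 50_000) for i in range(4)]
        with Pool(processes=4) as pool:
            results = pool.map(count_primes, chunks)  # chunks run concurrently
        print(sum(results), "primes found")

A single-core program would work through the same four chunks one after another; on a four-core machine the rewritten version can finish in roughly a quarter of the time, which is the payoff that justifies the extra programming effort.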

Writing code for this "divide and conquer" approach is not trivial. In fact, developing software for multicore systems is described by Shahrokh Daijavad, software lead for next-generation computing systems at IBM, as "one of the hardest things you learn in computer science". Microsoft's chief research and strategy officer has called coding for these chips "the most conceptually different [change] in the history of modern computing". Despite this challenge, some of the most aggressive adopters of multicore chips have been video game console manufacturers. Video game applications are particularly well suited to multiple cores since, for example, one core might be used to render the background, another to draw objects, another for the "physics engine" that moves the objects around, and yet another to handle Internet communications for multiplayer games.
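That game-engine pattern is task parallelism: instead of splitting one calculation into chunks, each core takes responsibility for a different subsystem. A hedged structural sketch (the subsystem functions below are invented placeholders, not real engine code) might look like this:

    # Structural sketch of task parallelism: independent subsystems are
    # submitted as separate tasks each frame, so they can run on different
    # cores at the same time.
    from concurrent.futures import ProcessPoolExecutor

    def render_background(frame):   # draw scenery for this frame
        return f"background for frame {frame}"

    def run_physics(frame):         # move objects and detect collisions
        return f"physics state for frame {frame}"

    def handle_network(frame):      # exchange multiplayer updates
        return f"network messages for frame {frame}"

    if __name__ == "__main__":
        subsystems = (render_background, run_physics, handle_network)
        with ProcessPoolExecutor(max_workers=len(subsystems)) as pool:
            for frame in range(3):
                futures = [pool.submit(task, frame) for task in subsystems]
                print([f.result() for f in futures])  # wait for every subsystem

Real consoles do this with carefully tuned engine code rather than a generic worker pool, but the division of labor is the same: each core owns a slice of the per-frame work.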

Another approach that's breathing more life into Moore's Law moves chips from being paper-flat devices to built-up 3-D affairs. By building up as well as out, firms are radically boosting the speed and efficiency of chips. Intel has flipped upward the basic component of chips - the transistor. Transistors are the supertiny on-off switches in a chip that work collectively to calculate or store things in memory (a high-end microprocessor might include over two billion transistors). While you won't notice chips getting any thicker, Intel says that on the minuscule scale of modern chip manufacturing, the new designs will be 37 percent faster and half as power hungry as conventional chips.


Quantum Leaps, Chicken Feathers, and the Indium Gallium Arsenide Valley?

Think about it - the triple threat of size, heat, and power means that Moore's Law, perhaps the greatest economic gravy train in history, will likely come to a grinding halt in your lifetime. Multicore and 3-D transistors are here today, but what else is happening to help stave off the death of Moore's Law?

Every once in a while a material breakthrough comes along that improves chip performance. A few years back researchers discovered that replacing a chip's aluminum components with copper could increase speeds up to 30 percent, and Intel slipped exotic-sounding hafnium onto its silicon to improve power use. Now scientists are concentrating on improving the very semiconductor material that chips are made of. While the silicon used in chips is wonderfully abundant (it has pretty much the same chemistry found in sand), researchers are investigating other materials that might allow for chips with even tighter component densities. Researchers have demonstrated that chips made with supergeeky-sounding semiconductor materials such as indium gallium arsenide, germanium, and bismuth telluride can run faster and require less wattage than their silicon counterparts. Perhaps even more exotic (and downright bizarre), researchers at the University of Delaware have experimented with a faster-than-silicon material derived from chicken feathers! Hyperefficient chips of the future may also be made out of carbon nanotubes, once the technology to assemble the tiny structures becomes commercially viable.

Other designs move away from electricity over silicon altogether. Optical computing, where signals are sent via light rather than electricity, promises to be faster than conventional chips if lasers can be mass-produced in miniature (silicon laser experiments show promise). Others are experimenting with crafting computing components from biological material (think of a DNA-based storage device).

One yet-to-be-proven technology that could blow the lid off what's possible today is quantum computing. Conventional computing stores data as a combination of bits, where each bit is either a one or a zero. Quantum computers, leveraging principles of quantum physics, employ qubits that can be both one and zero at the same time. Add a bit to a conventional computer's memory and you've added exactly one more on/off switch. Add a qubit to a quantum computer and, because qubits can hold both values at once, the number of states the machine can work with simultaneously doubles, so its capacity grows exponentially as qubits are added. For comparison, consider that a conventional computer model of serotonin, a molecule vital to regulating the human central nervous system, would require 10^94 bytes of information. Unfortunately there's not enough matter in the universe to build a computer that big. But modeling a serotonin molecule using quantum computing would take just 424 qubits.
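As a back-of-the-envelope sketch (these figures are illustrative, not from the chapter), you can see the exponential gap by counting how many classical values it takes just to describe an n-qubit state:

    # Back-of-the-envelope sketch: an n-qubit register is described by 2**n
    # amplitudes, so the classical storage needed to track it grows
    # exponentially while the number of qubits grows only linearly.
    def amplitudes(n_qubits: int) -> int:
        """Number of amplitudes in an n-qubit state vector."""
        return 2 ** n_qubits

    for n in (10, 50, 100, 424):
        digits = len(str(amplitudes(n)))
        print(f"{n:>3} qubits -> roughly 10^{digits - 1} amplitudes")

    # Fifty qubits already correspond to about 10^15 amplitudes -- more than a
    # conventional machine can track individually -- which is why a few hundred
    # qubits could, in principle, model molecules no classical computer can.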

Some speculate that quantum computers could one day allow pharmaceutical companies to create hyperdetailed representations of the human body that reveal drug side effects before they're even tested on humans. Quantum computing might also accurately predict the weather months in advance or offer unbreakable computer security. Ever have trouble placing a name with a face? A quantum computer linked to a camera (in your sunglasses, for example) could recognize the faces of anyone you've met and give you a heads-up to their name and background. Opportunities abound. Of course, before quantum computing can be commercialized, researchers need to harness the freaky properties of quantum physics wherein your answer may reside in another universe, or could disappear if observed (Einstein himself referred to certain behaviors in quantum physics as "spooky action at a distance").

Pioneers in quantum computing include IBM, HP, NEC, and a Canadian start-up named D-Wave. Whether or when quantum computing will become a practical reality is still unknown, but the promise exists that while Moore's Law may run into limits imposed by Mother Nature, a new way of computing may blow past anything we can do with silicon, continuing to make possible the once impossible.