Even if Moore's Law is "running out," there's still plenty of room at the bottom

A very good piece by Tom Simonite in the MIT Technology Review looks at the implications of Intel's announcement that it will slow the rate at which it increases the density of transistors in microprocessors.


In one important way, this is an "end of Moore's Law," which predicted that the number of transistors on a chip would double roughly every two years, making computation exponentially cheaper for the foreseeable future. The timescales are getting longer, and may get longer still.

Some pin their hopes on fundamental breakthroughs that would restore the tempo of Moore's Law, but even in the absence of such a breakthrough, there are still lots of new things that are both plausible and exciting and that don't require fundamental, unpredictable scientific discoveries.


One such advance is in fundamental computer science, specifically in "parallelization." Some kinds of programs need to be run strictly in order: you compute an answer to a problem, then compute something else from that answer, and then compute something else after that. Other programs can run in parallel: you can compute lots of things all at once, on as many computers as you can find, without having to wait for any one computation to finish before the next one can start. Most problems fall somewhere in between.
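To make the distinction concrete, here's a minimal Python sketch (the workloads are invented stand-ins for real computation):

    import concurrent.futures

    def step(x):
        # Stand-in for an expensive computation.
        return x * x

    if __name__ == "__main__":
        # Serial: each step consumes the previous step's answer, so the
        # computations must run one after another.
        answer = 2
        for _ in range(4):
            answer = step(answer)  # step N can't start before step N-1 ends

        # Parallel: the inputs are independent, so every call can run
        # at once, across as many cores (or machines) as you can find.
        with concurrent.futures.ProcessPoolExecutor() as pool:
            results = list(pool.map(step, range(1000)))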


Parallelizing has been hugely beneficial to modern computing — but it's also bent the kinds of computing we do. The existence of massively parallel processors like graphics cards — cheap and ubiquitous — has driven computer scientists to work on problems whose solutions seem parallelizable at the outset, with huge dividends in applications like machine learning and even Bitcoin mining.
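As a toy illustration of the kind of work that maps well onto such hardware (NumPy on a CPU standing in here for a GPU's thousands of parallel units; the numbers are arbitrary):

    import numpy as np

    # A million independent multiply-adds: each output element depends
    # only on its own inputs, so nothing forces any ordering and the
    # hardware is free to compute them all simultaneously. This is the
    # shape of the work inside machine learning's matrix math.
    a = np.random.rand(1_000_000)
    b = np.random.rand(1_000_000)
    out = 2.0 * a + b  # one data-parallel expression: no loop, no waiting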


But there's lots of work to be done here. Programs that appeared for years to be stubbornly "serial" (that is, that needed to be run in order) turned out to have parallelizable components whose cost (the redundant work, wait-time, or backtracking caused by solving different parts of the problem on different processors) was less than the benefit realized by cheap parallelizing. As Moore's Law slows, expect manufacturers and computer scientists to bend their efforts to this kind of work in earnest, finding all kinds of new reservoirs of performance improvement that don't require increases in processor speed. It's not that scientists didn't try to improve parallel computing until now, but the promise of a faster processor around the corner made the project less urgent.
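A standard back-of-the-envelope way to see that cost/benefit calculation is Amdahl's law with a crude overhead term bolted on (an illustrative model with made-up numbers, not anything from the article):

    def speedup(parallel_fraction, workers, overhead=0.0):
        """Amdahl's law plus overhead: the serial part runs as-is, the
        parallel part is split across workers, and 'overhead' models the
        redundant work, wait-time, and coordination cost."""
        serial = 1.0 - parallel_fraction
        return 1.0 / (serial + parallel_fraction / workers + overhead)

    # A program that looked "stubbornly serial" but is 70% parallelizable:
    print(speedup(0.70, workers=8))                 # ~2.6x with no overhead
    print(speedup(0.70, workers=8, overhead=0.05))  # ~2.3x, still a win
    print(speedup(0.70, workers=8, overhead=0.40))  # ~1.3x, hardly worth it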

Another exciting possibility is improvements in processor types that haven't kept up with Moore's Law. The article mentions mobile processors, which are an obvious candidate for generating big changes from even modest improvements. I'm more interested in FPGAs (field-programmable gate arrays), the reconfigurable chips that can be programmed by users and are often used in prototyping.


Ben Laurie, a respected cryptographer, once told me about his plan to design a computer that handled all its I/O through FPGAs, which would encrypt all the user's input before passing it on to the main computer to work on, then decrypt the output from that computer to present it to the user. The main computer in the middle would never see the user's data in the clear; the only devices that emitted or received cleartext would be FPGA-based, and could be independently verified with a multimeter that the user could physically apply to the chip to find out what it was doing.
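Here's a toy software model of that dataflow, just to make the trust boundary visible (my own sketch using the Python cryptography package's Fernet; Laurie's proposal is about hardware, and all the names here are invented):

    from cryptography.fernet import Fernet

    # The FPGA I/O shim holds the key; the main computer never does.
    io_shim = Fernet(Fernet.generate_key())

    def fpga_input(user_keystrokes: bytes) -> bytes:
        # Input path: encrypt before anything reaches the main computer.
        return io_shim.encrypt(user_keystrokes)

    def main_computer(ciphertext: bytes) -> bytes:
        # The big, untrusted CPU only ever stores, routes, or forwards
        # ciphertext; it has no key, so it can't read the user's data.
        return ciphertext

    def fpga_output(ciphertext: bytes) -> bytes:
        # Output path: decrypt only at the verified FPGA, for the display.
        return io_shim.decrypt(ciphertext)

    assert fpga_output(main_computer(fpga_input(b"hello"))) == b"hello"

In this toy the middle computer can only store or forward the ciphertext; doing real computation on encrypted data would take something like homomorphic encryption, which is beyond the sketch. The point is just that cleartext exists only at the auditable edges.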


Laurie proposed this as a solution to the problem of trusting the chips themselves — a problem whose urgency mounts with each new generation of chips. FPGAs are already powerful, but nowhere near as powerful as conventional chips. There's no reason to think that the factors that are slowing Moore's Law in traditional chip fabrication will also affect FPGAs, and as these chips become more powerful, the realm of computing we can do in auditable, verifiable hardware increases.


From Simonite's article:

Wenisch says companies such as Intel, which dominates the server chip market, and their largest customers will have to get creative. Alternative ways to get more computing power include working harder to improve the design of chips and making chips specialized to accelerate particular crucial algorithms.

Strong demand for silicon tuned for algebra that's crucial to a powerful machine-learning technique called deep learning seems inevitable, for example. Graphics chip company Nvidia and several startups are already moving in that direction (see "A $2 Billion Chip to Accelerate Artificial Intelligence").

Microsoft and Intel are also working on the idea of running some code on reconfigurable chips called FPGAs for greater efficiency (see "Microsoft Says Reprogrammable Chips Will Make AI Smarter"). Intel spent nearly $17 billion to acquire leading FPGA manufacturer Altera last year and is adapting its technology to data centers.


Moore's Law Is Dead. Now What?

[Tom Simonite/MIT Technology Review]


(via Beyond the Beyond)


(Image: Altera StratixIVGX FPGA, Olmaltr, CC-BY)