The bubbles in VR, cryptocurrency and machine learning are all part of the parallel computing bubble

Yesterday's column by John Naughton in the Observer revisited Nathan Myhrvold's 1997 prediction that when Moore's Law runs out — that is, when processors stop doubling in speed every 18 months through an unbroken string of fundamental breakthroughs — programmers would have to return to the old disciplines of writing incredibly efficient code whose main consideration was the limits of the computer it runs on.

I'd encountered this idea several times over the years, whenever it seemed that Moore's Law was petering out, and it reminded me of a prediction I'd made 15 years ago: that as computers ceased to get faster, they would continue to get wider — that is, that the price of existing processors would continue to drop, even if the speed gains petered out — and that this would lead programmers towards an instinctual preference for the kinds of problems that could be solved in parallel (where the computing could be done on several processors at once, because each phase of the solution was independent of the others) and an instinctual aversion to problems that had to be solved in serial (where each phase of the solution took the output of the previous phase as its input, meaning all the steps had to be solved in order).
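
To make the distinction concrete, here's a minimal Python sketch, with a workload invented purely for illustration: the parallel job can be farmed out to as many cheap processors as you care to buy, while the serial one can only go as fast as a single processor will carry it.

```python
# A minimal sketch of the parallel/serial distinction, using only the
# standard library. The workload (summing squares over chunks of numbers)
# is invented for illustration.
from multiprocessing import Pool


def sum_of_squares(chunk):
    # Each chunk is independent of the others, so chunks can be handed
    # to as many processors as you can afford.
    return sum(n * n for n in chunk)


def parallel_total(chunks):
    # "Wider" computing: more (cheap) processors finish the job sooner.
    with Pool() as pool:
        return sum(pool.map(sum_of_squares, chunks))


def serial_chain(seed, steps):
    # A serial problem: each step takes the previous step's output as its
    # input, so extra processors can't help -- only a faster one can.
    value = seed
    for _ in range(steps):
        value = (value * value + 1) % 2147483647
    return value


if __name__ == "__main__":
    chunks = [range(i * 100_000, (i + 1) * 100_000) for i in range(8)]
    print(parallel_total(chunks))
    print(serial_chain(2, 100_000))
```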

That's because making existing processors more cheaply only requires minor, incremental improvements in manufacturing techniques, while designing new processors that are significantly faster requires major breakthroughs in materials science, chip design, etc. These breakthroughs aren't just unpredictable in terms of when they'll arrive, they're also unpredictable in terms of how they will play out. One widespread technique deployed to speed up processors is "branch prediction," wherein a processor attempts to predict which instruction will follow the one it's currently executing and begins executing it without waiting for the program to tell it to do so. This gave rise to a seemingly unstoppable cascade of ghastly security defects (most notoriously Spectre and Meltdown) that the major chip vendors are still struggling with.
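
The underlying idea of branch prediction is easy to model, even if modern predictors are vastly more sophisticated. Here's a toy Python simulation of a two-bit saturating-counter predictor; the branch-outcome streams are made up for illustration. A predictable branch, like a long-running loop, gets guessed almost perfectly, while a branch that depends on noisy data gains nothing.

```python
# Toy model of a two-bit saturating-counter branch predictor. This only
# illustrates the concept; real predictors are far more elaborate, and
# the outcome streams below are invented.
import random


def predict_branches(outcomes):
    # States 0-1 predict "not taken", states 2-3 predict "taken".
    state = 2
    correct = 0
    for taken in outcomes:
        prediction = state >= 2
        if prediction == taken:
            correct += 1
        # Nudge the counter toward the observed outcome, saturating at 0 and 3.
        state = min(state + 1, 3) if taken else max(state - 1, 0)
    return correct / len(outcomes)


random.seed(1)
# A loop that runs many iterations before exiting is almost perfectly predictable...
loop_branch = [True] * 999 + [False]
# ...while a branch that depends on noisy data is not.
noisy_branch = [random.random() < 0.5 for _ in range(1000)]

print(predict_branches(loop_branch))   # 0.999
print(predict_branches(noisy_branch))  # roughly 0.5
```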

So if you write a program that's just a little too slow for practical use, you can't just count on waiting a couple of months for a faster processor to come along.

But cheap processors continue to get cheaper. If you have a parallel problem that needs a cluster that's a little outside your budget, you don't need to rewrite your code — you can just stick it on the shelf for a little while and the industry will catch up with you.
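
This intuition is basically Amdahl's law: if a fraction s of a program is irreducibly serial, the best possible speedup on n processors is 1 / (s + (1 - s) / n), so the serial fraction caps what extra processors can ever do for you, while a mostly-parallel program keeps benefiting as processors get cheaper. A back-of-the-envelope Python sketch, with the serial fractions invented for illustration:

```python
# Amdahl's law: speedup on n processors with serial fraction s is
# 1 / (s + (1 - s) / n). The serial fractions below are invented.
def amdahl_speedup(serial_fraction, processors):
    return 1 / (serial_fraction + (1 - serial_fraction) / processors)


for n in (1, 8, 64, 512):
    mostly_parallel = amdahl_speedup(0.01, n)  # 1% of the work is serial
    mostly_serial = amdahl_speedup(0.50, n)    # half of the work is serial
    print(f"{n:>4} processors: {mostly_parallel:6.1f}x vs {mostly_serial:4.1f}x")
```

Past a few dozen processors, the half-serial program tops out around a 2x speedup no matter how much cheap silicon you throw at it, which is exactly the instinct-shaping pressure described above.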

Reading Naughton's column made me realize that we were living through a parallel computation bubble. The period in which Moore's Law has been in decline also overlaps with the period in which computing came to be dominated by a handful of applications that are famously parallel — applications that have seemed overhyped even by the standards of the tech industry: VR, cryptocurrency mining, and machine learning.

Now, all of these have other reasons to be frothy: machine learning is the ideal tool for empiricism-washing, through which unfair policies are presented as "evidence-based"; cryptocurrencies are just the thing if you're a grifty oligarch looking to launder your money; and VR is a new frontier for the moribund, hyper-concentrated entertainment industry to conquer.

It's possible that this is all a coincidence, but it really does feel like we're living in a world spawned by a Sand Hill Road VC in 2005 who wrote "What should we invest in to take advantage of improvements in parallel computing?" at the top of a whiteboard.

That's as far as I got. Now what I'm interested in is: what would a counterfactual look like? Say (for the purposes of the thought experiment) that processors had continued to gain in speed, but not in parallelization — that, say, a $1000 CPU doubled in power every 18 months, but that there weren't production lines running off $100 processors in bulk that were 10% as fast.

What computing applications might we have today?

(Image: Xiangfu, CC BY-SA)