If you take a look around the Student Union on an average day, it is immediately apparent that, even within the short span of my own life, society has undergone some serious technological changes. In many ways this is business as usual: technology, and the process of creating it, has dominated society since the advent of the Industrial Revolution. But within this long process, an (appropriately named) micro-revolution in computing has changed the way we live as radically as heavy industry did. A theory articulated in various forms in the 1960s predicted that the number of transistors that can be placed inexpensively on an integrated circuit would double every 18 to 24 months. While the details are complicated, “Moore’s Law,” as it came to be known, essentially predicted, more or less successfully, that computing power would double every two years.
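To make the arithmetic concrete: the Intel 4004, released in 1971, held roughly 2,300 transistors. Doubling that figure every two years for four decades gives 2,300 × 2^20, or about 2.4 billion transistors by 2011, which is in fact the ballpark that processors of that era reached.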
The consequences have been staggering. In the 1960s, personal computers and pocket-sized cellphones would have been leaps of fancy unworthy of serious consideration, much less laptops and smartphones. Remember, the primitive computers of that era occupied entire rooms and could only be built through the efforts of governments in conjunction with the largest corporations. Moore’s Law gave governments and the burgeoning tech industry the confidence they needed to pursue, and pay for, the foundational research that brought personal electronics into existence. From a financial standpoint, these devices have emerged from nowhere to account for a substantial share of overall economic activity, leaving an enormous imprint on our daily lives.
The history of Moore’s Law, like that of Peak Oil, is long. Alan Turing, the father of computing, anticipated Moore’s Law with a similar projection in the 1950s, made, remarkably, in a deeply theoretical paper on machine intelligence, though it failed to predict the extent to which computing power would build exponentially upon itself. The Law’s namesake, Gordon E. Moore, articulated it in a 1965 paper that firmly attached his name to the phenomenon. And he put his money where his mouth was: Moore co-founded Intel in 1968, and the company went on to become one of the world’s foremost producers of semiconductors, assured by its founder that profitable growth would continue for decades to come.
And he was right, but it might all be coming to an end. Researchers have finally succeeded in creating an atomic-scale transistor, the basic unit of the circuits that comprise many of the chips in our electronic devices. Put simply, we have run up against the theoretical barrier of Moore’s Law, the scale at which the transistors we build can grow no smaller. And although practical implementation of atomic-scale technology is probably years, if not decades, away, the end of Moore’s Law is nevertheless a serious affair. In many ways, the technology corporations that have helped build our modern way of life will have to find a new way forward, absent the reassurance that predictable obsolescence will create a revolving door for their products. Intel itself predicted in 2008 that secondary effects, namely quantum tunneling, would impose a practical limit on miniaturization sometime between 2013 and 2018, short of the material limitations of Moore’s Law, which indicates at least that the tech industry isn’t deluding itself about the future.
If indeed the end is nigh, the economic and social consequences, while difficult to project, will be profound. It seems clear now that we have been riding a second technological wave: the first was characterized by the advent of heavy industry in the 19th century, the second by computing and its related products in our own time. The expectation of constant, profound technological progress has become deeply rooted in human society and culture; outside the vision of some dystopian novel, a technological society without yearly growth almost seems like a contradiction in terms.