2. A Brief History of the Future
The pitfalls of futurism.
Yogi Berra famously said, “The future ain’t what it used to be.” (You knew that quote would make an appearance, right? Although he wasn’t the first person to utter the phrase.) He had a quirky genius for phrasing an idea that ultimately makes sense so that it sounds self-contradictory or nonsensical. What has changed is not necessarily the future itself, but our beliefs about the future. Futurism ain’t what it used to be.
By traveling back in time and into the future at once, we can examine some visions of yesterday’s future—what did they get right, and what did they get spectacularly wrong? When tracking the errors of past futurists, some common themes emerge—what I call “futurism fallacies”—and they will help us shape our own vision of the future.
Futurism Fallacy #1—Overestimating short-term progress while underestimating long-term progress.
One core challenge of predicting the future is trying to determine not only what technology will develop, but also how long it will take. It is almost a certainty that eventually we will develop fully sapient general artificial intelligence. What is extremely challenging is predicting when we will cross that finish line. When trying to glimpse the future, there is also a tendency to overestimate short-term advancement, while underestimating long-term advancement. We see this frequently in science fiction movies—whether they are serious, comedic, utopian, or dark, the technology in twenty to thirty years is typically portrayed as transformational, rather than more realistically incremental. So, from Back to the Future to Blade Runner, we have flying cars by 2015.
This overestimation of short-term progress comes partly from the tendency to think of “the future” as one homogenous time, much as we often think of “the past” as one indistinct era. My favorite example of this is that Cleopatra (69–30 BCE) lived closer in time to the Space Shuttle (first launch in 1981) than the building of the Pyramids of Giza (2550–2490 BCE).
But, getting back to the future (heh!), we often imagine that “the future” contains whatever the next big advance is expected to be in any given technology. So instead of using phones, we are using video phones in the future. Instead of driving regular cars, we are driving electric, self-driving, or flying cars. We fly to other continents in commercial jets now, so in the future we must be flying rockets instead. Even if “the future” is only twenty years from now, we imagine all these technological transformations have already taken place, therefore overestimating short-term progress.
Underestimating long-term progress is mostly a matter of simple math, because technological progress is often geometric rather than linear. Geometric progress means doubling (or multiplying by some other factor) every time interval, so progress looks like 2, 4, 8, 16, 32, while linear means simple addition every time interval: 1, 2, 3, 4, 5, and so on. You can see how geometric growth is much faster than linear, especially over the long term. The best example of this is computer technology, such as hard-drive capacity and processor speeds. Processor speeds have roughly doubled every eighteen months, so over the past forty-five years, processors have not become several times faster—they have become millions of times faster. Underestimation also occurs because game-changing new technologies are often missed.
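The difference is easy to see with a short illustrative sketch (the function names and numbers here are my own, purely for demonstration): after a handful of intervals the doubling sequence has already left the linear one behind, and after thirty doublings the multiplier is astronomical.

```python
# Illustrative sketch: geometric (doubling) versus linear growth
# over equal time intervals.

def geometric_progress(start: int, intervals: int) -> int:
    """Value after doubling once per interval."""
    return start * 2 ** intervals

def linear_progress(start: int, intervals: int) -> int:
    """Value after adding the starting amount once per interval."""
    return start + start * intervals

# After 5 intervals: 32 versus 6.
print(geometric_progress(1, 5), linear_progress(1, 5))

# Forty-five years at an eighteen-month doubling time is 30 doublings:
# 2**30, roughly a billion-fold -- many millions of times the start,
# which is why long-term geometric progress is so easy to underestimate.
print(geometric_progress(1, 30))  # 1073741824
```

The point of the sketch is only that the two curves diverge faster than intuition suggests; any doubling interval produces the same qualitative surprise.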
However, while there is a general tendency to overestimate short-term and underestimate long-term progress, each technology follows its own pattern of progress. Therefore, it can be difficult to pick technological winners and losers. The problems we are trying to solve with technology may also be nonlinear, getting progressively harder to advance at the same rate. At first, we may pick the low-hanging fruit and progress may be rapid, but then further advances become increasingly difficult, with diminishing returns and perhaps even roadblocks. If we project early progress indefinitely, that will result in overestimating the strides we will make. But then geometric advances and game-changing innovations eventually catch up, causing us to underestimate long-term progress.
A humorous showcase of this fallacy is the short film produced by General Motors in 1956, which imagined the “modern driver” of 1976. It gets everything wrong. The film was made to promote GM’s gas turbine engine technology—remember those? Probably not, because they never came into wide use. Several car companies, most aggressively Chrysler, tried to develop a gas turbine engine to replace the internal combustion engine, and none succeeded.
The film also featured “auto control,” in which the car was able to take over steering from the driver—in 1976, remember, about a half century too early. But in order to do this, the driver had to first enter the “electronic control lane” and then synchronize the car’s velocity and direction with the external control, all coordinated by radio with operators in control towers that lined the highway.
Science fiction depictions of the future are also rife with this fallacy. In 1968, the movie 2001: A Space Odyssey chronicled a mission to Jupiter (still beyond our current technology), with crew members in cryosleep (also not possible) and featuring a fully artificially intelligent computer, the HAL 9000. These technologies are all at least fifty to a hundred years premature.
Professional futurists, such as Isaac Asimov, frequently fell for this fallacy. In 1964, he made predictions for 2014, fifty years in the future, on the occasion of the world’s fair. His forecasts were published in the New York Times, although, in fairness, he admitted they were only “guesses.” He predicted:
It will be such computers, much miniaturized, that will serve as the “brains” of robots. In fact, the I.B.M. building at the 2014 World’s Fair may have, as one of its prime exhibits, a robot housemaid—large, clumsy, slow-moving but capable of general picking-up, arranging, cleaning and manipulation of various appliances. It will undoubtedly amuse the fairgoers to scatter debris over the floor in order to see the robot lumberingly remove it and classify it into “throw away” and “set aside.” (Robots for gardening work will also have made their appearance.)
What about energy?
An experimental fusion-power plant or two will already exist in 2014. (Even today, a small but genuine fusion explosion is demonstrated at frequent intervals in the G.E. exhibit at the 1964 fair.) Large solar-power stations will also be in operation in a number of desert and semi-desert areas—Arizona, the Negev, Kazakhstan. In the more crowded, but cloudy and smoggy areas, solar power will be less practical. An exhibit at the 2014 fair will show models of power stations in space, collecting sunlight by means of huge parabolic focusing devices and radiating the energy thus collected down to earth.
Again, these predictions are at least a half-century premature. In general, futurists need to be much more conservative in their estimates of short-term advancement. It seems like doubling or even tripling the estimated timeline is a reasonable rule of thumb. Anticipate roadblocks, blind alleys, and troubling hurdles, and your estimates will likely be closer to the mark.
Futurism Fallacy #2—Underestimating the degree to which past and current technology persists into the future. Corollary—assuming we will do things differently just because we can.
One particularly ambitious look forward was the 1967 film by the Philco-Ford Corporation imagining the world of 1999, starring a young Wink Martindale. Those thirty-two years were full of advancements with the advent of personal computers, the internet, and essentially a transition to digital technology.
The authors of this film could not see through the thick veil of these technological revolutions, so they relied heavily on their own hidden assumptions, falling for many of the futurism fallacies. They assumed that many aspects of daily life would change simply because future technology would allow for such change. Everything thirty-two years in the future has to be different, right? History has shown, however, that past technology persists into the future to an incredible degree.
In their depiction of a typical day in 1999, even the simple act of drying one’s hands at home had to be done using the most advanced technology possible, such as infrared lights and air blowers. Drying your hands on a towel seems too old school for “the future.” Communicating from the next room was done through video. In this future, all food is stored frozen in individualized portions and heated up in minutes by microwave to serve, replacing all cooking, except perhaps for special occasions. The central computer monitors nutritional and caloric needs and suggests the appropriate menu.
Asimov made similar predictions about cooking in the future (again from his 1964 world’s fair predictions):
Gadgetry will continue to relieve mankind of tedious jobs. Kitchen units will be devised that will prepare “automeals,” heating water and converting it to coffee; toasting bread; frying, poaching or scrambling eggs, grilling bacon, and so on. Breakfasts will be “ordered” the night before to be ready by a specified hour the next morning. Complete lunches and dinners, with the food semiprepared, will be stored in the freezer until ready for processing. I suspect, though, that even in 2014 it will still be advisable to have a small corner in the kitchen unit where the more individual meals can be prepared by hand, especially when company is coming.
The idea that we will be utilizing essentially the same culinary techniques in the future, despite the fact that we’ve been cooking food over heat for millennia, just doesn’t fit a futurist lens. But the reality is we still buy raw vegetables and cut them with knives on a wooden cutting board, and then steam them or stew them in a pot. My modern kitchen and the process I use to cook would be fully recognizable to someone from fifty, even a hundred, years ago depending on the recipe. In fact, I recently purchased some hand-forged kitchen knives. Sure, the appliances are all more efficient, and may have had some incremental functional improvements, but mostly they are the same. The biggest innovation is the microwave oven, which I, like most, use for heating, not for cooking.
Sometimes we do things the old-fashioned way because we want to, or because the simple way is already pretty close to optimal. Sometimes convenience isn’t the most important factor (that is another lazy assumption about the future—that everything is about optimizing convenience). Recently, several people close to me purchased automatic coffee makers in which each cup is brewed from a prepackaged individual container of ground coffee. This method prioritizes ease, speed, and convenience, and these coffee makers became very popular. But after a couple of years there was a backlash, and all it took was being exposed again to a really well-brewed cup of coffee. Combined with environmental concerns over the waste of all the plastic used in those individualized packages, suddenly the swill they were drinking out of convenience simply wasn’t good enough.
Now many of them (I don’t drink coffee, so I watched this play out from the sidelines) have swung back to the other end of the spectrum, prioritizing quality. They grind their beans fresh and may go through an elaborate process such as slowly pouring boiling water over those grounds in search of the perfect cup of coffee. They enjoy the ritual, and it builds their anticipation of flavorful enjoyment.
Old technology can be remarkably persistent. We still burn coal for energy. Our world is still largely made out of wood, stone, steel, ceramic, and concrete—all materials that have been used for thousands of years. Plastic is probably the one new material that has shaped our modern world, but not everything is made from plastic just because it can be.
None of this is to downplay the truly transformational technologies that make up our modern world and have changed our lives. But the future always seems to be a complex blend of the new and the old. The trick is predicting which things will change, and which will substantially stay the same.
Futurism Fallacy #3—Assuming there is one pattern of technological change or adoption. Rather, the future will be multifaceted.
Sometimes new technologies completely fail and fade away (like the gas turbine engine), sometimes they are adopted but fill a smaller niche than initially assumed (like microwave ovens), and sometimes they completely replace (except for nostalgic or historical purposes) the previous technology (like cars did to the horse and buggy). There is no one pattern.
We must also recognize that there are many competing concerns, and this is often why it is extremely difficult to predict how a new technology will play out. Convenience is not everything, and we do not widely adopt new technologies just because they are new. In addition, there are considerations of cost, quality, durability, aesthetics, fashion, culture, safety, and environmental effects. Even the concept of convenience itself can be multifaceted.
What tends to happen is that many technologies exist in parallel, each finding their niche where these combinations of factors make the most sense. I am writing this book on my computer, but sometimes I take notes by writing them down on a piece of paper. It’s just more convenient for some applications. Sometimes I listen to books on audio, sometimes I read them on my ebook, and sometimes I like the feel of a physical book in my hands.
We still use natural wood in home construction because of its cost and how easy it is to work with, and for a desired aesthetic. In fact, antiques have a high value partly for their rustic or quaint appearance in home interiors. Conversely, I may spring for artificial wood for my deck because of its weather resistance and lower maintenance.
I drive to work in a car that would mostly seem ordinary to a driver from the 1950s, but they would likely be blown away by my GPS and entertainment system. And yet, I still usually just listen to news on the radio.
Futurism Fallacy #4—Anticipating the end of history.
In his book Predicting the Future, Nicholas Rescher points to a tendency to assume “the end of history”—that society reaches its equilibrium point, after which we have endless peace and prosperity. In this utopian future, not just convenience but also leisure become everything. Well, history doesn’t stop; at least it hasn’t so far.
A good example of this fallacy is the 1920s film about the twenty-first century, Looking Forward to the Future, set after the “war to end all wars,” when there would be never-ending peace and prosperity with ever-increasing leisure time. It predicts people wearing electric belts to control their climate. Men (not women) would be outfitted with a utility belt containing “telephone, radio, and containers for coins, keys, and candy for cuties.” Planes would be enormous, designed like luxury cruise ships, with lounges, dining areas, and activities. The farther back in time we go, the more outlandish our present “future” becomes.
This fallacy mostly stems from a lack of imagination—thinking that all of our current problems will be solved by technological advancements. Once that happens, we will have achieved a stable utopia. But what always has happened so far is that as we solve one problem, new problems emerge. Even the technology we develop to make our lives better can come with a suite of its own challenges— new resources become precious, power shifts, and new conflicts arise. When past futurists looked at the advent of the internal combustion engine, they did not imagine the challenges of global warming, or the rise of power centers in the deserts of the Middle East.
History does not end; it just keeps churning.
Futurism Fallacy #5—Extrapolating current motivations and priorities into the future. Corollary—it’s still not all about leisure and convenience.
This assumption of increased leisure time was not unreasonable a century ago, as the industrial revolution really did free developed societies from previous crushing drudgery. Machines taking over many of the worst repetitive, time-consuming, and dangerous tasks was a defining feature of that era. It stands to reason that futurists would extrapolate that trend, so they did. The fallacy lies in assuming that current trends will continue indefinitely into the future; they rarely do.
In the United States, for example, the forty-hour workweek did not come about because of technological advancement. In fact, industrial factories made workers more productive, and therefore their hours of work more valuable. The forty-hour workweek was the result of a labor fight, fought over a century and finally won by federal law in 1940. Since then, the workweek has been stable, though it has been increasing recently with the rise of new forms of contract work that fall outside these regulations.
A 2014 national Gallup poll put the average US workweek at forty-seven hours. This number is higher for competitive industries, or gig workers like Uber drivers, who might work a hundred hours a week. This increase is ironically driven by modern technology that facilitates contract work, which would have been hard to predict a hundred years ago. Now there is a push for a shorter workweek and increased work from home, made possible by computers and remote conferencing. These trends may have had a little boost from a once-in-a-century pandemic that was difficult to predict.
The core problem with this fallacy is unrecognized and often lazy assumptions. In the past, progress was equated with convenience, and so future progress was seen through that narrow lens. The question is—what hidden assumptions are we making today that color our thinking about the future?
Futurism Fallacy #6—Placing current people and culture into the future.
“The past is a foreign country: they do things differently there” are the first words of L. P. Hartley’s The Go-Between. By extension, the future is a foreign country too. When we envision the future, by default we tend to place people like ourselves in that future. But this would be the equivalent of imagining people from the Middle Ages living in the twenty-first century. One thing is certain—the people of the future will not simply be us with better technology. People and culture change, often in ways difficult to predict.
People in future societies are likely to have different priorities, ethics, and a different relationship with their technology. We can see this today, as often parents don’t understand how their own children use the latest social media app. We tend to accept technological changes over time, and eventually the unthinkable becomes every day. We may be squeamish about genetic manipulation, for example, but people in a century may think nothing of it. How will individuals in future generations relate to artificially intelligent robots?
This fallacy is common in science fiction. The classic 1956 film Forbidden Planet, for example, imagined a starship from the twenty-third century crewed entirely by men who would have seemed completely at home aboard a World War II battleship. For lack of greater imagination, its vision of the future has already been rendered obsolete.
This futurism fallacy is so common it has become almost synonymous with retro-futurism. We can see this, for example, as a deliberate aesthetic in the Fallout video games. It presents the world of 2071 as imagined by futurists in the 1950s, complete with unaltered 1950s culture, fashion, and attitudes. But these people from the 1950s are living in a world of retro-futuristic technology, including robots and nuclear-powered cars.
People and technology evolve together as a dynamic system, and yet futurists tend to imagine that time stands still, except for technology.
Futurism Fallacy #7—Everything that happens in the future will be planned and deliberate.
One more common assumption that shines through these older attempts at predicting the future is that society and technology in the future will be more planned, designed, deliberate, and controlled. The 1935 City of the Future film makes this assumption explicit, stating outright that in the future every detail will be planned. This high level of planning fits better with utopian rather than dystopian futures, but it is based on the notion that the powers that be will craft our future to best meet our needs.
In the Ford film about 1999 (the one with Wink Martindale), the center of their futuristic home is an entire room dedicated to the home’s central computer. The computer itself looks like a monstrosity out of the 1950s, with banks of switches, blinking lights, and vacuum tubes. It is presented as an overmind that controls every aspect of the home and the family’s life, from planning meals to homeschooling the children.
Reality, of course, remains far messier. The deep philosophical principle that shit happens continues to hold true. Things evolve in a chaotic way, for often quirky and highly contingent reasons, again contributing to our inability to accurately predict the future.
The Segway promised to change the way people get around. It is still sexy technology: an electric two-wheel stand-up platform that can transport you quickly around a city or a large indoor space like a mall. Steve Jobs reportedly said that the Segway would be “as big a deal as the PC.” However, the Segway just never caught on, largely for practical and economic reasons. Perhaps the hardest thing to predict about technology is how people will respond to and use it—will people want to ride wheeled platforms around the city? Is it better than other options, enough to justify the expense?
Similarly, futurists failed to ask, will people want to communicate routinely with video? The surprising answer turned out to be no; they would rather text, of all things. I don’t think anyone predicted this. Video phones are almost ubiquitous in portrayals of the future, but they remain a tiny slice of how we communicate today, despite the technology being fully mature.
Our utilization of future tech is likely to contain a lot of chaos because people are unpredictable.
Futurism Fallacy #8 (the Steampunk Fallacy)—Extrapolating existing technology into the future without considering new and potentially disruptive technology.
Another feature of past predictions of the future is what is not there, the things they did not predict. Looking back from the vantage point of the 2020s, the biggest technological advance over the last few decades, largely missed by futurists, is the digital revolution. Past portrayals of today are often still analog—they extrapolated their current technology into more advanced forms but failed to account for the possibility of disruptive technologies.
One dramatic example of this is Isaac Asimov’s epic science fiction series, the Foundation trilogy, written in the 1940s and 1950s. He imagines a distant future, thousands of years from now, that is completely analog. In the first two books, computers are not even mentioned. It is also striking how much the people of his future resemble those of the 1950s, complete with hat-wearing, cigar-smoking male domination.
This fallacy often reminds me of steampunk fiction, which aesthetically imagines a world in which steam industrial technology remained the cutting edge and continued to advance to more complex and intricate instruments and devices. In this world, it was never replaced by electronic or digital technology.
We keep imagining a steampunk future. Futurists of the early nineteenth century missed electronics; those of the first half of the twentieth century missed computers, and later missed the miniaturization, and therefore the wide distribution, of computers. Typically, we don’t start seeing a new technology in future fiction until it already exists.
Even then, we are terrible at picking technological winners and losers. Remember the coming hydrogen economy? In the early 2000s, it was widely believed that internal combustion engines would be replaced by hydrogen fuel cells in vehicles, and this would be the cornerstone of a hydrogen-based economy. Hydrogen fuel cell vehicles are not dead yet, but they are soundly losing to battery-electric vehicles, which are clearly the favorite to replace gasoline engines (they are just more energy efficient than hydrogen).
Back to Wink Martindale in 1999; the filmmakers do anticipate shopping from home, but it’s all analog—a camera pans across items for purchase. And of course, the wife makes purchases in one room, while the husband pays for it at his bank of consoles in the next, complete with knobs and buttons, but no keyboard or other interface. They anticipated doing something in a new way but no further than an extrapolation of then-current technology.
Forecasting which new technologies will be game changers and which will fizzle is perhaps the greatest challenge of futurism. This fallacy also produces the greatest fails because a disruptive technology (or its absence) can completely change our vision of the future.
Futurism Fallacy #9—Assuming the objectively superior technology will always win out.
Yet another challenge of predicting the future is competing technologies. Around the turn of the twentieth century, there was intense competition among steam-powered, electric, and gasoline-powered cars. Whichever technology won out would materially shape the next century and beyond in many ways. Of course, in hindsight it seems inevitable that gasoline engines would win, but it wasn’t inevitable at the time. Each technology had its advantages and disadvantages.
While this is a complex story, it mostly came down to infrastructure. Electrification was not yet sufficient to allow convenient intercity travel with electric cars, and the need to frequently refill steam engines with water was also a limiting factor. Gasoline simply beat the others to developing the critical infrastructure, which created a self-reinforcing feedback loop: adoption led to further investment in infrastructure, which led to further use. It also helped that Ford chose the gasoline engine for his first mass-produced car—a choice by a single individual that could have changed the technology landscape for an entire industry.
This was not the only historical competition where the winner was not inevitable. For long-distance travel, jets now seem obvious, but rockets were seriously considered. In fact, Elon Musk’s SpaceX plans to bring back the idea of rockets for the longest trips. They certainly would be much faster.
Remember General Motors’ gas-turbine engine? These are quieter, smaller, less polluting, and run cooler than internal combustion. They also started more reliably in cold environments than cars of the time did. But they were less fuel efficient and more expensive to produce. Never able to be cost competitive, they failed to become popular.
Often the marketplace victory of VHS over Betamax for home video recording is given as the classic example of this phenomenon, as Betamax was seen as a superior technology because of higher resolution, better sound, and a more stable image, and so it grabbed the early market. However, the makers of Betamax got one decision wrong—their more compact tapes could only record one hour, while VHS could record two or more. A typical movie could be recorded onto a single VHS tape, and that one convenience is what likely doomed Betamax.
Ultimately, what makes a technology “superior” may be subjective.
Futurism Fallacy #10—Failing to consider how technology will affect people, our options, and the choices we make.
In 1981, Bill Gates, cofounder of software giant Microsoft, reportedly said, “No one will need more than 637 KB of memory for a personal computer—640 KB ought to be enough for anybody.” The computer on which I am writing these words has more than 32 million times as much memory. Many very smart people have fallen for one or more of the futurism fallacies detailed in this chapter, but even so, the Gates quote is shocking in hindsight. He seemed to miss the fact that once the personal computer became popular, people would want to do more with their devices. That desire would benefit from more memory and power, which would then allow for other applications that in turn needed yet more memory and power. It can reasonably be argued that the computing-power-hungry gaming industry is driving computer technology, something Gates apparently did not anticipate.
How inevitable were the technologies we now take for granted? It was once possible that we would be living in a world today powered by direct current and fueled mostly by nuclear power, in which all-electric vehicles have always been standard, long-distance inter-city travel is mostly by rockets, and our home devices are powered by nuclear isotope batteries. Was the personal computer really inevitable? What if it remained largely a device marketed to corporations and institutions for big computing jobs? If no personal computers, then there would be no World Wide Web, no social media, no smartphones. How different would our world be?
History is full of singular events that changed the course of technology. If not for the Hindenburg explosion, might zeppelins still be a popular form of travel? It seems likely they would eventually have been replaced by commercial jets, but perhaps they would have survived the way ocean cruise lines have, as luxury vacations rather than a means of getting to a destination.
After the near disaster of Apollo 13, in which a small onboard explosion threatened the lives of the crew and forced them to return to Earth without ever landing on the moon, the last two planned Apollo missions, 18 and 19, were canceled. There were certainly other factors involved as well, but a different set of circumstances might have led the United States to complete the Apollo program and continue plans for crewed missions to the moon and even Mars. Our space program might be in a very different place today were it not for that one short circuit.
If our present was not inevitable, but rather the quirky creation of many individual choices and events, then no particular future is inevitable either. The future will be indelibly shaped by the choices we make today. These choices are made at every level, from individual purchasing decisions to the judgments of CEOs at large corporations. How we invest in infrastructure will also matter—do we invest billions in the information superhighway? Do we invest in charging stations or hydrogen refueling stations?
Quirky decisions by people, individually and collectively, mold the future in unpredictable ways. But there is another layer here as well—culture and society. In the next chapter we will explore how societies of the future are likely to be different, and how that impacts our attempts at predicting the future of technology.
3. The Science of Futurism
People of the future will not be the people of today.
If we are going to apply a truly skeptical filter to our exploration of the future, we must base our musings as much as possible on science, empirical evidence, and logic. In his excellent Foundation series, mentioned previously, Isaac Asimov writes about the science of “psychohistory,” in which cultural trends are studied scientifically so that the future course of human history can be mapped. Even in Asimov’s optimistic fiction, the process has to be constantly monitored and revised, and even then it fails spectacularly with the introduction of unpredictable, quirky elements.
Is “psychohistory” even a theoretical possibility?
Opinions on whether there can even be a scientific approach to the future span the entire spectrum. In his 2003 book, Foundations of Futures Studies, Wendell Bell argues that futurism as an academic field represents “a body of sound and coherent thought and empirical results.” He therefore considers futurism a legitimate field of academic study. Despite his optimism, however, the field of futurism has been waning in academia.
Futurism might be struggling to win academic acceptance because of the attitude of its many detractors. At the other end of the spectrum, for example, is William Sherden’s 1998 book, The Fortune Sellers: The Big Business of Buying and Selling Predictions, in which he likens futurism to astrology. That is pretty harsh, comparing legitimate attempts to map the likely course of future science, technology, and society with an ancient superstition. But Sherden has a point when it comes to their relative success rate.
Is there, then, any scientific analysis we can bring to bear when thinking about the future? Identifying where past futurists have gone wrong and then trying to make error corrections, as we will try to do here, is certainly one reasonable approach. But how about looking at the progress of technology through history? Are there any big trends that we can reasonably extrapolate into the future?
In his 1999 book The Age of Spiritual Machines, Ray Kurzweil proposed the “law of accelerating returns.” He argues that in any evolutionary system, such as technological progress, over time there will be an exponential increase in the rate of progress. For example, efficiency or speed or power will double over some interval of time. The most obvious recent example of this is Moore’s law: Intel co-founder Gordon Moore observed in 1965 that the number of transistors per silicon chip doubles every eighteen to twenty-four months. This trend has continued fairly consistently since then. Now we all enjoy the fruits of the exponential growth in computer drive capacity and processing speed. But is this pattern typical?
Kurzweil argues explicitly that it is if you take a broad view of evolutionary progress. He states:
An analysis of the history of technology shows that technological change is exponential, contrary to the common-sense “intuitive linear” view. So we won’t experience 100 years of progress in the 21st century—it will be more like 20,000 years of progress (at today’s rate).
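Kurzweil's figure of roughly 20,000 years falls out of simple compounding. A minimal sketch of the arithmetic (our simplification, not necessarily Kurzweil's exact derivation): assume the rate of progress doubles every decade, and credit each decade at the rate reached by its end.

```python
# Toy model of the "law of accelerating returns" arithmetic.
# Assumption (ours): the rate of progress doubles every decade,
# and each decade is counted at the rate reached by its end.
total = 0.0
rate = 1.0  # "years of progress per calendar year" at today's rate
for decade in range(10):   # ten decades = the 21st century
    rate *= 2              # the rate doubles each decade
    total += 10 * rate     # 10 calendar years at the new, faster rate
print(f"{total:.0f} years of progress at today's rate")  # prints 20460
```

Under these assumptions the century delivers about 20,460 years of progress at today's rate, in the ballpark of Kurzweil's 20,000; a linear view would predict just 100.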
This view is not universally accepted, however. Sometimes technology runs into hurdles that it cannot overcome. That’s why we still don’t have that flying car—it takes a lot of energy to keep something heavy enough to carry people aloft, and rolling along the ground will always be more energy efficient.
While we cannot make this sort of prediction for each type of technology or application, it does seem like a reasonable reading of history that technological progress is accelerating and probably will continue to do so. But here too we cannot extrapolate current trends endlessly into the future. Perhaps we will run into some general technological barriers that will stall overall progress for extended periods of time. Those pesky laws of physics may impose limits that require discovering new laws or developing entirely new technologies to overcome, and that could take an unpredictable amount of time.
The challenge is that we are trying to get an overview of the arc of technological progress while we are still in the middle of it. We also only have one data point: humanity. It would be fascinating to have access to a galactic database in which we could review and compare the technological histories of dozens of civilizations and look for patterns. But that’s not likely to happen anytime soon, so we must make do.
Many futurists approach technological advancement over time from an evolutionary perspective but draw different lessons from evolution than Kurzweil. In a 2019 paper, Mario Coccia writes that technological advancement is a complex system, involving technical choices, technical requirements, and scientific advances that attempt to solve difficult problems. He references two underlying theories: One is the theory of competitive substitution, in which better technologies replace older ones. However, Coccia also introduces the idea of technological parasitism, which looks at technological advancement as a complex system of interacting technologies.
For example, as car engines become more powerful, this drives innovation to optimize car tires, suspension, and steering. Overall improvement in cars then affects how people use them, impacting society in unpredictable ways, such as where people live and how they plan their travel. These changes, in turn, drive further transformations in cars and other technology.
Clearly, predicting how complex interacting systems will behave is extremely difficult. But not all past predictions were laughably wrong. There were some impressive hits if you consider the broad picture. Mark Twain of all people, in a way, predicted the internet. In his 1898 short story “From the ‘London Times’ in 1904,” he wrote:
As soon as the Paris contract released the telelectroscope, it was delivered to public use, and was soon connected with the telephonic systems of the whole world. The improved “limitless-distance” telephone was presently introduced and the daily doings of the globe made visible to everybody, and audibly discussable too, by witnesses separated by any number of leagues.
In 1900, engineer John Elfreth Watkins made a number of predictions that anticipated digital color photography, cell phones, tanks, and television. He wrote, for example, “Photographs will be telegraphed from any distance. If there be a battle in China a hundred years hence snapshots of its most striking events will be published in the newspapers an hour later. Photographs will reproduce all of Nature’s colors.” Sure, he also got a lot wrong, like the disappearance from the alphabet of the letters C, X, and Q, but his technology predictions were prophetic.
Futurists envisaged the widespread use of refrigerated cars for transporting produce, which has had a significant impact on the modern diet. Mobile homes, birth control, and medical imaging were also predicted (at least as general concepts).
In a 1967 report, Walter Cronkite gave his viewers a glimpse of the modern home of 2001. He predicted there would be an office where one could work from home. Such offices would contain multiple computers that could receive news and monitor the weather or stock market. There would still be a landline phone, but you could use it to make video calls. These predictions are all incredibly accurate, if a bit premature.
But the details were still quaintly off. Each function listed above had its own dedicated monitor, which you would control by turning knobs. The news reader could print a hard copy if you wanted one. The office, of course, was for the man of the house, who could also monitor other rooms through closed-circuit TV, in which we see the mother and daughter making a bed.
Beyond the occasional success, there is a lot to learn from these attempts at futurism. Understanding the many challenges to successful futurism can teach us about ourselves, our place in history, and how our present can shape our future. We now look back at the future predictions of the past as a window into their psychology, culture, and relationship with technology.
Hopefully we will get more right than wrong as we try our own hands at predicting the future of science and technology. At the very least this work will become part of the time capsule of futurism. Perhaps future futurists will look back at us and learn a little something about our present.
"In this age of real and fake information, your ability to reason, to think in scientifically skeptical fashion, is the most important skill you can have. Read The Skeptics' Guide Universe; get better at reasoning. And if this claim about the importance of reason is wrong, The Skeptics' Guide will help you figure that out, too."—Bill Nye
And don't miss The Skeptics' Guide to the Universe!
An all-encompassing guide to skeptical thinking from podcast host and academic neurologist at Yale University School of Medicine Steven Novella and his SGU co-hosts, which…