Many lists circulate naming the sci-fi or scientific predictions that got future developments right. Most have to do with things like the cell phone, artificial limbs, and robotics.
And sci-fi has often enjoyed figuring out when and how we’re likely to destroy ourselves with our own curiosity and hubris.
Look at something as old as Mary Shelley’s “Frankenstein; or, The Modern Prometheus.” Published in 1818 – so, yes, more than 200 years ago – it appeared at the dawn of mechanization, just as the first human blood transfusion had been accomplished. She was writing a horror tale, to be sure, but at least part of the warning is right in the title: don’t “create” humans. Now, while it’s possible to weigh the upside and downside of cobbling together a real human (or at least a fully biological one), it’s perhaps more of a DEW (distant early warning) not to even try to create an artificial one.
Philip K. Dick, a giant among sci-fi writers, wrote the book that informed the hit film “Blade Runner.” The novel, “Do Androids Dream of Electric Sheep?”, was published in 1968 and imagined a world in which androids – bred to be faster, smarter, and stronger, the semi-mechanical, semi-biological slaves of human beings – rebel against their masters. The theme of rebelling robots, or AI, is not new.
Fast forward to December of 2022, and the website AI Multiple published:
“The greatest fear about AI is singularity (also called Artificial General Intelligence), a system capable of human-level thinking. According to some experts, singularity also implies machine consciousness. Regardless of whether it is conscious or not, such a machine could continuously improve itself and reach far beyond our capabilities. Even before artificial intelligence was a computer science research topic, science fiction writers like Asimov were concerned about this and were devising mechanisms (i.e. Asimov’s Laws of Robotics) to ensure the benevolence of intelligent machines.
For those who came to get quick answers:
Will singularity ever happen? According to most AI experts, yes.
When will the singularity happen? Before the end of the century.”
(https://research.aimultiple.com/artificial-general-intelligence-singularity-timing/)
Naturally, many will suggest that this will never occur, and that many sci-fi promises, particularly about robots, have never happened and likely never will.
But then we’re reminded of Nikola Tesla, and his assurance in a 1926 Collier’s article that “When wireless is perfectly applied the whole earth will be converted into a huge brain, which in fact it is, all things being particles of a real and rhythmic whole. We shall be able to communicate with one another instantly, irrespective of distance. Not only this, but through television and telephony we shall see and hear one another as perfectly as though we were face to face, despite intervening distances of thousands of miles; and the instruments through which we shall be able to do this will be amazingly simple compared with our present telephone. A man will be able to carry one in his vest pocket.”
Well, that device in your pocket may not be in a vest pocket, but it certainly can do everything he promised, and more. So when it comes to intelligent technology, never say never. And consider whether running headlong into AI, or even engineered biological humanity, comes with a “Buyer Beware” tag.
Moving on to another sci-fi book, Anna Kavan’s “Ice,” written in 1967, we find ourselves promised death by ice rather than the fire now predicted. As one review put it, “(her) last (and best) sci fi novel provides a haunting, claustrophobic vision of the end of the world, where an unstoppable monolithic ice shelf is slowly engulfing the earth and killing everything in its wake.”
“Time of the Great Freeze” by Robert Silverberg, first published in 1964, imagined a great freeze driving mankind underground; only after centuries do some intrepid adventurers venture back to the surface to try to make contact with other surviving humans.
The 2004 film “The Day After Tomorrow” tells the story of climatologist Jack Hall, whose research pointed to the possibility of a superstorm setting off catastrophic natural disasters across the world. Only rather than melting down, the Earth freezes as the Atlantic current slows and then stops.
In fact, apocalyptic sci-fi about the big freeze was not uncommon before about 1990. After that, a cold enemy was assumed too preposterous when we were being promised we were all going to cook, not chill. Authors and movies that were once well-received and respected are now considered laughably unscientific if the temperature enemy isn’t heat and flooding.
Reviewers with 20-20 hindsight find it hard not to be snarky about sci-fi that promised cold rather than heat, since we are now all certain (though as late as the 1970s we weren’t) what the temperature of Earth will be in 10-20 years, especially if we don’t “do something!” Still, no matter the prediction, sci-fi has usually taken one of two paths when it comes to the climate: either it’s a threat from space that we can’t or don’t avert, or it’s our own darn fault that we didn’t stop the catastrophe when we still could!
Finally, there is death by virus, or biological agent. Though history has presented us with the bubonic plague, anthrax, and the 1918 influenza, it is the man-made plague that occupies sci-fi’s imagination. And there are some truly terrifying and, in many ways, accurate depictions of how man is his own worst enemy, especially when it comes to illness.
The 1995 film “Outbreak” and 2011’s “Contagion” are perhaps the most accurate depictions of a world facing an epidemic of truly enormous proportions – a literal race against a virus that spreads faster than human beings have the resources to combat. People wearing masks and hiding from one another; a CDC that can’t quite decide whether people should panic or remain calm; a virus spreading – as we recently watched one do – from “somewhere” to Milan to the US within a few short months; people running from a sneezing neighbor and carrying hand sanitizer just to take out the trash: these two films did an excellent job of predicting how the world might react to another, even more virulent pandemic.
I recall reading George R. Stewart’s “Earth Abides” as a kid. Written in 1949, it tells the story of Isherwood Williams, who is bitten by a snake while hiking and, after a long and frightening illness, heads home to find that the bite probably saved him from the disease that carried away nearly every other human on earth. Williams and a few others are left to figure out how to carry on.
And of course there is sci-fi great Richard Matheson’s wonderful “I Am Legend,” the story of a lone man who has survived a pandemic that creates vampires who now hunt him, and the ingenious ways he has found to stay alive – foraging and hunting his enemies by day, and barricading himself in by night.
The point of most of these pandemic books and films is that the virus, or pathogen, was not “natural.” Often it was either manufactured or amped up by research left to play without limits – with humankind paying the price of our endless curiosity and fearless willingness to “see what happens.”
Sci-fi writers love to play with the “what if” – and the very best ones usually offer us a warning. Sometimes they get it very wrong, sometimes they seem to have had genuine insight into the future, and sometimes the jury of time is still out.