Post-disaster reconstruction (e.g. Puerto Rico), Simulations, AI, and the Future of Humanity

Image Credit: Tesla

Reference link (Business Insider): Puerto Rico is taking a big step toward revamping how it gets power — and it could be a model for the rest of the US

Treating reconstruction after disaster as a primary driver of infrastructure investment makes some sense. It’s basically the “wait and see” approach to governance, and there’s some merit to it.

Oftentimes, ostensibly well-intended interventions are unsuccessful or produce net negative results. The classic example is the cane toad, exported from the Americas to eastern Australia to eat the beetles that were feeding on sugarcane crops. It turned out, of course, that cane toads secrete a compound in their skin which is deadly to their predators, so the population could never be controlled.

There are plenty of other examples of such interventions with similarly disastrous “unintended consequences”.

After a natural disaster like a hurricane, however, it’s harder to make things worse than they already are, so there’s far more freedom to get things wrong without public outcry. In other words, human-made disasters are more readily forgiven when they are attempts at recovery from nature-made ones.

A reasonable question follows: Under what conditions should we “wait and see” and under what conditions should we perform some intervention and risk incurring unintended consequences?
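
One way to make the trade-off concrete is to treat it as a decision under uncertainty: intervene when the expected outcome of acting beats the expected outcome of waiting, with the fat tail of unintended consequences priced in. Below is a toy Monte Carlo sketch of that framing; every function, distribution, and number in it is invented purely for illustration.

```python
import random

# Toy Monte Carlo comparison of "wait and see" vs. intervening.
# Every number here is an invented assumption; the point is the
# structure of the decision, not the specific values.

def outcome_if_we_wait():
    # Baseline damage hovers around a known level.
    return random.gauss(-100, 10)

def outcome_if_we_intervene():
    # The intervention usually helps, but occasionally backfires
    # badly (the cane-toad tail of the distribution).
    if random.random() < 0.05:
        return random.gauss(-300, 50)   # unintended consequences
    return random.gauss(-60, 15)        # intended effect

def expected(outcome, trials=100_000):
    return sum(outcome() for _ in range(trials)) / trials

wait, act = expected(outcome_if_we_wait), expected(outcome_if_we_intervene)
print(f"wait-and-see: {wait:.1f}   intervene: {act:.1f}")
print("intervene" if act > wait else "wait and see")
```

Tweak the probability of the backfire scenario and watch the recommendation flip. That sensitivity is exactly what better models would pin down.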

Presumably the answer to that question will change significantly if the software used to perform simulations of the intervention and its effects is improved.

Simulations are already used to test structural models for bridges, tall buildings, tunnels, etc. However, simulation models aren’t necessarily available for many other contexts. Presumably, we should be investing in methods for devising simulation models, evolving them, and ensuring they can be maintained and upgraded as computational infrastructure evolves.
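
To give a flavor of what even the simplest structural simulation looks like, here’s a minimal sketch: a single mass-spring-damper standing in for a bridge deck or tower, driven by a gusting load and integrated step by step. Real engineering models are vastly richer; every parameter value below is an illustrative assumption only.

```python
import math

# One-degree-of-freedom structure under a gusting load.
# All parameter values are illustrative assumptions, not real designs.
mass, stiffness, damping = 1.0e4, 2.0e6, 1.0e4   # kg, N/m, N*s/m
dt, steps = 0.001, 60_000                        # 60 simulated seconds
x, v = 0.0, 0.0                                  # displacement (m), velocity (m/s)
peak = 0.0

for i in range(steps):
    t = i * dt
    load = 5.0e4 * math.sin(2 * math.pi * 0.5 * t)   # gusting wind force, N
    a = (load - damping * v - stiffness * x) / mass  # Newton's second law
    v += a * dt                                      # semi-implicit Euler step
    x += v * dt
    peak = max(peak, abs(x))

print(f"peak displacement: {peak * 1000:.1f} mm")
```

Even this toy exposes the core idea: encode the physics once, then cheaply ask “what if?” across thousands of load scenarios before pouring any concrete.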

It’s worth noting that humans can design software with fixed rules that human intelligence can fully understand. That’s what keeps planes in the air, cellular signals bouncing around the atmosphere, and electrical grids stable. It takes a lot of work, but over time humans can work out the details and build a genuinely robust and functional system.

However, if we are to evolve our approaches, we must venture down one of two paths…

1. We will invent and deploy software which evolves its own (better) models of how the earth works (including the living organisms it hosts) by way of continuous experimentation (i.e. 24/7/365) and makes recommendations to inform global-scale human behaviors and resource allocation. Inevitably, humans will come to rely on the computer’s recommendations for several critical survival factors. The computer then holds human lives in the balance. And, by definition, the computer’s intentions will be inscrutable to human intelligence; that’s precisely what it was designed for. (A toy sketch of such a self-evolving model appears after this list.)

or…
2. We will continue to approach the asymptotic limit of human capability, which appears to be constrained not so much by the capacity of human intelligence as by its deployability within human systems. That is, humans get in their own way. Inter-human interactions produce friction which often leads to a kind of thermal runaway: attention gets redirected from the “problem to solve” to the “problem solvers” themselves.
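
Here’s the toy sketch promised above for path 1: a hill-climbing loop that keeps proposing perturbed models and retains whichever predicts fresh observations better. The “world” here is a hidden linear law, and every detail is an illustrative stand-in, but the shape of the thing is real: the system’s current model is whatever survived continuous experimentation, not anything a human wrote down.

```python
import random

def world(x):
    # Hidden "ground truth" the software must discover: a linear law
    # plus observation noise. Purely an illustrative stand-in.
    return 3.7 * x + 1.2 + random.gauss(0, 0.1)

def error(model, data):
    # Mean squared prediction error of a candidate (slope, intercept).
    a, b = model
    return sum((a * x + b - y) ** 2 for x, y in data) / len(data)

model = (0.0, 0.0)                                   # start with a bad model
for generation in range(500):
    xs = [random.uniform(-5, 5) for _ in range(32)]  # run fresh experiments
    batch = [(x, world(x)) for x in xs]
    mutant = (model[0] + random.gauss(0, 0.1),       # propose a variation
              model[1] + random.gauss(0, 0.1))
    if error(mutant, batch) < error(model, batch):   # keep the better predictor
        model = mutant

print(f"evolved model: y = {model[0]:.2f}x + {model[1]:.2f}  (truth: 3.7x + 1.2)")
```

The unsettling part isn’t the fitted numbers. It’s that, scaled up, nobody wrote down the model the system ends up acting on; it simply out-predicted every alternative.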

Naturally, the aim is to walk a middle ground: Safe AI. Kind of a crazy idea. I’ve previously thought of it as loosely akin to the Obi-Wan and Anakin Skywalker situation. That is, we know this kid will be very powerful, and he could be a great force for good… or he could go to the dark side and wipe us all out. Yoda would say “don’t train the boy”. But we all know he’s going to get trained.

For my part, I have determined that all plausible future scenarios will benefit from a larger population of well-developed humans. The model of what constitutes a “well-developed human” is, of course, a subject for debate. But there will be no future in which humanity does not benefit from the widespread proliferation of wisdom, curiosity, ingenuity, enthusiasm, compassion, self-discipline, physical mastery, historical perspective, courage, and an appreciation and respect for the flourishing of naturally-evolved life forms.

That last one may also be key to developing Safe AI. Somewhere along the line, AI must develop an appreciation and respect for the flourishing of naturally-evolved life forms, just as a portion of humanity is deeply committed to the preservation of living fossils. Consider the 22-million-year-old tree lobster lineage of Lord Howe Island: declared extinct a hundred years ago (the same old story: habitat invaded by rats aboard intercontinental sailing ships), then rediscovered on a nearby rock jutting out of the ocean, and brought back from a population of 24 individuals to well over 10,000 today.

Basically, we have to build into AI an appreciation for the historical significance of naturally-evolved life, including humans. And statistically speaking, we really shouldn’t plan on it acquiring that appreciation from our present or historical example.
