A brief history of how computers enabled simulations to depict reality

New Mexico, World War II. Behind tight security at the heart of Los Alamos, an elite group of theorists struggled with an extraordinarily difficult problem: how to track the countless neutrons that trigger fissions inside what would become the atomic bomb. Everything had to be considered, including the size of the weapon’s core. If the core was too small, the charge might fail to go off; if too large, the bomb could detonate prematurely.

With two billion dollars invested and the end of the war in sight, no effort was spared. From Hans Bethe, the senior physicist who had worked out the origin of solar energy, to younger men such as Richard Feynman, the best theorists tackled the neutron problem, trying to find the right size and shape for the heart of the weapon.

Shortly after the end of the war, Stanislaw Ulam, a mathematician of Polish origin, proposed a radically different approach: a system capable of simulating these intractable processes. Simulations are used today to model nuclear weapons, the formation of galaxies, the beginning of the universe, subatomic collisions, chemical bonds, the proliferation of animal populations, and biophysical processes.

In short, it is impossible today to imagine the physical sciences – and a growing part of the engineering and biological sciences of the late twentieth century – without computer-aided calculation. Here we offer a brief look at the circumstances – in times of war and then of peace – in which this radically new approach to nature emerged.

The mathematicians, statisticians, physicists, and engineers who were the first to work with Monte Carlo simulations (so named in reference to the Monegasque mecca of games of chance) were immediately struck by their almost magical efficiency. Comparable methods could be used not only to solve far more complicated mathematical systems, but also to mimic physical processes that defy analytical solution.

Very quickly, Monte Carlo turned out to be more than a new kind of calculation – its simulations seemed to produce a different, and perhaps better, version of reality. It thus opened a new path to knowledge, neither entirely experimental nor entirely theoretical.

Faced with the success of the method and the hopes it aroused, A. N. Marshall opened the 1954 Monte Carlo conference with a sentence in which astonishment shows through: “One has the impression of working for nothing … but in the end everything succeeds: the effectiveness of the methods, in certain particular cases, seems incredible. You really have to see the results – and look at them very closely – to believe them.”

The race to death that led to destruction… and to a new way of creating

Following the Japanese surrender, after the terrible events at Hiroshima and Nagasaki, the scientists of Los Alamos began to disperse. Before doing so, however, they agreed to collect and record their knowledge, in case further research into nuclear weaponry should ever be needed. A meeting was thus scheduled for mid-April 1946 to discuss a possible fusion weapon, a device that sober, classified reports regarded with some dread – even after the devastation of Hiroshima and Nagasaki:

The thermonuclear explosions that can now be foreseen would make the effects of a fission bomb seem like nothing; they are comparable only to catastrophes such as the eruption of Krakatoa … Values as much as ten thousand times the power of the San Francisco earthquake could easily be obtained.

While it was easy to imagine the power of the H-bomb, developing it was a different, and far more difficult, matter. Although the early bomb builders, under the leadership of Edward Teller, thought that heavy hydrogen could quite easily sustain a self-propagating fusion reaction, work done during the war showed that the problem was much more complicated.

To predict what might happen, the Los Alamos physicists concluded,

“the first imperative is a deep knowledge of the general properties of matter and of radiation, derived from the whole theoretical edifice of modern physics.”

The scientists weighed their words. The nuclear physics of hydrogen compounds, the propagation of intense or weak radiation, and the hydrodynamics of exploding materials are each complicated enough on their own; here they had to be analyzed simultaneously, under shock conditions, at temperatures comparable to those at the heart of the Sun. How is energy lost? What is the spatial distribution of temperature? How do the deuterium-deuterium and deuterium-tritium reactions proceed? How do the resulting nuclei release their energy? These problems, and many more, were far too difficult to solve by analysis or with analog calculators.

Experiments seemed impossible: temperatures of 100 million kelvin ruled out the laboratory, and there was no experimental equivalent for any of these elements, nor any slow approach to fusion resembling the stacking of uranium blocks. Faced with the limits of theory and experiment, the physicists therefore turned to the first electronic computer, which entered service at the end of the war: the ENIAC.

It is no coincidence that the first problem taken up by the first electronic computer concerned the development of the thermonuclear bomb.

Over three years the Manhattan Project had grown to gigantic proportions, and by the end of the war it had a vast array of new technologies at its disposal. The mathematician and physicist John von Neumann, who had worked on automatic computing during the war, played a fundamental role in it.

From his first work at the Aberdeen Proving Ground (and on its Scientific Advisory Committee as early as 1940), he had been involved in adapting complex problems of physics to the computer – notably the implosion of the plutonium bomb. These were calculations that had no chance of being solved by hand, which prompted von Neumann to go further and develop a way of expressing differential equations that the calculator could understand.

With the end of the war came the turn of the H-bomb. Before the actual calculations could begin, however, the Los Alamos collaborators had to find a simplified way of tracking the energy released by photons and nuclei during fusion, while also accounting for the hydrodynamics of the exploding materials.

In 1946, the ENIAC set to work on the problem, churning through punch cards at the rate of about one per second.

Even with all the equipment working perfectly, the simulation took several days to study a single configuration of deuterium and tritium. Each time a card came out, the programmers fed it back in as input for the next step.

April 1946 arrived, and the computing team presented its results at the Superbomb conference. The whole group assembled by Edward Teller during the war was present, along with many other notable figures: the theorist Robert Serber (who had contributed greatly to the A-bomb), Hans Bethe (leader of the Los Alamos theoretical group during the war), the mathematician Stanislaw Ulam, and – as the Americans would bitterly regret – the Soviet spy Klaus Fuchs.

Ulam listened to the latest results from the ENIAC. The conclusions, which had required more than a million IBM punch cards, were so far optimistic (if one dares call them that): the Superbomb, as then designed, could be made to explode. But even with this massive cybernetic assistance, the thermonuclear problem remained beyond ENIAC’s modeling capabilities. Ulam therefore began to look for a more efficient way to exploit the possibilities of the new machine.

Making possible what seemed impossible: the Monte Carlo method explained

Immersed in these reflections on fission and fusion weapons, probabilities, neutron multiplication, and card games, Ulam formulated the Monte Carlo method. He made it public in a short abstract, co-signed with von Neumann and presented to the American Mathematical Society in September 1947.

The term Monte Carlo itself was coined by the Los Alamos scientist Nick Metropolis and first appeared in a paper he published with Ulam in September 1949. There they explained that the method could be applied to sampling problems – such as complex cosmic-ray cascades – for which the physical equations admit no analytical solution. Simulations allow one to do what pen and paper cannot.
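
To give a flavor of the idea – and only a flavor, since the original weapons calculations are not reproduced here – the sketch below estimates π by random sampling, a toy stand-in for problems with no convenient closed-form answer. The function name and sample sizes are illustrative choices, not anything from the historical codes.

```python
import random

def estimate_pi(n_samples: int, seed: int = 42) -> float:
    """Estimate pi by throwing random points into the unit square and
    counting how many land inside the quarter circle x^2 + y^2 <= 1."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    # The quarter circle has area pi/4, so scale the hit fraction by 4.
    return 4.0 * inside / n_samples

if __name__ == "__main__":
    for n in (1_000, 100_000, 10_000_000):
        print(f"{n:>10} samples -> {estimate_pi(n):.5f}")
```

The estimate drifts toward 3.14159 roughly as 1/√n, which is exactly the trade-off the early practitioners accepted: slow convergence in exchange for a method that works where analysis fails.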

In January 1954, a symposium on the Monte Carlo method gave rise to a long debate on the mathematics of the various systems for generating pseudo-random numbers, while the use of “real” generators – such as the fluctuations of thermal noise in a vacuum-tube circuit or the decay rate of radioactive materials – was only occasionally mentioned. While acknowledging the economic utility of replacing true random numbers with their pseudo-random analogues, one exasperated listener declared:

“I must admit, however, that despite the elegant investigations into the properties of pseudo-random numbers, my own attitude toward all deterministic processes remains, at best, an embarrassed and reluctant consent. I do not feel that I am giving up random numbers; I just wonder whether I am applying the right set of randomness tests – if, of course, the search for the ‘right set’ is not itself illusory.”
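
To make the debate concrete, here is a minimal sketch of the kind of deterministic generator being argued over: a toy linear congruential generator paired with a crude chi-square check of uniformity. The constants and the test are common textbook choices used purely for illustration; they are not the generators discussed at the 1954 symposium.

```python
def lcg(seed: int, a: int = 1664525, c: int = 1013904223, m: int = 2**32):
    """A toy linear congruential generator: a purely deterministic
    recurrence whose output is meant to 'look' uniform on [0, 1)."""
    state = seed
    while True:
        state = (a * state + c) % m
        yield state / m

def chi_square_uniformity(samples, bins: int = 10) -> float:
    """Crude chi-square statistic against a uniform distribution:
    small values mean the bin counts look uniform."""
    counts = [0] * bins
    for x in samples:
        counts[min(int(x * bins), bins - 1)] += 1
    expected = len(samples) / bins
    return sum((c - expected) ** 2 / expected for c in counts)

if __name__ == "__main__":
    gen = lcg(seed=2024)
    sample = [next(gen) for _ in range(100_000)]
    # With 10 bins (9 degrees of freedom), values far above ~17 would be suspicious.
    print("chi-square statistic:", chi_square_uniformity(sample))
```

The point of contention was precisely this: the sequence is entirely deterministic, yet for many purposes it passes whatever battery of randomness tests one cares to apply.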

How random should the numbers be?

“As pseudo and as approximate as will do the job,” the users of Monte Carlo answered pragmatically. Each problem has its own requirements. What matters is getting the “right answer” to the problem at hand: above all, the sequence of “random” numbers must show no regularity that corresponds to any structural feature of the problem itself.

Obviously, for certain integration problems, the sequence 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2, 3, 4, 5, … is more than enough. Monte Carlo thus rests on a double simulation: the computer first simulates a randomization, and then Monte Carlo uses the resulting numbers to simulate the physical facts. If the first step disturbed some statisticians and physicists, the second hastened the questioning of the experiment and of the experimenter.
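
The remark about the sequence 1, 2, 3, … can be illustrated with a hypothetical example: for a smooth one-dimensional integrand, evenly spaced deterministic points approximate the integral at least as well as pseudo-random samples do. The integrand below is an arbitrary choice made only for the sake of the comparison.

```python
import math
import random

def f(x: float) -> float:
    # A smooth test integrand; its exact integral over [0, 1] is 2/pi ~ 0.6366.
    return math.sin(math.pi * x)

N = 10_000

# Pseudo-random sampling of the integrand.
rng = random.Random(0)
mc_estimate = sum(f(rng.random()) for _ in range(N)) / N

# A plainly deterministic sequence: evenly spaced points on [0, 1).
det_estimate = sum(f((i + 0.5) / N) for i in range(N)) / N

print("exact:        ", 2 / math.pi)
print("pseudo-random:", mc_estimate)
print("deterministic:", det_estimate)
```

For problems like this the regular grid is in fact the better estimator; the pragmatic criterion of the early users was simply that the sequence, whatever it is, must not resonate with the structure of the problem.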

Implications of the Monte Carlo method for the future of computers and science

For the French physicist Lew Kowarski, the world of the computer console was, by nature, closer to that of miners, oceanographers, and archaeologists than to physics. Many would try to judge these new situations by the old criteria. What is a physicist? What is an experimenter? Is a simulation an experiment? Is someone who accumulates lists of solved equations a mathematical physicist? And, finally, will we begin to use computers as substitutes for experiment itself?

An operations researcher, Gilbert King was one of the spokesmen for the Monte Carlo method. Vaunting its superiority over classical theory in 1949, he stressed that nature is stochastic and that Monte Carlo is its only true model. Two years later he reiterated his position and went even further, affirming that the “computer” should be regarded not as a “slide rule” but as an “organism” that can attack a problem in an entirely new way.

For King, it was clear that the directness of the Monte Carlo method gave it a far more important role than any other numerical method:

“It has become customary to idealize and simplify the mechanisms of the physical world in the form of differential equations, or other types of classical mathematical equations, because the solutions, or the methods of attack, were discovered over the last few centuries with the means generally available – namely pencil, paper, and tables of logarithms. To the classical mathematical physicist these crude methods merely reflect the efforts of men working in the absence of appropriate abstract solutions.”

But King’s conception of the world overturned the epistemic hierarchy of the mathematical physicist.

Where an Einstein or a Maxwell took differential equations as the very goal of physics, King’s supporters saw them as a mere distraction. It is the computer, not the differential equation, that can recreate the random dimension of nature.

The history of Monte Carlo belongs in part to epistemology: it is about a new way of producing knowledge. It is also a story closely tied to the division of labor among scientists: the old professional categories of theorist and experimenter were challenged by an increasingly important and influential group of electrical engineers and, later, programmers.

But it is also a story of fundamental physics, inextricably linked to the development of nuclear weapons; of a science embedded in technology, as the calculating machine passed from the status of tool to that of object of research. When historians look at this story, they see the rise of simulation as a dramatic change in the way science is done – a change as important as the one brought about by the beginnings of experimentation in the eighteenth century, or by the constitution of theoretical physics in the nineteenth.

Simulation was here to stay, and science was never the same.