There is an interesting history behind the relationship between computer programming and physics, especially in astrophysics.

In physics, many problems have no exact, analytical solution: the answer cannot be written down as a closed-form expression. In fact, many of the problems we learn about in school represent special cases where an exact answer can be worked out with pen and paper, using very specific tricks and fancy footwork. Most problems in physics are not so accommodating.

Fortunately, thanks to mathematicians, we have a variety of tools in the form of numerical methods that can approximate, often to very high accuracy, the solutions to many problems in physics. The only drawback is that these techniques require the same calculations to be repeated over and over again - maybe thousands, or even millions of times - before an approximate solution is reached.
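To make that concrete, here is a minimal sketch of one of the simplest such methods, Euler's method, applied to the differential equation dy/dx = y (whose exact solution is y = e^x). This is an illustrative toy of my choosing, not one of the methods discussed later in the text; the point is just how many tiny repeated steps the computer takes.

```python
import math

def euler(f, y0, x0, x1, n):
    """Approximate y(x1) for dy/dx = f(x, y) by taking n small Euler steps."""
    h = (x1 - x0) / n  # step size: smaller steps, better accuracy
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x, y)  # one tiny linear step along the local slope
        x += h
    return y

# dy/dx = y with y(0) = 1 has the exact solution y = e^x,
# so the approximation of y(1) should approach e as n grows.
approx = euler(lambda x, y: y, 1.0, 0.0, 1.0, 100_000)
print(approx, math.e)
```

A hundred thousand arithmetic steps for one modest approximation: trivial for a computer, a career for a human with a pencil.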

Prior to the mid-20th century, anyone who wanted to solve or model a problem using numerical integration had to employ a team (or an unlucky individual) to sit in a room and spend their days grinding through the calculations by hand. Here is a quote on solving the equations of stellar structure from “Structure and Evolution of the Stars” by Martin Schwarzschild (1958):

A person can usually accomplish more than twenty integration steps per day for a set of differential equations… Thus for a typical single integration consisting of, say, forty steps less than two days are needed… the entire numerical work for this fairly typical case can be accomplished by one person in one month.

Two days doesn’t seem that bad, but an *entire month*? To make it worse, he was only talking about the relatively simple case of solving for a single stellar structure:

However, if extensive evolutionary model sequences including a variety of physical complications are to be derived, then numerical integrations by hand may become prohibitive and the advantage of large electronic machines will be incontestable.

Thankfully, by the late 1950s, computers were becoming more prevalent in academic research, and many physicists saw their potential for doing these calculations. A computer could perform arithmetic far faster than any human, without getting tired or making mistakes.

Physicists took advantage of the opportunity early on, and the rise of computers coincided with a growing need for numerical methods in physics. To quote Martin Schwarzschild's 1954 paper, “Numerical Integrations for the Stellar Interior”:

It seems not unlikely that in the future much of the numerical work in the theory of the stellar interior will be done on large electronic computers.

Since then, physicists have had a close relationship with computer programming. Today, it’s rare to call yourself a physicist without having some kind of programming ability, even if just in MATLAB. Many physics departments require their students to take at least one course dedicated to scientific computing, and many other physics and math courses are really programming courses in disguise.

As an undergraduate student, I took a course on Stellar Structures, where we did the exact same numerical integration that Martin Schwarzschild did in his 1954 paper, except that instead of taking a month to get a solution by hand, we got our solutions in less than a second.
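To give a flavour of what such an integration looks like, here is a toy version in Python. Schwarzschild's actual stellar models were considerably more involved; this sketch integrates the classic Lane-Emden equation for a polytropic star, θ″ + (2/ξ)θ′ + θⁿ = 0, using a fourth-order Runge-Kutta scheme. The equation, boundary conditions, and the n = 1 analytic solution θ(ξ) = sin(ξ)/ξ are standard textbook results; the step size and stopping point below are arbitrary choices of mine.

```python
import math

def lane_emden(n, xi_max, h=1e-3):
    """Integrate the Lane-Emden equation theta'' + (2/xi) theta' + theta^n = 0
    outward from the centre with a 4th-order Runge-Kutta scheme,
    returning theta(xi_max)."""
    def deriv(xi, theta, phi):
        # phi = theta'; the max() guards against tiny negative theta
        # from round-off near the stellar surface
        return phi, -max(theta, 0.0) ** n - 2.0 * phi / xi

    # Start slightly off-centre (the 2/xi term is singular at xi = 0),
    # using the series expansion theta ~ 1 - xi^2/6, theta' ~ -xi/3.
    xi = 1e-6
    theta, phi = 1.0 - xi**2 / 6.0, -xi / 3.0
    while xi < xi_max:
        k1t, k1p = deriv(xi, theta, phi)
        k2t, k2p = deriv(xi + h/2, theta + h/2 * k1t, phi + h/2 * k1p)
        k3t, k3p = deriv(xi + h/2, theta + h/2 * k2t, phi + h/2 * k2p)
        k4t, k4p = deriv(xi + h, theta + h * k3t, phi + h * k3p)
        theta += h/6 * (k1t + 2*k2t + 2*k3t + k4t)
        phi   += h/6 * (k1p + 2*k2p + 2*k3p + k4p)
        xi += h
    return theta

# For n = 1 the exact solution is theta(xi) = sin(xi)/xi, so we can check:
print(lane_emden(1, 3.0), math.sin(3.0) / 3.0)
```

Three thousand Runge-Kutta steps, each itself a bundle of arithmetic - exactly the kind of drudgery Schwarzschild's human computers faced, dispatched in a few milliseconds.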