Friday, December 15, 2017

Research Update

Hi Everyone!

It’s been a while since I last posted. I think it’s time to give an update on what I’ve been up to.

As many of you know, I was on leave from Caltech for two years. Caltech fully reinstated me this summer. I decided I’d need a fresh start. I’ll be leaving Caltech at the end of the year.

So what have I been up to? Lots of exciting science. I’ve been going back and forth between Pasadena and Kyoto, working, in parallel, with folks at Caltech and the Yukawa Institute for Theoretical Physics (YITP). Three of the Caltech students I’ve been working with graduated this year. The student I work with at YITP will defend in January.

My collaborators, students, and I put together a whopping 28 research papers (and four PhD theses!) in these two years. I’ve put together an ADS library if you’d like to take a look.

Here are a few recent highlights of our work:

Core-Collapse Supernovae now exploding in 3D!

This blog isn’t titled blowing up stars for nothing. We’ve gone full 3D with our core-collapse supernova simulations. This image was generated by Joey Fedrow (YITP) from a 3D dataset from our most recent set of simulations, which are described in a paper we just submitted to the arXiv. An earlier paper describing our simulation code in more detail came out last year in the Astrophysical Journal: Roberts et al. (2016).

The image is a volumetric rendering of the entropy distribution in the supernova. Entropy measures “disorder” in the system. Hot, disordered regions will have high entropy (red and yellow colors). Cooler, “more ordered” regions have green, cyan, and black colors. The supernova shock front is in cyan. The image frame is about 1200 x 1200 km and shows the innermost part of the collapsing (and exploding!) star. The explosion is just about to begin!

Stars, from a distance, look spherical.  So it was natural for astrophysicists to try to simulate their collapse and explosion assuming spherical symmetry (also called “1D”). The main advantage is that it’s a radical reduction in dimensionality: radius (from the center) is the only spatial coordinate. This saves a lot of computer memory and execution time. Back in the day (we’re talking 1970s-1980s), supercomputers like the Cray-1 were needed to run 1D simulations. Today, they can be done on a laptop! (If you want to try, check out our open-source code GR1D).

The big problem: once 1D simulations got really good and included all the important physics in the early 2000s, they couldn’t explode stars. To be more precise: they couldn’t explode theoretical models of the stars that astronomers observe exploding every night.

The answer to the problem is: stars and supernovae are not spherical at all!

After the initial core collapse, there’s a super-hot newborn neutron star (a “proto-neutron star”) at the center. The outer parts of the original star are piling up on it. Because the gas is so hot and dense, energy is released in neutrinos, extremely weakly interacting elementary particles. But because the density is so high, some of these neutrinos (~10%) are reabsorbed by gas about ~50 km above the proto-neutron star. This heats the gas there, creating convection (hotter bubbles rising, colder bubbles sinking) and turbulence. This is what breaks spherical symmetry and creates the complex shape of the supernova shock wave we see in the image.

It turns out that this breaking of symmetry is what’s crucial for the supernova to explode. In 3D, gas can move in more ways than in 1D (two more degrees of freedom). It can stay longer in the region where the heating is happening. The turbulence that’s driven by the heating also provides extra pressure that helps push the supernova shock outward.

3D core-collapse supernova simulations have become possible only in the last few years with the help of the current generation of supercomputers like Blue Waters (a Cray machine) at the National Center for Supercomputing Applications (NCSA, University of Illinois). In the paper we just posted, we looked at how the initial mass of the star that goes supernova influences collapse and explosion. We used five stellar models with initial masses of 12 to 40 times the mass of our sun. It took over a year to complete these simulations and each used up to 10 Terabytes of main memory and about 10,000 CPU cores.

These four volume renderings show the supernova developing inside four of our simulations. Notice that everything happens really fast -- in less than a second! The time is measured in milliseconds, and we set the zero of time to when the proto-neutron star is formed.

Our results show there’s a complicated relationship between the progenitor star’s mass and the dynamics of the supernova. Less massive stars may actually need more time to explode than their more massive counterparts.

All explosions are quite asymmetric, which agrees with astronomical observations of supernova remnants. This can help explain neutron star and black hole “birth kicks” via recoil. Essentially, the explosion goes off in one direction and the central object recoils in the opposite direction.

Interestingly, the most massive star begins to explode first, but much of its outer material still manages to fall back onto the proto-neutron star. This leads to collapse to a black hole, though we couldn’t simulate long enough to see this happen. The least massive star we studied, a 12-solar-mass star, didn’t develop an explosion in the 0.5 seconds we were able to simulate before we ran out of computer time. It’s likely, however, that it will follow its 15-solar-mass counterpart and just explode a little later.

Binary Black Hole Mergers inside of Stars!

You’ve all heard about LIGO’s exciting discoveries. In September 2015, LIGO observed gravitational waves from a pair of merging black holes more than a billion light years from Earth. Gravitational waves are a prediction of Einstein's General Theory of Relativity that was unconfirmed until LIGO's discovery. LIGO has since seen several more such mergers and even a merger of two neutron stars (here's a full list: List of GW Observations (Wikipedia)). The discovery of gravitational waves earned LIGO pioneers Rai Weiss (MIT), Kip Thorne (Caltech), and Barry Barish (Caltech) the 2017 Nobel Prize in Physics.

Needless to say, LIGO's neutron star merger got astronomers very excited. For the first time, they could point their telescopes and find a counterpart in electromagnetic waves (that is, light: infrared, ultraviolet, gamma rays, etc.) to a gravitational wave observation.

But that's not quite right -- in fact, when the first gravitational waves were observed in 2015 from a pair of black holes, a satellite gamma-ray observatory observed a flash of gamma-rays coming from about the same direction in the sky. The sky is big and lots of things go bump in the night, so it could have been a chance coincidence. Nevertheless, it had to be taken seriously.

The general belief at the time among black hole researchers was that merging pairs of black holes are very old, perhaps a billion years or older. This means they have had plenty of time to suck away any gas that surrounded them at their birth. If there's no gas (atoms and electrons), no counterpart radiation can be created, making the merger part of the "dark side" of the Universe that can be seen only in gravitational waves.

Then an idea was proposed by Harvard Astronomer Avi Loeb: What if the merging black holes formed during the collapse of a very massive star? This can -- in theory -- happen if the star's central region (its "core") is spinning very fast. When it contracts, its spin becomes even faster. In fact, so fast that it goes unstable and breaks into two pieces (“fragments”). If the pieces are massive enough, they collapse to two black holes that then quickly coalesce. That creates the signature gravitational wave signal observed by LIGO. Since all of this happens inside a star, there is now plenty of gas around that can be heated, accelerated, and made to radiate electromagnetic waves. In this way, the gamma-rays observed at the same time as the first gravitational wave could be explained.

I was visiting YITP when Loeb’s paper came out. We were walking back from lunch one day, when the question came up: Wouldn't all that gas in the star have an effect on the gravitational waves that are sent out by the black holes in their death spiral toward merger?

Grad student Joey Fedrow, our collaborators, and I decided to go ahead with a computational experiment: Using the open-source Einstein Toolkit numerical relativity package (supported by the National Science Foundation), we carried out black hole binary merger simulations, placing the binary into a gas environment. We varied the gas density and studied how this affects the dynamics and the gravitational wave signal.

It turns out that the high gas densities inside stars have an enormous effect on the gravitational waves! Here’s an example:

The standard case of two black holes merging in vacuum is plotted in gray. The red curve shows the gravitational waves from the merger inside a star. The density is at the lower end of what’s reasonable for the cores of massive stars. Yet, the effect is huge. As the two black holes orbit each other, their extremely strong gravitational pull drags stellar gas around with them. This transfers energy from the orbiting black holes to the gas, driving the black holes faster together.

This movie shows how the black holes drag the gas around with them:

The gravitational waves output by our simulations show that LIGO's black hole mergers could not have possibly taken place inside stars -- the observed waves would have been very different and much shorter! This pretty much puts an end to speculations that LIGO's black hole pairs could have formed inside a massive star shortly before merger. It's more likely that they formed eons ago in the collapse of two massive stars to a black hole each. This also means that the gamma rays seen in the aftermath of LIGO's first black hole merger may not at all be related to the merger.

Our paper, J. Fedrow et al., was published in Physical Review Letters in October 2017. A preprint is available on the arXiv.

A new Nuclear Equation of State Framework for Astrophysics Simulations

A very important ingredient in core-collapse supernova and neutron star merger simulations is the equation of state (EOS). An EOS relates thermodynamic quantities with each other. For example, it returns the pressure as a function of density, temperature, and gas composition. Any fluid dynamics simulation needs an EOS to close the system of hydrodynamic equations.
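To make this concrete, here is a toy sketch of what an EOS interface looks like in code -- my own illustration with a simple ideal gas, emphatically not a nuclear EOS; the function name and the choice of mean molecular weight are mine:

```python
# Toy ideal-gas EOS -- a hypothetical stand-in to illustrate the
# interface. A real nuclear EOS returns the pressure (and entropy,
# sound speed, composition, ...) from tabulated microphysics.
K_B = 1.380649e-16       # Boltzmann constant [erg/K]
M_U = 1.66053906660e-24  # atomic mass unit [g]

def ideal_gas_pressure(rho, temperature, mu=0.6):
    """Pressure [erg/cm^3] of an ideal gas with density rho [g/cm^3],
    temperature [K], and mean molecular weight mu."""
    return rho * K_B * temperature / (mu * M_U)
```

A fluid dynamics code calls a function like this (or a table interpolation standing in for it) at every grid point and time step to close the hydrodynamic equations.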

The EOS in a supernova or neutron star merger is very complicated. It includes contributions from nucleons (neutrons and protons) and nuclei (that’s why it’s called ‘nuclear’ EOS), electrons, and radiation (photons). The electron and photon contributions are well understood and can be calculated exactly. There’s great uncertainty in the nuclear part, though. Because the density is so high, the nucleons are squished together so closely that the extremely short-range nuclear force becomes important. This force is what holds atomic nuclei together. At extremely short separations, it is very repulsive, keeping nucleons from being squished together further. (N.B.: Without this repulsive part of the nuclear force, there wouldn’t be any neutron stars – stars would just collapse to black holes!)

The problem with the nuclear force is that it’s not well understood. This means the nuclear EOS is uncertain. It depends on the nuclear force model and parameters that one uses. There’s a range of nuclear physics experiments that have put constraints on the EOS. Astronomical observations of neutron star masses and sizes also give some clues. LIGO’s recent observation of a neutron star merger also helped narrow down the EOS.

One way to learn about the nuclear EOS is to carry out supernova and neutron star merger simulations with many different possible EOS descriptions. This gives predictions for the dynamics of these phenomena and of their signals in gravitational waves, neutrinos, and electromagnetic waves. These predictions can then be compared with observations to narrow down the EOS.

The problem with this was that there were only very few (fewer than ~10!) EOS models on the market comprehensive enough to be used in simulations. André da Silva Schneider, a postdoc at Caltech, Professor Luke Roberts (Michigan State), and I started a project about three years ago to fix this.

We decided to build on the work of Jim Lattimer and Doug Swesty, who put together the first open-source nuclear EOS in the early 1990s. For almost a decade, the Lattimer & Swesty EOS (LS EOS) was the only openly available EOS for supernova and neutron star merger simulations.

André led the charge and re-implemented the formalism of Lattimer and Swesty while generalizing it to allow us to more freely choose EOS parameters. The result is this:

The plot shows neutron star mass-radius (M-R; the radius is half the diameter of a spherical object) relationships for thousands of different EOSs, generated with André’s new EOS framework (the Schneider-Roberts-Ott [SRO] EOS). The M-R relationship is a standard thing to look at when comparing EOSs, because we can measure neutron star masses and radii. The yellow and orange regions are mass-radius regions currently allowed within the errors of astronomical observations (the orange region is the astronomer’s best bet). All EOSs shown in the plot obey the constraints from nuclear physics experiments.

The SRO EOS code and pre-generated EOS tables are freely available. André’s work now allows the supernova and neutron star merger communities to explore systematically how changing EOS parameters affects simulations and predicted gravitational wave, neutrino, and electromagnetic signals. The paper describing the SRO EOS has just been published as Schneider, Roberts, Ott (2017) in Physical Review C and is, of course, also available on the arXiv.

Sunday, June 12, 2016

Formaline: The Provenance of Computational Astrophysics Simulations and their Results (Part I)

Key to all scientific research is reproducibility. In areas of science that rely heavily on computer simulations, the open-source approach is essential for allowing people to check the details of the simulation code and for re-running simulations for reproducing results.

Open-source is essential, but it's not enough. Anybody involved in simulations knows that the code used for running the actual simulations often differs in non-trivial ways from the actually released source code. Even when we use version control software like git, too often do we run simulations with a version of the code that is either not tagged or has uncommitted changes. And this does not take into account that complex simulation codes typically have a myriad of parameters that are set individually for each simulation and parameter files typically don't make it into publications or git repositories. A similar issue exists also with the initial conditions (i.e. the input data) from which a simulation is started. Rarely are these available for others to reproduce results.

Provenance is a word used in computational science (and in some other research areas). It describes the full history of a (computational) scientific result. So the provenance of a simulation does not only include the released open-source code, but also the full information on the exact source code that was used, perhaps even the way it was compiled and where it was run, the parameters that were set for the simulation, and all initial conditions and other ingredients that were used in whatever form (and also their respective provenances).
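A minimal first step in this direction -- a sketch, assuming the simulation is launched from inside a git checkout; the helper name and output format are my own choices -- is to record the exact commit and whether the working tree had uncommitted changes:

```python
import subprocess

def git_provenance(repo_dir="."):
    """Return the current commit hash and whether the working tree
    has uncommitted changes -- minimal provenance metadata to store
    alongside simulation output."""
    def git(*args):
        return subprocess.check_output(
            ["git", "-C", repo_dir] + list(args)).decode().strip()
    return {
        "commit": git("rev-parse", "HEAD"),
        "dirty": git("status", "--porcelain") != "",
    }
```

A "dirty" flag in the output immediately tells you (or a colleague, years later) that the stored commit alone does not fully describe the code that was run.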

We have to tackle provenance and reproducibility step by step. In this two-part series of posts, my focus is on preserving the actual source code that was used to run a simulation and produce specific results or to create some other (astro)physics result (for example, a neutrino-matter opacity table used in core-collapse supernova simulations). This can be accomplished by what I will generally call Formaline. Formaline is a chemical compound, a 10% solution of formaldehyde in water, used as a disinfectant or to preserve biological specimens. Formaline for code/simulations preserves source code and simulation/physics metadata. Formaline is an idea by Erik Schnetter, Research Technologies Group Lead at Perimeter Institute for Theoretical Physics. Erik first implemented Formaline for the Cactus Computational Toolkit some time in the early 2000s.

Using Formaline is super easy! (Python example)

Formaline comes in two forms and in both forms it preserves source code: (1) It can include source code in the binary of the simulation code and, upon execution, the code writes the source code (usually in a tar ball) into the output directory. (2) It can include source code and parameter files in HDF5 data files that are now very broadly used for simulation and physics inputs to simulations (such as a neutrino interaction table).

I'll talk about Formaline (1) in my next post. Here, I'll focus on how to implement Formaline (2) and include (and later extract) a source tar ball in/from an HDF5 data file. It turns out that this is extremely easy if you have the h5py and numpy Python packages installed.

Let's consider the case in which you have some standalone code that produces an HDF5 table called table.h5. The table is generated by some Fortran 90 code in and below a directory src, and the build uses a top-level parameter file called parameters, a Makefile, a file with machine-specific build flags, and a README file.

Here is the code that puts all source code into the HDF5 file:

So, after running this script, your HDF5 table table.h5 contains a dataset that holds the tar.gz ball of our code source, input, & parameter files. You can view the table of contents of any HDF5 file with h5ls [filename].

Now, let's say 5 years later, we find table.h5 and want to know how it was created. We can now go and extract the source/input/parameter tar.gz ball! Here is the script that does just that. It first creates an output directory called saved_code to make sure nothing is overwritten.

So that's a super easy way to make results reproducible -- just store the source code and all input/parameter files with the results! I think it's a no-brainer and we all should be doing it. HDF5 is a standard, well documented format that will be readable decades from now -- so your work is safe!

I wrote this implementation of Formaline for the NuLib neutrino interaction library, and a variant of it is bundled with NuLib and available on GitHub.

In my next post, I'll talk about Formaline (1), which includes a source tar ball inside the executable. This is the original incarnation of Formaline that Erik Schnetter came up with.

Friday, April 15, 2016

A Beginner's Super Simple Tutorial for GNU Make

(GNU) Make has been around for a very long time, has been superseded (as some may argue) by CMake, but it is still an incredibly useful tool for... for what? For developing code on the command line. Yes, that's right. We still use editors and terminals to develop code. Integrated "windowy" development environments (IDEs) are great, but often overkill. And it's really hard to run them when you are developing remotely on a supercomputer.

But even a hard-core terminal shell & editor developer needs some help for productivity. Make is a key productivity tool for developing code.

Back in the old days (say just after punch cards were no more; way before my time), most people would write code in one big source file and then would just invoke the compiler to build the executable. So your entire code would be in a single file. Not very convenient for editing. Certainly not convenient for working collaboratively. When codes got bigger, it just became impractical. Compile times became huge, since every time a single line changed in one routine, the entire code needed to be compiled.

So people started breaking up their code into pieces, each of them living in its own source file. That's natural, since code is usually broken down into some kind of logical parts (e.g., subroutines and functions in procedural programming, classes/objects in object-oriented programming). So it makes sense to put logical units of the code into their own source files. But how do you get your executable built from an array of source files? And how do you avoid having to recompile everything all the time?

This is where Make comes in!

Let's assume we are working with Fortran 90+ here and compile things using gfortran (if you like C better, just replace everything Fortran specific with C; or with whateva language you like). Let's say you have three source files: main_program.F90, subroutine_one.F90, and subroutine_two.F90.


Note that I am using upper case F90, instead of the more common f90. The upper case tells the compiler to pre-process the source files before compiling them. In this way, we can use preprocessor directives. We won't use these directives here, but perhaps I'll write a post about them in the future.

So that you can follow along easily, let's actually write these routines:


! main_program.F90
program myprogram

  implicit none

  call subroutine_one
  call subroutine_two

end program myprogram


! subroutine_one.F90
subroutine subroutine_one
  implicit none
  write(*,*) "Hi, I am subroutine one"
end subroutine subroutine_one


! subroutine_two.F90
subroutine subroutine_two
  implicit none
  write(*,*) "Hi, I am subroutine two"
end subroutine subroutine_two

Naively, you could build your program this way:
gfortran -g -O3 -o myprogram main_program.F90 subroutine_one.F90 subroutine_two.F90
So every single time you compile, everything gets compiled. This produces an executable called myprogram. By the way, the options I am using in the compile line are "-g" (add debug symbols) and "-O3" (optimize aggressively -- not really needed here, but I do it out of habit; I want fast code!).

Recompiling is of course no problem if the routines don't do anything of substance as in our example. But let's for a moment assume that each of these routines does something incredibly complicated that takes minutes or longer to compile. You don't want to have to recompile everything if you made a change in only one of the source files. The trick to avoid this is to first build object files (file suffix ".o"), then to link them together and only remake those whose source files changed.

Let Make take care of figuring out what changed. You tell Make what to do by writing a makefile in which you specify what you want to make and what it depends on. Here is the makefile for our example. Call it Makefile, then Make will automatically find it when you type make on the command line. Here is what will be in your makefile:

FC = gfortran
FCFLAGS = -g -O3

myprogram: main_program.o subroutine_one.o subroutine_two.o
	$(FC) $(FCFLAGS) -o myprogram main_program.o subroutine_one.o subroutine_two.o

main_program.o: main_program.F90
	$(FC) $(FCFLAGS) -c main_program.F90

subroutine_one.o: subroutine_one.F90
	$(FC) $(FCFLAGS) -c subroutine_one.F90

subroutine_two.o: subroutine_two.F90
	$(FC) $(FCFLAGS) -c subroutine_two.F90

clean:
	rm -f *.o

There is a very clear structure to this. The first two lines are definitions of variables you are using later -- the compiler and the compiler flags (you could call these variables anything you want). Then you have lines that start with the target on the left, followed by a colon. After the colon, you list the things that the target depends on (the dependencies). This tells Make what needs to be done before working on the current target. So in our case, myprogram depends on a bunch of object files. The rules for making these sub-targets are specified subsequently. One important thing to remember: In the line where you tell Make what to do for a given target (once the dependencies are met and up-to-date), the first character on that line has to be a tabulator character (i.e., you have to hit your "tab" key). Only this gives the actual command the correct indentation (that's something historic, I guess). But clearly this is not a show stopper!

Note the "clean" target -- I added this so that you can easily clean things up.

So, for example, let's say you want to compile your code, but you want to compile it freshly from scratch. The first thing you do is say (on the command line) make clean. This will get rid of all previously compiled object files. Next you just type make and Make will compile and link everything for you! Now you go and work on one of the source files. You save it and then you want to recompile. Just type make. This time only the source file that you changed is recompiled and then linked together with the other object files to make the executable. Voilà!

I hope you see the benefits of using Make now! Make is of course much more powerful than this simple example can tell. You can read more about more advanced uses of Make here and here and here and at many other places that your favorite search engine will find for you.
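As a small taste of that extra power: the example makefile could be written more compactly with GNU Make's pattern rules and automatic variables ($@ stands for the target, $< for the first prerequisite). This is just a sketch of the same build, not something you need for the tutorial:

```makefile
FC = gfortran
FCFLAGS = -g -O3
OBJS = main_program.o subroutine_one.o subroutine_two.o

myprogram: $(OBJS)
	$(FC) $(FCFLAGS) -o $@ $(OBJS)

# one rule covers all .F90 -> .o compilations
%.o: %.F90
	$(FC) $(FCFLAGS) -c $<

clean:
	rm -f myprogram *.o
```

Adding a new source file then only means appending its object file to the OBJS list.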

Before I close, there is one more thing I want to mention: Make is also great for compiling LaTeX documents, in particular if you are using cross-referencing and a bibliography software like BibTeX.
Here is an example makefile for compiling a LaTeX document called paper.tex to paper.pdf with BibTeX. I assume that you have your BibTeX bibliography entries in a file called paper.bib:
paper.pdf: paper.tex paper.bib
    pdflatex paper.tex
    bibtex paper
    pdflatex paper.tex
    pdflatex paper.tex
This should work. Enjoy!

Saturday, January 30, 2016

Simulation vs Modeling

For a long time, I have been thinking about how a clear distinction could be made between "modeling" and "simulation." The answer to this question likely depends on the field of study. I will limit myself to the field of (computational) astrophysics. Many colleagues use "modeling" and "simulation" interchangeably, but I believe that a semantic distinction needs to be made.

For starters, "modeling" sounds less sophisticated/complicated/detailed/involved than "simulation." But this is just purely subjective. Let's go a bit beyond the sound of these words.

A good example that can help us appreciate the differences between simulation and modeling comes from gravitational wave physics: Advanced LIGO needs templates (predictions) of gravitational waves to find the same or similar waves in its data stream. Such predictions can come from detailed, approximation-free simulations that implement Einstein's equations on a computer ("numerical relativity simulations"). Alternatively, they can come from so-called post-Newtonian "models" or from phenomenological "models" that try to approximate the numerical relativity simulations' results. These models are a lot simpler and computationally cheaper than full numerical relativity simulations. So that's why they find frequent use. But their results are not quite as good and reliable as the results of numerical relativity simulations.

Another useful example comes from supernova simulations and modeling of observables: It is not currently possible to simulate a core-collapse supernova explosion of a massive star end-to-end from first principles. The process involves the very last stages of core and shell burning, core collapse, core bounce, the postbounce phase during which the stalled supernova shock must be revived (all occurring within seconds of the onset of collapse), and the long-term propagation (up to days of physical time!) of the re-invigorated shock through the stellar envelope. So, on the one hand, detailed simulations are used to study the mechanism that revives the shock. But these simulations are too computationally intensive to carry out for more than a few hundred milliseconds, perhaps a second in 3D. On the other hand, simpler (often spherically-symmetric [1D]) modeling is applied to predict the propagation, breakout, and expansion of the ejecta and the resulting light curves and spectra. These explosion lightcurve/spectral models (most of the time) start with fake ad-hoc explosions put in at some mass coordinate in various ways.

Here is a slide from a talk on simulation and modeling of gravitational wave sources that I gave at a recent Instituto de Cosmologia y Fisica de las Americas (COFI) workshop in San Juan, Puerto Rico:

I think that the items listed on this slide are broadly applicable to many problems/phenomena in astrophysics that are currently tackled computationally. Ultimately, what we want is to simulate and have fully self-consistent, reliable, and predictive descriptions of astrophysical events/phenomena. This will require another generation of supercomputers and simulation codes. For now, it is a mix of simulation and modeling.

Saturday, March 28, 2015

A super simple Introduction to version control with Git for busy Astrophysicists

Welcome, collaborator! You may be reading this, because you, I, and others are working on some kind of joint document. It could be a manuscript to be submitted to a journal, some kind of vision document, or (as is often the case) a proposal.

Sending around LaTeX files via email is so 1990s. Let's not do this. Let's also not use Dropbox. Dropbox is great for syncing files between computers and serves as a great backup solution. But it's terrible for collaborating on text documents -- it can't handle conflicts (two people editing the same file at the same time creates a second, "conflicted" copy of the file in the Dropbox; Dropbox won't merge text for you).

Let's move on and use version control. It will make working together on the same document infinitely easier and will minimize the time we have to deal with merging different versions of the file we are working on together. There are multiple version control packages. For our project we will use git. You can learn more about git and version control in general online. If you are unconvinced, read this: Why use a Version Control System?

Here is how it works:

(1) Preparations
  1. Make sure you have git installed on your computer (I presume you have either a Mac or a PC that runs a flavor of Linux). Just open a terminal, type git, hit enter, and see if the command is found. If not, you need to install git.

  2. Make and send me a password-protected public ssh key as an email attachment (not cut & paste!). The standard path and file name for this is ~/.ssh/ If you don't have such a key, or you don't remember whether it's password protected (or you don't remember your password...), type on the command line:
    ssh-keygen -t rsa
    and follow the instructions. By all means, give your key a strong password. Then send me the file ~/.ssh/ as an email attachment (do not cut & paste it into the email body) 

(2) Cloning the repository from the server
By now I have told you details about the repository and sent you the command needed to clone it from the server. Go into your preferred working directory (that's where you keep projects, papers, proposals, etc.) and type the git clone command I sent you.
This will clone the repo -- in other words, it will create a directory with the repository name in your working directory and will download the contents of the repo into it. If this does not happen and instead you are asked to enter a password, then your ssh key is not loaded. In this case, load your ssh key like this:
ssh-add ~/.ssh/id_rsa
Then try again. If this still does not do the trick, or if you get an error message about no "ssh-agent" running, then try
This will spit out a few lines of text. Copy them and paste them back into the prompt, hit enter, then repeat
ssh-add ~/.ssh/id_rsa
This should do the trick!

(3) Pulling in changes.
Before starting to work on a section of the document, you need to make sure to get the most recent changes. You must absolutely always do this before starting to work! Go into the repository directory.
git pull -r 
(the "-r" is to make the pull a "rebase" -- don't worry about the meaning of this, just do it).  

(4) Committing and pushing changes.
Whenever you have completed a significant addition or change to the document (a completed or edited paragraph, say; something that takes no longer than about 30 minutes of work), you need to commit and push your changes so that others can pull them onto their computers.
git add [FILE]
git commit -m "Your commit message goes here"
git pull -r           # this is to pull in any changes by others!
git push
Please write sensible commit messages so that the others know what you changed!
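As a safe way to practice the cycle, here is a sketch that runs one add/commit round in a throwaway repository. The path /tmp/git-demo and the file name are made up, and since there is no remote, the pull and push steps are omitted.

```shell
# One edit -> add -> commit round in a scratch repository under /tmp/git-demo.
rm -rf /tmp/git-demo && mkdir /tmp/git-demo && cd /tmp/git-demo
git init -q
echo "First paragraph of the draft." > notes.txt
git add notes.txt
# user.name/user.email are set inline so the sketch runs on a fresh machine:
git -c user.name=Demo -c user.email=demo@example.com \
    commit -q -m "Add first paragraph of notes"
git log --oneline   # one line per commit: abbreviated hash plus message
```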

(5) Looking at the history of changes and checking the status of your repo.
That's very easy! First pull in all recent changes via git pull -r. Then use git log to browse through the repository history. Try git status to see what the status of your repo is. This will tell you which files have changed and should be committed. It will also tell you which files you have not yet added (i.e. "untracked" files).

(6) Golden Rules
Working with version controlled documents is easy and painless as long as you obey two simple rules:
  1. Always git pull -r before starting to edit.
  2. Commit and push extremely frequently:
    git add [FILE]
    git commit -m "[COMMIT MESSAGE]"
    git pull -r
    git push
If you stick to these rules, working collaboratively on the same document will be easy and conflicts will be rare.

(7) Adding additional files to the repo
That's easy!
git add [FILENAME]
git commit -m "Added file [FILENAME] to repo"
git pull -r
git push
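A quick way to see the difference between untracked and added files is git status --short in a scratch repository (the path and file name below are made up):

```shell
# "??" marks an untracked file; "A " marks a file staged with git add.
rm -rf /tmp/add-demo && mkdir /tmp/add-demo && cd /tmp/add-demo
git init -q
echo "placeholder" > figure1.txt
git status --short   # prints: ?? figure1.txt
git add figure1.txt
git status --short   # prints: A  figure1.txt
```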

(8) Resolving merge conflicts
Oh boy. Somebody was editing the same portion of text you were editing (did you perhaps not commit and push frequently enough?!?). You now need to resolve the conflict. Let's do this by example. Say you were editing a file; you added, committed, then pulled, and ended up with the following git merge error:
remote: Counting objects: 5, done.
remote: Total 3 (delta 0), reused 3 (delta 0)
Unpacking objects: 100% (3/3), done.
   23ae277..b13d53f  master     -> origin/master
Auto-merging [FILENAME]
CONFLICT (content): Merge conflict in [FILENAME]
Automatic merge failed; fix conflicts and then commit the result.
Good luck! You now need to resolve the conflict. Let's look at the (hypothetical) file in question.
$ cat [FILENAME]
<<<<<<< HEAD
(the version from the remote repository)
=======
(your local version)
>>>>>>> b13d53f9cf6a8fe5b58e3c9c103f1dab84026161
git has conveniently inserted markers showing what the conflict is. The stuff right beneath <<<<<<< HEAD is what is currently in the remote repository, and the stuff just below the ======= is what we have locally. Let's say you know your local change is correct. Then you edit the file and remove the remote version along with all the conflict markers around it.
$ cat [FILENAME]
(your local version)
Next you git add and git commit the file, then you execute git push to update the remote with your local change. This resolves the conflict and all is good!
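If you want to practice reading conflict markers without risking the real document, the following sketch manufactures a conflict in a throwaway repository (all paths, branch names, and file contents are made up):

```shell
# Create two branches that edit the same line, then merge them to force a
# conflict you can inspect safely.
rm -rf /tmp/conflict-demo && mkdir /tmp/conflict-demo && cd /tmp/conflict-demo
git init -q
G="git -c user.name=Demo -c user.email=demo@example.com"
echo "original line" > draft.txt
git add draft.txt
$G commit -q -m "Initial draft"
git branch -M master          # fix the branch name regardless of git version
git checkout -q -b other
echo "their version" > draft.txt
$G commit -q -am "Their edit"
git checkout -q master
echo "your version" > draft.txt
$G commit -q -am "Your edit"
git merge other || true       # fails with "CONFLICT (content)", as intended
cat draft.txt                 # shows the <<<<<<< / ======= / >>>>>>> markers
```

Editing draft.txt down to the version you want and then running git add, git commit, and git push is exactly the resolution procedure described above.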

Wednesday, May 14, 2014

Be Scientifically Limitless

Don't misunderstand the title of this post!

I don't mean to say there should be no limits on how you, the reader of this post, do science. There are the very clear boundaries of the scientific method and of professional conduct that you must respect.

What I mean to say is: Do not limit yourself in what you can achieve. Do not let yourself be limited by circumstances or your environment.

More often than I would like, I encounter the following self-limiting behavior:

A person is confronted with a problem (perhaps a scientific problem, or a coding problem, or something else). They realize that it is hard, perhaps harder than what they have done before. They also have no experience in solving such a problem. Perhaps they have seen others fail trying to solve similar problems. Some give up at this point.

Others might try for a bit, fail to make progress, get frustrated and quit trying. They often don't ask for help, for a variety of reasons, the two major ones being: (1) They might think they are just stupid/inexperienced and should learn to solve problems on their own. (2) They are afraid of what others might think about them if they ask for help and let "others solve their problems too frequently".

Always ask for help. Those that ask are the ones that do things and go places. If you don't ask for help, you limit what you can achieve.

Another thing I have seen too often: people tend to put others into comfortable little virtual mind boxes based on what they think about them and, perhaps, what their history has been. This then limits what the person in the box thinks they can and should do, now and in the future. Their achievements remain confined to the boundaries of the box they have been placed in. You need to realize that these boundary conditions are made up. They are made up by people no smarter than you (despite what you may think!). You can and should and must break through them. Liberate your thoughts and dreams!

Here are two very inspirational YouTube videos, excerpts from a longer 1994 interview with Steve Jobs. You can think what you want about this man; the advice he gives is brilliant and invaluable.

On asking for help:

On not limiting yourself:

Sunday, April 27, 2014

Nature has it in for us: Supernovae from Massive Stars in 3D

[A version of this post has appeared in the Huffington Post. It is a more generally accessible discussion of our recent Astrophysical Journal Letters paper: Mösta, Richers, Ott et al., The Astrophysical Journal, 785, L29 (2014). ADS]

We don't know what precisely happens when a massive star -- about ten times the mass of our Sun or more -- first collapses and then goes supernova and leaves behind a neutron star or a black hole. The explosion expels the products of stellar evolution into the interstellar medium from which, ultimately, new stars and planets are made. These products, carbon, oxygen, silicon, calcium, magnesium, sodium, sulfur, and iron (among other elements), are the elements of life. It's arguably quite important to understand how this works, but we don't. At least we don't in detail.

The supernova problem is so complex and rich that computer simulations are crucial to even begin to formulate answers. This sure hasn't kept people from trying.  Los Alamos National Laboratory maverick Stirling Colgate (1925-2013; "I like things that explode: nukes, supernovae, and orgasms.") was one of the first to simulate a supernova in the 1960s (think: punch cards).  Over the following decades, computers got much faster, and simulations got better and better.

Today's supercomputers are more than 100 million times more powerful than the computers of the 1960s. Yet we are still struggling with this problem.

We are witnessing a revolution in dimensionality -- we are finally able to simulate stars in three dimensions:

For decades, it was possible to cram all the complex physics of supernovae only into simulations that were spherically symmetric. Assuming that stars are spherical, the computer codes had to deal with only one spatial dimension, described by the radial coordinate inside the star. It turns out that this is a very bad approximation. Stars are not spherical cows. They rotate, they have regions that are convective (buoyant hot bubbles moving up, cold ones moving down) and turbulent, and they have magnetic fields, which are fundamentally nonspherical.

Two-dimensional (2D) simulations were the next step up from spherical symmetry. They became possible in the early to mid 1990s. In such a simulation, a 2D slice (think of the x-y plane in a graph) of the star is simulated and assumed to be symmetric under rotation about one of the axes. If a star's core is rotating rapidly and has a strong magnetic field, then 2D simulations show that such stars can explode in a "magnetorotational" explosion. Such an explosion is mediated by a combination of rapid rotation and a strong magnetic field that pushes out matter symmetrically along the rotation axis in jets that bore through the stellar envelope in the north and south directions.  This is called a bipolar explosion.  Such an explosion can be very energetic and could possibly explain some extremely energetic supernova explosions that make up about 1% of all supernovae and are referred to as hypernovae.

But not so fast. Nature tends to be 3D. And Nature has it in for us.

The current generation of supercomputers is the first to allow us to simulate supernovae in all three dimensions without the limiting assumption that there is some kind of symmetry. Studying supernovae in 2D was already quite exciting and we thought we'd already understood how things like rotation, magnetic fields, or convection affect a supernova.  So when we set out last year to simulate a magnetorotational supernova in 3D (read about the results in Mösta et al. 2014, ApJ 785, L29), we had an expectation bias: such explosions worked beautifully and robustly in 2D. We expected them to work quite similarly in 3D, perhaps with some slight, hopefully interesting variations about the general theme of a jet-driven explosion.

Colormap of a slice through the meridional plane of a rapidly rotating magnetized stellar core. The left panel shows the axisymmetric (2D) simulation, while the three slices to the right show the full 3D simulation without symmetry constraints at different times. The color coding is specific entropy. High values (indicated by red and yellow regions) of this quantity can be interpreted as "hot", "low-density", and "disordered". 2D and 3D yield fundamentally different results.

We were wrong. Nothing is perfect in nature. Even a quickly spinning star is not perfectly symmetric about its spin axis. There will always be small local variations and if there is some amplifying process around (an "instability"), they can grow. And that is precisely what we found in our 3D simulations. An instability grows from small variations from rotational symmetry. It distorts, twists, crumples, and ultimately destroys the jets that quickly and energetically blow up 2D stars. In 3D, we are left with something fundamentally different that we are only just beginning to understand. It's certainly not the runaway explosion we were looking for.

Volume rendering of the specific entropy distribution in our 3D magnetorotational supernova simulation. Red and yellow regions indicate high entropy ("high disorder", "high temperature", "low density"). The flow structure is fundamentally aspherical. Two large-scale polar lobes have formed that slowly move outward, but are not yet running away in an explosion. This snapshot is from about 180 milliseconds after the proto-neutron star is made.
Here is a YouTube movie from our SXS collaboration YouTube Channel. It shows the 3D dynamics that drive the supernova towards the state shown in the above picture. The color coding is again specific entropy. Blue and green regions are "cold/ordered/high-density", yellow and red regions are "hot/disordered/low-density". (Viewing tip: switch to HD and watch the movie full screen!)

Here is another movie, this time showing what is called the plasma β parameter, the ratio of gas pressure to the effective pressure exerted by the magnetic field. Small values of β mean that the magnetic field dominates the pressure (and thus drives the dynamics). Regions in which this is the case are color-coded in yellow in the movie below. Dark colors (black/blue) indicate dominance of fluid pressure; in red regions, the magnetic field plays a role but does not dominate.
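In symbols (and in the cgs units astrophysicists typically use), the plasma β compares the gas pressure to the magnetic pressure:

```latex
\beta = \frac{P_{\mathrm{gas}}}{P_{\mathrm{mag}}}, \qquad
P_{\mathrm{mag}} = \frac{B^2}{8\pi}
```

So β ≪ 1 marks the magnetically dominated (yellow) regions, while β ≫ 1 marks the regions where gas pressure rules.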

And we should have known!

When a spinning magnetized star collapses, its magnetic field lines get wound up tightly about the star's spin axis, sort of like a tight coil or a phone cord. Plasma physics laboratory experiments showed long ago that such magnetic fields are unstable to a "kink" instability. If one introduces a small kink that pushes the field lines further apart on one side, those on the other side are compressed, creating a stronger magnetic field there. This increases the force that is exerted on the stellar material, pushing it to the other side. As a result, the small kink is amplified into a bigger kink. In this way, a small microscopic deviation from symmetry becomes macroscopic and globally changes what is happening in the supernova. This instability is fundamentally 3D -- in 2D, there is no "other side," since everything is forced to be rotationally symmetric about the spin axis.

Now that we know what happens in 3D, we feel like we should have known. We should have anticipated that the magnetic field in our star would go kink-unstable -- it's really textbook physics. In fact, it's the same problem that plasma physicists struggle with when trying to get thermonuclear fusion to work in Tokamak reactors!

The final word about what ultimately happens with our 3D magnetorotational supernova is not yet spoken. It could be that the explosion takes off eventually, blows up the entire star, leaving behind the central neutron star.  It's also possible that the explosion never gains traction and the stellar envelope falls onto the neutron star, which will then collapse to a black hole. We'll see.  We are pushing our simulations further and are ready for more surprises.

Meet the Team:

Science is a team sport, and this is true in particular for the kind of large-scale, massively parallel simulations that our group at Caltech is pushing. The full author list of Mösta et al., The Astrophysical Journal, 785, L29 (2014) is: Philipp Mösta, Sherwood Richers, Christian Ott, Roland Haas, Tony Piro, Kristen Boydstun, Ernazar Abdikamalov, Christian Reisswig, and Erik Schnetter.

Everybody on this team made important contributions to the paper, but I would like to highlight the roles of the first two people in the author list:

Philipp Mösta
Philipp Mösta is a postdoc in TAPIR at Caltech and part of our Simulating eXtreme Spacetimes program. He is funded primarily by a National Science Foundation Astronomy & Astrophysics research grant (NSF grant no. AST-1212170). Philipp received his PhD in 2012 from the Albert Einstein Institute (the Max Planck Institute for Gravitational Physics) in Potsdam, Germany. Philipp spent the past two years adding magnetic fields to our code and making the entire 3D simulation machinery work for these extremely demanding magnetorotational supernova simulations. His previous training was primarily in numerical relativity, but he picked up on supernova physics with an impressive pace. Philipp carried out all simulations that went into our new paper and he worked closely with grad student Sherwood Richers on analyzing them.

Sherwood Richers
Sherwood Richers is currently a second-year graduate student in physics at Caltech. Sherwood is independently funded by a Department of Energy Computational Science Graduate Fellowship (CSGF) and we are delighted that he has chosen to collaborate with us. Sherwood received his undergraduate degree from the University of Virginia, where he carried out research on magnetohydrodynamics (MHD) with John Hawley. Sherwood is a real expert in all things MHD and he and postdoc Tony Piro are the ones who pointed out that what we are seeing in our 3D magnetorotational supernova simulations is most likely an MHD kink instability. Sherwood also participated in visualizing our simulation output and he is the one responsible for the pretty pictures and movies that we were able to produce from our simulation data!