Helen Meskhidze, (What) do we learn from code comparisons?
There has been much interest in the recent philosophical literature in increasing the reliability and trustworthiness of computer simulations. One method used to investigate the reliability of computer simulations is code comparison. Gueguen, however, has offered a convincing critique of code comparisons, arguing that they face a critical tension between the diversity of codes required for an informative comparison and the similarity required for the codes to be comparable at all. In this talk, I present the scientific and philosophical results of a recent collaboration designed to address Gueguen's critique. Our interdisciplinary team conducted a code comparison of two different implementations of self-interacting dark matter. I first present the results of the code comparison itself. I then turn to its methodology and argue that the informativeness of this particular code comparison was due to its targeted approach and narrow focus. The targeted approach (comparing only the dark matter modules) yielded simulation outputs that were diverse enough for an informative comparison and yet still comparable. Understanding the comparison as an instance of eliminative reasoning narrowed its focus: we could investigate whether code-specific differences in implementation contributed significantly to the results of self-interacting dark matter simulations. Based on this case study, I argue that code comparisons can be conducted in such a way that they serve as a method for increasing our confidence that computer simulations are, in Parker's sense, adequate-for-purpose.