2022 - 2023 Seminar Series

What Machine Learning Tells Us About the Mathematical Structures of Concepts

“What are concepts?” is one of the fundamental questions in philosophy, and Aristotle, Kant, Wittgenstein, and others have proposed different models of concepts in answer to it. Meanwhile, the machine-learning literature has developed powerful methods for learning representations from data, as well as mathematical models for analyzing such representations. This presentation aims to bridge these two traditions by identifying mathematical models and machine-learning counterparts of the (1) Aristotelian, (2) Wittgensteinian, (3) Functional, and (4) Symmetry-based theories of concepts.
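
To give a rough sense of how such counterparts might look, here is a purely illustrative Python sketch (not taken from the talk) contrasting an Aristotelian model of a concept, as necessary and sufficient conditions, with a Wittgensteinian, family-resemblance-style model over learned feature vectors; the feature space, thresholds, and exemplars below are hypothetical.

```python
import numpy as np

# Purely illustrative sketch (not from the talk): two toy ways of modelling
# membership in a concept over learned feature vectors.

def aristotelian_member(x, conditions):
    """Aristotelian-style concept: membership requires satisfying every
    necessary-and-sufficient condition (here, simple feature thresholds)."""
    return all(cond(x) for cond in conditions)

def prototype_member(x, exemplars, radius):
    """Wittgensteinian-style concept: membership by family resemblance,
    modelled crudely as closeness to at least one known exemplar."""
    return any(np.linalg.norm(x - e) < radius for e in exemplars)

# Hypothetical 2-D "representation space" for a concept such as 'bird'.
x = np.array([0.9, 0.2])
conditions = [lambda v: v[0] > 0.5, lambda v: v[1] < 0.5]    # e.g. "has wings", "lays eggs"
exemplars = [np.array([1.0, 0.1]), np.array([0.8, 0.3])]     # e.g. sparrow, robin

print(aristotelian_member(x, conditions))          # True
print(prototype_member(x, exemplars, radius=0.3))  # True
```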


Towards an Artificial Muse for New Ideas in Physics

Artificial intelligence (AI) is a potentially disruptive tool for physics and for science in general. One crucial question is how this technology can contribute at a conceptual level, helping scientists acquire new understanding or inspiring surprising new ideas. I will talk about how AI can be used as an artificial muse in quantum physics, one that suggests surprising and unconventional ideas and techniques which the human scientist can interpret, understand, and generalize to their fullest potential.

(What) Do we learn from code comparisons? A case study of self-interacting dark matter implementation

There has been much interest in the recent philosophical literature in increasing the reliability and trustworthiness of computer simulations. One method used to investigate the reliability of computer simulations is code comparison. Gueguen, however, has offered a convincing critique of code comparisons, arguing that they face a critical tension between the diversity of codes required for an informative comparison and the similarity required for the codes to be comparable. In this talk, I will present the scientific and philosophical results of a recent collaboration that was designed to address Gueguen's critique. Our interdisciplinary team conducted a code comparison to study two different implementations of self-interacting dark matter. I first present the results of the code comparison itself. I then turn to its methodology and argue that the informativeness of this particular code comparison was due to its targeted approach and narrow focus. The targeted approach (comparing only the dark matter modules) allowed for simulation outputs that were diverse enough for an informative comparison and yet still comparable. Understanding the comparison as an instance of eliminative reasoning narrowed the focus: we could investigate whether code-specific differences in implementation contributed significantly to the results of self-interacting dark matter simulations. Based on this case study, I argue that code comparisons can be conducted in such a way that they serve as a method for increasing our confidence that computer simulations are, in Parker's sense, adequate-for-purpose.
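
As a purely illustrative sketch of what a targeted output comparison can look like (this is not the collaboration's actual pipeline, and the profiles below are hypothetical stand-ins), one might compare a single derived quantity from the two codes, here a dark matter density profile, bin by bin:

```python
import numpy as np

# Illustrative sketch only: a targeted code comparison restricted to one
# derived quantity, the dark matter density profile, produced by two codes
# run on matched initial conditions. The arrays are hypothetical stand-ins
# for real simulation outputs.

radii = np.logspace(-1, 1, 20)                    # shared radial bins (arbitrary units)
rho_code_a = 1.0 / (radii * (1.0 + radii) ** 2)   # placeholder profile from code A
rho_code_b = rho_code_a * (1.0 + 0.05 * np.random.default_rng(0).normal(size=radii.size))

# Fractional difference per bin: if it stays within the expected numerical
# scatter, code-specific implementation details are unlikely to be driving
# the self-interacting dark matter results.
frac_diff = (rho_code_b - rho_code_a) / rho_code_a
print(f"max |fractional difference| = {np.max(np.abs(frac_diff)):.3f}")
```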


The Use of AI in Scientific Evaluation

The standard peer-review process has been extensively criticized as time-consuming and costly. Moreover, it is questionable how objective the evaluators are and how well they can predict scientific success. The use of AI in grant review is appealing because automation brings quicker, cheaper, and potentially more objective results. 

Before applying AI to the challenging task of predicting scientific success, we first need to analyze two questions: can we successfully use AI in the process of scientific evaluation, and is it morally permissible to do so? The first question concerns the epistemic component and the second the ethical component of the algorithmic grant-review process.

I will present the results of a pilot study on algorithmic grant review in high-energy physics (Sikimić and Radovanović 2022) and critically assess the benefits and limitations of implementing such an approach in practice. Some of the requirements for the responsible use of AI in grant review concern data handling and securing algorithmic fairness. Moreover, unobservable variables such as team cohesion or the motivation of the researchers play a role in project success. These considerations open the door to hybrid approaches in which AI is used alongside the standard peer-review process.
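
As a minimal, hypothetical sketch of what an algorithmic grant-review model can and cannot see (this is not the pipeline of Sikimić and Radovanović 2022; the features and labels are invented), consider a simple classifier trained only on observable team features:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical sketch: predicting project "success" from observable team
# features only. Unobservable factors such as team cohesion or researcher
# motivation never enter the feature matrix, which is part of the problem.

rng = np.random.default_rng(42)
n_projects = 200
X = np.column_stack([
    rng.integers(2, 30, n_projects),     # team size
    rng.integers(0, 200, n_projects),    # prior publications
    rng.uniform(0.0, 20.0, n_projects),  # years of PI experience
])
y = rng.integers(0, 2, n_projects)       # placeholder success labels

model = LogisticRegression(max_iter=1000)
scores = cross_val_score(model, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```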


A Theoretical Account of Interdisciplinarity and Its Consequences

Interdisciplinarity remains a principal concern and priority of funding agencies, as well as of science policy-makers and university administrators. While there are other possible forms of cross-disciplinary interaction, when interdisciplinarity or transdisciplinarity is invoked, policy-makers and others almost always think of interdisciplinary interaction in terms of collaborations amongst researchers from different disciplines. As we will see, science policy-makers and funding bodies have specific reasons for treating collaboration as essential to interdisciplinarity, and they have specific expectations about what collaborative interdisciplinarity should produce, namely solutions to specific applied or real-world problems through integration and methodological innovation. But these preferences and expectations are challenged by the reality that much research which might nominally be called interdisciplinary, or which happens by virtue of interdisciplinary funding, tends not to produce novel methodological outcomes or substantial integration. When it does happen, interdisciplinarity seems fundamentally conservative, favouring relatively minor modifications to practices over larger transformations.

My position is that current interdisciplinarity policies fail in these respects insofar as they have not been developed from deep accounts of scientific practice. Philosophy of science, however, has developed its own theoretical accounts of practices and their cognitive structure in other contexts, and these can be applied to model collaborative interdisciplinarity and to form reasonable expectations about it. They include Chang's systems view of practice, epistemic-landscape models of scientific investigation, and Humphreys's account of model templates. Conjointly, these help explain why institutional reforms and incentives often fail to produce the high-level integration policy-makers generally expect and, more fundamentally, they challenge the seemingly unquestioned notion that "collaboration" is somehow the best route to interdisciplinary innovation.

Evidence, Computation, and AI: Why Evidence Ain't Just in the Head.

Can scientific evidence outstretch what scientists have mentally entertained, or could ever entertain? This paper focuses on the plausibility and consequences of an affirmative answer in a special case. Specifically, it focuses on how we may treat automated scientific data-gathering systems—especially AI systems used to make predictions or to generate novel theories—from the point of view of confirmation theory. It uses AlphaFold2 as a case study. 

Colliding Black Holes, Exploding Stars and the Dark Universe: Finding what you are looking for?

The detection of gravitational waves, ripples in the fabric of spacetime, from pairs of black holes has opened up the possibility of observing the Universe in entirely new ways. To measure the minute effects of gravitational waves, one requires a priori knowledge of the signal. This a priori knowledge, in turn, depends heavily on numerical simulations, which are based on approximations and assumptions. Using the detection of gravitational waves as an example of modern, large-scale physics experiments, we will explore the question of whether we are increasingly able to confirm only our a priori knowledge, and the challenge this poses for future scientific breakthroughs.
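
To make the role of this a priori knowledge concrete, here is a minimal matched-filter sketch in Python (illustrative only; the toy chirp template, white noise, and normalisation are assumptions, not the actual detection pipeline): the data are correlated against template waveforms one has decided in advance to look for.

```python
import numpy as np

# Minimal, illustrative matched-filter sketch: gravitational-wave searches
# correlate detector data against template waveforms supplied by theory and
# numerical simulations, so what can be found depends on the templates used.

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 4096)

# Toy "chirp" template: frequency and amplitude rise towards merger.
template = np.sin(2.0 * np.pi * (30.0 + 60.0 * t) * t) * t
data = 0.2 * template + rng.normal(scale=1.0, size=t.size)  # weak signal buried in noise

# Sliding correlation of the data with the template, normalised so the
# output is roughly in units of the noise standard deviation.
stat = np.correlate(data, template, mode="same") / np.sqrt(np.sum(template ** 2))
print(f"peak matched-filter statistic: {stat.max():.2f}")
```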

On the Role of Philosophy for the Science of Explainable AI

AI systems are often accused of being ‘opaque’, ‘uninterpretable’ or ‘unexplainable’. Explainable AI (XAI) is a subfield of AI research that seeks to develop tools for overcoming this challenge. To guide this field, philosophers and AI researchers alike have looked to the philosophy of explanation and understanding. In this paper, I examine the relation between philosophy and this new Science of Explainable AI. I argue that there is a gap between typical philosophical theories of explanation and understanding and the motivations underlying XAI research: such theories are either too abstract, or else too narrowly focused on specific scientific contexts, to address the varied ethical concerns that motivate XAI. I instead propose an alternative model for how philosophers can contribute to XAI, focused on articulating “mid-level” theories of explainability, i.e., theories which specify what kinds of understanding are important to preserve and promote in specific contexts involving AI-supported decision making. This programme, which I call philosophy for the science of XAI, is conceived as an inherently interdisciplinary endeavour, integrating normative, empirical and technical research on the nature and value of explanation and understanding.

Two dimensions of opacity and the deep learning predicament.

Deep neural networks (DNNs) have become increasingly successful in applications from biology to cosmology to social science. Trained DNNs, moreover, correspond to models that ideally allow the prediction of new phenomena. Building in part on the literature on ‘eXplainable AI’, I here argue that these models are instrumental in a sense that makes them non-explanatory, and that their automated generation is opaque in a unique way. This combination implies the possibility of an unprecedented gap between discovery and explanation: when unsupervised models are successfully used in exploratory contexts, scientists face a whole new challenge in forming the concepts required for understanding underlying mechanisms.