2023 - 2024

March 22, 2024

7:30 AM EDT / 11:30 AM GMT / 7:30 PM HKT

Recording coming soon!

Theory-mediated detection of novel phenomena in astrophysics: the case of the photon ring

This talk will examine the opportunities and pitfalls of theory-mediated measurement in astrophysics. I locate the main danger not in the use of models of the target phenomena, but rather in the methodological context in which these models are deployed. To illustrate this, I zoom in on a recent controversy among astronomers concerning attempts to detect the photon ring. I provide an account of what went wrong in this "detection" in conversation with other cases of (attempted) theory-mediated detection of novel phenomena in astrophysics, in particular the retracted gravitational-wave detection claim by BICEP2 and the successful gravitational-wave detection claim by LIGO-Virgo.

Objectivity and Citizen Science

Citizen science (also called community science or participatory science) refers to the participation of the general public in science as researchers. Natural scientists broadly celebrate the potential of citizen science to generate data at previously undreamed-of scale (and at little cost). However, they have also worried about the quality of research conducted by participants who lack scientific training, and have questioned the objectivity of contributors who may be driven by value-laden goals. In this talk I link these worries to philosophical models of objectivity and of the appropriate relationship between science and social values. I argue for a thoroughly social model, conceptualizing citizen science as collaboration across knowledge communities.

Closing the Attribution Gap: A Framework for Attributing Credit for AI-Generated Outputs

The recent wave of generative AI (GAI) systems like Stable Diffusion, ChatGPT or CoPilot that can produce images, text and code from human prompts raises controversial issues about creatorship, originality, creativity and copyright. This paper focuses on creatorship: who creates and should be credited with the outputs made with the help of GAI? There is currently significant moral, legal and regulatory uncertainty around these questions. We develop a novel framework, called CCC (collective-centered creation), that helps resolve this uncertainty. On CCC, GAI outputs are created by collectives in the first instance. Claims to creatorship come in degrees and depend on the nature and significance of individual contributions made by the various agents and entities involved, including users, GAI systems, developers, producers of training data and others. We demonstrate how CCC can help navigate a range of ongoing controversies around the responsible development and deployment of GAI technologies and help more accurately attribute credit where it is due.

The epistemology of approximations in computer intensive science

An approximation is only as good as its error. This is the central epistemological claim in computer-intensive science. The more compute one has, the smaller the error can be made, and according to Moore's law compute increases exponentially. Approximations, then, should soon be perfect. Or should they? Here I will argue that compute and the scaling of error are not all there is to the epistemology of approximations in computer-intensive science. Not all computational errors are created equal, and not all computational errors can be reduced equally. Whether a computational error scales with compute depends on the amount of control we have over it. We gain such control only by understanding the processes that create the error and by correcting for them. I will use the case of Monte Carlo methods to show where errors can be tightly controlled for and where, within the same method, some slack has to be allowed. I argue that the least-controlled error determines the overall quality of the approximation.
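As a toy illustration of the error scaling the abstract alludes to (this sketch is not part of the talk), a Monte Carlo estimate of π shows statistical error shrinking roughly as 1/√N, so more compute buys accuracy only at a diminishing rate:

```python
import math
import random


def estimate_pi(n, seed=0):
    """Monte Carlo estimate of pi: the fraction of uniform random points
    landing inside the unit quarter-circle, times 4."""
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(n)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4.0 * hits / n


# Statistical error shrinks roughly as 1/sqrt(n): quadrupling the
# sample size only about halves the expected error.
for n in (1_000, 4_000, 16_000, 64_000):
    print(n, abs(estimate_pi(n) - math.pi))
```

This shows only the well-controlled statistical component of the error; systematic errors (e.g. a biased random number generator) would not shrink with sample size at all, which is the distinction the talk draws.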

Mapping Possibilities: Philosophy of Science and Scenarios of the Future 

This paper argues for the value of using philosophy of science to construct scenarios about possible futures of science. Scenarios are structured descriptions of hypothetical future situations and events. Philosophy provides theoretical resources for scenario construction by offering accounts of scientific change. Different philosophical theories lead to different scenarios, revealing unique possibilities for the future.

Scenarios do not predict but rather map possibilities to allow preparedness. Their philosophical grounding enables systematic and reasoned scenario development. This regulates imagination and counters pure speculation.

Comparing scenarios based on competing theories enhances understanding of the scope of possible futures. It also facilitates critical reflection on philosophical accounts when interpreting scenarios. Scenario analysis aligns with core objectives of futures research: identification of patterns, challenging conventional thinking, and informing decisions.

Connecting philosophy of science and scenarios can make philosophy more relevant to anticipating implications of scientific change. It also indicates philosophy's value for clarifying possibilities, managing uncertainty, and enabling accountability regarding the future. Exploring futures of science via philosophical scenarios is thus a promising approach warranting further development. 

January 17, 2024
9:30 AM CET / 4:30 PM HKT

No recording available

How good are artificial neural networks at providing understanding?

On a popular view, artificial neural networks (ANNs) are good at prediction and classification, but bad at providing understanding. The reason given is often that they are themselves poorly understood. Recently, some philosophers, e.g. Pietsch and Sullivan, have challenged this view. In this talk, I provide a systematic analysis to investigate to what extent ANNs can provide understanding. I argue that, on a wide range of accounts of understanding and explanation, ANNs can make predictions without fostering human understanding. More optimistic views on ANNs typically concentrate on examples in which some prior explanatory knowledge is at hand. My analysis allows me to pinpoint the kind of knowledge about an ANN that is needed if we are to say that it provides an explanation.

November 7, 2023


No recording available

Simulation Verification in Principle 

Large-scale numerical simulations are increasingly used for scientific investigation; however, given that they are often needed precisely because ordinary experimental and observational methods are of limited use, this naturally raises questions as to how their use can be epistemically justified.  Drawing on the adequacy-for-purpose framework, I characterize the problem of model assessment under conditions of scarce empirical evidence.  I argue that, while a single model may not suffice under these conditions, a suitable collection of models may be used in concert to advance a community's scientific understanding of a target phenomenon and provide a foundation for the progressive development of more adequate models.

September 19, 2023 


No recording available

Algorithmic Randomness and Probabilistic Laws

Co-Badged with Lingnan Department of Philosophy Seminar series

Probabilistic laws of nature, as they are usually understood, are extraordinarily permissive. For example, a probabilistic law of a coin toss experiment is compatible with any result, including the all-heads sequence (HHHHH…) and the alternating heads-tails sequence (HTHTHT…). This feature gives rise to various forms of metaphysical and epistemological underdetermination, and a host of conceptual problems about how to connect probability to the real world. We consider two ways one might use algorithmic randomness to strengthen the content of a probabilistic law. The first is a generative chance* law. Such laws involve a nonstandard notion of chance. The second is a probabilistic* constraining law. Such laws impose relative frequency and randomness constraints that every physically possible world must satisfy. While each notion has virtues, we argue that the latter has advantages over the former. It supports a unified governing account of non-Humean laws and provides independently motivated solutions to issues in the Humean best-system account. On both notions, we have a much tighter connection between probabilistic laws and their corresponding sets of possible worlds.

(Joint work with Jeffrey A. Barrett; paper version at https://arxiv.org/pdf/2303.01411.pdf)