
“Fundamental” “Constants” and Precision Tests of the Standard Model

Published online by Cambridge University Press:  25 May 2022

Adam Koberinski*
Affiliation:
Department of Philosophy, University of Waterloo, Waterloo, Ontario, Canada; Lichtenberg Group for History and Philosophy of Physics, University of Bonn, Bonn, Germany

Abstract

I provide an account of precision testing in particle physics that makes a virtue of theory-ladenness in experiments. Combining recent work on the philosophy of experimentation with a broader view of the scientific process allows one to understand that the most precise and secure knowledge produced in a mature science cannot be achieved in a theory-independent fashion. I discuss precision tests of the muon’s magnetic moment and effective field theory as a means to repurpose precision tests for exploratory purposes.

Type
Symposia Paper
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2022. Published by Cambridge University Press on behalf of the Philosophy of Science Association

1. Introduction

The Standard Model of particle physics is our best theory of the subatomic constituents of matter, boasting some of the most precisely confirmed predictions of all time. Yet it is widely considered to be an effective theory, to be replaced by some unknown successor. How do we reconcile the idea that the Standard Model has produced secure knowledge with the fact that it will be replaced? Here, I explore how precise knowledge predicated on the Standard Model can constrain and guide inquiry into theories beyond the Standard Model. In doing so, I argue that physics is best understood as a collective enterprise of knowledge creation that builds up over time. On this view, theory-predicated experimentation—a term meant to contrast with the often-pejorative "theory-laden"—often provides our most secure knowledge, and this virtuous circularity should be encouraged rather than avoided.

Recent work elucidating exploratory experimentation and the theory-ladenness of experimental design in particle physics has sought to soften the blow of theory-ladenness (Beauchemin 2017; Karaca 2017; Staley 2020). Within the philosophy of experimentation tradition, theory-ladenness is often seen as an epistemic defect needing justification or defense. However, the work of Stein (1995) and Smith (2014) suggests that theory-laden inquiry is both necessary for the success of science and the source of our most stable, precise claims to knowledge. Those defending experimentation from vicious circularity often reach similar conclusions, although they argue case by case. I hope to make more general claims about the process of establishing knowledge using scientific theory and experimentation in tandem. This knowledge is predicated upon theory, although the construction of ever-more-detailed knowledge refines and makes precise the concepts at the heart of theory. I call this virtuous circularity, although the more appropriate picture is that of an ever-tightening spiral moving forward in time, as theory and experiment work together to construct more and more precise agreement. As Stein (1995), Smith (2014), and Koberinski and Smeenk (2020) emphasize, this theory-predicated knowledge is stable through theory change. Precision testing allows us to gain insight into the often-overlooked qualifiers of a theory's success: we push the limits of the domain of applicability and the degree of approximation to which we trust the theory and its functional relationships within that domain.

In particle physics, high-energy exploratory experiments, production experiments, and low-energy precision tests are all predicated upon the Standard Model or its effective field theory (EFT) reformulation. I focus on the structure of precision testing in particle physics, using the EFT framework to guide searches for new physics. The recent tension between the predicted and measured values of the muon's anomalous magnetic moment ($a_{\mu}$) provides a good example of the ways that theory and experiment intertwine and of how theory-predicated experimentation produces quantitative knowledge that can highlight small flaws in the framework. These anomalies serve as crucial tests for new theories that go beyond the Standard Model.

The remainder of this article is structured as follows. In section 2, I discuss the structure of precision tests of the Standard Model, focusing on the case of $a_{\mu}$. I highlight how precision testing is only possible when predicated on theory and how precision tests can point the way toward new physics. In section 3, I discuss how the EFT framework provides a method for parameterizing a more general theory space. Precision tests can play the dual role of testing the Standard Model and constraining parameter space beyond the Standard Model. These are generally taken to be very different experimental modes, although they come together here and in some other contexts. The EFT framework further allows for better control of systematic uncertainty in experimentation. This serves to underscore the broader claim that theory-ladenness, on its own, is not a vice to be avoided; the best knowledge produced in the history of physics has been virtuously predicated on theory, and this does not prevent the evidence from serving a wide range of purposes.

2. Precision tests of the Standard Model

Koberinski and Smeenk (2020) have discussed in detail the structure of precision tests in the Standard Model. They focus on quantum electrodynamics (QED), in particular low-energy properties of the electron. Here, I will adapt that discussion to a similar property: the anomalous magnetic moment of the muon, $a_{\mu}$. The muon's magnetic moment is determined by a low-energy expansion of its self-interaction, including effects from QED, the weak sector, and the hadronic sector of the Standard Model. Causal factors relevant to $a_{\mu}$ are ordered by magnitude via a perturbative expansion in the coupling strength, and one can determine quantitative error bounds on one's theoretical prediction induced by neglecting high-order terms in this expansion. Koberinski and Smeenk argue that precision testing of the Standard Model is virtuously predicated on the background theoretical structure. Thus, the theory-ladenness of these tests is essential in the same way that the theory-ladenness of Newtonian astronomy was (Smith 2014). Rather than apologizing for this supposed defect, philosophers should recognize the epistemic strength of research programs in physics whose mature practice involves theory-predicated experimental design.

Perturbative methods are used to extract predictions from the Standard Model as follows. In QED, one uses the fact that the coupling $\alpha \sim 1/137 \ll 1$ to write an observable quantity $Q$ as a perturbative expansion in powers of $\alpha$:

(1) $$Q(\alpha) = \sum_{n = 0}^{\infty} A_n \left(\frac{\alpha}{\pi}\right)^n,$$

where one uses the Feynman rules to calculate the coefficients $\{A_n\}$ up to some given value of $n$. As $n$ increases, the number of Feynman diagrams one must account for grows factorially, and the corresponding integrals become increasingly difficult to evaluate. In practice, we are therefore often limited to the first few terms in this expansion (see footnote 1). A virtuous circle of increasing precision can start with this expansion as follows. First, one must have a zeroth-order estimate for the value of the coupling constant (see footnote 2). This is used to determine the $n=1$ order expression for the quantity $Q$ and an error estimate for truncating the series. This first-order prediction is compared to a measured value; if the two agree within the relevant margins of error, the prediction is successful. Further refinements come by designing more sophisticated measuring apparatuses (whose designs often become more theory-predicated) and by calculating the perturbative expansion to higher order. Because of the complexity of integral expressions for the $\{A_n\}$, further approximation techniques must be used as we move to higher $n$, introducing further errors beyond simple truncation of the expansion. The increased precision in both measured value and theoretical prediction creates an ever-stricter mutual constraint on the allowed value of the quantity.
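
As a concrete illustration, consider the lowest-order coefficient for the anomalous magnetic moment, where the standard result is $A_1 = 1/2$. Using rounded values, the first-order prediction is

$$a_{\mu} \approx \frac{1}{2}\left(\frac{\alpha}{\pi}\right) \approx 0.00116,$$

with a truncation error of order $(\alpha/\pi)^2 \approx 5 \times 10^{-6}$. Comparing this estimate against a measurement of comparable precision is the first pass through the virtuous circle just described.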

When disagreement persists between theory and experiment, there are several ways to proceed. First, note that the expression in equation (1) is a perturbative expansion that includes effects from only the QED sector of the Standard Model. In general, one will have to include effects from the other major forces—the weak and strong forces (see footnote 3)—because these will also contribute to the quantity of interest. Depending on the energy scales one is probing, one uses a similar perturbative expansion in powers of the weak and strong couplings to account for their respective effects. However, these are generally much harder to calculate, so further error is introduced by numerical estimation techniques. At low energies, the strong coupling is too large for perturbative methods to apply at all.

The calculation of $a_{\mu}$—and other quantities in the Standard Model—is best understood as a process of mutual refinement as described previously. To first order in QED, one predicts a departure from $a_{\mu} = 0$ with a precision of 3 significant figures. Similarly approximate measurements find agreement and encourage the inclusion of more detailed effects from the side of theory. Advances in experimental precision allow a virtuous feedback loop, with higher-precision measurements often steeped in background assumptions from the very theoretical framework being tested. When viewed as a buildup of knowledge over time, this theory-ladenness turns from potential vice into virtue. Without the mutual reinforcement of theory and experiment, we could not claim some of the most precise predictions in the history of science, nor would we discover meaningful tensions allowing us to move beyond the current theoretical framework.

To a first approximation, one can separate the effects on $a_{\mu }$ from different sectors of the Standard Model and predict each component individually:

(2) $$a_{\mu}(\text{theory}) = a_{\mu}(\text{QED}) + a_{\mu}(\text{hadronic}) + a_{\mu}(\text{EW}),$$

where $a_{\mu}(\text{QED})$ is the contribution coming from the "pure" QED sector, $a_{\mu}(\text{hadronic})$ is from the strong sector, and $a_{\mu}(\text{EW})$ is from the weak sector. The hadronic effects are further split into two dominant factors: hadronic vacuum polarization (HVP) and hadronic light-by-light scattering (HLbL), each of which is calculated in a different manner. The current best calculation of $a_{\mu}$ includes a fifth-order calculation of $a_{\mu}(\text{QED})$, second-order loop contributions from the electroweak and Higgs bosons, and virtual hadronic loop contributions from the strong sector (cf. Aoyama et al. 2020). The magnitude of these effects depends on having precise values of the relevant couplings for each force.

Aoyama et al. (2020) report the best-supported value as

(3) $$a_{\mu}(\text{theory}) = 116\,591\,810(43) \times 10^{-11},$$

with the largest sources of error coming from the hadronic contributions: $a_{\mu}(\text{HVP}) = 6\,845(40) \times 10^{-11}$ and $a_{\mu}(\text{HLbL}) = 92(18) \times 10^{-11}$. It is important to note that various modeling assumptions, approximation techniques, and phenomenological form factors need to be fixed and improved upon to reach predictions at this level of precision. The hadronic contributions are especially difficult to approximate because of the nature of the strong interaction. First, hadrons are composite particles whose residual strong interaction is ultimately grounded in the interactions between quarks and gluons described by quantum chromodynamics (QCD). Confinement ensures that at low energies, quarks and gluons can exist only within composites. The relationship between the quark and gluon fields and the hadronic fields is complicated. At the state of the art, we have numerical models of QCD that predict the emergence of hadronic composites with the masses one would expect for the observed hadron spectrum (Hansen and Sharpe 2019). Virtual hadronic interactions are thus emergent in the Standard Model, and advanced approximation techniques are often required to describe them. Second, as the name implies, the strong interaction has a large coupling constant at low energies, so the usual perturbative methods employed for QED and low-energy weak interactions no longer work. This means that even if one could use QCD directly, perturbative methods would not suffice.
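
For a sense of how the pieces in equation (2) combine, the pure QED and electroweak contributions reported in Aoyama et al. (2020) are roughly $a_{\mu}(\text{QED}) \approx 116\,584\,719 \times 10^{-11}$ and $a_{\mu}(\text{EW}) \approx 154 \times 10^{-11}$; adding the hadronic pieces quoted previously reproduces the total in equation (3). Because the QED and electroweak uncertainties are tiny, the quoted error budget is dominated by the hadronic terms, roughly $\sqrt{40^2 + 18^2} \times 10^{-11} \approx 44 \times 10^{-11}$ when naively combined in quadrature, close to the reported $43 \times 10^{-11}$.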

Control of these approximation techniques is heavily informed by and dependent on the Standard Model and the framework of quantum field theory (QFT). When one calculates perturbative effects order by order, the error introduced by truncation is estimated using renormalization group methods, assuming the validity of the underlying framework. Errors on hadronic contributions are estimated by testing the data-driven models in situations where comparison with experiment is more direct. This relationship between experiment and theory for prediction is hard to square with a simple deductive-nomological account of prediction, confirmation, or explanation. We also see how error analysis hints at future advances in increasing theoretical precision. In the words of Smith (2014), the Standard Model provides a systematic framework to account for the details that make a difference, as well as just how much of a difference we should expect those details to make.

Just as experiment plays a large role in the construction of precise prediction, so, too, does theory play a role in the design of experiments. First, the promise that $a_{\mu}$ will be more informative about new physics than the much more precisely measured electron $a_e$ is informed by theoretical calculations showing that the relative contribution from higher-energy virtual particles scales as $(m_{\mu}/m_e)^2$. The design of the superconducting magnetic storage ring, the timing of muon beam bunch pulses, the magnetic pulse to direct muons to the storage ring, and the controls to eliminate noise and complicated contributions to the anomalous precession frequency all rely heavily on the background theoretical framework provided by QFT and the Standard Model (cf. Albahri et al. 2021). These experiments are conducted in a context where the scientific community already expects the Standard Model to include most of the correct dynamically relevant effects. Therefore one can design experiments that rely on principles within the Standard Model: both global principles and more local principles directly related to the muon are needed to control for sources of error to the degree required.
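
To give a rough sense of that scaling, the muon-to-electron mass ratio is approximately 207, so

$$\left(\frac{m_{\mu}}{m_e}\right)^2 \approx (207)^2 \approx 4.3 \times 10^{4};$$

all else being equal, heavy virtual particles contribute tens of thousands of times more to $a_{\mu}$ than to $a_e$, which is why the less precisely measured muon moment remains the more promising probe of new physics.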

The most recent tests conducted at Fermilab, combined with previous measurements, provide an overall experimental value for $a_{\mu}$:

(4) $$a_{\mu}(\text{exp}) = 116\,592\,061(41) \times 10^{-11},$$

which leads to a $4.2\sigma$ tension with $a_{\mu}(\text{theory})$. This increases the previous tension of $3.4\sigma$, hinting at a persistent effect. Although standards in the field of particle physics prevent the claim of a discovery until a discrepancy with expectations reaches or exceeds $5\sigma$, the recent results have caused waves in the community and have been heralded as the first hint of new physics in decades.
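
The quoted significance can be checked directly from equations (3) and (4), at least as a rough estimate that ignores any correlations between the uncertainties:

$$\Delta a_{\mu} = a_{\mu}(\text{exp}) - a_{\mu}(\text{theory}) = 251 \times 10^{-11}, \qquad \frac{251}{\sqrt{41^2 + 43^2}} \approx 4.2.$$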

I will highlight two major features of this story. First, a note on how to understand uncertainty estimates. On the theoretical side, uncertainty arises in a few different areas. There is uncertainty associated with truncating perturbative expansions in QED and the weak sector. These uncertainties are not related to approximation techniques, and in principle, they can be estimated directly. They presuppose that the background theoretical framework is correct, but this is unproblematic in the context of predictions. Uncertainties associated with approximation techniques are often harder to quantify. Staley (2020) argues that systematic uncertainty is best understood as an estimate of how much a faulty premise affects the quantitative prediction and is based on variability across models of the process under consideration. This is relevant to the case of $a_{\mu}$ because the two contributions with the largest errors are the hadronic effects. These require a great deal of approximation and hence introduce the most systematic uncertainty. Because comparison across different techniques is only just becoming possible, these estimates could be drastically over- or underreported. Just before the Fermilab announcement, an article was published claiming that refined lattice-QCD techniques allow for a better calculation of the leading-order HVP contribution to $a_{\mu}$ (Borsanyi et al. 2021). These techniques are developed and refined based on experimental results in other domains of hadronic physics. They find a contribution that is $144 \times 10^{-11}$ larger than the result used to compute $a_{\mu}(\text{theory})$, which would significantly reduce the tension with $a_{\mu}(\text{exp})$. If correct, this result would indicate one way that experiments can lead to improvements in approximation techniques on the side of theory.
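
To get a rough sense of the impact, if one simply shifts $a_{\mu}(\text{theory})$ upward by $144 \times 10^{-11}$ while keeping the uncertainties quoted in equations (3) and (4) fixed (purely for illustration), the residual discrepancy falls to about $107 \times 10^{-11}$, corresponding to a tension of roughly $1.8\sigma$; the actual significance would depend on the lattice result's own uncertainty.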

Second, $a_{\mu}$ illustrates the epistemic importance of theory-predicated searches for new physics. Supposing that the current tension is not explained away by the new lattice-QCD methods, such a small discrepancy between theory and experiment ($\Delta a_{\mu} \sim 2.5 \times 10^{-9}$) would never have been discovered had theory-predicated tests not been conducted. The virtuous feedback loop between more precise predictions and measurements has served two important purposes. First, it has shown just how well the Standard Model can account for the dominant functional dependencies behind $a_{\mu}$. Over the last several decades, we have discovered more details and quantified the exact difference those details make. These are codified in functional relationships governed by the three major forces. Thus, the fundamental constants play a central role in organizing and unifying precision knowledge in particle physics. Second, this precise tension both constrains any future theory beyond the Standard Model and provides an empirical window into the low-energy effects of exotic physics. The constraint is that a future theory must match the Standard Model up to 9 significant figures, whereas the window provides evidence that the theory on which searches have been successfully predicated must fail to include all physically relevant details. Consider the analogy with Newtonian astronomy as detailed in Smith (2014). Mercury's missing 43 arc seconds of precession per century were just a small component of the overall rate of 575 arc seconds. Increased precision within the Newtonian framework was able to account for most of the precession, leaving only a small fraction elusive. Without the precision guidance of Newtonian astronomy, Einstein's prediction would have accounted for only a small fraction of the observed precession and would not have served as conclusive evidence in favor of general relativity. In the same way, had we not learned of a persistent tension between the Standard Model and experiment, a new theory predicting a $10^{-9}$ effect would not serve as evidence in favor of the new model. Without theory-predicated measurements, we would therefore have less secure, precise knowledge, as well as little guidance toward new physics.

3. Effective field theory and generalization

Knowledge generated via precision testing is encoded in functional relationships describing the magnitude of relevant physical effects, and these relationships constrain the construction of future theories. The fundamental constants at the heart of these knowledge claims, however, may turn out, by the lights of future theory, to be neither truly fundamental nor even constant. I turn attention in this section to the EFT framework and describe it as a generalization away from the fundamental principles of QFT.

At a first pass, the EFT framework generalizes the QFT framework by dropping the requirement that only renormalizable terms are allowed in the Lagrangian (Weinberg 1979). This is only possible as a result of a better understanding of the scaling behavior of QFTs via Wilson's (1975) renormalization group. As long as one deals with energies that are low relative to the point at which we expect a theory to break down, these terms are small and controllable. At low energies, nonrenormalizable terms are suppressed by powers of a high-energy cutoff $\Lambda$. In general, all coupling terms will vary with energy. By dropping the requirement of renormalizability, one greatly expands the space of possible theories under consideration.
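
Schematically, and in the standard form such expansions take rather than any construction specific to this article, an effective Lagrangian can be written as

$$\mathcal{L}_{\text{eff}} = \mathcal{L}_{\text{renorm}} + \sum_{d>4} \sum_{i} \frac{c_i^{(d)}}{\Lambda^{d-4}} \mathcal{O}_i^{(d)},$$

where the $\mathcal{O}_i^{(d)}$ are local operators of mass dimension $d$ built from the known fields and consistent with the known symmetries, and the dimensionless coefficients $c_i^{(d)}$ parameterize unknown high-energy physics. At energies $E \ll \Lambda$, the effects of a dimension-$d$ operator are suppressed by powers of $E/\Lambda$, which is why the nonrenormalizable terms remain small and controllable at low energies.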

One can reconceive the Standard Model as an EFT by including the infinite number of nonrenormalizable interactions between its fields consistent with its known symmetries. This Standard Model EFT is enormously complicated, but it provides a unifying framework within which candidates beyond the Standard Model can be directly compared at low energies. In principle, precision tests like the $a_{\mu}$ test described previously place constraints on the low-energy values of the nonrenormalizable couplings. These constraints rule out portions of the parameter space that may be covered by candidate future theories. We thus have a so-called "model-independent" means for systematizing constraints on new physics.
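
As a rough illustration of the kind of constraint involved (a naive dimensional estimate, not a result drawn from this article): if new physics at a scale $\Lambda$ contributes to the muon's magnetic moment with an order-one coefficient, one expects $\Delta a_{\mu} \sim (m_{\mu}/\Lambda)^2$. A shift of order $2.5 \times 10^{-9}$ would then point to

$$\Lambda \sim \frac{m_{\mu}}{\sqrt{2.5 \times 10^{-9}}} \approx 2\ \text{TeV},$$

with considerably lower scales implied if the new contribution arises only at loop level.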

Using the family of renormalization group methods on the EFT framework, we see that the dominant coupling "constants" at low energies encode the low-energy causal dependencies, but we should expect new dependencies to grow in importance as we probe systems at higher and higher energy scales. If the successor theory were known—and found to fit within the EFT framework—its couplings would suffice to fix the low-energy values currently treated as empirical inputs. Thus, the EFT framework provides reason to think that the "fundamental" constants in the Standard Model are emergent and variable, despite their epistemic security and privileged status for organizing functional dependencies within the Standard Model. Similarly, the gravitational forces contributing to Mercury's orbit were discovered to be approximate and emergent, yet the functional relationships among celestial bodies remain epistemically stable and secure.

This strategy of parameterization and generalization from a known framework has become common elsewhere at the frontiers of physics. One can see this approach in gravitational physics in the parameterized generalizations of weak-field and cosmological spacetimes (Patton 2020) and in the new operational frameworks used for reconstructing quantum theory (Koberinski and Müller 2018). Although the exact methods used in each discipline differ, all relax the core assumptions of known theory to produce a larger parameterized theory space. The method of generalization here provides a means for operationally converting precision tests of important parameters in the known theory into exploratory experiments ruling out competitors in some specified theory space. Thus, theory-predicated precision tests are repurposed in the generalized framework to serve as explorations constraining the possible parameter values in the theory space. Systematic errors in theory-predicated measurements can also be reduced with help from the generalized framework.

In a generalized framework, standard tests become exploratory tests in the parameter space. Early accounts of exploratory experiments claim that exploration is characterized by a minimal dependence on any background theoretical framework. Steinle (1997), in particular, contrasts exploratory experimentation with theory-driven experimentation, which includes the precision determination of constants (69–70). Karaca (2017) argues that exploratory experimentation in high-energy physics is characterized by methods that seek to expand the range of possible outcomes of an experiment. Importantly, he argues that this exploratory methodology is often theory-laden. However, Karaca still emphasizes that precision measurements, like that of $a_{\mu}$, are paradigm cases of nonexploratory experimentation.

If we focus strictly on the experimental design and methodology of precision tests, then it seems clear that they are not exploratory in the way Karaca claims. However, by placing precision tests within the theoretical context of the EFT framework, they can be repurposed for the goal of exploration by elimination. The EFT framework itself expands the range of possible outcomes by relaxing constraints from the original QFT framework. Precision tests narrow this range again by placing constraints on possible models beyond the Standard Model. By switching focus to the interplay between theory and experiment, keeping in mind both the methods and goals of each, we can see a new method of exploration using theory-predicated precision tests and a generalized theoretical framework.

The EFT framework provides an additional means to assess and potentially reduce systematic error in theory-predicated measurements like the measurement of $a_{\mu}$. Staley (2020) argues that one of the major roles of systematic error in particle physics is as a minimal form of robustness analysis. We can think of the quantification of systematic error as a measure of the variation of a measurement result when one changes some subset of the assumptions that go into a model of the measurement and its underlying physical processes. This is itself a theory-predicated process: in order to model the effects of varying assumptions, one must have some theoretical understanding of what counts as a reasonable variation, as well as what sorts of quantities are subject to variation. When a measurement is heavily theory-predicated, having more control over variability within the theoretical framework can provide information on the systematic error within the experiment. We can replace modeling assumptions that adhere strictly to the Standard Model with those relaxed to fit the EFT framework. If this new base of assumptions results in greater variability in the possible measurement outcomes, then one must increase the systematic errors accordingly. By helping us better understand the sources of systematic error, the EFT framework also provides a better guide to further reducing those uncertainties.

As a caveat, one should note that the method of generalizing theoretical frameworks does not completely avoid the problem of unconceived alternatives. Because generalizations start from the principles and formalism of the known theory and merely relax them, they remain closely tied to the original theory. Future theoretical developments that radically alter concepts will not be captured by generalized frameworks such as the EFT framework. Ruetsche (2018) uses the example of a generalized Newtonian gravitational theory missing the crucial insights needed for general relativity. Similarly, Koberinski and Smeenk (forthcoming) argue that the EFT framework is ill suited to cosmological contexts in quantum gravity. Despite these worries, the method of generalization widens the space of possibilities and influences the use of precision tests of current frameworks. The functional dependencies determined within a generalized framework will also continue to hold, regardless of what a successor theory looks like. One must simply bear in mind that these generalizations are not fully assumption-free or model independent.

4. Conclusions

One cannot gain the secure, detailed knowledge that precision tests offer without a virtuous feedback loop connecting theory and experiment. Recent literature in the philosophy of experimentation has recognized the necessity of theory-laden experimentation but tries to defend against or explain away the potential circularity issues. If we think of science as a process through time, building up more secure knowledge, as in Smith (2014) and Stein (1995), we can make a broader claim about theory-ladenness in physics. We thus recast potential vicious circles as virtuous; as theories mature and develop, experimentalists can design new tests predicated on those theories to discover new knowledge that would otherwise be inaccessible.

I have briefly outlined how this process works in the precision tests of the Standard Model of particle physics. Beyond securing new knowledge, theory-predicated precision tests can be used to constrain and inform future theory. Although this knowledge is theory-predicated, the stable functional relationships it reveals survive theory change and push the framework to its breaking point. Within the correct domains, and up to the degree of precision determined by these tests, the functional relationships will continue to be stable and meaningful in a new framework.

I have also discussed the generalization from renormalizable QFTs to the EFT framework for particle physics. Within the context of the EFT framework, precision measurements of quantities like $a_{\mu}$ are repurposed for exploration by ruling out regions of the parameter space. Precise measurements constrain the magnitude of deviation from the renormalizable Standard Model. This generalized framework can provide a better understanding of the space of possible background assumptions, allowing for a better handle on systematic error. Because the method of generalization marks a first step beyond the known theoretical framework, it allows us to see important epistemic details in a new light. For the Standard Model, the EFT framework shows us that—despite their epistemic importance—the "fundamental" "constants" are neither fundamental nor constant.

When we emphasize that scientific disciplines evolve and interact through time, the line between theoretical and experimental knowledge blurs, and concerns about problematic theory-ladenness give way to a detailed understanding of how a mature theoretical framework can serve as the background against which we discover the details that make a dynamical difference. Because of this theory predication, the resulting knowledge is stable against theory change and constrains future theory development.

Acknowledgments

I am grateful to Doreen Fraser, Jessica Oddan, Alistair Isaac, and the Lichtenberg Group for the History and Philosophy of Physics for helpful comments on earlier drafts. This work was supported by the Social Sciences and Humanities Research Council of Canada Postdoctoral Fellowship and the University of Bonn’s Heinrich Hertz Fellowship.

Footnotes

1 Additionally, there exists some $n=N$ beyond which the convergence of the series is spoiled. The radii of convergence of perturbative expansions in quantum field theory are strictly zero; the perturbative expansion is therefore thought to be an asymptotic expansion. For QED, this breakdown is thought to occur around $N \approx \alpha^{-1}$, so this is a limitation in principle only, not in practice.
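For comparison, at the fifth order currently computed, the expansion factor $(\alpha/\pi)^5 \approx 7 \times 10^{-14}$ is many orders of magnitude removed from where the breakdown near $N \approx 137$ would become relevant.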

2 In QED, this comes from the relationship $\alpha = e^2/(4\pi \epsilon_0 \hbar c)$, where $e$ is the charge of the electron, $\epsilon_0$ is the permittivity of free space, $c$ is the speed of light, and $\hbar$ is the reduced Planck constant. These constants were known to some level of precision prior to the development of QED, and thus the coupling $\alpha$ could be input from the start.
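Numerically, inserting rounded values ($e \approx 1.602 \times 10^{-19}$ C, $\epsilon_0 \approx 8.854 \times 10^{-12}$ F/m, $\hbar \approx 1.055 \times 10^{-34}$ J s, $c \approx 2.998 \times 10^{8}$ m/s) gives $\alpha \approx 7.3 \times 10^{-3} \approx 1/137$.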

3 In general, gravity would also have to be included as another known force. However, gravitational contributions in particle physics are estimated to be so small as to be irrelevant in practice. The hard problem is in domains where particle physics and gravity are both expected to be relevant; there, we await a consensus theory of quantum gravity.

References

Albahri, T., Anastasi, A., Anisenkov, A., Badgley, K., Baeßler, S., Bailey, I., Baranov, V. A., Barlas-Yucel, E., Barrett, T., Basti, A., et al. 2021. "Measurement of the Anomalous Precession Frequency of the Muon in the Fermilab Muon g−2 Experiment." Physical Review D 103 (7):072002. https://doi.org/10.1103/PhysRevD.103.072002.
Aoyama, T., Asmussen, N., Benayoun, M., Bijnens, J., Blum, T., Bruno, M., Caprini, I., Carloni Calame, C. M., Cè, M., Colangelo, G., et al. 2020. "The Anomalous Magnetic Moment of the Muon in the Standard Model." Physics Reports 887:1–166. https://doi.org/10.1016/j.physrep.2020.07.006.
Beauchemin, Pierre-Hugues. 2017. "Autopsy of Measurements with the ATLAS Detector at the LHC." Synthese 194 (2):275–312. https://doi.org/10.1007/s11229-015-0944-5.
Borsanyi, Sz., Fodor, Z., Guenther, J. N., Hoelbling, C., Katz, S. D., Lellouch, L., Lippert, T., Miura, K., Parato, L., Szabo, K. K., et al. 2021. "Leading Hadronic Contribution to the Muon Magnetic Moment from Lattice QCD." Nature 593:51–55. https://doi.org/10.1038/s41586-021-03418-1.
Hansen, Maxwell T., and Sharpe, Stephen R. 2019. "Lattice QCD and Three-Particle Decays of Resonances." Annual Review of Nuclear and Particle Science 69:65–107. https://doi.org/10.1146/annurev-nucl-101918-023723.
Karaca, Koray. 2017. "A Case Study in Experimental Exploration: Exploratory Data Selection at the Large Hadron Collider." Synthese 194 (2):333–54. https://doi.org/10.1007/s11229-016-1206-x.
Koberinski, Adam, and Müller, Markus P. 2018. "Quantum Theory as a Principle Theory: Insights from an Information-Theoretic Reconstruction." In Physical Perspectives on Computation, Computational Perspectives on Physics, edited by Michael E. Cuffaro and Samuel C. Fletcher, 257–80. Cambridge: Cambridge University Press. https://doi.org/10.1017/9781316759745.013.
Koberinski, Adam, and Smeenk, Chris. 2020. "Q.E.D., QED." Studies in History and Philosophy of Modern Physics 71:1–13. https://doi.org/10.1016/j.shpsb.2020.03.003.
Koberinski, Adam, and Smeenk, Chris. Forthcoming. "Λ and the Limits of Effective Field Theory." Philosophy of Science. https://doi.org/10.1017/psa.2022.16.
Patton, Lydia. 2020. "Expanding Theory Testing in General Relativity: LIGO and Parametrized Theories." Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics 69:142–53. https://doi.org/10.1016/j.shpsb.2020.01.001.
Ruetsche, Laura. 2018. "Renormalization Group Realism: The Ascent of Pessimism." Philosophy of Science 85 (5):1176–89.
Smith, George E. 2014. "Closing the Loop." In Newton and Empiricism, edited by Zvi Biener and Eric Schliesser, 262–352. New York: Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199337095.003.0011.
Staley, Kent W. 2020. "Securing the Empirical Value of Measurement Results." British Journal for the Philosophy of Science 71 (1):87–113. https://doi.org/10.1093/bjps/axx036.
Stein, Howard. 1995. "Some Reflections on the Structure of Our Knowledge in Physics." Studies in Logic and the Foundations of Mathematics 134:633–55. https://doi.org/10.1016/S0049-237X(06)80067-4.
Steinle, Friedrich. 1997. "Entering New Fields: Exploratory Uses of Experimentation." Philosophy of Science 64 (S4):S65–S74. https://doi.org/10.1086/392587.
Weinberg, Steven. 1979. "Phenomenological Lagrangians." Physica A 96 (1–2):327–40. https://doi.org/10.1016/0378-4371(79)90223-1.
Wilson, Kenneth G. 1975. "The Renormalization Group: Critical Phenomena and the Kondo Problem." Reviews of Modern Physics 47 (4):773. https://doi.org/10.1103/RevModPhys.47.773.