
Friday, November 1, 2024

More Evidence That There Is No Muon g-2 Anomaly

Once again, it has become clear that the "data driven" method of estimating the Standard Model prediction for the anomalous magnetic moment of the muon was flawed, and that the experimental result is actually consistent with the Standard Model prediction. This is a very global test of the completeness and correctness of the Standard Model at high precision, one that strongly disfavors a variety of new physics models.
An accurate calculation of the leading-order hadronic vacuum polarisation (LOHVP) contribution to the anomalous magnetic moment of the muon (aμ) is key to determining whether a discrepancy, suggesting new physics, exists between the Standard Model and experimental results. 
This calculation can be expressed as an integral over Euclidean time of a current-current correlator G(t), where G(t) can be calculated using lattice QCD or, with dispersion relations, from experimental data for e+e−→hadrons. The BMW/DMZ collaboration recently presented a hybrid approach in which G(t) is calculated using lattice QCD for most of the contributing t range, but using experimental data for the largest t (lowest energy) region. Here we study the advantages of varying the position t=t1 separating lattice QCD from data-driven contributions. The total LOHVP contribution should be independent of t1, providing both a test of the experimental input and the robustness of the hybrid approach. 
We use this criterion and a correlated fit to show that Fermilab/HPQCD/MILC lattice QCD results from 2019 strongly favour the CMD-3 cross-section data for e+e−→π+π− over a combination of earlier experimental results for this channel. 
Further, the resulting total LOHVP contribution obtained is consistent with the result obtained by BMW/DMZ, and supports the scenario in which there is no significant discrepancy between the experimental value for aμ and that expected in the Standard Model. 
We then discuss how improved lattice results in this hybrid approach could provide a more accurate total LOHVP across a wider range of t1 values with an uncertainty that is smaller than that from either lattice QCD or data-driven approaches on their own.
C. T. H. Davies, et al., "Utility of a hybrid approach to the hadronic vacuum polarisation contribution to the muon anomalous magnetic moment" arXiv:2410.23832 (October 31, 2024).
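To make the hybrid idea concrete: the LOHVP piece is a kernel-weighted integral (in practice a sum) over the Euclidean-time correlator G(t), with lattice QCD supplying G(t) below a switch point t1 and the data-driven determination supplying it above, and the total is then checked for stability as t1 is moved. Below is a minimal numerical sketch of that bookkeeping; the kernel, the correlator values, and the 1% offset between the two inputs are invented placeholders, not the collaborations' actual numbers.

```python
import numpy as np

# Schematic form of the time-momentum representation: a_mu^LOHVP is roughly a
# kernel-weighted sum over the correlator G(t).  Here G(t) is taken from
# "lattice QCD" for t <= t1 and from "data-driven" input for t > t1.
# Everything numerical below is a placeholder for illustration only.

def qed_kernel(t, m_mu=0.1057):
    # Stand-in for the known QED weight function; the true kernel has a
    # specific integral form and is used here only schematically.
    return (m_mu * t) ** 2 * np.exp(-m_mu * t)

def hybrid_lohvp(t_grid, G_lattice, G_data, t1):
    """Combine lattice and data-driven correlators at a switch point t1."""
    G = np.where(t_grid <= t1, G_lattice, G_data)
    return float(np.sum(qed_kernel(t_grid) * G))

# Toy correlator: a crude two-exponential stand-in for G(t).
t_grid = np.linspace(0.1, 5.0, 200)
G_lattice = 0.8 * np.exp(-0.77 * t_grid) + 0.2 * np.exp(-1.2 * t_grid)
G_data = G_lattice * 1.01  # pretend the data-driven G(t) differs by ~1%

# The paper's consistency test: the total should be flat in t1 if both inputs
# describe the same physics, so scanning t1 makes any tension visible.
for t1 in (1.0, 2.0, 3.0):
    print(t1, hybrid_lohvp(t_grid, G_lattice, G_data, t1))
```

In a real analysis G(t) carries correlated uncertainties and the kernel is the exact QED weight function, but the stability-in-t1 test the paper exploits has exactly this structure.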

A Novel And Intriguing Explanation For The CKM And PMNS Matrices

I've never seen anyone reach this remarkable insight before and it is indeed very tantalizing. This is huge if true. The authors note in the body text that: 
To our knowledge, this is the first time the differing CKM and PMNS structures have arisen from a common mechanism that does not invoke symmetries or symmetry breaking.

The paper and its abstract are as follows: 

The Cabibbo-Kobayashi-Maskawa (CKM) matrix, which controls flavor mixing between the three generations of quark fermions, is a key input to the Standard Model of particle physics. In this paper, we identify a surprising connection between quantum entanglement and the degree of quark mixing. Focusing on a specific limit of 2→2 quark scattering mediated by electroweak bosons, we find that the quantum entanglement generated by scattering is minimized when the CKM matrix is almost (but not exactly) diagonal, in qualitative agreement with observation. 
With the discovery of neutrino masses and mixings, additional angles are needed to parametrize the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix in the lepton sector. Applying the same logic, we find that quantum entanglement is minimized when the PMNS matrix features two large angles and a smaller one, again in qualitative agreement with observation, plus a hint for suppressed CP violation. 
We speculate on the (unlikely but tantalizing) possibility that minimization of quantum entanglement might be a fundamental principle that determines particle physics input parameters.
Jesse Thaler, Sokratis Trifinopoulos, "Flavor Patterns of Fundamental Particles from Quantum Entanglement?" arXiv:2410.23343 (October 30, 2024).
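To get some intuition for what "entanglement minimized at small mixing" means, here is a deliberately crude two-generation toy of my own (it is not the authors' 2→2 scattering observable): form a two-quark flavor state whose amplitude matrix mixes through a 2x2 Cabibbo rotation, and scan the entanglement entropy of one quark over the mixing angle. The amplitude structure and couplings below are placeholders.

```python
import numpy as np

# Crude toy: a two-quark flavor state |psi> = sum_ij A_ij |i>|j>, where A_ij
# combines a flavor-diagonal (neutral-current-like) piece with a piece that
# mixes flavors through a 2x2 Cabibbo rotation.  Couplings are placeholders.

def cabibbo(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s], [-s, c]])

def entropy_vs_angle(theta, g_neutral=1.0, g_charged=1.0):
    V = cabibbo(theta)
    A = g_neutral * np.outer([1, 0], [1, 0]) + g_charged * np.outer(V[0], V[0])
    A = A / np.linalg.norm(A)          # normalize the state
    rho = A @ A.conj().T               # reduced density matrix of the first quark
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-(p * np.log(p)).sum())

thetas = np.linspace(0.0, np.pi / 4, 300)
S = np.array([entropy_vs_angle(t) for t in thetas])
print("toy entropy minimum at theta ~", thetas[S.argmin()])
# In this stripped-down toy the minimum sits at exactly zero mixing; the paper's
# full electroweak calculation is what pushes the minimum to a small but nonzero
# angle, in qualitative agreement with the observed Cabibbo angle.
```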

The paper's literature review does note one prior paper making a similar analysis:
Entanglement is a core phenomenon in quantum mechanics, where measurement outcomes are correlated beyond classical expectations. In particle physics, entanglement is so ubiquitous that we often take it for granted, but every neutral pion decay to two photons (π0 → γγ) is effectively a mini Einstein–Podolsky–Rosen experiment. In the context of SM scattering processes, though, the study and quantification of entanglement in its own right has only begun relatively recently [12–26]. In terms of predicting particle properties from entanglement, the first paper we are aware of is Ref. [27] which showed that maximizing helicity entanglement yields a reasonable prediction for the Weinberg angle θ(W), which controls the mixing between electroweak bosons.

References 12-26 are from 2012 to 2024.

Ref. [27] is A. Cervera-Lierta, J. I. Latorre, J. Rojo and L. Rottoli, Maximal Entanglement in High Energy Physics, SciPost Phys. 3 (2017) 036, [1703.02989].

Footnote 6 of the main paper is also of interest; it addresses the fact that, using only a one-loop calculation, they get a value of 6° for a parameter whose measured value is 13°:
We happened to notice that in the limit where we neglect photon exchange, the exact value θmin(C) = 13° is recovered. However, we do not have a good reason on quantum field theoretic grounds to neglect the photon contribution. Because of the shallow entanglement minimum in Fig. 2a, a 10% increase in the charged-current process over the neutral-current one would be enough to accomplish this shift, which is roughly of the expected size for higher-order corrections.
A somewhat similar prior analysis that is not cited is Alexandre Alves, Alex G. Dias, Roberto da Silva, "Maximum Entropy Principle and the Higgs boson mass" (2015) (cited 42 times), whose abstract states:
A successful connection between Higgs boson decays and the Maximum Entropy Principle is presented. Based on the information theory inference approach we determine the Higgs boson mass as M(H) = 125.04 ± 0.25 GeV, a value fully compatible to the LHC measurement. 
This is straightforwardly obtained by taking the Higgs boson branching ratios as the target probability distributions of the inference, without any extra assumptions beyond the Standard Model. Yet, the principle can be a powerful tool in the construction of any model affecting the Higgs sector. We give, as an example, the case where the Higgs boson has an extra invisible decay channel within a Higgs portal model.
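The inference itself is easy to sketch: treat the Higgs branching ratios at a trial mass as a probability distribution, compute its Shannon entropy, and take the mass that maximizes it. The skeleton below is my own and shows only the bookkeeping; branching_ratios is a placeholder that would need to be filled with tabulated Standard Model branching fractions as a function of the Higgs mass.

```python
import numpy as np

# Skeleton of the Maximum Entropy inference described in the abstract above.
# `branching_ratios` is a placeholder: in practice one would supply tabulated
# SM branching fractions {channel: BR} as functions of the trial Higgs mass.

def branching_ratios(m_h):
    raise NotImplementedError("supply tabulated SM branching ratios here")

def shannon_entropy(brs):
    p = np.array([v for v in brs.values() if v > 0.0])
    return float(-(p * np.log(p)).sum())

def maxent_higgs_mass(mass_grid):
    entropies = [shannon_entropy(branching_ratios(m)) for m in mass_grid]
    return mass_grid[int(np.argmax(entropies))]

# maxent_higgs_mass(np.linspace(100.0, 150.0, 501))
# With real SM branching ratios, the paper reports the entropy peaks near 125 GeV.
```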
I would argue that all three of the papers linked in this post are not just "numerology" papers, as they suggest a plausible physical mechanism or theoretical principle by which the values of the SM physical constants in question can be determined.

Wednesday, October 30, 2024

The Hubble Constant Or The Fine Structure Constant Can Vary

According to this paper's analysis, if the fine structure constant (i.e., the coupling constant of the electromagnetic force) does not vary, the Hubble constant must change over time; but if the Hubble constant is truly constant over time, the fine structure constant must have been different in the distant past than it is today.

A varying Hubble constant is vastly more plausible than a varying fine structure constant.
Testing possible variations in fundamental constants of nature is a crucial endeavor in observational cosmology. This paper investigates potential cosmological variations in the fine structure constant (α) through a non-parametric approach, using galaxy cluster observations as the primary cosmological probe. We employ two methodologies based on galaxy cluster gas mass fraction measurements derived from X-ray and Sunyaev-Zeldovich observations, along with luminosity distances from type Ia supernovae. We also explore how different values of the Hubble constant (H(0)) impact the variation of α across cosmic history. When using the Planck satellite's H(0) observations, a constant α is ruled out at approximately the 3σ confidence level for z≲0.5. Conversely, employing local estimates of H(0) restores agreement with a constant α.
Marcelo Ferreira, Rodrigo F. L. Holanda, Javier E. Gonzalez, L. R. Colaço, Rafael C. Nunes, "Non-parametric reconstruction of the fine structure constant with galaxy clusters" arXiv:2410.21542 (October 28, 2024) (accepted by The European Physical Journal C).
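For what it is worth, a "non-parametric reconstruction" in this context usually means something like Gaussian process regression of α(z)/α0 from binned measurements. I can't confirm from the abstract that this is exactly the authors' machinery, and the numbers below are invented placeholders rather than their galaxy cluster data, but the sketch shows the general shape of such an analysis.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Sketch of a non-parametric (Gaussian process) reconstruction of alpha(z)/alpha_0.
# Redshifts, ratios, and errors are invented placeholders, not the paper's data.
z_obs = np.array([0.1, 0.2, 0.3, 0.4, 0.5])
ratio_obs = np.array([1.000, 0.999, 1.001, 0.998, 1.000])   # alpha(z)/alpha_0
err_obs = np.full_like(ratio_obs, 0.002)

gp = GaussianProcessRegressor(kernel=1.0 * RBF(length_scale=0.3),
                              alpha=err_obs**2, normalize_y=True)
gp.fit(z_obs[:, None], ratio_obs)

z_grid = np.linspace(0.0, 0.6, 61)
mean, std = gp.predict(z_grid[:, None], return_std=True)

# A constant alpha is disfavoured wherever the reconstructed band excludes 1 at
# the chosen confidence level; changing the assumed H0 shifts the inferred
# ratios and hence the band, which is the dependence the paper highlights.
print(float(mean[0]), float(std[0]))
```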

Another Possible Non-Particle Explanation For Dark Matter Phenomena

This is another possible explanation for dark matter phenomena. It makes some big claims. It will take time and some examination to determine whether these claims about this novel approach hold up to further scrutiny (they probably don't). It also purports to explain dark energy phenomena.
We first briefly review the adventure of scale invariance in physics, from Galileo Galilei, Weyl, Einstein, and Feynman to the revival by Dirac (1973) and Canuto et al. (1977). 
We then gather concrete observational evidence that scale-invariant effects are present and measurable in astronomical objects spanning a vast range of masses (0.5 M⊙< M <10^14 M⊙) and an equally impressive range of spatial scales (0.01 pc < r < 1 Gpc). 
Scale invariance accounts for the observed excess in velocity in galaxy clusters with respect to the visible mass, the relatively flat/small slope of rotation curves in local galaxies, the observed steep rotation curves of high-redshift galaxies, and the excess of velocity in wide binary stars with separations above 3000 kau found in Gaia DR3. 
Last but not least, we investigate the effect of scale invariance on gravitational lensing. We show that scale invariance does not affect the geodesics of light rays as they pass in the vicinity of a massive galaxy. However, scale-invariant effects do change the inferred mass-to-light ratio of lens galaxies as compared to GR. As a result, the discrepancies seen in GR between the total lensing mass of galaxies and their stellar mass from photometry may be accounted for. This holds true both for lenses at high redshift like JWST-ER1 and at low redshift like in the SLACS sample. 
Of note is that none of the above observational tests require dark matter or any adjustable parameter to tweak the theory at any given mass or spatial scale.
Andre Maeder, Frederic Courbin, "A Survey of Dynamical and Gravitational Lensing Tests in Scale Invariance: The Fall of Dark Matter?" arXiv:2410.21379 (October 28, 2024) (accepted in "Symmetry").

Sabine Hossenfelder, at her blog, gives the paper a thumbs down:
Several people have informed me that phys.org has once again uncritically promoted a questionable paper, in this case by André Maeder from UNIGE. This story goes back to a press release by the author’s home institution and has since been hyped by a variety of other low-quality outlets.

From what I gather from Maeder’s list of publications, he’s an astrophysicist who recently had the idea to revolutionize cosmology by introducing a modification of general relativity. The paper which now makes headlines studies observational consequences of a model he introduced in January and claims to explain away the need for dark matter and dark energy. Both papers contain a lot of fits to data but no consistent theory. Since the man is known in the astrophysics community, however, the papers got published in ApJ, one of the best journals in the field.

For those of you who merely want to know whether you should pay attention to this new variant of modified gravity, the answer is no. The author does not have a consistent theory. The math is wrong.

For those of you who understand the math and want to know what the problem is, here we go.

Maeder introduces a conformal prefactor in front of the metric. You can always do that as an ansatz to solve the equations, so there is nothing modified about this, but also nothing wrong. He then looks at empty de Sitter space, which is conformally flat, and extracts the prefactor from there.

He then uses the same ansatz for the Friedmann Robertson Walker metric (eq 27, 28 in the first paper). Just looking at these equations you see immediately that they are underdetermined if the conformal factor (λ) is a degree of freedom. That’s because the conformal factor can usually be fixed by a gauge condition and be chosen to be constant. That of course would just give back standard cosmology and Maeder doesn’t want that. So he instead assumes that this factor has the same form as in de Sitter space.

Since he doesn’t have a dynamical equation for the extra field, my best guess is that this effectively amounts to choosing a weird time coordinate in standard cosmology. If you don’t want to interpret it as a gauge, then an equation is missing. Either way the claims which follow are wrong. I can’t tell which is the case because the equations themselves just appear from nowhere. Neither of the papers contain a Lagrangian, so it remains unclear what is a degree of freedom and what isn’t. (The model is also of course not scale invariant, so somewhat of a misnomer.)

Maeder later also uses the same de Sitter prefactor for galactic solutions, which makes even less sense. You shouldn’t be surprised that he can fit some observations when you put in the scale of the cosmological constant to galactic models, because we have known this link since the 1980s. If there is something new to learn here, it didn’t become clear to me what.

Maeder’s papers have a remarkable number of observational fits and pretty plots, which I guess is why they got published. He clearly knows his stuff. He also clearly doesn’t know a lot about modifying general relativity. But I do, so let me tell you it’s hard. It’s really hard. There are a thousand ways to screw yourself over with it, and Maeder just discovered the one thousand and first one.

Please stop hyping this paper.
Her concerns are duly noted and are probably correct.
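For readers trying to follow her gauge-freedom point above, the construction at issue can be sketched roughly as follows. This is my own notation and only a paraphrase of her argument, not the paper's actual derivation:

```latex
% Sketch: a conformal (scale) prefactor multiplying the metric,
\[
  g'_{\mu\nu}(x) \;=\; \lambda^{2}(x)\, g_{\mu\nu}(x) .
\]
% For a flat FRW line element with \lambda depending only on time this reads
\[
  ds'^{2} \;=\; \lambda^{2}(t)\left[-\,dt^{2} + a^{2}(t)\,\delta_{ij}\,dx^{i}\,dx^{j}\right].
\]
% Absent a dynamical equation for \lambda(t), one may define d\tau = \lambda\,dt
% and \tilde{a} = \lambda a and recover the standard FRW form; \lambda then acts
% like a gauge (coordinate) choice, which is the underdetermination she flags.
```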