Robust
Fermi-LAT observations have strongly constrained dark matter annihilation through the joint-likelihood analysis of dwarf spheroidal galaxies (dSphs). These constraints are expected to be robust because dSphs have measurable dark matter content and produce negligible astrophysical emission. However, each dSph is dim, with a predicted flux that typically falls below the accuracy of the background model. We show that this significantly diminishes the reliability of previous joint-likelihood algorithms, and develop an improved analysis that directly accounts for the effect of background mismodeling. This method produces more robust limits and detections of dark matter in both real and mock data. We calculate improved limits on the dark matter annihilation cross-section, which differ by nearly a factor of two from previous analyses — despite examining identical data.
Over the last decade, Fermi-LAT observations have strongly constrained the dark matter annihilation cross-section. In many cases, these observations have met, or exceeded, the thermal annihilation cross-section — at which dark matter particles in thermal equilibrium in the early universe would freeze out to produce the observed dark matter abundance [1]. Intriguingly, several excesses are consistent with a ∼30-100 GeV dark matter particle annihilating near this cross-section [2–6], most notably the γ-ray excess observed surrounding the Milky Way's galactic center [7–12]. These facts place us in a new "precision" regime, where the accuracy of dark matter annihilation constraints is critical.

The most robust annihilation constraints stem from the Milky Way's dwarf spheroidal galaxies (dSphs) [13–19]. While dSphs are unlikely to be the brightest annihilation sources, three factors produce strong constraints. First, stellar dispersion measurements constrain their dark matter content [20–22]. Second, they lie far from the complex Milky Way plane. Third, they produce negligible astrophysical emission [23]. These factors are unique to dSphs and validate their status as the "silver bullet" target for indirect detection.

However, there are two complications in setting annihilation constraints with dSphs. First, while their dark matter content is measured, their relative annihilation rate is uncertain due to variations in the dark matter radial profile, as well as effects such as asphericity and triaxiality. These complexities are translated into statistical errors in the dSph "J-factor", calculated as the square of the dark matter density integrated over the line of sight. Significant efforts are underway to quantify and model these uncertainties [20–22, 24–28].

Second, the γ-ray emission near each dSph is not modeled to the level of Poisson noise. Errors stem from inaccuracies in diffuse Galactic emission models and the large population of point sources with fluxes too small to be identified. Studies have focused on the latter effect by correlating γ-ray hotspots with multi-wavelength catalogs [29–31]. However, likelihood-fitting algorithms inexorably link these uncertainties by allowing a combination of point sources and diffuse emission to fit the data over a large region-of-interest (ROI).

In Ref. [32, hereafter, L16] (see also Appendix A) we showed that these issues become acute for sources detected at the ∼2σ level. In this regime, the density of sub-threshold sources approaches the Fermi-LAT angular resolution, and the best-fit model produces a distribution of negative and positive residuals stemming from over- and under-subtraction.

The Fermi-LAT Collaboration's joint-likelihood analysis of dSphs is the quintessential example of an algorithm that operates in this low-significance regime [16–18]. In calculating a 2σ exclusion limit evaluated over an ensemble of dSphs, the analysis sets constraints based on low-significance information about each object. While the joint-likelihood algorithm is well-defined when each dSph likelihood profile can be interpreted statistically, the standard algorithm does not account for the possibility that mismodeling induces significant positive or negative residuals coincident with dSph locations. The Fermi-LAT collaboration examines these uncertainties by repeating their analysis on thousands of "blank sky" locations that do not contain dSphs, finding that the calculated 2σ limit can vary by an order of magnitude at 95% confidence [18]. This provides an indication of the level of uncertainty produced by diffuse mismodeling, but does not reveal how the true dSph constraint has been affected.

Several data-driven methods have been proposed to constrain the dSph flux without relying on a particular background model [13, 15, 19, 31, 33–37]. In general, these techniques choose numerous "blank sky locations" to predict (either through averaging or machine learning) the γ-ray photon count or flux within a given ROI surrounding each dSph. Comparing the predicted flux with the true dSph flux, they constrain any additional dark matter signal. These analyses benefit because they directly integrate the uncertainty in astrophysical mismodeling into the predicted dSph flux – accounting for the increased prevalence of low-significance excesses in real data. However, they lose sensitivity in two ways. First, while they utilize γ-ray data to predict the astrophysical flux near each dSph, they do not model known features (e.g. gas clouds or bright point-sources) that have been observed (and integrated into) astrophysical models. Second, they typically do not take into account the full information provided by the energy-dependent point-spread function in differentiating source events from background.
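The "blank sky" comparison at the heart of these data-driven methods can be illustrated with a minimal Python sketch. This is not the pipeline of any cited analysis: the function name, the Gaussian stand-in for the blank-sky fluxes, and all numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for fluxes (erg cm^-2 s^-1) measured at source-free positions;
# a real analysis would extract these from the gamma-ray data themselves.
blank_sky_flux = rng.normal(loc=0.0, scale=1e-13, size=1000)

def empirical_p_value(dsph_flux, blank_fluxes):
    """Fraction of blank-sky positions at least as bright as the dSph.

    A large fraction means the measured dSph flux is consistent with
    background mismodeling; a small one indicates a genuine excess.
    """
    return float(np.mean(blank_fluxes >= dsph_flux))

print(empirical_p_value(2.5e-13, blank_sky_flux))
```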
FIG. 1. The standard (top) and improved (bottom) joint-likelihood analyses, applied to a two-dSph stack of Ursa Major II (blue) and Willman 1 (red). Results are shown for 100 GeV dark matter annihilating to bb̄. The key difference is the use of blank sky positions to re-calculate the likelihood profile of each dSph before the joint-likelihood analysis is performed. In this case, the 2σ limit is improved by ∼30%. Uncertainty bands can also be calculated in the improved analysis; however, the information from blank sky positions is already included to set the limit.
In this paper, we develop an improved analysis that combines the advantages of the Fermi-LAT joint-likelihood analyses with the accurate treatment of fluctuations provided by data-driven methods. The key conceptual improvement is as follows: while the standard joint-likelihood analysis tests the null hypothesis that the distribution of observed dSph fluxes (weighted by J-factors) is consistent with 0, our analysis tests the null hypothesis that the distribution of observed dSph fluxes (weighted by J-factors) is consistent with the distribution of "fake" point-source fluxes imparted by astrophysical mismodeling. This integrates the "blank sky" results directly into the calculated limit. We show that this method produces more resilient detections and constraints on dark matter annihilation, and the results of the new analysis match statistical expectations. Intriguingly, this method can affect the annihilation constraint by a factor of 2 compared to previous work, even when analyzing identical data. We use this model to robustly constrain dark matter annihilation.

Joint-Likelihood Analysis — Our modeling proceeds similarly to L16. Figure 1 illustrates each step in our analysis. For each dark matter model, a probability distribution of point-source fluxes induced by astrophysical mismodeling is calculated from an ensemble of blank sky locations as:

\[ P^{0}_{\rm bg}(\phi_{\rm bg}) = \frac{1}{N}\sum_i A_i\,\mathcal{L}_i(\phi_{\rm bg}) \tag{1} \]

where \(\mathcal{L}_i\) is the likelihood function of a single source evaluated at flux φ_bg, and A_i normalizes the integral of the likelihood function over all fluxes. In Bayesian language, P⁰ can be described as a conjugate prior to the likelihood function. We average over 1000 blank sky positions located at |b| > 30°, and > 1° from any 3FGL source. The flux can be either positive or negative, because these blank skies account for the impact of mismodeling. The function P⁰_bg is spectrum specific, and is computed for each dark matter mass and final state.

This definition of P⁰_bg includes contributions from both astrophysical mismodeling and Poisson fluctuations in the photon count. In order to isolate the effect of mismodeling, we calculate the flux distribution φ_bg,Poisson expected from statistical fluctuations, and then deconvolute the two effects in order to isolate a function P_bg, which describes the additional systematic uncertainty. We then use P_bg for the remainder of the analysis. Removing the Poisson noise term produces a ∼10% change in our results. In Appendices A and B we describe this procedure in detail and show both P⁰_bg and P_bg for a 100 GeV dark matter particle annihilating to bb̄.

The true γ-ray flux from a dSph is given by:

\[ \phi_{s,i} = \frac{\langle\sigma v\rangle\,\phi_\chi\,10^{J}}{8\pi m_\chi^2} \tag{2} \]

where ⟨σv⟩ is the annihilation cross-section, J is the logarithmic J-factor, and φ_χ is the model-specific energy flux per annihilation. For a given ⟨σv⟩, the probability that a dSph emits a flux φ_{s,i} is given by the offset between the calculated J-factor and the photometric J-factor:

\[ P(\phi_{s,i}, \langle\sigma v\rangle, J_i) = P(J_i) = \frac{1}{\sqrt{2\pi}\,\sigma_i}\exp\left[\frac{-(J - J_i)^2}{2\sigma_i^2}\right] \tag{3} \]

where J_i and σ_i are the photometric J-factors and uncertainties for dSph i, expressed logarithmically. Given this definition, the joint-likelihood of an ensemble of dSphs is calculated by first integrating out the background contribution, and then maximizing the resulting likelihood for some value of P(φ_{s,i}):

\[ P(\langle\sigma v\rangle) = \prod_i \operatorname*{arg\,max}_{\phi_{s,i}} \int_{-\infty}^{\infty} \mathcal{L}(\phi_{s,i} + \phi_{\rm bg})\, P_{\rm bg}(\phi_{\rm bg})\, P_{s,i}(\phi_{s,i}, \langle\sigma v\rangle, J_i, \sigma_i)\; d\phi_{\rm bg} \tag{4} \]
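To make Equations (1)–(4) concrete, the following is a minimal numerical sketch in Python. It assumes each blank-sky and dSph likelihood profile has already been tabulated on a common flux grid (in the actual analysis these come from the binned Fermi-LAT fits); all grids, profiles, normalizations, and function names are synthetic stand-ins, not the published pipeline.

```python
import numpy as np

phi = np.linspace(-5e-13, 5e-13, 2001)   # flux grid (negative values allowed)

def normalize(p):
    """Normalize a tabulated distribution so it integrates to unity."""
    return p / np.trapz(p, phi)

# --- Eq. (1): average the normalized blank-sky likelihood profiles ------
rng = np.random.default_rng(1)
blank_profiles = [np.exp(-0.5 * ((phi - mu) / 1e-13) ** 2)
                  for mu in rng.normal(0.0, 5e-14, size=1000)]
P_bg = normalize(np.mean([normalize(L) for L in blank_profiles], axis=0))

# --- Eq. (2): source flux for a given <sigma v>, J-factor and mass ------
def signal_flux(sigmav, J, m_chi, phi_chi):
    """phi_s = <sigma v> phi_chi 10**J / (8 pi m_chi**2); toy units."""
    return sigmav * phi_chi * 10.0 ** J / (8.0 * np.pi * m_chi ** 2)

# --- Eq. (3): Gaussian weight for the logarithmic J-factor --------------
def P_J(J, J_i, sigma_i):
    return (np.exp(-0.5 * ((J - J_i) / sigma_i) ** 2)
            / (np.sqrt(2.0 * np.pi) * sigma_i))

# --- Eq. (4): convolve with P_bg, then profile over the source flux -----
def dsph_term(L_data, sigmav, J_i, sigma_i, m_chi, phi_chi, J_grid):
    """max over J of  P(J) * integral dphi_bg L(phi_s(J)+phi_bg) P_bg."""
    best = 0.0
    for J in J_grid:
        phi_s = signal_flux(sigmav, J, m_chi, phi_chi)
        # evaluate the dSph likelihood at phi_s + phi_bg for each phi_bg
        L_shift = np.interp(phi + phi_s, phi, L_data, left=0.0, right=0.0)
        best = max(best, np.trapz(L_shift * P_bg, phi) * P_J(J, J_i, sigma_i))
    return best

# Toy usage with one dSph profile; phi_chi is chosen so the toy signal
# lands on the flux grid (arbitrary normalization, illustration only).
L_data = np.exp(-0.5 * ((phi - 1e-13) / 8e-14) ** 2)
term = dsph_term(L_data, sigmav=3e-26, J_i=19.0, sigma_i=0.3,
                 m_chi=100.0, phi_chi=0.08, J_grid=np.linspace(17, 21, 81))
# The joint likelihood of Eq. (4) is the product of such terms over dSphs.
```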
where each dSph flux is a combination of a true source flux (φ_{s,i}) and a flux induced by background mismodeling (φ_bg). The source flux is non-negative (it represents a physical source), while the background flux can be negative (it represents the flux induced by mismodeling). The results are weighted by the probabilities (P_bg and P_{s,i}) that each flux is consistent with the blank sky and dark matter fluxes, respectively. The joint-likelihood is the product of each dSph likelihood. In this model, the probability is maximized at the best-fit value of φ_{s,i}. In Appendix D we show results obtained by marginalizing over φ_{s,i}. At their core, these models both integrate the information from blank sky positions into the dSph flux calculation before the joint-likelihood is performed. We now demonstrate the advantages of this approach.

Fermi-LAT Analysis — We analyze 9.5 yr of Fermi-LAT data in a 14°×14° ROI around each dSph. We divide P8R2_SOURCE_V6 photons into four bins based on their angular reconstruction (event types: PSF0–PSF3), and 24 logarithmic energy bins spanning 500 MeV to 500 GeV. We analyze each bin separately and sum the results to obtain the total log-likelihood. We additionally analyze 1000 "blank sky locations" at latitude |b| > 30° and separated by > 1° from any 3FGL source. We assume that each dSph is represented by a point source – rather than a spatially extended profile. Ref. [16] showed that this only marginally affects the calculated flux – and we stress that the goal of this letter is to focus on the likelihood algorithms. Thus, we designed this analysis to be similar to Fermi-LAT collaboration studies when possible (a few exceptions are noted in Appendix C).

Unlike previous work, we evaluate both positive and negative fluxes in each energy bin to calculate the dSph likelihood profile (∆LG(L)). In this work, we utilize a two-step fitting algorithm. First, the background diffuse and point-source emission is fit to the data over the ROI following the standard formalism. We subsequently fit the dSph to the data, allowing only the dSph normalization to float. We have tested alternative methods, and found that they negligibly affect our results when the dSph flux is small, as is the case here. Further analysis details are provided in L16 and Appendix C.
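A minimal sketch of the per-bin likelihood scan just described, assuming a fixed background model and a toy source template; the counts are synthetic placeholders, and the real analysis evaluates this with the Fermi science tools rather than the hand-rolled Poisson term below.

```python
import numpy as np

rng = np.random.default_rng(2)
bkg_model = np.full(100, 0.3)                  # fixed background counts/pixel
src_template = np.zeros(100)
src_template[45:55] = 0.1                      # toy PSF-like source template
observed = rng.poisson(bkg_model)              # synthetic observed counts

def delta_loglike(norm):
    """log L(bkg + norm*src) - log L(bkg only); norm may be negative."""
    mu = bkg_model + norm * src_template
    if np.any(mu <= 0):                        # negative-model guard (App. C)
        return -np.inf
    return float(np.sum(observed * np.log(mu) - mu)
                 - np.sum(observed * np.log(bkg_model) - bkg_model))

norms = np.linspace(-3.0, 3.0, 121)            # scan both signs of the flux
profile = np.array([delta_loglike(n) for n in norms])
# 'profile' is the Delta LG(L) curve for one energy bin and PSF class;
# summing the curves over bins gives the total dSph likelihood profile.
```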
Results — We reproduce two Fermi collaboration studies, comparing the standard joint-likelihood analysis with our method. We first focus on the 15 dSphs examined by [16, 17] and then the 45 dSphs studied by [18]. We utilize the reported locations, distances and J-factors from previous work, but use 9.5 yr of data. We make these choices to isolate the effect of the joint-likelihood analysis on the annihilation constraint. We stress that any change in the constraint stems directly from the analysis method. The datasets are identical, thus the results cannot be considered to be "statistically consistent."

In Figure 2, we constrain the annihilation cross-section using the standard (left) and improved (right) methods, and show 1σ and 2σ error bands derived from the blank sky analyses. We note three effects. First, the uncertainty in the expected limit decreases significantly in the improved analysis, demonstrating the resilience of the method. In previous work, the limits varied based on the diffuse mismodeling near individual dSphs. Our model accounts for these fluctuations, decreasing the uncertainty from outlier dSphs.

Second, while the limits are similar between ∼50–500 GeV, we find weaker constraints (by a factor of 2) at low masses. This is primarily due to the oversubtraction of low-energy γ-rays near Segue I, which is the highest J-factor dSph in the sample. The best-fit flux for 5 GeV dark matter in Segue I is −7×10⁻¹⁴ erg cm⁻² s⁻¹. Because the standard model does not allow mismodeling to induce this flux, it adopts a likelihood function that is skewed towards 0. This forces the low-mass constraint to fall nearly 2σ below expectations. In the improved analysis, the negative flux from Segue I is absorbed into φ_bg. Our analysis still obtains a strong limit from Segue I, because φ_{s,i} cannot be too big without φ_bg becoming unreasonably negative. Indeed, the improved constraint is still ∼2σ below expectations, but the amplitude of the shift is smaller.

Third, the high-mass limits are also a factor of ∼2 weaker in our analysis. This is not a fluctuation, but a robust feature (note that the expected error bands predict weaker constraints). The energy flux from TeV dark matter peaks at ∼100 GeV, where few photons are present. It is likely that 0 photons are observed near the dSph, while the background model predicts O(0.1) events. Thus, the average dSph has a slightly negative best-fit flux. In the standard analysis, this significantly strengthens the combined limit. However, the improved analysis is normalized to blank-sky positions that also skew towards negative fluxes, correcting for this effect.

In Figure 3 (blue), we show constraints from the 45 dSphs analyzed in [18]. In this case, we find that our analysis produces stronger constraints (by ∼25%) below ∼500 GeV. The stronger limits derive from the small excesses found in several dSphs (e.g. Reticulum II), which cause the standard analysis to prefer the annihilation of 150 GeV dark matter to bb̄ at TS = 8.9 (similar to Ref. [18]). Nominally, this would translate to a 3σ detection of dark matter. However, an excess of this significance is relatively common in analyses of blank-sky locations, decreasing our confidence in this result. The improved analysis also finds a slight excess at 150 GeV (because these dSphs have real upward fluctuations). However, after accounting for the probability that background mismodeling induces similar upward fluctuations, it finds the significance of this excess to be only TS=0.4.

In Figure 3 (red-dashed), we show a scenario that challenges the standard analysis, induced by adding the Aquarius II dSph into the stack. Aquarius II has a bright, soft-spectrum γ-ray excess. Modeling this excess with 5 GeV dark matter (to bb̄) improves the log-likelihood by 10.7, statistically representing a 4.6σ detection of dark matter. However, Aquarius II is not expected to be bright: its logarithmic J-factor is only 18.27±0.62 [38]. The standard algorithm resolves this discrepancy by treating the J-factor measurement and likelihood function equivalently, assuming that the excess is statistical in nature. Thus, the algorithm increases the Aquarius II J-factor to 19.96, paying a log-likelihood cost of 6.9. This produces a TS=14 detection of 5 GeV dark matter in the joint-likelihood analysis and weakens the limit by a factor of 2.

The improved algorithm includes the possibility that the Aquarius II flux is produced by φ_bg. Blank sky analyses indicate that there is a ∼0.7% chance that a background fluctuation would induce an excess compatible with 5 GeV dark matter at this significance. Thus, the Aquarius II result is significant at ≲2.4σ. Combined with its low J-factor, this prevents the improved algorithm from overreacting to the excess. This test illuminates two lessons. First, the improved analysis allows the statistical uncertainties in each dSph J-factor to be directly compared to the significance of any γ-ray excess. Second, the analysis can accurately treat outliers in the Fermi-LAT data, whereas the standard method would require manual intervention to remove Aquarius II.

Mock Signal — Thus far, one should worry that our analysis can only constrain dark matter, and may misinterpret a true signal as a fluctuation in φ_bg. Here, we show that the analysis accurately recovers signals that are injected into γ-ray data. We produce 50 simulations of 15 blank sky locations at latitude |b| > 30° lying > 1° from any 3FGL source. We assign each blank sky to be one of the 15 dSphs analyzed in [16] and use gtobssim to inject a signal corresponding to the best-fit J-factor of each dSph and a 100 GeV dark matter model that annihilates to bb̄ with a cross-section of 3×10⁻²⁶ cm³ s⁻¹. This lies near the 2σ constraint shown in Figure 2 for both analyses. Thus, we expect a typical detection at ∼2σ.

In Figure 4, we show that standard analyses produce results that vary from a −1.4σ exclusion of the signal to a 7.9σ detection. The hypothesis that this dispersion is consistent with Gaussian fluctuations around the mean signal significance is rejected at 5.1σ (still 3.3σ even when the worst outlier is removed). Scanning the 2D mass/cross-section space, we find that the incorrect mass and cross-section are preferred at TS>4 in >30% of cases. While blank-sky analyses could characterize this uncertainty, the standard method would have difficulty indicating the correct parameters.

Our analysis produces a distribution of significances following a χ² distribution centered near the predicted value of ∼2σ. While the distribution of √TS is slightly more clustered than expected, this is significant at only 0.9σ. In the two-dimensional reconstruction, the variance between the injected and recovered parameters also follows a χ² distribution. We note that the dark matter mass and the γ-ray energy are scanned coarsely, and the mass and cross-section parameters are not linearly independent. We thus expect to observe a χ² distribution with between 1 and 2 d.o.f. We conclude that our method is sensitive to dark matter signals, and moreover, that the significance of any discovered excess can be interpreted using standard statistical methods.
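The dispersion test described above can be sketched as follows, with synthetic trial significances standing in for the recovered values (the real test operates on the 50 injection trials). This is a schematic restatement of the consistency check, not the published implementation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
recovered = rng.normal(2.0, 1.0, size=50)   # toy per-trial significances

# Scatter about the mean should be chi^2 with N-1 d.o.f. if each trial
# fluctuates with unit Gaussian variance around the true significance.
chi2_stat = np.sum((recovered - recovered.mean()) ** 2)
p_value = stats.chi2.sf(chi2_stat, df=len(recovered) - 1)
print("rejection significance:", stats.norm.isf(p_value), "sigma")
```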
FIG. 4. (Left) The recovered statistical significance from 50 trials including an injected 100 GeV dark matter signal with a cross-section of 3×10⁻²⁶ cm³ s⁻¹. In the default analysis (top left), the statistical significance includes several significant upward and downward fluctuations that would not be expected from a χ² distribution surrounding the average significance (such a distribution is rejected at 5.1σ). The improved analysis (bottom left) closely follows a normal distribution. (Right) The cumulative distribution function of the ∆TS between the injected dark matter mass and cross-section and the recovered best-fit dark matter mass and cross-section in the standard (blue) and improved (red) analyses. Results are compared to χ²-distributions with 1 and 2 d.o.f. (as the mass is only scanned coarsely). The improved analysis follows this expectation, while the standard analysis prefers an incorrect mass and cross-section at a significance of TS>4 in >30% of cases.
Dark Matter Constraints — We now evaluate the annihilation constraints shown in Figures 2 and 3, finding that our analysis rules out (at 95% confidence) dark matter masses below 120 GeV and 87 GeV annihilating to bb̄ at a cross-section of 3×10⁻²⁶ cm³ s⁻¹. The constraints do weaken after the inclusion of 30 additional dSphs (and some J-factor changes between [16] and [18]). This is a real effect, based on the low-significance detection of excesses in Reticulum II and a handful of other dSphs. Our results produce maximum TS values of ∼0.04 at a mass of 5 TeV with 15 dSphs, and 1.06 at a mass of ∼31 GeV with 45 dSphs. These TS values can be interpreted as true statistical preferences for dark matter, though at a very low significance. We argue that these results, utilizing 9.5 yr of data and an improved statistical method, set robust constraints on dark matter annihilation.

Conclusion — In this paper we introduced a simple, but significantly more powerful, method to evaluate the constraints on annihilating dark matter through the joint-likelihood analysis of dwarf spheroidal galaxies. We have demonstrated several advantages to this technique using real Fermi-LAT data: (1) the technique is resilient to systematic uncertainties in Fermi-LAT background modeling and does not produce overly strong limits or unreasonably strong excesses due to diffuse over- or under-subtraction, (2) the technique allows the statistical significance of excesses to be directly compared to the statistical significance of the J-factor measurement, without biasing results towards the Fermi-LAT measurements that are not represented by Poisson statistics, (3) the technique accurately measures the uncertainty in both the cross-section and mass of an injected dark matter signal, and does not produce high-precision (but low-accuracy) measurements.

We note that this method integrates seamlessly with existing analyses conducted by the Fermi-LAT collaboration, only requiring the use of Eq. 4 to evaluate the joint-likelihood function. We thus recommend its integration in future dSph analyses. Moreover, as noted in L16, the result can be trivially adapted to any search for low-significance excesses in Fermi-LAT data, potentially improving our sensitivity to new phenomena on the threshold of detection. In Appendix E, we discuss several technical considerations required in the implementation of these analysis techniques.

Finally, the result of this analysis provides one of the most robust constraints on annihilating dark matter. Using 9.5 yr of Fermi-LAT data, we rule out the thermal annihilation cross-section up to masses of ∼87 GeV. However, several simplifications have been made (due to our focus on statistical techniques), which will be improved in an upcoming publication. Most notably, we have not utilized the most up-to-date photometric J-factors produced by numerous groups, instead choosing values consistent with previous studies. Second, we have modeled dSphs as point sources, rather than carefully considering their spatial extension. Third, we have not used the very recently released 4FGL catalog, which appeared subsequent to the computations shown in this paper. The inclusion of these improvements into the current framework is straightforward, and will be considered in an upcoming publication.
[1] G. Steigman, B. Dasgupta, and J. F. Beacom, Phys. Rev. D86, 023506 (2012), arXiv:1204.3622 [hep-ph].
[2] D. Hooper, A. V. Belikov, T. E. Jeltema, T. Linden, S. Profumo, and T. R. Slatyer, Phys. Rev. D86, 103003 (2012), arXiv:1203.3547 [astro-ph.CO].
[3] B. Bertoni, D. Hooper, and T. Linden, JCAP 1605, 049 (2016), arXiv:1602.07303 [astro-ph.HE].
[4] A. Cuoco, M. Krämer, and M. Korsmeier, Phys. Rev. Lett. 118, 191102 (2017), arXiv:1610.03071 [astro-ph.HE].
[5] M.-Y. Cui, Q. Yuan, Y.-L. S. Tsai, and Y.-Z. Fan, Phys. Rev. Lett. 118, 191101 (2017), arXiv:1610.03840 [astro-ph.HE].
[6] I. Cholis, T. Linden, and D. Hooper, (2019), arXiv:1903.02549 [astro-ph.HE].
[7] L. Goodenough and D. Hooper, (2009), arXiv:0910.2998 [hep-ph].
[8] K. N. Abazajian and M. Kaplinghat, Phys. Rev. D86, 083511 (2012), [Erratum: Phys. Rev. D87, 129902 (2013)], arXiv:1207.6047 [astro-ph.HE].
[9] C. Gordon and O. Macias, Phys. Rev. D88, 083521 (2013), [Erratum: Phys. Rev. D89, no.4, 049901 (2014)], arXiv:1306.5725 [astro-ph.HE].
[10] T. Daylan, D. P. Finkbeiner, D. Hooper, T. Linden, S. K. N. Portillo, N. L. Rodd, and T. R. Slatyer, Phys. Dark Univ. 12, 1 (2016), arXiv:1402.6703 [astro-ph.HE].
[11] M. Ajello et al. (Fermi-LAT), Astrophys. J. 819, 44 (2016), arXiv:1511.02938 [astro-ph.HE].
[12] M. Ackermann et al. (Fermi-LAT), Astrophys. J. 840, 43 (2017), arXiv:1704.03910 [astro-ph.HE].
[13] A. Geringer-Sameth and S. M. Koushiappas, Phys. Rev. Lett. 107, 241303 (2011), arXiv:1108.2914 [astro-ph.CO].
[14] M. Ackermann et al. (Fermi-LAT), Phys. Rev. Lett. 107, 241302 (2011), arXiv:1108.3546 [astro-ph.HE].
[15] A. Geringer-Sameth and S. M. Koushiappas, Phys. Rev. D86, 021302 (2012), arXiv:1206.0796 [astro-ph.HE].
[16] M. Ackermann et al. (Fermi-LAT), Phys. Rev. D89, 042001 (2014), arXiv:1310.0828 [astro-ph.HE].
[17] M. Ackermann et al. (Fermi-LAT), Phys. Rev. Lett. 115, 231301 (2015), arXiv:1503.02641 [astro-ph.HE].
[18] A. Albert et al. (DES, Fermi-LAT), Astrophys. J. 834, 110 (2017), arXiv:1611.03184 [astro-ph.HE].
[19] S. Hoof, A. Geringer-Sameth, and R. Trotta, (2018), arXiv:1812.06986 [astro-ph.CO].
[20] A. Charbonnier, C. Combet, M. Daniel, S. Funk, J. A. Hinton, D. Maurin, C. Power, J. I. Read, S. Sarkar, M. G. Walker, and M. I. Wilkinson, MNRAS 418, 1526 (2011), arXiv:1104.0412 [astro-ph.HE].
[21] V. Bonnivard, C. Combet, D. Maurin, M. G. Walker, and A. Geringer-Sameth, Proceedings, 14th International Conference on Topics in Astroparticle and Underground Physics (TAUP 2015): Torino, Italy, September 7-11, 2015, J. Phys. Conf. Ser. 718, 042005 (2016).
[22] L. E. Strigari, Rept. Prog. Phys. 81, 056901 (2018), arXiv:1805.05883 [astro-ph.CO].
[23] M. Winter, G. Zaharijas, K. Bechtol, and J. Vandenbroucke, Astrophys. J. 832, L6 (2016), arXiv:1607.06390 [astro-ph.HE].
[24] V. Bonnivard, C. Combet, D. Maurin, and M. G. Walker, MNRAS 446, 3002 (2015), arXiv:1407.7822 [astro-ph.HE].
[25] J. L. Sanders, N. W. Evans, A. Geringer-Sameth, and W. Dehnen, Phys. Rev. D 94, 063521 (2016), arXiv:1604.05493.
[26] A. Chiappo, J. Cohen-Tanugi, J. Conrad, L. E. Strigari, B. Anderson, and M. A. Sanchez-Conde, Mon. Not. Roy. Astron. Soc. 466, 669 (2017), arXiv:1608.07111 [astro-ph.CO].
[27] K. Hayashi, K. Ichikawa, S. Matsumoto, M. Ibe, M. N. Ishigaki, and H. Sugai, MNRAS 461, 2914 (2016), arXiv:1603.08046.
[28] A. B. Pace and L. E. Strigari, (2018), arXiv:1802.06811.
[29] E. Carlson, D. Hooper, and T. Linden, Phys. Rev. D91, 061302 (2015), arXiv:1409.1572 [astro-ph.HE].
[30] D. Hooper and T. Linden, JCAP 1509, 016 (2015), arXiv:1503.06209 [astro-ph.HE].
[31] A. Geringer-Sameth, S. M. Koushiappas, M. G. Walker, V. Bonnivard, C. Combet, and D. Maurin, (2018), arXiv:1807.08740 [astro-ph.HE].
[32] T. Linden, Phys. Rev. D96, 083001 (2017), arXiv:1612.03175 [astro-ph.HE].
[33] M. N. Mazziotta, F. Loparco, F. de Palma, and N. Giglietto, Astropart. Phys. 37, 26 (2012), arXiv:1203.6731 [astro-ph.IM].
[34] A. Geringer-Sameth, S. M. Koushiappas, and M. G. Walker, Phys. Rev. D 91, 083535 (2015), arXiv:1410.2242.
[35] A. Geringer-Sameth, M. G. Walker, S. M. Koushiappas, S. E. Koposov, V. Belokurov, G. Torrealba, and N. W. Evans, Phys. Rev. Lett. 115, 081101 (2015), arXiv:1503.02320 [astro-ph.HE].
[36] K. Boddy, J. Kumar, D. Marfatia, and P. Sandick, Phys. Rev. D97, 095031 (2018), arXiv:1802.03826 [hep-ph].
[37] F. Calore, P. D. Serpico, and B. Zaldivar, (2018), arXiv:1803.05508 [astro-ph.HE].
[38] A. B. Pace and L. E. Strigari, (2018), arXiv:1802.06811 [astro-ph.GA].
[39] F. Acero et al. (Fermi-LAT), Astrophys. J. Suppl. 218, 23 (2015), arXiv:1501.02003 [astro-ph.HE].
[40] M. Ajello, M. S. Shaw, R. W. Romani, C. D. Dermer, L. Costamante, O. G. King, W. Max-Moerbeck, A. Readhead, A. Reimer, J. L. Richards, and M. Stevenson, ApJ 751, 108 (2012), arXiv:1110.3787.
[41] A. Berlin and D. Hooper, Phys. Rev. D89, 016014 (2014), arXiv:1309.0525 [hep-ph].
Appendix A: The Density of Sub-Threshold Sources as a Function of TS

[FIG. 5 (top): the log-likelihood improvement ∆LG(L) as a function of energy flux (erg cm⁻² s⁻¹) at blank sky locations, with the best-fit scaling log10[∆LG(L)] = 1.88 log10(|f|) + 24.11.]

In this section, we expand on the influence of sub-threshold sources on the TS distribution of γ-ray point sources. We also refer the reader to L16, which provides additional information regarding our statistical methods. The influence of unidentified point-sources is significant in Fermi-LAT analyses due to a combination of its high sensitivity (∼10⁻¹² erg cm⁻² s⁻¹) and modest angular resolution (∼1° for individual 1 GeV photons). This combination results in the detection of a handful of sources observed at the ∼2σ level within every angular resolution element (LG(L)∼3). At higher values, there is a positive excess, owing to brighter point sources that contribute at higher flux values. Furthermore, we note that for fluxes smaller than ∼10⁻¹³ erg cm⁻² s⁻¹, the probability distribution is roughly flat, and symmetric for positive and negative values. In Figure 5 (bottom), we show the same flux distribution, but multiplied by the flux in order to illustrate the regions of parameter space where sources are most likely to appear. We note that, given a probability distribution that is flat over many decades in flux, random positions are highly likely to have fluxes near the value of ∼10⁻¹³ erg cm⁻² s⁻¹, compared to fluxes with much smaller absolute values. In the next section we describe the deconvolution of our observed data (black solid lines) into components relating to statistical and systematic errors.

Appendix B: Separation of Poissonian and Systematic Errors

Because the blank sky analysis operates on real Fermi-LAT data, the likelihoods that enter into Equation 1 include both systematic errors (which we would like to isolate and account for in P_bg) and Poisson fluctuations (which should not be included in the calculation of P_bg, because we want the calculated fluxes of dSphs to include statistical fluctuations). To deconvolve these two terms, we note that the Poissonian error nearly follows a Gaussian distribution, and we calculate the distribution of fluxes induced by Poissonian fluctuations using the correspondence between the γ-ray flux and the log-likelihood fit to the γ-ray data (Figure 5 (top)). In this paper, we build a model for the Poissonian fluctuations by calculating the average Poisson error in all blank sky locations. We note that this technique could be slightly improved by instead calculating the individual Poisson errors in each blank sky location (since the photon counts differ slightly near each blank sky position). However, because we use blank sky locations that lie at high Galactic latitude and far from any point sources, the background fluxes for all sources are similar.

In theory, many deconvolution algorithms can be used to separate these two components – but preventing numerical issues can be tricky (due to the large dynamic range in P_bg). In this paper, we utilize forward modeling. We first guess at the systematic distribution P_bg and then convolve this distribution with the Poissonian noise term to fit the measured value of P⁰_bg. We iteratively improve this fit and approach a "good" solution. After each iteration we use a high-pass filter to remove high-frequency noise. We obtain solutions that fit to within 5% (usually within 1%) over the range where P_bg is more than 10⁻⁵ of its maximum value. This fit could be improved with additional iterations. The result for 100 GeV dark matter is shown in the middle and lower panels of Figure 5. We find that the amplitude of the excess is much larger for positive flux values – representing the appearance of real sources that produce fluctuations above the expected level of Poisson noise. However, there are important contributions to P_bg at negative fluxes as well. In particular, we note that the similarity between the Poisson expectation of P⁰_bg and the observed value of P⁰_bg is incidental. If only positive systematic errors existed in the Fermi-LAT data, we would expect P⁰_bg to have fewer negative fluctuations than expected by our Poisson model.

Appendix C: Fermi Data Analysis

In this section, we provide further details regarding the Fermi-LAT data analysis. We examine events recorded between MET times 239557417–548377407 that pass the P8R2_SOURCE_V6 event class, were recorded at a zenith angle below 90°, and have an energy between 500 MeV–500 GeV. These events are divided into four classes based on the reliability of their angular reconstruction (evtype=4,8,16,32), and all analyses proceed independently for each PSF class. The source likelihood function is calculated from the individual likelihoods of each PSF class. Each set of events is binned into 140×140 pixels with a bin size of 0.1°, centered on the position of a dSph (or blank sky) source. Events are binned into 24 logarithmically equal energy bins. The ROI is slightly larger than that chosen for the Fermi-LAT analysis, which utilized a 100×100 grid with identical 0.1° pixels. This difference potentially explains the different sensitivity of our low-energy Monte Carlo constraints compared to the Fermi-LAT results [17]. However, it is also possible that differences in the dark matter spectral models at low masses affect the results. The instrumental exposure is calculated in 0.5° increments across the spatial map in the same 24 energy bins utilized for the analysis. The exposure is calculated using the EDGES option, which calculates the instrumental exposure at both edges of the energy bin and interpolates the result.

We first calculate the likelihood fit of a background model in each sky region. This fit includes all 3FGL sources, as well as the gll_iem_v06.fits diffuse model and the iso_P8R2_SOURCE_V6_PSFX_v06.txt isotropic background model, where X represents the PSF class of each analysis. We calculate the likelihood function (and flux) of each source independently in each energy bin. As such, we allow only the total normalization of each source to vary, with the spectrum of each source fixed to the best-fit 3FGL properties. Because we are using small energy bins, this only marginally affects our analysis. In this stage, we do not include a source at the tentative position of the dSph or blank sky position. The source model is produced with gtsrcmaps and the likelihood fit evaluated with gtlike, with energy dispersion disabled.

We then evaluate the expected flux from a point source added at each dSph or blank sky position. We utilize gtmodel to generate the expected number of counts as a function of position and energy for a given source normalization, and then calculate the improvement to the total log-likelihood as a function of the source flux in each energy bin. During this stage the background model is fixed. We allow the flux of the putative point source to be either positive or negative, representing the additional contribution to the point-source flux from background mismodeling.

We note that Poisson likelihoods cannot be used when the total model prediction (including both the dSph and the background model) is negative. This scenario is very rare, and present only in the highest energy bins, which contribute negligibly to the total LG(L). However, to prevent numerical issues in likelihood scans, we treat this scenario as follows. If the model predicts a negative number of counts in a specific angular and energy bin, we first check the number of observed counts in the data. If the number of observed counts is non-zero, we assign an infinite likelihood due to continuity (the log-likelihood of predicting 0 counts would also be infinite). If the number of observed counts is 0, we allow the predicted flux to be negative, and assign a log-likelihood based on the absolute value of the predicted model counts. This maintains a local minimum at the best-fit prediction of zero. We stress that this scenario rarely arises near the best-fit values of the flux, and is important only for the stability of algorithms.
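Returning to the forward-modeling deconvolution of Appendix B, a minimal sketch of one possible iteration scheme follows. The Poisson kernel, the stand-in for the measured P⁰_bg, the multiplicative update rule, and the smoothing pass are all illustrative assumptions; the text does not specify the exact update or filter used.

```python
import numpy as np

phi = np.linspace(-5e-13, 5e-13, 1001)
dphi = phi[1] - phi[0]

def gaussian(x, mu, sig):
    return np.exp(-0.5 * ((x - mu) / sig) ** 2) / (np.sqrt(2 * np.pi) * sig)

kernel = gaussian(phi, 0.0, 5e-14)     # stand-in for the Poisson flux kernel
P0_bg = gaussian(phi, 2e-14, 9e-14)    # stand-in for the measured P0_bg

P_bg = P0_bg.copy()                    # initial guess for the systematic term
for _ in range(200):
    # forward model: convolve the current guess with the Poisson kernel
    model = np.convolve(P_bg, kernel, mode="same") * dphi
    # multiplicative update where the forward model misses the measurement
    P_bg = P_bg * np.where(model > 0, P0_bg / np.maximum(model, 1e-300), 1.0)
    # smoothing pass standing in for the filtering step described in the text
    P_bg = np.convolve(P_bg, np.ones(5) / 5.0, mode="same")
    P_bg = P_bg / np.trapz(P_bg, phi)  # keep the distribution normalized

residual = np.max(np.abs(np.convolve(P_bg, kernel, mode="same") * dphi - P0_bg))
```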
[FIG. 6: two panels, "Minimized J-factor" (left) and "Marginalized J-factor" (right), each plotting the limit against Mass (GeV).]

FIG. 6. The 1σ uncertainties in the expected constraint on dark matter annihilation obtained from the stacked analyses of 45 dSphs described in [18]. Results are shown using two different methods for including J-factor uncertainties in the joint-likelihood analysis: the analysis described by Equation 4 (left), and an analysis that marginalizes over the J-factor uncertainty following Equation D1 (right). In both cases, we show the default limits (black), as well as limits from scenarios where we multiply the J-factor uncertainty of each individual dSph by 2 (red), or divide the J-factor uncertainty by 2 (green) and 4 (blue). In our default analysis, the limit behaves as expected, producing stronger constraints as the uncertainty on the J-factor of individual dSphs decreases. In the marginalized analysis, the limit demonstrates perverse behavior, becoming stronger as the J-factor uncertainty increases. This is due to the very strong constraints on large upward fluctuations in the dSph J-factor provided by γ-ray analyses.
Appendix D: Marginalizing Over J-Factor Uncertainties

In our default analysis, we determine the flux from each individual dSph by integrating over the possible background contributions to the total flux; we then minimize to find the best-fit value of the source flux φ_{s,i}. Because we are testing a specific value of ⟨σv⟩ in a specific dSph, this is equivalent to finding a best-fit J-factor in each dSph (where "best-fit" includes the penalty from choosing a J-factor that deviates from the observed value). In this sense, the method is pseudo-Bayesian, as it builds a prior for the systematic errors, but minimizes the fit for some values of the dSph J-factors. Alternatively, we could institute a more Bayesian approach, where we marginalize our results over all possible J-factors that could contribute to the fit, as follows:

\[ P(\langle\sigma v\rangle) = \prod_i \int_{-\infty}^{\infty}\int_{0}^{\infty} \mathcal{L}(\phi_{s,i} + \phi_{\rm bg})\, P_{\rm bg}(\phi_{\rm bg})\, P_{s,i}(\phi_{s,i}, \langle\sigma v\rangle, J_i, \sigma_i)\; d\phi_{s,i}\, d\phi_{\rm bg} \tag{D1} \]

where the likelihood and probability distributions are defined as in the main text, while the integral over φ_{s,i} is assumed to have a logarithmically-flat prior. The integral over φ_{s,i} includes only non-negative values, as the emission represents a true source that must be non-negative. This is identical to a constraint that the dSph J-factor must be non-negative.

This technique potentially improves the sensitivity of dark matter searches, as it includes additional information regarding the distribution of J-factor uncertainties. For example, while Equation 4 will not impose any costs to a scenario where the J-factor of each dSph falls exactly at the predicted value, Equation D1 imposes likelihood costs for any scenario where a portion of the J-factor parameter space that is consistent with optical velocity dispersion measurements begins to be ruled out. On the other hand, this means that Equation D1 assumes detailed knowledge of not only the best-fit J-factors, but also of the uncertainty in each J-factor calculation. Moreover, Equation D1 assumes that the uncertainties in each dSph J-factor are uncorrelated, which may or may not hold depending on the analyses utilized in the J-factor calculations.

In Figure 6, we show the constraints from each method, applied to the Monte Carlo analyses of the 45 dSphs analyzed in [18] (the differences in the cross-section constraint in the case of 15 dSphs are significantly smaller, with an amplitude of ∼20%). We find that utilizing Equation D1 produces a constraint on the annihilation cross-section that is nearly a factor of three stronger than our standard analysis (given by Equation 4). This is similar to the recent result of [19], who found that Bayesian analyses of dSphs produced significantly stronger limits. This result is due to the fact that the J-factor uncertainties are logarithmic, and significant upward fluctuations in the J-factor are strongly constrained by the γ-ray analysis. Intriguingly, such a method would rule out the thermal annihilation cross-section for dark matter annihilation to bb̄ final states up to masses of more than 200 GeV.

However, marginalizing over the J-factor uncertainties introduces peculiar statistical properties into the calculated limit. It is clear that in the case where the J-factor uncertainty goes to 0, the two limits must agree. In Figure 6, we show that the calculated dark matter annihilation limit in our default analysis behaves as expected – as the J-factor uncertainty decreases, the limits become stronger as downward fluctuations of the dSph J-factors become inconsistent with the data. However, in the marginalized case, the limits become weaker as the uncertainty in the J-factor calculation decreases. This is due to the fact that the strength of the marginalized constraint derives primarily from the strong exclusion of upward fluctuations in the J-factor of numerous dSphs. As the fraction of the total J-factor probability that includes large upward fluctuations increases, the constraint on the dark matter annihilation cross-section becomes stronger.
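A sketch of Equation (D1) in the same toy framework used for Equation (4) in the main text: the source flux is integrated out over non-negative values rather than profiled. Because φ_{s,i} ∝ 10^J at fixed ⟨σv⟩ (Eq. 2), a logarithmically-flat prior on φ_{s,i} corresponds to a flat prior on J, which the uniform J grid below implements. All inputs, names, and normalizations are synthetic stand-ins.

```python
import numpy as np

phi = np.linspace(-5e-13, 5e-13, 2001)
L_data = np.exp(-0.5 * ((phi - 1e-13) / 8e-14) ** 2)   # toy dSph profile
P_bg = np.exp(-0.5 * (phi / 1e-13) ** 2)
P_bg /= np.trapz(P_bg, phi)

def P_J(J, J_i, sigma_i):
    return (np.exp(-0.5 * ((J - J_i) / sigma_i) ** 2)
            / (np.sqrt(2.0 * np.pi) * sigma_i))

def dsph_term_marginalized(sigmav, J_i, sigma_i, m_chi, phi_chi, J_grid):
    """Eq. (D1): integrate over phi_s > 0 (via J) instead of profiling."""
    dJ = J_grid[1] - J_grid[0]   # uniform in J = log-flat prior on phi_s
    total = 0.0
    for J in J_grid:
        phi_s = sigmav * phi_chi * 10.0 ** J / (8.0 * np.pi * m_chi ** 2)
        L_shift = np.interp(phi + phi_s, phi, L_data, left=0.0, right=0.0)
        total += np.trapz(L_shift * P_bg, phi) * P_J(J, J_i, sigma_i) * dJ
    return total

term = dsph_term_marginalized(3e-26, J_i=19.0, sigma_i=0.3, m_chi=100.0,
                              phi_chi=0.08, J_grid=np.linspace(17.0, 21.0, 81))
```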
The legitimacy of each interpretation is open for debate. It appears reasonable that the marginalized model should be used in scenarios where the uncertainty in each individual dSph is understood and is statistically independent from the uncertainty in other dSphs. In this case, the stacked analysis of 45 dSphs should rule out scenarios where none of the 45 dSphs in the analysis were consistent with upward fluctuations in the observed J-factor. However, these assumptions are unlikely to hold – as the calculation of the J-factor depends on the careful evaluation of a number of systematic uncertainties that are likely to be correlated between individual dSphs. Thus, we find the performance of our default J-factor model to provide a more accurate representation of our best constraints on the dark matter annihilation cross-section.

Appendix E: Further Technical Details

In this section, we communicate several more technical details from our analysis method. This includes detailing several simplifications utilized to conserve computational time during this study, as well as several caveats and warnings to researchers who may wish to utilize this technique.

We first note several simplifications in this analysis. In all cases, we believe that these negligibly affect our results. First, following the choice made by the Fermi-LAT collaboration, we have only investigated blank sky positions with latitude |b| > 30°, despite the fact that the population of 45 dSphs analyzed in [18] includes several objects that are closer to the Galactic plane. We have made this choice in order to produce an "expected constraint" plot which is similar to the Fermi-LAT results, but note that the correct choice of blank sky locations is somewhat more important in our analysis — as the blank sky locations are directly utilized in order to calculate the dark matter annihilation limit. In future work, we will select an ensemble of blank sky locations which is more closely matched to the analyzed dSph population.

Second, in generating the expected dark matter limits, we have utilized the same blank sky population that was employed to produce the distribution P_bg in the joint-likelihood analysis. This technically removes some statistical independence from the blank sky locations used to produce the expected dark matter constraints, since each dwarf contributes at the 0.1% level to the P_bg calculation. Thus, this is expected to narrow the range of uncertainties in our calculated limits at the order of a few tenths of a percent (since a handful of dSphs dominate the result in any dwarf stacking analysis). This was done to save computational time, and will be tested in detail in future analyses.

Third, in our analysis of the expected limits for 45 dSphs, we have doubled the number of Monte Carlo trials by re-calculating the expected limit when each blank sky location is re-assigned from representing the highest J-factor dSph to representing the lowest J-factor dSph (and vice versa). This decreases the statistical independence of our results by a small amount — because the 22 dSphs with the lowest J-factors contribute negligibly to the limit. This was done to double the number of trials in the dSph Monte Carlo, since more than 22 trials were necessary to generate reasonable 1σ and 2σ errors. We used an entirely independent set of Fermi-LAT sky positions to calculate the expected signal when dark matter is injected into the data, keeping that analysis independent.

We additionally note several technical details for those wishing to replicate these results. First, the utilization of negative values for the point-source flux can produce mathematical issues whenever the expected counts from the entire model (source + astrophysical background) become negative, as the Poisson statistic is no longer well-defined. This happens very rarely in this analysis, and only at energies above ∼100 GeV. These bins contribute negligibly to the total limit. To address these issues, we adopt the following two-pronged approach. If the number of observed counts in the energy and angular bin is non-zero, we set the resulting log-likelihood to infinity. This is straightforward, since the log-likelihood expectation of producing 0 counts when ≥1 count is observed is also infinity. If the number of observed counts in the bin is instead 0, we set the number of predicted counts to be equal to its absolute value. This is sensible, as it produces a best-fit point at the prediction of 0 counts. We note that these issues contribute very negligibly to the calculated limits, as energy bins above 100 GeV contribute negligibly compared to the lower-energy photon data.

Finally, while the techniques described in this letter are widely applicable to investigations of low-significance excesses throughout the Fermi-LAT data, their interpretation may be tricky in scenarios where they are applied to studies of Fermi-LAT blazars. This is due to the fact that blazars are responsible for a significant fraction of the low-significance (TS∼5) excesses observed in the Fermi-LAT data. Thus the calculation of P_bg will include substantial contributions from the source population it is attempting to constrain, and will associate these sources with background fluctuations. This is unlikely to affect other source classes, such as star-forming galaxies (as studied in L16), or dSphs, though we note that dark matter annihilation from subhalos can potentially produce a few percent of the underlying low-statistics fluctuations [41]. We note that in the case of blazars, variations in the standard analysis may be utilized to choose blank sky locations that significantly differ from the class of blazars being analyzed in the stacking analysis – but care must be employed when utilizing this method.