Correcting Type Ia Supernova Distances for Selection Biases and Contamination in Photometrically Identified Samples

R. Kessler and D. Scolnic

Published 2017 February 8 © 2017. The American Astronomical Society. All rights reserved.
Citation: R. Kessler and D. Scolnic 2017 ApJ 836 56. DOI: 10.3847/1538-4357/836/1/56


Abstract

We present a new technique to create a bin-averaged Hubble diagram (HD) from photometrically identified SN Ia data. The resulting HD is corrected for selection biases and contamination from core-collapse (CC) SNe, and can be used to infer cosmological parameters. This method, called "BEAMS with Bias Corrections" (BBC), includes two fitting stages. The first BBC fitting stage uses a posterior distribution that includes multiple SN likelihoods, a Monte Carlo simulation to bias-correct the fitted SALT-II parameters, and CC probabilities determined from a machine-learning technique. The BBC fit determines (1) a bin-averaged HD (average distance versus redshift), and (2) the nuisance parameters α and β, which multiply the stretch and color (respectively) to standardize the SN brightness. In the second stage, the bin-averaged HD is fit to a cosmological model where priors can be imposed. We perform high-precision tests of the BBC method by simulating large (150,000 event) data samples corresponding to the Dark Energy Survey Supernova Program. Our tests include three models of intrinsic scatter, each with two different CC rates. In the BBC fit, the SALT-II nuisance parameters α and β are recovered to within 1% of their true values. In the cosmology fit, we determine the dark energy equation of state parameter w using a fixed value of ${{\rm{\Omega }}}_{{\rm{M}}}$ as a prior: averaging over all six tests based on 6 × 150,000 = 900,000 SNe, there is a small w-bias of $0.006\pm 0.002$. Finally, the BBC fitting code is publicly available in the SNANA package.


1. Introduction

The discovery of the accelerating expansion of the universe (Riess et al. 1998; Perlmutter et al. 1999) using Type Ia supernovae (SN Ia) has motivated increasingly large transient searches in broadband imaging surveys. Approximately 1000 spectroscopically confirmed SNe Ia (Conley et al. 2011; Betoule et al. 2014; Rest et al. 2014; Scolnic et al. 2014a) have been combined with measurements of the cosmic microwave background (Hinshaw et al. 2013; Planck Collaboration XVI 2014) to measure the dark energy equation of state parameter (w) and today's matter density (${{\rm{\Omega }}}_{{\rm{M}}}$). There are not enough spectroscopic resources to dramatically increase the sample size of confirmed SNe Ia, and therefore several programs are aiming to acquire very large samples of photometrically identified SNe Ia to measure dark energy properties with increased precision. These SN programs include the recently completed Pan-STARRS1  (Kaiser et al. 2002), the ongoing Dark Energy Survey Supernova Program (DES-SN: Bernstein et al. 2012), and the Large Synoptic Survey Telescope (LSST: Ivezic et al. 2008; LSST Science Collaboration et al. 2009), expected to begin in the next decade.

These photometric samples are expected to include contamination from core-collapse (CC) SNe, and this contamination must be accounted for in the inference of cosmological parameters. Biases from selection effects and light-curve fitting must also be accounted for. In this paper we enhance BEAMS and present a new Hubble diagram (HD) fitting method, called "BEAMS with Bias Corrections" (BBC), for extracting bias-corrected cosmological parameters from a photometrically identified SN Ia sample. We test BBC on high-quality simulations of the DES-SN program. The SALT-II framework (Guy et al. 2010) is used for light curve fitting, and a nearest neighbor (NN) method is used for photometric classification. We assume that an accurate spectroscopic redshift is obtained from the host galaxy, and ignore the small fraction of wrong SN-host matches described in Gupta et al. (2016). BBC is not sensitive to the particular method of photometric classification, but is sensitive to how well the resulting contamination is modeled by simulations. Although BBC is designed to treat photometrically identified samples, it is also applicable to spectroscopically confirmed samples by simply leaving out the CC likelihood term.

An ideal likelihood approach would fit for all relevant parameters (cosmology, nuisance, color and stretch population, redshift dependences, etc.) and at each fitting step where parameters are varied, a Monte Carlo simulation would be run to evaluate the impact from selection effects. Since the color and stretch uncertainties are comparable to the width of the parent distribution, an ideal likelihood should use parent distributions and avoid approximate ${\chi }^{2}$ likelihoods that incorrectly assume symmetric Gaussian uncertainties. The practical reality, however, is that with current techniques the computing resources increase as one approaches the ideal likelihood, and all fitting implementations so far use approximations that speed up the fitting implementation.

Strategies to achieve an ideal principled likelihood have been developed within the Bayesian hierarchical model (BHM) framework (March et al. 2011; Rubin et al. 2015; Mandel et al. 2016; Shariff et al. 2016). While BHM methods rigorously address the likelihood issues, practical approximations are made on the bias corrections. March et al. (2011) and Mandel et al. (2016) do not include bias corrections. Rubin et al. (2015) do not use simulations and instead describe the selection efficiency with an ad hoc function that includes additional fitted parameters. Shariff et al. (2016) use a redshift-dependent distance bias computed from simulations in Betoule et al. (2014), but this simulation is not updated as parameters are varied in the cosmology fit, nor does it account for biases in the individual SALT-II parameters. In our BBC approach we continue with the fundamentally flawed ${\chi }^{2}$  likelihood, but incorporate a more accurate simulation technique to correct for selection biases and CC contamination. BBC requires modest computing resources, allowing rapid iterations for systematics studies, and it results in small biases that are an order of magnitude below current uncertainties.

Since all SN-cosmology likelihoods include approximations, we do not claim that our BBC method is superior to BHM methods, but rather complementary in what is sacrificed for computational efficiency. For any given method, it is important to accurately measure biases on high-statistics ($\gt {10}^{5}$) samples of realistic simulated SN light curves that are fit with the SALT-II model. We naively expect that achieving smaller biases requires more computing time, and for a particular science analysis there should be enough information about each method to select the appropriate compromise. For the BBC method presented here, we report biases from nearly a million simulated supernovae.

Given the approximate nature of the likelihood approach, it is worth mentioning an alternative likelihood-free approach under development, called approximate Bayesian computation, or ABC (Weyant et al. 2013; Jennings et al. 2016). For each set of fitted parameters, this method relies solely on an accurate simulation to predict observations. A key challenge with the ABC method is to define an optimal metric that quantifies the consistency between the data and simulation.

Recent SN Ia cosmology analyses are based on a light curve fit for each event using the SALT-II model, which determines the best-fit values for the overall amplitude (x0), stretch (x1), and color (c). The stretch and color describe rest-frame properties of the SN Ia that are needed to standardize the brightness, and the amplitude describes the observed SN brightness and dimming from the distance modulus.

The ensemble of fitted $\{{x}_{0},{x}_{1},c\}$ values are passed to a second stage of HD fitting. The HD fit simultaneously determines the cosmological parameters (e.g., w and ${{\rm{\Omega }}}_{{\rm{M}}}$) and standardization coefficients, α and β, which multiply the stretch and color, respectively, in order to standardize the brightness and determine a distance for each event. Marriner et al. (2011, hereafter M11) introduced another HD fitting method to determine α and β without simultaneously fitting for the cosmological parameters, and to compute a cosmology-independent distance modulus for each event. This SALT2mu method fixes the cosmological parameters and fits for a distance modulus offset (${{\rm{\Delta }}}_{\mu ,z}$) in multiple redshift bins. The extraction of cosmological parameters can be obtained by fitting the distance moduli versus redshift, or fitting ${{\rm{\Delta }}}_{\mu ,z}$ versus redshift. The advantage of the SALT2mu output is that a wide variety of cosmology models and priors can be employed in subsequent analyses without repeatedly fitting for the nuisance parameters $\alpha \,\mathrm{and}\,\beta $.

Hubble diagram chi-squared fitting based on fitted SALT-II parameters is fundamentally flawed because it does not account for biases from selection effects, light curve fitting, and CC contamination. With spectroscopically confirmed SN Ia samples, the effect from biases has been evaluated in recent analyses (Conley et al. 2011; Betoule et al. 2014; Scolnic et al. 2014a) by running the same flawed procedure on both data and a simulation, and using the simulated sample to measure the average distance modulus bias in redshift bins. This bias-versus-redshift was applied as a correction to the data in a second-iteration HD fit.

This iterative correction is conceptually flawed for three reasons. First, Scolnic & Kessler (2016, hereafter SK16) have shown that correcting biases as a function of redshift is not a full description, and that the proper correction requires a 3D function of redshift, stretch, and color. Second, the standardization parameters α and β were determined without bias corrections, and then used to simulate bias corrections versus redshift. The third issue is more of an implementation flaw than a conceptual flaw: simulations in previous analyses were generated with approximate stretch and color populations that were not rigorously determined as in SK16.

The basic concept of BBC is to analyze the fitted SALT-II parameters (${x}_{0},{x}_{1},c$) by maximizing a posterior probability that combines elements from (1) the BEAMS method of Kunz et al. (2007) and Hlozek et al. (2012, hereafter H12), which forms the posterior from separate Ia and CC likelihoods, (2) the SALT2mu program (M11), which fits for a distance modulus offset in redshift bins, and (3) SNANA simulations (Kessler et al. 2009), which determine the biases. The simulation is also used to determine the shape of the CC probability map as a function of distance modulus and redshift, which should be more accurate than the analytical approximation used in H12.

One of the original goals of the SALT2mu program is to convert the fitted SALT-II light curve parameters into a distance modulus for each SN Ia, so that the resulting HD can be fit with arbitrary cosmology-fitting programs. This strategy does not work with CC contamination because cosmology-fitting programs implicitly assume that all events are genuine SNe Ia. Using BBC, however, the fitted distance modulus offset in each redshift (z) bin is properly corrected for CC contamination. The resulting ${{\rm{\Delta }}}_{\mu ,z}$-versus-z function can be fit with arbitrary cosmology models, maintaining the original spirit of the SALT2mu program.

As part of developing BBC, we address a long-standing paradox about the HD-fit ${\chi }^{2}$ formalism. Within the SALT-II framework, the uncertainty on the distance modulus (${\sigma }_{\mu }$) depends on fitted nuisance parameters ($\alpha ,\beta $) and thus a Gaussian normalization term, $-2\mathrm{log}({\sigma }_{\mu })$, should be added to the ${\chi }^{2}$ function that is minimized. It has been long recognized, however, that adding the Gaussian normalization term results in large biases on the fitted parameters (e.g., see Appendix B of Conley et al. 2011). For a spectroscopically confirmed SN Ia sample, we verify that adding the Gaussian normalization term does indeed result in large biases if bias corrections are ignored. Including both the Gaussian normalization term along with bias corrections results in good parameter estimates.

The analysis presented here includes only statistical uncertainties, in order to check the precision of the BBC method. We therefore assume that our simulation correctly predicts CC contamination, measurement noise, SN Ia intrinsic scatter, and selection biases. An analysis of real data, however, must characterize the accuracy of the simulations and include sources of inaccuracy in the calculation of systematic uncertainties.

An overview of the paper is as follows. We begin with a review of the SALT-II framework in Section 2. Simulations are described in Section 3. The photometric analysis and NN method are described in Section 4. The BBC method is presented in Section 5, with results in Sections 6 and 7. A comparison with other HD fitting methods is given in Section 8, and we conclude in Section 9. All simulation and analysis codes used in this paper are publicly available in the SNANA package.

2. Review of SALT-II Framework

Here we briefly describe key aspects of the SALT-II model and SALT2mu program, which are critical components of BBC. As described in Section 1, the fitted light-curve parameters for each SN are epoch of peak brightness (t0), amplitude (x0), stretch (x1), and color ($c\simeq B-V$ at t0). For each event the fitted parameters are used to standardize the SN brightness and determine a distance modulus using the Tripp relation (Tripp 1998),

Equation (1): $\mu = {m}_{B} + \alpha\,{x}_{1} - \beta\,c - {M}_{0},$

where ${m}_{B}=-2.5\mathrm{log}({x}_{0})$, ${M}_{0}$ is the rest-frame magnitude for an SN Ia with ${x}_{1}=c=0$, and $\alpha ,\beta $ are global nuisance parameters to standardize the SN Ia brightness.
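To make the standardization concrete, the following minimal sketch evaluates Equation (1) for a single event. It is not the SNANA/SALT2mu implementation; the function name and the example numbers are illustrative assumptions only.

```python
import numpy as np

def tripp_distance_modulus(x0, x1, c, alpha, beta, M0):
    """Standardized distance modulus from fitted SALT-II parameters (Equation (1)).

    x0, x1, c   : fitted SALT-II amplitude, stretch, and color
    alpha, beta : global standardization coefficients
    M0          : rest-frame SN Ia magnitude at x1 = c = 0
    """
    mB = -2.5 * np.log10(x0)            # peak rest-frame B-band magnitude
    return mB + alpha * x1 - beta * c - M0

# Illustrative numbers only (not fitted values from the paper):
mu = tripp_distance_modulus(x0=2.0e-5, x1=0.5, c=-0.05,
                            alpha=0.14, beta=3.2, M0=-19.35)
print(f"mu = {mu:.2f} mag")
```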

After the light-curve fits, the next step is a global fit comparing each measured distance modulus (μ) to a model distance that depends on redshift and cosmological parameters, ${\mu }_{\mathrm{model}}=5\,{\mathrm{log}}_{10}({d}_{L}/10\,\mathrm{pc})$, where for a flat wCDM universe (${{\rm{\Omega }}}_{{\rm{M}}}+{{\rm{\Omega }}}_{{\rm{\Lambda }}}=1$),

Equation (2): ${d}_{L} = (1+z)\,\frac{c}{H_0}\int_0^z \frac{dz'}{\sqrt{\Omega_{\rm M}(1+z')^3 + \Omega_{\Lambda}(1+z')^{3(1+w)}}}.$
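As a numerical illustration of Equation (2) (not code from the paper), the sketch below integrates the flat-wCDM Hubble function and converts the luminosity distance into a distance modulus; the value of $H_0$ and the example parameters are assumptions for illustration.

```python
import numpy as np
from scipy.integrate import quad

C_LIGHT = 299792.458  # speed of light, km/s

def lumdist_flat_wcdm(z, omega_m=0.3, w=-1.0, H0=70.0):
    """Luminosity distance (Mpc) for a flat wCDM universe (Equation (2))."""
    def inv_E(zp):
        return 1.0 / np.sqrt(omega_m * (1 + zp) ** 3 +
                             (1 - omega_m) * (1 + zp) ** (3 * (1 + w)))
    integral, _ = quad(inv_E, 0.0, z)
    return (1 + z) * (C_LIGHT / H0) * integral

def mu_model(z, **cosmo):
    """Distance modulus 5*log10(d_L / 10 pc)."""
    dL_pc = lumdist_flat_wcdm(z, **cosmo) * 1.0e6   # Mpc -> pc
    return 5.0 * np.log10(dL_pc / 10.0)

print(f"mu_model(z=0.5) = {mu_model(0.5):.2f} mag")   # ~42.3 for (0.3, -1)
```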

The SALT2mu program uses MINUIT to minimize the HD chi-squared function

Equation (3): $\chi^2_{\rm HD} = \sum_i \frac{\left(\mu_i - \Delta_{\mu,z} - \mu_{{\rm model},i}\right)^2}{\sigma_{\mu}^{2}}$, with $\sigma_{\mu}^{2} = A\,C\,A^{T} + \sigma_{\rm int}^{2} + \sigma_{z}^{2} + \sigma_{\rm vpec}^{2}$ and $A = (1,\ \alpha,\ -\beta)$.

The fitted parameters are α, β, ${\sigma }_{\mathrm{int}}$ and a distance offset (${{\rm{\Delta }}}_{\mu ,z}$) in each redshift bin. In the definition of the uncertainty term, ${\sigma }_{\mu }$, the event index i has been dropped. C is the fitted covariance matrix among the $\{{m}_{B},{x}_{1},c\}$ parameters, ${\sigma }_{\mathrm{int}}$ is the intrinsic scatter term, ${\sigma }_{z}$ is the redshift uncertainty, and ${\sigma }_{\mathrm{vpec}}$ is the peculiar velocity uncertainty. The SALT2mu program can add an arbitrary 3 × 3 intrinsic scatter matrix (${\rm{\Sigma }}$) in the calculation of ${\sigma }_{\mu };$ here we use ${{\rm{\Sigma }}}_{{m}_{B},{m}_{B}}={\sigma }_{\mathrm{int}}^{2}$ and set all other ${\rm{\Sigma }}$ terms to zero. This choice can be interpreted as including only the coherent scatter model (COH model in Section 3) in the distance uncertainties, and ignoring intrinsic variations in color and stretch.

Rather than fitting for cosmological parameters ($w,{{\rm{\Omega }}}_{{\rm{M}}}$) appearing in the ${\mu }_{\mathrm{model},i}$ term, the cosmology parameters are fixed to reference values (${w}_{\mathrm{ref}},{{\rm{\Omega }}}_{{\rm{M}},\mathrm{ref}}$) and the ${{\rm{\Delta }}}_{\mu ,z}$ are fit in Nz redshift bins. For typical fits the redshift bin size is $\lt 0.1$. For all event indices i whose redshift lies in the same bin, ${{\rm{\Delta }}}_{\mu ,z}$ is fixed to the same value. The key assumption here is that within each redshift bin, the local shape of the HD is well described by the reference cosmological model, and that the difference is characterized by an offset (${{\rm{\Delta }}}_{\mu ,z}$). Thus instead of fitting for two cosmology parameters ($w,{{\rm{\Omega }}}_{{\rm{M}}}$), SALT2mu returns Nz fitted ${{\rm{\Delta }}}_{\mu ,z}$ values along with α, β, and ${\sigma }_{\mathrm{int}}$. ${M}_{0}$ in Equation (1) is defined as the average of the ${{\rm{\Delta }}}_{\mu ,z}$ values, and each ${{\rm{\Delta }}}_{\mu ,z}\,\to {{\rm{\Delta }}}_{\mu ,z}-{M}_{0}$.
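A small self-contained sketch of this binned-offset fit on toy data is shown below. It is not the SALT2mu code: scipy least-squares stands in for MINUIT, the covariance and ${\sigma }_{\mathrm{int}}$ machinery is omitted, and all numbers are invented for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)

# Toy "data": SALT-II fit parameters drawn around a fixed reference cosmology.
N = 2000
z = rng.uniform(0.05, 1.1, N)
mu_ref = 42.0 + 5.0 * np.log10(z / 0.5)           # stand-in for mu(w_ref, OM_ref)
x1 = rng.normal(0.0, 1.0, N)
c = rng.normal(0.0, 0.1, N)
alpha_true, beta_true, M0_true = 0.14, 3.2, -19.35
sig_mu = np.full(N, 0.15)
mB = mu_ref + M0_true - alpha_true * x1 + beta_true * c + rng.normal(0, 0.15, N)

z_edges = np.linspace(0.0, 1.1, 12)               # 11 redshift bins
ibin = np.clip(np.digitize(z, z_edges) - 1, 0, len(z_edges) - 2)

def pulls(params):
    """Pulls entering the traditional chi2 (Equation (3)): alpha, beta, and one
    distance offset per redshift bin are floated; the cosmology stays fixed."""
    alpha, beta = params[:2]
    dmu = params[2:]                               # Delta_mu per bin (absorbs M0)
    mu = mB + alpha * x1 - beta * c                # Tripp relation (Equation (1))
    return (mu - dmu[ibin] - mu_ref) / sig_mu

p0 = np.concatenate([[0.15, 3.0], np.full(len(z_edges) - 1, -19.0)])
fit = least_squares(pulls, p0)
alpha_fit, beta_fit, dmu_fit = fit.x[0], fit.x[1], fit.x[2:]
M0_fit = dmu_fit.mean()                           # M0 = average of the Delta_mu,z
print(f"alpha = {alpha_fit:.3f}, beta = {beta_fit:.3f}, M0 = {M0_fit:.2f}")
print("Delta_mu,z - M0:", np.round(dmu_fit - M0_fit, 3))
```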

We note that the ${\chi }_{\mathrm{HD}}^{2}$ in Equation (3) has no Gaussian normalization term, $-2\mathrm{ln}({\sigma }_{\mu })$. Minimizing this quantity is referred to as the traditional ${\chi }_{\mathrm{HD}}^{2}$ method.

After obtaining the fitted ${{\rm{\Delta }}}_{\mu ,z}$ from the SALT2mu program, cosmological parameters are obtained in a separate fit of ${\mu }_{\mathrm{ref},z}+{{\rm{\Delta }}}_{\mu ,z}$ versus redshift, where ${\mu }_{\mathrm{ref},z}$ are the distances computed from the reference cosmological parameters (${w}_{\mathrm{ref}},{{\rm{\Omega }}}_{{\rm{M}},\mathrm{ref}}$) used in the SALT2mu fit. The SALT2mu program could in principle report an average distance modulus (${\mu }_{z}\equiv {\mu }_{\mathrm{ref},z}+{{\rm{\Delta }}}_{\mu ,z}$) in each redshift bin, with no impact on subsequent cosmology fitting codes. However, we prefer to report the fitted ${{\rm{\Delta }}}_{\mu ,z}$ instead.

3. Simulations

We test BBC on SNANA simulations based on the cadence and observing conditions from the first DES-SN season. The SN survey component of DES is described in Kessler et al. (2015), and the Dark Energy Camera is described in Flaugher et al. (2015). The survey consists of ten 3 deg$^2$ fields, observed roughly once per week in each griz passband. Eight of these shallow fields are observed to an average depth of 23.5 mag; the other two deep fields are observed to an average depth of 24.5 mag. The first DES-SN season was used to build a library consisting of sky noise, zero-point, and point-spread function (PSF) for each observation in the ten fields. This library is used to simulate realistic light curves at random times and sky locations. For each observation, the simulated magnitude is converted into a flux using the image zero-point and CCD gain. The simulated flux uncertainty is computed from the PSF, sky noise, and zero-point.
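The magnitude-to-flux conversion and the noise estimate can be sketched with the standard zero-point and background relations below; this is an illustrative approximation with invented numbers and names, not the SNANA internals.

```python
import numpy as np

def sim_flux_and_error(mag, zeropoint, gain, psf_sigma_pix, sky_per_pix):
    """Convert a simulated magnitude to a flux (ADU) and estimate its
    uncertainty from source Poisson noise plus sky noise under the PSF.

    Assumes a Gaussian PSF with a noise-equivalent area of 4*pi*sigma^2 pixels;
    all argument names and values here are illustrative.
    """
    flux = 10.0 ** (-0.4 * (mag - zeropoint))       # ADU, from the image zero-point
    npix_eff = 4.0 * np.pi * psf_sigma_pix ** 2     # effective PSF area (pixels)
    var = flux / gain + npix_eff * sky_per_pix / gain
    return flux, np.sqrt(var)

flux, err = sim_flux_and_error(mag=23.5, zeropoint=31.0, gain=4.0,
                               psf_sigma_pix=1.7, sky_per_pix=400.0)
print(f"S/N = {flux / err:.1f}")
```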

For SNe Ia, the redshift-dependent volumetric rate (R) is taken from Dilday et al. (2008), with $R(z)\propto (1+z)^{1.5}$. To improve the statistical precision of the w-tests of the BBC method, an artificial low-redshift ($z\lt 0.08$) sample is added, which comprises ∼10% of the total sample, as shown in Figure 1. This low-z sample is generated with the same griz passbands and depth as for the DES-SN sample, and is therefore an ideal anchor with minimal selection bias.

Figure 1. Simulated redshift distribution after selection requirements described in Section 4. The low-redshift ($z\lt 0.1$) subset is ∼10% of the total.

SN Ia model magnitudes are generated from the SALT-II light-curve model in Guy et al. (2010). Since this model is defined in a limited rest-frame wavelength range (2800–7000 Å), the model is undefined for i and z bands at low redshift ($z\lt 0.12$), and for the g band at higher redshifts ($z\gt 0.6$). To avoid complications from missing bands in this analysis, we have extrapolated the model into the ultra-violet and near-infrared regions (2200–9200 Å). While the extrapolated model is not calibrated at the percent-level precision needed in a real cosmology analysis, it is sufficiently accurate to model Poisson noise and to test fitting codes.

To match the observed Hubble residual dispersion, we examine three different models of intrinsic scatter from Kessler et al. (2013): (1) the COH model of 100% coherent variation at all epochs and wavelengths, with ${\sigma }_{\mathrm{coh}}=0.13$ mag, (2) model G10 with scatter from the SALT-II model, in which 70% of the contribution to the Hubble residuals comes from coherent variation and 30% comes from chromatic variation, and (3) the C11 model from Chotard et al. (2011), in which 25% of the contribution to the Hubble residuals comes from coherent variation and 75% comes from chromatic variation.

The COH model is not consistent with observations because wavelength-dependent variations in the intrinsic scatter have been observed in Guy et al. (2010) and Chotard et al. (2011). However, we include this model as a test because the HD fitting model with constant ${\sigma }_{\mathrm{int}}$ is similar to the COH model, and thus one might naively expect that simulating the COH model would yield less biased results than results obtained by simulating the more realistic G10 and C11 models. The latter two models are broadband-variation models based on observations, and were converted into spectral variation models in Kessler et al. (2013) so that intrinsic variation is simulated by varying the underlying SALT-II spectra. These models add intrinsic scatter without changing the average underlying SN Ia model; the average generated SN Ia flux after intrinsic variations are applied is the same as the SALT-II model flux with no intrinsic scatter. The simulated values of the standardization parameters (Table 1) are ${\alpha }_{\mathrm{SIM}}=0.14$ and ${\beta }_{\mathrm{SIM}}=3.2$ for the COH and G10 scatter models and ${\beta }_{\mathrm{SIM}}=3.85$ for the C11 model. The higher ${\beta }_{\mathrm{SIM}}$ value for the C11 model is taken from Scolnic & Kessler (2016) and is a result of the color population being more like a falling exponential compared to the G10 model.

Table 1.  Simulation Parameters for SN Ia Properties

Model   ${\alpha }_{\mathrm{SIM}}$   ${\beta }_{\mathrm{SIM}}$   Color ($\bar{c}$, ${\sigma }_{-}$, ${\sigma }_{+}$)   Stretch (${\bar{x}}_{1}$, ${\sigma }_{-}$, ${\sigma }_{+}$)
COHa   0.14   3.20   −0.054, 0.043, 0.101   0.973, 1.472, 0.222
G10   0.14   3.20   −0.054, 0.043, 0.101   0.973, 1.472, 0.222
C11   0.14   3.85   −0.099, 0.003, 0.119   0.964, 1.467, 0.235

Note.

a SK16 did not evaluate population parameters for the COH model, so here we use the G10 parameters.


The underlying color and stretch populations are each described by an asymmetric Gaussian distribution defined by three parameters. When we ignore the normalization factors, the color distribution is

Equation (4): $P(c) = \exp\left[-(c-\bar{c})^{2}/2\sigma_{-}^{2}\right]$ for $c \le \bar{c}$, and $P(c) = \exp\left[-(c-\bar{c})^{2}/2\sigma_{+}^{2}\right]$ for $c > \bar{c}$,

where $\bar{c}$ is the value with maximum probability, and ${\sigma }_{-}$ and ${\sigma }_{+}$ are the low- and high-sided Gaussian widths. A similar parameterization describes the stretch (x1) distribution. We use the parameters in the high-z row in Table 1 of SK16, which are shown here in Table 1 for each model of intrinsic scatter. We note that in a more realistic analysis, the redshift-dependent populations should be used.
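A minimal sketch of drawing from this asymmetric Gaussian is shown below, using the G10 color parameters from Table 1; the sampling trick (choose a side with probability proportional to its width, then draw a half-Gaussian) is an assumption of this sketch rather than the SNANA implementation.

```python
import numpy as np

def sample_asym_gauss(n, peak, sig_minus, sig_plus, rng):
    """Draw n values from the asymmetric Gaussian of Equation (4):
    Gaussian width sig_minus below the peak and sig_plus above it."""
    # Each side's area is proportional to its width.
    p_plus = sig_plus / (sig_minus + sig_plus)
    side_plus = rng.random(n) < p_plus
    draws = np.abs(rng.normal(0.0, 1.0, n))
    return np.where(side_plus, peak + draws * sig_plus, peak - draws * sig_minus)

rng = np.random.default_rng(2)
# G10 color-population parameters from Table 1: cbar, sigma_-, sigma_+
color = sample_asym_gauss(100_000, -0.054, 0.043, 0.101, rng)
print(f"mean color = {color.mean():.3f}, rms = {color.std():.3f}")
```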

Core-collapse SN types II and Ibc are generated using the volumetric rate from Bazin et al. (2009), with a redshift dependence of ${(1+z)}^{3.6}$. Simulated light curves are generated from spectra that have been mangled to match photometric observations of 42 CC light curves as described in Kessler et al. (2010, hereafter K10). The relative fraction and peak luminosity function (LF) for each subtype (II,Ib/c) are from Li et al. (2011, hereafter L11). In K10, the simulation selects a random template spectrum and applies a magnitude offset and random Gaussian scatter such that the generated LF (in R band) has the same mean and variance as reported in Table 6 of L11. The brightness distribution of the original CC light curves is preserved, and the LF from L11 is achieved with an additional random magnitude scatter. Here we use the same technique for the Type-II SNe, but alter the procedure for Type Ib to account for one anomalously bright event that would otherwise overestimate the contribution from bright events. The brightness for each Ib template spectrum is adjusted to have the mean LF brightness, and the full LF spread is simulated with random scatter (Jones et al. 2016).

Since a spectroscopic host galaxy redshift is required in this analysis, we use the spectroscopic matching efficiency estimated in Bernstein et al. (2012). This efficiency drops to 0.75 at z = 0.5, and to 0.50 at z = 0.95.

The following effects are available options in the SNANA simulation, but have been left out for this study: peculiar velocities, weak lensing, host-galaxy correlations, and redshift-dependent SN properties (e.g., α, β, population parameters).

Because the simulation generates realistic light-curve fluxes and uncertainties, there are no assumptions about the analytical form of detection thresholds, and it properly simulates arbitrarily complex survey-selection triggers based on the number of detections, as well as the distribution of detections over nights and passbands. It also accounts for time-dependent variations due to weather and instrumental effects. We fit each simulated light curve with the same SALT-II model (and code) used on data, and thus light-curve fitting biases are included, such as those found in SK16.

4. NN Method for Photometric Classification

Here we describe the analysis to photometrically select SNe Ia. The overall strategy is to first apply selection cut-windows, or box cuts, on several analysis variables to reduce the CC/Ia fraction to $\lt 10$%. The second step is to apply the NN method (Sako et al. 2014, hereafter referred to as NN) to further reduce the CC contamination and to determine a Type Ia probability needed for the BBC likelihood (Section 5). Numerous machine-learning (ML) methods can be applied to photometric classification as described in Lochner et al. (2016). Our choice of NN is arbitrary and adequate to test BBC; we make no claim about which ML method is best.

After fitting the light curves with the SALT-II model, the box cuts are as follows (a minimal sketch applying these cuts is given after the list):

  • 1.  
    At least one observation with ${T}_{\mathrm{rest}}\lt -2$ days, where ${T}_{\mathrm{rest}}$ is the rest-frame epoch with respect to the epoch of peak brightness.
  • 2.  
    At least one observation with ${T}_{\mathrm{rest}}\gt +10$ days.
  • 3.  
    At least three bands have an observation with a signal-to-noise ratio (S/N) above 5.
  • 4.  
    Redshift $z\lt 1.10$.
  • 5.  
    $| {x}_{1}| \lt 3$ and $| c-0.1| \lt 0.4$.
  • 6.  
    The SALT-II light-curve fit probability (${P}_{\mathrm{fit}}$), computed from the fit-${\chi }^{2}$ and number of degrees of freedom, satisfies ${P}_{\mathrm{fit}}\gt 0.05$.
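The sketch below applies the box cuts listed above to a table of SALT-II light-curve fit results; the column names and the pandas-based layout are assumptions of this illustration, not part of the SNANA analysis code.

```python
import pandas as pd

def apply_box_cuts(lc_fits: pd.DataFrame) -> pd.DataFrame:
    """Apply the six box cuts to a table of SALT-II light-curve fits.

    Assumed (illustrative) columns: n_early (observations with Trest < -2 d),
    n_late (Trest > +10 d), nband_snr5 (bands with S/N > 5), z, x1, c, fitprob.
    """
    keep = (
        (lc_fits["n_early"] >= 1)
        & (lc_fits["n_late"] >= 1)
        & (lc_fits["nband_snr5"] >= 3)
        & (lc_fits["z"] < 1.10)
        & (lc_fits["x1"].abs() < 3.0)
        & ((lc_fits["c"] - 0.1).abs() < 0.4)
        & (lc_fits["fitprob"] > 0.05)
    )
    return lc_fits[keep]

demo = pd.DataFrame({"n_early": [1, 0], "n_late": [2, 3], "nband_snr5": [4, 3],
                     "z": [0.4, 0.9], "x1": [0.2, 1.0], "c": [0.05, 0.0],
                     "fitprob": [0.6, 0.2]})
print(apply_box_cuts(demo))   # only the first row passes (the second fails cut 1)
```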

The first four requirements ensure good light-curve quality, while the last two requirements select SNIa-like light curves. At this stage of the analysis, we transition to the more sophisticated NN method to further reduce CC background.

Our NN analysis is based on the 3D space of $\{z,{x}_{1},c\}$, where z is a spectroscopic host galaxy redshift and the latter two variables are from the SALT-II light-curve fit. For a given data event, the basic idea is to define a sphere centered at $\{z,{x}_{1},c\}$, generate a large simulated sample, and count the number of Type Ia and CC SNe that are found inside the sphere. The event is classified as the SN type representing the majority inside the sphere. A procedure called NN training determines the optimal size of the sphere based on maximizing the product of the efficiency and purity.

More formally, for each event in the data sample, the NNs are events from a large simulated training sample that satisfy a 3D distance constraint,

Equation (5): $\left(\frac{z-z'}{D_z}\right)^{2} + \left(\frac{c-c'}{D_c}\right)^{2} + \left(\frac{x_1-x_1'}{D_{x1}}\right)^{2} < 1,$

where the primed quantities are from the simulated training sample. The optimal distance-metric parameters (${D}_{z},{D}_{c},{D}_{x1}$) are determined from a training procedure that maximizes the product of the SN Ia purity and the efficiency. The final selection requirement is that for simulated neighbors satisfying Equation (5), more than half are true SNe Ia with at least $1\sigma $ confidence. This means that if there are 20 neighbors, then at least 13 ($\sigma =2.1$) must be true SNe Ia to be classified as SN Ia. If only 12 ($\sigma =2.2$) of the neighbors are true SNe Ia, then the $1\sigma $-above-half requirement fails and the event is classified as unknown. The Type Ia probability for each event, ${P}_{\mathrm{NN},\mathrm{Ia}}$, is defined as the fraction of NN training events (satisfying Equation (5)) that are true SNe Ia.
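A compact sketch of this classifier is given below. It assumes the quoted σ is the binomial uncertainty on the neighbor count (consistent with the 13-of-20 and 12-of-20 examples above); the data structures and numbers are invented for illustration, and the training step that optimizes $({D}_{z},{D}_{c},{D}_{x1})$ is not shown.

```python
import numpy as np

def nn_classify(event, train, d_z, d_c, d_x1):
    """Nearest-neighbor classification in {z, x1, c} (Equation (5)).

    event : dict with keys 'z', 'x1', 'c' for one data event
    train : dict of arrays with keys 'z', 'x1', 'c', 'is_ia' (simulated training set)
    Returns (P_NN_Ia, label) using the 1-sigma-above-half rule.
    """
    dist2 = ((event["z"] - train["z"]) / d_z) ** 2 \
          + ((event["c"] - train["c"]) / d_c) ** 2 \
          + ((event["x1"] - train["x1"]) / d_x1) ** 2
    nbr = dist2 < 1.0
    n_tot = int(nbr.sum())
    if n_tot == 0:
        return np.nan, "unknown"
    n_ia = int(train["is_ia"][nbr].sum())
    p_ia = n_ia / n_tot
    # binomial uncertainty on the Ia count among the neighbors
    sigma = np.sqrt(n_tot * p_ia * (1.0 - p_ia))
    label = "Ia" if (n_ia - 0.5 * n_tot) > sigma else "unknown"
    return p_ia, label

rng = np.random.default_rng(3)
train = {"z": rng.uniform(0.0, 1.1, 50_000), "x1": rng.normal(0.0, 1.0, 50_000),
         "c": rng.normal(0.0, 0.1, 50_000), "is_ia": rng.random(50_000) < 0.9}
print(nn_classify({"z": 0.5, "x1": 0.0, "c": 0.0}, train,
                  d_z=0.05, d_c=0.05, d_x1=0.5))
```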

After passing the box cuts and NN requirement, there are a total of 10,000 events in the simulated data sample, and 70,000 events in each of the two training samples. With our nominal estimate of the CC rate, the CC contamination is $0.054$ after the selection requirements, and drops to $0.015$ after the NN requirement. The corresponding SN Ia loss from the NN requirement is $0.003$. Since the CC contamination in a photometrically identified sample is not yet known, we test the BBC method with two different CC rates. Simulations are generated with our best estimate of the CC rate (${R}_{\mathrm{CC}}=1$), and again with $3\times $ the CC rate (${R}_{\mathrm{CC}}=3$). ${R}_{\mathrm{CC}}$ is defined here as the simulated CC rate divided by our best estimate of the rate. For each ${R}_{\mathrm{CC}}$ value, the CC contamination before and after the NN requirement is shown in Table 2. Figure 2 shows the distance modulus residuals to illustrate the contamination before and after the NN requirement.

Figure 2. Distance modulus residual ($\mu -{\mu }_{\mathrm{model}}$) for DES-SN simulation with ${R}_{\mathrm{CC}}=1$: after box cuts (left) and after NN requirement (right). The legend indicates the contribution from all SN types (Ia+CC) and from types II and Ibc.

Table 2.  CC Contamination Fraction and SN Ia Efficiency vs. ${R}_{\mathrm{CC}}$, Before and After NN Requirement

${R}_{\mathrm{CC}}$   CC/All (no NN)   CC/All (with NN)   Eff(SNIa) (with NN)
1   $0.054$   $0.015$   $0.997$
3   $0.146$   $0.035$   $0.992$


A final caveat is that the NN training has been performed on the combined deep+shallow fields; in principle, separate trainings on the deep and shallow subsamples could perform better. However, since the same requirements are applied to the data and simulated samples, the level of NN optimization has no impact on the BBC performance.

5. BBC Method

The BBC method consists of two fitting stages. The first step is the BBC fit to determine ${{\rm{\Delta }}}_{\mu ,z}$ in each redshift bin, where ${{\rm{\Delta }}}_{\mu ,z}$ is the distance-modulus offset with respect to an arbitrary cosmological model, which we refer to as the reference cosmology. In addition to requiring light-curve fit results (${x}_{0},{x}_{1},c$) from the data, the BBC fit also requires input from a detailed simulation based on a rigorous characterization of the survey. The second step is to fit ${{\rm{\Delta }}}_{\mu ,z}$ versus redshift in order to determine the cosmological parameters. This step requires no understanding of the survey, and hence can be performed with a variety of fitting codes and priors.

5.1. Overview

Using the BEAMS formalism, we form a posterior consisting of Type Ia and CC likelihoods. The joint posterior for N events is $P(\theta )\times { \mathcal L }$, where $P(\theta )$ is a flat prior on the fitted parameters (θ), ${ \mathcal L }\equiv {{\rm{\Pi }}}_{i=1}^{N}{{ \mathcal L }}_{i}$, and ${{ \mathcal L }}_{i}$ is the BBC likelihood for each event i. MINUIT is used to minimize $-2\mathrm{ln}({ \mathcal L })$. The BBC likelihood for each event is

Equation (6): ${ \mathcal L }_{i} = \left[ P_{\mathrm{NN},\mathrm{Ia}} \times D_{\mathrm{Ia}}(z_i,\mu_i) \right] + \left[ S_{\mathrm{CC}}\,(1-P_{\mathrm{NN},\mathrm{Ia}}) \times D_{\mathrm{CC}}(z_i,\mu_i) \right],$

where zi is the redshift and ${\mu }_{i}$ is the distance modulus. Dropping the event index i,

Equation (7)

and ${D}_{\mathrm{Ia}}$ and ${D}_{\mathrm{CC}}$ are the conditional likelihoods for SN Ia and CC, respectively,

Equation (8): $D_{\mathrm{Ia}} = \frac{1}{\sqrt{2\pi }\,\sigma_{\mu}}\exp \left[-\frac{(\mu - {\Delta }_{\mu ,z} - \mu_{\mathrm{model}})^{2}}{2\sigma_{\mu}^{2}}\right],$

Equation (9)

${D}_{\mathrm{Ia}}$ and ${D}_{\mathrm{CC}}$ are normalized such that at any redshift the distance modulus integrals satisfy

Equation (10): $\int D_{\mathrm{Ia}}\,d\mu = \int D_{\mathrm{CC}}\,d\mu = 1,$

and $\int {{ \mathcal L }}_{i}d\mu =1$ for each event. ${D}_{\mathrm{Ia}}$ (Equation (8)) depends on the traditional ${\chi }_{\mathrm{HD}}^{2}$ term defined in Equation (3), and includes the fitted ${{\rm{\Delta }}}_{\mu ,z}$. ${P}_{\mathrm{NN},\mathrm{Ia}}$ is the NN probability for Type Ia, and $(1-{P}_{\mathrm{NN},\mathrm{Ia}})$ is the NN probability for CC. ${S}_{\mathrm{CC}}$ is a fit parameter allowing for an arbitrary scale of the CC probability; a correct CC simulation should result in ${S}_{\mathrm{CC}}\sim 1$. Although we use a specific case of NN probabilities in the likelihood, this method works using probabilities from any classification method.
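To illustrate the structure of the per-event likelihood, the sketch below evaluates $-2\mathrm{ln}{ \mathcal L }_{i}$ assuming a Gaussian ${D}_{\mathrm{Ia}}$ and a Gaussian approximation to ${D}_{\mathrm{CC}}$ whose mean and rms come from a simulated CC map in the event's redshift bin (Section 5.4); bias corrections and the binned ${{\rm{\Delta }}}_{\mu ,z}$ bookkeeping are omitted, and all numbers are illustrative.

```python
import numpy as np

def neg2lnL_event(mu, mu_model, sig_mu, p_nn_ia, s_cc, cc_mean, cc_rms):
    """-2 ln L for one event: Ia Gaussian plus scaled CC Gaussian (Equation (6)).

    mu, sig_mu      : measured distance modulus and its uncertainty
    mu_model        : reference-cosmology distance (plus the bin offset Delta_mu,z)
    p_nn_ia         : photometric-classification probability of being a SN Ia
    s_cc            : fitted scale of the CC probability
    cc_mean, cc_rms : mean and rms of the simulated CC Hubble residuals
                      in this event's redshift bin
    """
    d_ia = np.exp(-0.5 * ((mu - mu_model) / sig_mu) ** 2) / (np.sqrt(2 * np.pi) * sig_mu)
    d_cc = np.exp(-0.5 * ((mu - mu_model - cc_mean) / cc_rms) ** 2) / (np.sqrt(2 * np.pi) * cc_rms)
    like = p_nn_ia * d_ia + s_cc * (1.0 - p_nn_ia) * d_cc
    return -2.0 * np.log(like)

print(neg2lnL_event(mu=42.30, mu_model=42.25, sig_mu=0.15,
                    p_nn_ia=0.95, s_cc=1.0, cc_mean=2.0, cc_rms=1.0))
```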

The fitted parameters and flat prior ranges are summarized in Table 3. Compared to minimizing the traditional ${\chi }_{\mathrm{HD}}^{2}$, the only additional fit parameter in ${ \mathcal L }$ is ${S}_{\mathrm{CC}}$.

Table 3.  BBC Fit Parameters

  Flat Prior Range:
Fit Parameter min, max
α 0.02, 0.30
β 1.0, 6.0
${S}_{\mathrm{CC}}$ −0.1, 5.0
${{\rm{\Delta }}}_{\mu ,z}$ −5.0, 5.0


We make two improvements with respect to the BEAMS analysis in H12: (1) the SALT-II fitted parameters (${m}_{B},{x}_{1},c$) are bias-corrected based on a simulation of SNe Ia, and (2) the analytical functional form for ${D}_{\mathrm{CC}}$ in H12 is replaced by a simulated probability map to remove assumptions about the analytical form. The subsections below describe these improvements in detail.

While the traditional ${\chi }_{\mathrm{HD}}^{2}$ (Equation (3)) has no Gaussian normalization term to avoid biases, ${D}_{\mathrm{Ia}}$ (Equation (8)) must include this term to properly normalize the Ia and CC probabilities. Thus the Gaussian normalization term cannot be arbitrarily removed from the BBC likelihood, as has been done with the traditional ${\chi }_{\mathrm{HD}}^{2}$ method. The correct solution is to include bias corrections as described in Section 5.2; this issue is examined further in Section 8.1.

We refer to the simulated data sample as data to clearly distinguish the data from a large independent simulated sample used for bias corrections and the CC likelihood. Thus the descriptions are valid when replacing a simulated data sample with real data.

Finally, since the shallow and deep-field samples differ in depth by 1 mag, bias corrections and the CC likelihood are determined separately for each subsample. This separation of the corrections illustrates the more general principle of combining multiple samples from different surveys.

5.2. Bias-corrected Distance

For each data event, a bias-corrected distance is defined by replacing the fitted parameters (${m}_{B},{x}_{1},c$) in Equation (1) with bias-corrected parameters,

Equation (11): $m_B \to m_B^{\star} = m_B - \bar{\delta }_{m_B},\quad x_1 \to x_1^{\star} = x_1 - \bar{\delta }_{x_1},\quad c \to c^{\star} = c - \bar{\delta }_{c},$

where the star superscript indicates a bias-corrected quantity. The bias corrections (${\bar{\delta }}_{{m}_{B}},{\bar{\delta }}_{{x}_{1}},{\bar{\delta }}_{c}$) are determined from a large BiasCor simulation with 500,000 events after the requirements in Section 4. For any given event, we cannot exactly determine the true parameter bias (δ) because of variations caused by intrinsic scatter and measurement noise. We therefore interpolate the bias ($\bar{\delta }$) in a 5D space of $\{z,{x}_{1},c,\alpha ,\beta \}$. The cell sizes are $\{0.05,0.50,0.05\}$ for $\{z,{x}_{1},c\}$, respectively. The bias correction has a weak dependence on α and β, and is included by generating these two parameters on a 2 × 2 grid that extends well beyond current constraints, and interpolating the bias within the BBC fit. With current estimates of ${\alpha }_{\mathrm{SIM}},{\beta }_{\mathrm{SIM}}$ given in Table 1, the two fixed α values are ${\alpha }_{\mathrm{SIM}}\pm 0.04$ and the two fixed β values are ${\beta }_{\mathrm{SIM}}\pm 0.4$. The SN Ia parameters α and β are generated at discrete values because there is no observational information about these distributions, and thus we impose a flat prior in the BBC fit. Each 3D subcell of $\{z,{x}_{1},c\}$, however, includes a continuous distribution based on the measured SN rate versus redshift and the measured population of stretch and color.

Each $\bar{\delta }$-bias value (${\bar{\delta }}_{{m}_{B}},{\bar{\delta }}_{{x}_{1}},{\bar{\delta }}_{c}$) is determined by linear interpolation in the 5D space. The bias and grid location in each 5D cell are each defined as the weighted average over the BiasCor events in the cell, with weights ${\sigma }_{\mu }^{-2}$. A valid BiasCor cell requires at least three events, and at least three valid cells are required for interpolation. Data events with fewer than three cells are rejected, which reduces the CC contamination as described in Section 7.
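A minimal sketch of the interpolation step is shown below: a ${\bar{\delta }}_{{m}_{B}}$ map tabulated on the $\{z,{x}_{1},c\}$ grid at each $(\alpha ,\beta )$ grid point is interpolated linearly with scipy's RegularGridInterpolator. The map values here are an invented toy function, and the cell-occupancy requirements described above are not reproduced.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Grid of cell centers for the toy delta_mB map (cell sizes as in the text).
z_grid  = np.arange(0.025, 1.15, 0.05)        # 0.05-wide redshift cells
x1_grid = np.arange(-2.75, 3.00, 0.50)        # 0.5-wide stretch cells
c_grid  = np.arange(-0.275, 0.30, 0.05)       # 0.05-wide color cells
a_grid  = np.array([0.10, 0.18])              # alpha_SIM +/- 0.04
b_grid  = np.array([2.8, 3.6])                # beta_SIM  +/- 0.4

Z, X1, C, A, B = np.meshgrid(z_grid, x1_grid, c_grid, a_grid, b_grid, indexing="ij")
# Toy bias values: grow with redshift, shift with color, weak beta dependence.
delta_mB_map = -0.02 * Z**2 + 0.05 * C * Z + 0.002 * (B - 3.2)

interp = RegularGridInterpolator((z_grid, x1_grid, c_grid, a_grid, b_grid),
                                 delta_mB_map, bounds_error=False, fill_value=None)

# Bias correction for one event, evaluated at the current (alpha, beta) in the fit:
print(interp([[0.62, 0.3, -0.05, 0.14, 3.2]]))
```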

There is a subtle interpolation issue for events in the highest redshift bin ($1.05\lt z\lt 1.10$). We recall that the bias value in each redshift bin is defined at the weighted-average grid location, ${\bar{z}}_{\mathrm{bin}}$, and for ${\bar{z}}_{\mathrm{bin}}\lt z\lt 1.10$ the bias therefore cannot be interpolated if there are no events beyond $z\gt 1.10$. While it is possible to extrapolate the bias for $z\gt {\bar{z}}_{\mathrm{bin}}$, extrapolating a rapidly varying bias function could result in a large error. To avoid extrapolating, the BiasCor simulation includes events with $1.10\lt z\lt 1.15$ to provide a reliable interpolation for events with ${\bar{z}}_{\mathrm{bin}}\lt z\lt 1.10$.

The bias corrections as a function of redshift are illustrated in Figure 3 for a few arbitrary stretch and color bins; the largest corrections are a few tenths of a mag for very blue and red colors. These corrections are similar to the 3D $(z,{x}_{1},c)$ corrections in SK16, except that here we account for the dependence on α and β, as illustrated in Figure 4.

Figure 3. Bias corrections ${\bar{\delta }}_{{m}_{B}}$, $\alpha {\bar{\delta }}_{{x}_{1}}$, and $\beta {\bar{\delta }}_{c}$ are shown as a function of redshift. The pre-factors $\alpha ,\beta $ are used to show the bias in distance-modulus magnitudes. The parameter selection ranges are shown in each panel.

Figure 4. Bias correction ${\bar{\delta }}_{{m}_{B}}$ vs. redshift for the entire shallow-field subset. Solid circles are the same in both panels. Open circles show bias with same ${\alpha }_{\mathrm{SIM}}$ and different ${\beta }_{\mathrm{SIM}}$ (left), and with same ${\beta }_{\mathrm{SIM}}$ and different ${\alpha }_{\mathrm{SIM}}$ (right). Note that the vertical scale is $\times 5$ smaller than in Figure 3.

5.3. Bias-corrected Distance Uncertainty

After applying bias corrections, simulations show that the Hubble residual scatter can be significantly reduced, and it can differ from the calculated uncertainty (${\sigma }_{\mu }$) in Equation (3). This effect is shown in Figures 5(a) and (b) for the deep- and shallow-field subsamples, respectively, for Hubble residuals with respect to the true distance ${\mu }_{\mathrm{true}}$. In the deep fields (Figure 5(a)), bias corrections result in a ∼20% reduction in the Hubble scatter at the highest redshifts, while in the shallow fields (Figure 5(b)) the reduction reaches ∼40%. The calculated uncertainty (${\sigma }_{\mu }$) is in good agreement with the bias-corrected Hubble scatter in the deep sample, but ${\sigma }_{\mu }$ is much too large in the shallow sample.

Figure 5. Rms of $\mu -{\mu }_{\mathrm{true}}$ vs. redshift before bias corrections are applied (red dotted), after bias corrections are applied (blue dashed), and calculated ${\sigma }_{\mu }$ from Equation (3) (black line). The left panel (a) is for the two deep fields, and the middle panel (b) is for the eight shallow fields. The right panel (c) shows the ${R}_{\sigma }$ correction (Equation (13)) on ${\sigma }_{\mu }$ vs. fitted color in the redshift bin indicated in the panel.

To ensure a robust estimate of the distance uncertainty in the likelihood, we apply a 2D ${\sigma }_{\mu }$ correction as a function of redshift and color. We find little ${\sigma }_{\mu }$ dependence on the stretch (x1) parameter and therefore do not include an x1 correction. The color dependence is illustrated in Figure 5(c). The redshift binning is the same as for the bias corrections, but there are only three color bins in order to maintain better bin statistics for measuring the rms. The form of the correction is

Equation (12): $\sigma_{\mu} \to \sigma_{\mu}^{\rm corr} = R_{\sigma}(z,c)\times \sigma_{\mu},$

Equation (13): $R_{\sigma}(z,c) = {\rm RMS}\left(\mu - \mu_{\rm true}\right)_{\rm biasCor} / \sigma_{\mu},$

where ${R}_{\sigma }$ is determined before the BBC fit from the same BiasCor simulation used to determine the bias corrections; it is the ratio of blue and black curves in Figure 5(a), (b). The simulated intrinsic scatter term (${\sigma }_{\mathrm{int}}$) is determined from the subset with signal-to-noise ratio ${\rm{S}}/{\rm{N}}\gt 60$, and it is important to verify that ${R}_{\sigma }\simeq 1$ at low redshifts with high S/N. ${R}_{\sigma }(z)$ is determined at each $\alpha \,-\,\beta $ grid point, as well as for deep and shallow subsamples. During the minimization, ${R}_{\sigma }$ is interpolated as a function of $\alpha ,\beta ,z,c$. After the BBC fit, we verify that rms $[(\mu -{\mu }_{\mathrm{model}})/{\sigma }_{\mu }]$ is close to 1 at all redshifts. A similar test on real data would check the validity of the ${R}_{\sigma }$ correction.
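A sketch of how such a correction map could be built from the BiasCor sample is shown below (Equation (13)); the binning, the minimum-statistics cut, and the array layout are assumptions of this illustration.

```python
import numpy as np

def compute_R_sigma(z, c, mu_corr, mu_true, sig_mu, z_edges, c_edges, min_events=10):
    """R_sigma(z, c): ratio of the bias-corrected Hubble scatter to the
    calculated sigma_mu, evaluated in redshift and color bins (Equation (13)).

    mu_corr : bias-corrected distance moduli from the BiasCor simulation
    mu_true : true distance moduli of the simulated events
    """
    iz = np.clip(np.digitize(z, z_edges) - 1, 0, len(z_edges) - 2)
    ic = np.clip(np.digitize(c, c_edges) - 1, 0, len(c_edges) - 2)
    R = np.ones((len(z_edges) - 1, len(c_edges) - 1))
    for i in range(R.shape[0]):
        for j in range(R.shape[1]):
            sel = (iz == i) & (ic == j)
            if sel.sum() >= min_events:
                R[i, j] = np.std(mu_corr[sel] - mu_true[sel]) / np.mean(sig_mu[sel])
    return R

rng = np.random.default_rng(4)
n = 50_000
z, c = rng.uniform(0.05, 1.1, n), rng.normal(0.0, 0.1, n)
sig_mu = np.full(n, 0.15)
mu_true = np.zeros(n)
mu_corr = rng.normal(0.0, 0.12, n)                 # toy: scatter below sigma_mu
R = compute_R_sigma(z, c, mu_corr, mu_true, sig_mu,
                    np.linspace(0.0, 1.1, 12), np.array([-0.3, -0.05, 0.05, 0.3]))
print(np.round(R, 2))                              # ~0.8 everywhere in this toy
```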

An unexpected artifact of the bias correction is that the Hubble scatter at high redshift is somewhat smaller in the shallow-field subsample than in the deep fields, which contradicts our naive expectation that the scatter should be smaller in the deep fields (compare the blue dashed curves in Figures 5(a) and (b)). The reason for this paradox is that at high redshifts a narrower range of brightness is observed in the shallow fields, resulting in smaller scatter after bias corrections. We do not, however, achieve better measurements in the shallow fields for two reasons. First, there are more events per square degree and redshift bin in the deep fields. Second, the shallow-field bias corrections are more sensitive to the cosmological parameters assumed in the BiasCor simulation, resulting in a larger systematic uncertainty (Section 6.1).

5.4. CC Probability Distribution

In H12, ${D}_{\mathrm{CC}}$ in Equation (6) has the same functional form as ${D}_{\mathrm{Ia}}$, except that ${\mu }_{\mathrm{model}}\to {\mu }_{\mathrm{model}}+{\rm{\Upsilon }}(z)$ where ${\rm{\Upsilon }}(z)$ is a polynomial function of redshift with coefficients as additional fit parameters. The error term (${\sigma }_{\mu }$) has the same functional form as in the SN Ia likelihood, but has an independent ${\sigma }_{\mathrm{int}}$ term. The Gaussian form of ${D}_{\mathrm{CC}}$ trivially satisfies the normalization condition (Equation (10)), but may not be a sufficiently accurate model.

Here we replace ${\rm{\Upsilon }}(z)$ with a simulated map shown in the lower panels of Figure 6. As described in Section 3, the simulation is based on 42 CC templates and a library of observing conditions, and makes no analytical assumptions about the form of the resulting Hubble residuals. Additional motivation for using a simulated CC map is provided in Jones et al. (2016), where they show excellent agreement between their Pan-STARRS1 photometrically identified SN Ia data sample and the Ia+CC simulation. The normalization constraint in Equation (10) is imposed numerically. The CC map in Figure 6 clearly has discontinuities in the derivatives, which can cause problems with MINUIT minimization. To alleviate such fitting issues, the mean and rms of the CC map in each redshift bin are used to define a Gaussian function. Improved simulations in the future may suggest a more complex function such as an asymmetric Gaussian.
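The Gaussian summary of the simulated CC map can be sketched as below: in each redshift bin, the mean and rms of the simulated CC Hubble residuals define the Gaussian used for ${D}_{\mathrm{CC}}$ (as in the likelihood sketch in Section 5.1). The toy residual model and binning are illustrative assumptions.

```python
import numpy as np

def cc_gaussian_params(z_cc, resid_cc, z_edges):
    """Per-redshift-bin mean and rms of simulated CC Hubble residuals,
    used to define the Gaussian approximation of D_CC."""
    iz = np.clip(np.digitize(z_cc, z_edges) - 1, 0, len(z_edges) - 2)
    nbin = len(z_edges) - 1
    means = np.array([resid_cc[iz == i].mean() for i in range(nbin)])
    rms = np.array([resid_cc[iz == i].std() for i in range(nbin)])
    return means, rms

rng = np.random.default_rng(5)
z_cc = rng.uniform(0.1, 1.1, 20_000)
resid_cc = rng.normal(2.5 - 1.5 * z_cc, 1.0)   # toy: CC SNe sit ~1-2.5 mag faint
z_edges = np.linspace(0.1, 1.1, 11)
means, rms = cc_gaussian_params(z_cc, resid_cc, z_edges)
print(np.round(means, 2), np.round(rms, 2))
```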

Figure 6. Distribution of distance residual vs. redshift, generated from the simulation for SN Ia (top) and CC SNe (bottom). Deep and shallow subsamples are shown separately in the left and right panels. Box cuts plus the NN requirement have been applied. For the CC likelihood, the simulated CC distribution in each redshift bin is replaced with a Gaussian. The SNIa distribution is shown for comparison, but the analytic SNIa likelihood is used instead.

5.5. Determination of σint

Using the traditional ${\chi }_{\mathrm{HD}}^{2}$ method (Equation (3)) for spectroscopically confirmed SN Ia samples, ${\sigma }_{\mathrm{int}}$ is defined such that ${\chi }_{\mathrm{HD}}^{2}={N}_{\mathrm{dof}}$, where ${N}_{\mathrm{dof}}$ is the number of degrees of freedom in the fit. To maintain a similar definition of ${\sigma }_{\mathrm{int}}$ in the BBC likelihood, we impose an ad hoc weighted ${\chi }^{2}$ constraint,

Equation (14): $\sum_i W_{\mathrm{Ia},i}\,\chi^2_{\mathrm{HD},i} = \sum_i W_{\mathrm{Ia},i} - N_{\mathrm{fitPar}},$

where ${W}_{\mathrm{Ia},i}={{ \mathcal L }}_{\mathrm{Ia},i}/{{ \mathcal L }}_{i}$ is the normalized SN Ia likelihood for event i, ${{ \mathcal L }}_{\mathrm{Ia},i}$ is the first term in brackets in Equation (6), and ${N}_{\mathrm{fitPar}}$ is the number of fitted parameters. We note that we have defined ${\sigma }_{\mathrm{int}}$ only for the SN Ia likelihood, and there is no corresponding term in the CC likelihood. In the SALT2mu program, ${\sigma }_{\mathrm{int}}$ is evaluated iteratively rather than as a fit constraint. Convergence of Equation (14) to within 1% typically requires 2–3 fit iterations.
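The sketch below solves the constraint of Equation (14) for ${\sigma }_{\mathrm{int}}$ with a one-dimensional root finder on toy inputs; this stands in for the iterative in-fit procedure described above, and every quantity is invented for illustration.

```python
import numpy as np
from scipy.optimize import brentq

def solve_sigma_int(resid, sig_fit, w_ia, n_fitpar):
    """Find sigma_int such that the weighted chi2 constraint (Equation (14)) holds:
    sum_i W_Ia,i * chi2_HD,i = sum_i W_Ia,i - N_fitPar.

    resid   : Hubble residuals (mu - Delta_mu,z - mu_model) per event
    sig_fit : per-event distance uncertainty excluding the sigma_int term
    w_ia    : normalized SN Ia likelihood weight per event
    """
    target = w_ia.sum() - n_fitpar

    def constraint(sig_int):
        chi2 = np.sum(w_ia * resid**2 / (sig_fit**2 + sig_int**2))
        return chi2 - target

    return brentq(constraint, 1.0e-4, 1.0)     # bracket sigma_int in [1e-4, 1] mag

rng = np.random.default_rng(6)
n = 5_000
sig_fit = rng.uniform(0.05, 0.20, n)
resid = rng.normal(0.0, np.sqrt(sig_fit**2 + 0.10**2))   # true sigma_int = 0.10
w_ia = np.ones(n)
print(f"sigma_int = {solve_sigma_int(resid, sig_fit, w_ia, n_fitpar=25):.3f}")
```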

To compare ${\sigma }_{\mathrm{int}}$ of different subsamples, the uncertainty on this variance term is needed. Since our BBC fitting procedure does not return a ${\sigma }_{\mathrm{int}}$ uncertainty, we fit an ensemble of simulations and interpret the rms spread of the fitted ${\sigma }_{\mathrm{int}}$ values as the uncertainty. The fitted ${\sigma }_{\mathrm{int}}$ value itself provides another crosscheck because we expect these values to be the same from a BBC fit to the data and simulation; a discrepancy would point to an error in modeling the intrinsic scatter in the simulation.

5.6. Fit for Cosmological Parameters

The BBC method consists of two fitting stages. The first step is to determine ${{\rm{\Delta }}}_{\mu ,z}$ by maximizing the BBC posterior probability in Equation (6). The second step, determining the cosmological parameters, is described here. For a spectroscopically confirmed SN Ia sample, we can perform the cosmology fit using either the individual SN distances or the binned ${{\rm{\Delta }}}_{\mu ,z}$. However, for a photometrically identified sample with CC contamination, the ${{\rm{\Delta }}}_{\mu ,z}$ are properly corrected, while the individual distances are not.

As described in Section 2, the result of the BBC fit is a set of distance offsets versus redshift bin, ${{\rm{\Delta }}}_{\mu ,z}$, using a reference set of cosmological parameters (${w}_{\mathrm{ref}},{{\rm{\Omega }}}_{{\rm{M}},\mathrm{ref}}$) that are fixed in the BBC fit. The final fitted values (${w}_{\mathrm{fit}},{{\rm{\Omega }}}_{{\rm{M}},\mathrm{fit}}$) are determined by minimizing

Equation (15): $\chi^2_{\Delta } = {{\boldsymbol{D}}}_{\mu ,z}^{T}\,{ \mathcal C }^{-1}\,{{\boldsymbol{D}}}_{\mu ,z},$

where ${{\boldsymbol{D}}}_{\mu ,z}\equiv {{\rm{\Delta }}}_{\mu ,z}+{\mu }_{\mathrm{ref},z}-{\mu }_{\mathrm{model},z}$. In each of the 22 redshift bins, ${{\rm{\Delta }}}_{\mu ,z}$ is the BBC-fitted distance offset, ${\mu }_{\mathrm{ref},z}$ is the distance modulus computed from the reference parameters ${w}_{\mathrm{ref}}$ and ${{\rm{\Omega }}}_{{\rm{M}},\mathrm{ref}}$, and ${\mu }_{\mathrm{model},z}$ is the distance modulus computed from the floated parameters w and ${{\rm{\Omega }}}_{{\rm{M}}}$. Within each redshift bin, ${\mu }_{\mathrm{ref},z}$ and ${\mu }_{\mathrm{model},z}$ are computed at the weighted-average redshift, where each weight is ${\sigma }_{\mu }^{-2}$.

${ \mathcal C }$ is the total covariance matrix, including both statistical and systematic terms. The size of ${ \mathcal C }$ is ${N}_{z}^{2}={22}^{2}=484$, and this size is fixed regardless of the size of the data sample. For our BBC-fit tests we include only the diagonal terms with statistical uncertainty ${\sigma }_{{\rm{\Delta }}\mu ,z}^{2}$. The off-diagonal reduced covariances are small (a few percent) and are ignored, and we do not include systematic uncertainties.
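A sketch of this second stage on toy BBC output is shown below, reusing the flat-wCDM distance from the sketch after Equation (2); ${{\rm{\Omega }}}_{{\rm{M}}}$ is held at ${{\rm{\Omega }}}_{{\rm{M}},\mathrm{ref}}$ to mimic the very strong prior used later in the paper, scipy stands in for a dedicated cosmology fitter, and the binned inputs are invented.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

C_LIGHT, H0 = 299792.458, 70.0                    # km/s and km/s/Mpc (illustrative)

def mu_flat_wcdm(z, omega_m, w):
    """Distance modulus for a flat wCDM universe (Equation (2))."""
    inv_E = lambda zp: 1.0 / np.sqrt(omega_m * (1 + zp) ** 3 +
                                     (1 - omega_m) * (1 + zp) ** (3 * (1 + w)))
    dc = np.array([quad(inv_E, 0.0, zi)[0] for zi in np.atleast_1d(z)])
    dl_pc = (1.0 + np.atleast_1d(z)) * (C_LIGHT / H0) * dc * 1.0e6
    return 5.0 * np.log10(dl_pc / 10.0)

# Toy BBC output: weighted-average redshift, Delta_mu,z, and its uncertainty per bin.
zbar = np.linspace(0.05, 1.05, 22)
w_ref, om_ref = -1.0, 0.30
mu_ref = mu_flat_wcdm(zbar, om_ref, w_ref)
rng = np.random.default_rng(7)
sig_dmu = np.full(22, 0.02)
dmu = rng.normal(0.0, sig_dmu)                    # offsets consistent with w = -1

def chi2_delta(w, omega_m=om_ref):
    """Equation (15) with a diagonal covariance (statistical errors only)."""
    D = dmu + mu_ref - mu_flat_wcdm(zbar, omega_m, w)
    return np.sum((D / sig_dmu) ** 2)

fit = minimize_scalar(chi2_delta, bounds=(-1.5, -0.5), method="bounded")
print(f"w_fit = {fit.x:.3f}")
```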

With diagonal ${ \mathcal C }$, ${\chi }_{{\rm{\Delta }}}^{2}$ is very similar to ${\chi }_{\mathrm{HD}}^{2}$ in Equation (3), except that the sum over individual events is replaced with a sum over bin-averaged distances in redshift bins. The SALT2mu program, which implements the BBC fit and produces the ${{\rm{\Delta }}}_{\mu ,z}$, can also be used to minimize Equation (15) with a suitable change in fitting options. However, it is advantageous to use more specialized cosmology fitting programs that include priors from other constraints or more diverse cosmological models.

To minimize the uncertainty on measuring w-biases, we impose a strong and unrealistic Gaussian prior on the matter density, ${{\rm{\Omega }}}_{{\rm{M}}}={{\rm{\Omega }}}_{{\rm{M}},\mathrm{ref}}\pm 0.0001$, and a flat prior on w ($-1.5\lt w\lt -0.5$). Fits on data should use a more realistic ${{\rm{\Omega }}}_{{\rm{M}}}$ prior. To illustrate the uncertainty reduction, we have run crosscheck fits with a weaker Gaussian prior of ${{\rm{\Omega }}}_{{\rm{M}},\mathrm{ref}}\pm 0.02;$ the w-bias results are consistent with the strong prior, but the uncertainties are 2–3 times larger.

6. Results I: Tests with Simplified Simulations

We begin with a simplified test of the BBC method such that any observed bias in the cosmology or nuisance parameters would point to a problem with the method. The simplifications in the simulated data and BiasCor samples are to (1) exclude CC SNe, (2) generate intrinsic scatter with the coherent model (COH model in Section 3) so that the ${\sigma }_{\mathrm{int}}$ term in the SN Ia likelihood exactly matches the generated model of intrinsic scatter, and (3) remove Galactic reddening and peculiar velocities. In the light-curve fitting, the SALT-II model uncertainties are set to zero since there is no color variation in the intrinsic scatter model. Since there are no CC events, NN training is skipped and ${S}_{\mathrm{CC}}=0$ in the BBC likelihood (Equation (6)). In spite of these simplifications, the selection biases are still present and must be accounted for to obtain unbiased results.

To reach a sensitivity to a BBC-induced w-bias below 0.01, we simulate 15 independent data samples, each with ∼10,000 events satisfying the box cuts and NN requirements in Section 4. The BiasCor sample has ∼500,000 events and is used to correct all $15$ data samples. Distance moduli in all simulations are computed from a cosmological model with a flat universe and ${{\rm{\Omega }}}_{{\rm{M}},\mathrm{ref}}=0.3$. We set ${w}_{\mathrm{ref}}=-1$ for the NN training (Section 4), BiasCor sample, and BBC likelihood. For the data samples we set the true w-value, ${w}_{\mathrm{data}}=-1$, and also test with other values to check the BBC method when the ${w}_{\mathrm{ref}}\ne {w}_{\mathrm{data}}$. The w-bias is defined as

Equation (16): $w\text{-bias} \equiv {w}_{\mathrm{fit}} - {w}_{\mathrm{data}},$

where ${w}_{\mathrm{fit}}$ is described in Section 5.6. Figure 7 shows the fitted ${{\rm{\Delta }}}_{\mu ,z}$, averaged over the 15 samples, versus redshift. In the left panel, ${w}_{\mathrm{data}}=-1$ and the fitted ${{\rm{\Delta }}}_{\mu ,z}$ agree well with the expected curve (dashed line through zero). In the next two panels of Figure 7, ${w}_{\mathrm{data}}=-0.9$ and −0.8, which differ from ${w}_{\mathrm{ref}}=-1$ in the BBC fit. The resulting ${{\rm{\Delta }}}_{\mu ,z}$ show a significant redshift dependence, which is expected, as shown by the dashed curve. At higher redshifts, however, there is a notable discrepancy between the fitted ${{\rm{\Delta }}}_{\mu ,z}$ and the prediction, and this discrepancy is due to an incorrect bias correction induced by a slightly incorrect cosmology model. The maximum ${{\rm{\Delta }}}_{\mu ,z}$ discrepancy is roughly 0.01 mag per 0.1 difference between ${w}_{\mathrm{data}}$ and ${w}_{\mathrm{ref}}$. The resulting w-bias is quantified below in Section 6.1, and a potential solution is discussed in Section 9.

Figure 7. Average BBC-fitted value of ${{\rm{\Delta }}}_{\mu ,z}$ vs. redshift; error bars show the rms spread among the 15 samples. The simulated ${w}_{\mathrm{data}}$ is indicated in each panel: $-1,-0.9,-0.8$ in the three panels, respectively. The same BiasCor sample, with ${w}_{\mathrm{ref}}=-1$, is used in all three BBC fits. The dashed curve in each panel shows the prediction based on the ΛCDM model. The discrepancies between the BBC fit and prediction (two right panels) are due to the difference between ${w}_{\mathrm{data}}$ and ${w}_{\mathrm{ref}}$.

For ${w}_{\mathrm{data}}=-1$, the BBC-fitted parameters for the 15 samples are illustrated in Figure 8 using an ideogram. The parameters α, β, and ${\sigma }_{\mathrm{int}}$ are shown as ratios with respect to the true input values, with the weighted-average ratio printed at the top of each plot. To evaluate the fitted uncertainties, ${\chi }_{\mathrm{red}}^{2}$ shows the reduced ${\chi }^{2}$ of the 15 independent fits for each parameter; good error estimates result in ${\chi }_{\mathrm{red}}^{2}\simeq 1$. The α bias is almost 2%, while β and ${\sigma }_{\mathrm{int}}$ are recovered to within ∼1%. The w-bias is $-0.002\pm 0.004$ (lower right panel of Figure 8), and ${\chi }_{\mathrm{red}}^{2}\simeq 0.5$ suggests that the fitted w-uncertainties may be overestimated, although the probability for such a low ${\chi }_{\mathrm{red}}^{2}$ with correct uncertainties is 6.5%.

Figure 8. Results of 15 simulated data samples fit with the SN Ia likelihood, and ${w}_{\mathrm{data}}={w}_{\mathrm{ref}}=-1$. The first three plots show the ratio of the best-fit nuisance parameter ($\alpha ,\beta ,{\sigma }_{\mathrm{int}}$) to the true generated value. The last plot shows the bias on ${w}_{\mathrm{fit}}$ (w+1). The avg in each panel shows the weighted average and weighted uncertainty, except for the ${\sigma }_{\mathrm{int}}$ panel, where the uncertainty is $\mathrm{rms}/\sqrt{15}$. ${\chi }_{\mathrm{red}}^{2}$ in each panel is the reduced ${\chi }^{2}$ determined from the 15 independent fit-parameter values, their uncertainties, and the weighted average. Vertical dashed lines show avg ± weighted uncertainty.

6.1. Cosmology-dependent Bias in BiasCor Simulation

Here we illustrate a subtle bias from using incorrect cosmology parameters in the BiasCor simulation. For this test, ${w}_{\mathrm{ref}}$ is fixed to −1 in the BiasCor simulation, while ${w}_{\mathrm{data}}$ takes on different values in the data samples. For the three ${w}_{\mathrm{data}}$ values shown in Figure 7, the BBC fit results and w-bias are shown in Table 4. The α bias persists at the 1%–2% level, with large variations in ${\chi }_{\mathrm{red}}^{2}$ (0.4–2). The β parameter is measured to within 1%, with a much smaller spread in ${\chi }_{\mathrm{red}}^{2}$ values.

Table 4.  BBC Fit Results and w-bias Averaged over 15 Ideal DES-SNIa Samples without CC Contamination^{a,b}

${w}_{\mathrm{data}}$   $\alpha /0.14$   ${\chi }_{\mathrm{red}}^{2}$   $\beta /3.2$   ${\chi }_{\mathrm{red}}^{2}$   ${\sigma }_{\mathrm{int}}/0.13$   w-bias^{c}   ${\chi }_{\mathrm{red}}^{2}$
−1^{d}   1.018(3)   2.1   0.994(2)   1.0   0.991(3)   −0.003(4)   0.5
−0.9   1.016(3)   0.4   1.000(2)   0.9   0.999(3)   −0.017(4)   0.4
−0.8   1.020(3)   1.4   1.006(2)   0.7   1.013(3)   −0.044(4)   0.8

Notes.

(a) Each value is the weighted average, and the value in () is the weighted uncertainty in the last digit. (b) ${\chi }_{\mathrm{red}}^{2}$ is the reduced ${\chi }^{2}$ for the average of the 15 samples. (c) ${w}_{\mathrm{ref}}=-1$ in each BiasCor sample; the w-bias can be reduced by iterating with ${w}_{\mathrm{ref}}\to {w}_{\mathrm{fit}}$. (d) Results for ${w}_{\mathrm{data}}=-1$ are also shown in Figure 8.


Table 4 shows a w-bias induced by the BiasCor dependence on cosmological parameters, and this bias increases nonlinearly as $| {w}_{\mathrm{data}}-{w}_{\mathrm{ref}}| $ increases. When we use the first two rows of Table 4 to estimate the local derivative, the w-bias is approximately given by $({w}_{\mathrm{ref}}-{w}_{\mathrm{data}})/7$. This bias can be reduced with an iterative procedure in which ${w}_{\mathrm{ref}}$ is updated with the previous ${w}_{\mathrm{fit}}$ value, but the proximity of ${w}_{\mathrm{ref}}$ to the true value is limited by the total statistical+systematic uncertainty, ${\sigma }_{w}$. Since ${w}_{\mathrm{ref}}$ has the same uncertainty as ${w}_{\mathrm{fit}}$, there is an additional and irreducible w-uncertainty of $\sim {\sigma }_{w}/7$ induced by the BiasCor dependence on cosmological parameters. The corresponding uncertainties in the deep- and shallow-field subsamples are ${\sigma }_{w}/12$ and ${\sigma }_{w}/5$, respectively, which illustrates that this w-uncertainty depends on the SN sample.

To gain further insight, we repeated the BBC method with ${R}_{\sigma }=1$ (Equation (13)). This test corresponds to using the black curves in Figures 5(a) and (b) instead of the blue dashed curves. We find that the w-bias is reduced by almost a factor of 2. With ${R}_{\sigma }=1$ the distance uncertainties are larger at higher redshift, while the resulting ${\sigma }_{\mathrm{int}}$ is 7% smaller in order to satisfy the ${\chi }_{\mathrm{HD}}^{2}$ constraint in Section 5.5. This change in uncertainties assigns greater weight to the SN Ia likelihood at lower redshifts, making the fit less sensitive to the BiasCor simulation at higher redshift.

7. Results II: More Realistic Tests

Here we test the BBC method on realistic simulations that include CC contamination, NN analysis, SN Ia intrinsic scatter models with color variation (G10 and C11 in Section 3), and Galactic reddening. In addition to the flux uncertainties, all light-curve fits include the SALT-II model uncertainties, which are based on the G10 model. For the samples generated with the COH and C11 scatter models, the incorrect G10-based model uncertainties are still used in the light-curve fits.

For each of the three intrinsic scatter models, 15 independent data samples are generated, each with 10,000 events passing selection requirements. The averaged results are thus based on 150,000 events for each intrinsic scatter model. A BiasCor sample with 500,000 events is generated for each intrinsic scatter model, and in the BBC fits we use the BiasCor sample with the correct intrinsic scatter model. Assuming the correct intrinsic scatter model is optimistic, since we cannot clearly distinguish between the G10 and C11 models; we leave these systematic tests to future analyses.

The data samples are generated with ${w}_{\mathrm{data}}=-1$ ($={w}_{\mathrm{ref}}$), which is effectively equivalent to assuming that ${w}_{\mathrm{ref}}$ has been set to the first-iteration ${w}_{\mathrm{fit}}$ value; a caveat is that we have not included a statistical variation on ${w}_{\mathrm{fit}}$.

The results are shown in Table 5. The first three rows show results based on the nominal CC rate (${R}_{\mathrm{CC}}=1$) in the data and NN-training samples, and the resulting contamination is ∼1%. The next three rows show results with ${R}_{\mathrm{CC}}=3$, resulting in 2.5% contamination. Compared to ${R}_{\mathrm{CC}}=1$, the contamination with ${R}_{\mathrm{CC}}=3$ is less than $3\times $ higher because the NN training accepts a slightly higher SN Ia loss in exchange for lower contamination.

Table 5.  BBC Fit Results and w-bias Averaged over $15$ Realistic DES-SN(Ia+CC) Samples^a,b

Intrinsic Scatter Model | ${R}_{\mathrm{CC}}$ | True CC fraction^c | $\alpha /{\alpha }_{\mathrm{SIM}}$ | ${\chi }_{\mathrm{red}}^{2}$ | $\beta /{\beta }_{\mathrm{SIM}}$ | ${\chi }_{\mathrm{red}}^{2}$ | ${\sigma }_{\mathrm{int}}$ | ${S}_{\mathrm{CC}}$ | ${\chi }_{\mathrm{red}}^{2}$ | w-bias | ${\chi }_{\mathrm{red}}^{2}$
COH | 1 | 0.009 | 1.005(4) | 0.9 | 0.992(2) | 1.1 | 0.109(3) | 1.02(4) | 1.7 | 0.015(4) | 1.2
G10 | 1 | 0.010 | 0.999(3) | 1.6 | 0.993(2) | 1.6 | 0.077(3) | 1.01(4) | 1.9 | 0.011(4) | 1.4
C11 | 1 | 0.010 | 1.015(3) | 1.2 | 0.990(2) | 1.1 | 0.100(8) | 1.08(4) | 1.3 | −0.001(4) | 0.9
COH | 3 | 0.023 | 0.991(3) | 0.9 | 0.990(2) | 2.6 | 0.110(3) | 0.99(2) | 1.0 | 0.003(4) | 1.3
G10 | 3 | 0.025 | 0.999(3) | 0.4 | 0.991(2) | 1.9 | 0.076(3) | 1.01(2) | 0.9 | 0.004(4) | 0.8
C11 | 3 | 0.026 | 1.020(3) | 0.8 | 0.992(2) | 1.6 | 0.099(8) | 1.02(2) | 1.1 | 0.005(4) | 0.7

Notes.

^a Same as in Table 4. ^b Same as in Table 4. ^c True ${N}_{\mathrm{CC}}/({N}_{\mathrm{CC}}+{N}_{\mathrm{Ia}})$ after selection requirements.


The BBC-fitted α and β are recovered to within 1%, and the w-bias is constrained at the level of 0.01. The CC probability scale factors, ${S}_{\mathrm{CC}}$, are consistent with unity, as expected. Interestingly, the w-bias seems smaller with the larger CC rate, albeit with low ($\lt 2\sigma $) significance. The w-bias averaged over all six entries in Table 5, which includes 900,000 simulated SNe, is $0.006\pm 0.002$.

To assess the importance of the CC likelihood in the BBC likelihood (Equation (6)), we have run the BBC fits without the CC term by forcing ${S}_{\mathrm{CC}}=0$. For the data sample generated with ${R}_{\mathrm{CC}}=1$, the fitted α and β are biased by about 30% and the w-bias is 0.1–0.15 depending on the intrinsic scatter model. For the data sample with ${R}_{\mathrm{CC}}=3$, the w-bias is around 0.2. It is therefore essential to include an accurate CC term.

As an additional test on the impact of CC contamination, we have removed the CC subset (${R}_{\mathrm{CC}}=0$), corresponding to a spectroscopically confirmed (Ia-only) sample, and fit with only the SN Ia likelihood. For the COH and G10 scatter models, the Ia-only w-bias values are within 0.001 of those obtained from the photometric sample with CC contamination. For the C11 scatter model, the w-bias values differ by 0.004.

A subtle issue is how the BBC fit further reduces the CC contamination. In Section 4 (Table 2) we showed that the NN requirement significantly reduces the CC contamination: from 0.054 to 0.015 for ${R}_{\mathrm{CC}}=1$, and from 0.146 to 0.035 for ${R}_{\mathrm{CC}}=3$. However, the actual CC contamination values after the BBC fit (Table 5) are about 30% lower: 0.01 and 0.025, respectively, with a corresponding SN Ia loss of 2%. After the NN requirement, the BBC fit further reduces the CC contamination because the bias correction imposes an implicit selection requirement that is similar to the NN requirement, but more strict. We recall that the NN requirement rejects data events where more than half of the simulated neighbors are true CC SNe, and neighbor events are counted within a 3D sphere defined by $\{z,{x}_{1},c\}$. The bias correction rejects events for which there are not enough SN Ia events in a rectangular $\{z,{x}_{1},c\}$-cell to determine the correction. The BiasCor cell volume is $\sim 4\times $ smaller than the 3D sphere volume determined from the NN training, and thus requiring a valid bias correction is more strict.
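
The two implicit selection requirements can be contrasted with a simple sketch (Python; illustrative only, not the SNANA implementation, with boundary handling omitted): the NN cut counts simulated neighbors inside a sphere in scaled $\{z,{x}_{1},c\}$ coordinates, while the BiasCor cut requires enough simulated SNe Ia in the rectangular $\{z,{x}_{1},c\}$ cell containing the event.

import numpy as np

def passes_nn_cut(event, sim_z, sim_x1, sim_c, sim_is_cc, scales):
    # NN requirement: reject if more than half of the simulated neighbors
    # inside the scaled 3D sphere (radii set in the NN training) are true CC
    dz, dx1, dc = scales
    d2 = ((sim_z - event['z']) / dz) ** 2 \
       + ((sim_x1 - event['x1']) / dx1) ** 2 \
       + ((sim_c - event['c']) / dc) ** 2
    inside = d2 < 1.0
    return inside.any() and sim_is_cc[inside].mean() <= 0.5

def passes_biascor_cut(event, ia_counts, z_edges, x1_edges, c_edges, min_ia=1):
    # BiasCor requirement: reject if the {z, x1, c} cell holding the event
    # contains too few simulated SNe Ia to define a bias correction
    iz  = np.digitize(event['z'],  z_edges)  - 1
    ix1 = np.digitize(event['x1'], x1_edges) - 1
    ic  = np.digitize(event['c'],  c_edges)  - 1
    return ia_counts[iz, ix1, ic] >= min_ia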

We further check the effectiveness of the BiasCor requirement in reducing CC contamination by repeating the BBC analysis without the NN requirement, but still using the NN probabilities (${P}_{\mathrm{NN},\mathrm{Ia}}$ in Equation (6)). With ${R}_{\mathrm{CC}}=1$, the CC contamination fractions are about 10% higher than with the combined NN+BiasCor requirement. With ${R}_{\mathrm{CC}}=3$, the CC fractions are 20% higher. For both ${R}_{\mathrm{CC}}$ values, the w-bias values are nearly identical with and without the NN requirement. In contrast, using only the NN requirement results in 50% more CC contamination; the BiasCor requirement is therefore somewhat more effective than the NN requirement at reducing CC contamination, but at the cost of an additional 2% loss of SNe Ia. This result should not be interpreted as a general statement about the inferiority of the NN method, because the NN method can also be used to obtain lower CC contamination in exchange for reduced SN Ia efficiency. For example, replacing the purity × efficiency metric by purity${}^{p}\,\times $ efficiency ($p\gt 1$) reduces the CC contamination.
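
As a small illustration of the modified training metric mentioned above (a sketch in our notation, not a prescription from the NN code), raising the purity to a power $p\gt 1$ shifts the training optimum toward lower CC contamination at the cost of SN Ia efficiency:

def nn_training_metric(purity, efficiency, p=1.0):
    # p = 1 recovers the nominal purity x efficiency metric; p > 1 penalizes
    # CC contamination more heavily, trading away some SN Ia efficiency
    return purity ** p * efficiency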

7.1. σint from BBC Fit

Before discussing the ${\sigma }_{\mathrm{int}}$ results, it is useful to recall two common methods for characterizing intrinsic scatter in the HD. The first is the ${\sigma }_{\mathrm{int}}$ method used here, which describes the additional scatter needed to obtain a desired ${\chi }_{\mathrm{HD}}^{2}$. Previous cosmology analyses on real data find ${\sigma }_{\mathrm{int}}\simeq 0.1$ mag (Conley et al. 2011; Betoule et al. 2014; Shariff et al. 2016); this value underestimates the true Hubble scatter because the SALT-II light-curve fit includes model uncertainties, derived from the training process, that increase the errors on $\{{m}_{B},{x}_{1},c\}$ and thereby reduce ${\sigma }_{\mathrm{int}}$. The other method is to report the rms of the Hubble residuals for a low-redshift sample with high S/N, which results in typical values of 0.15 mag (Jha et al. 2007). The rms is not used in cosmology fits, but is a useful metric for evaluating how well the SN Ia brightness has been standardized. While ${\sigma }_{\mathrm{int}}$ does not include measurement uncertainties, the residual rms does and is thus an overestimate of the Hubble scatter.

In the limit of infinite S/N and zero SALT-II model uncertainty in the light-curve fits, ${\sigma }_{\mathrm{int}}$ and the Hubble residual rms are the same, and they are both equal to the ${\sigma }_{\mathrm{coh}}$ parameter for the COH model. For the G10 and C11 intrinsic scatter models, ${\sigma }_{\mathrm{int}}$ does not capture potential dependences on color and stretch, but it can still be interpreted as the average Hubble residual rms. This physical interpretation of ${\sigma }_{\mathrm{int}}$ holds for realistic S/N provided that the light-curve flux uncertainties are correct. However, for realistic fitting with non-zero SALT-II model uncertainty, ${\sigma }_{\mathrm{int}}$ is smaller than the Hubble scatter and therefore ${\sigma }_{\mathrm{int}}$ does not have a physical interpretation.

In Table 5, ${\sigma }_{\mathrm{int}}\sim 0.1$ for the COH model: while consistent with previous analyses on real data, it is 20% smaller than the input scatter value (${\sigma }_{\mathrm{coh}}=0.13$) used in the simulation. The presence of CC contamination has no effect on ${\sigma }_{\mathrm{int}}$, as verified by a BBC fit that excludes true CC and the CC likelihood. As described above, we expect that ${\sigma }_{\mathrm{int}}\lt {\sigma }_{\mathrm{coh}}$ because of the SALT-II model uncertainties. To verify this effect, we refer to our simplified simulation test (Section 6) with the COH model and no SALT-II model uncertainties: the BBC-fitted ${\sigma }_{\mathrm{int}}$ agrees well with the input ${\sigma }_{\mathrm{coh}}$ value, as shown in the ${\sigma }_{\mathrm{int}}/0.13$ column in Table 4.

For the G10 scatter model, ${\sigma }_{\mathrm{int}}\simeq 0.076$ mag, significantly below previous measurements; this discrepancy suggests a problem with either the underlying model or its implementation in the simulation. For the C11 model, ${\sigma }_{\mathrm{int}}\simeq 0.1$ mag, in good agreement with previous measurements. To check the BBC-fitted values, we have computed the true ${\sigma }_{\mathrm{int}}$ from low-redshift (high S/N) events by adjusting ${\sigma }_{\mathrm{int}}$ (included in ${\sigma }_{\mu ,i}$) until the rms of $({\mu }_{i}-{\mu }_{\mathrm{true}})/{\sigma }_{\mu ,i}$ equals unity, using the true values of α and β: the computed ${\sigma }_{\mathrm{int}}$ agrees well with the BBC-fitted ${\sigma }_{\mathrm{int}}$ for all intrinsic scatter models.
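
A minimal sketch of this cross-check (Python; hypothetical array inputs, assuming the pull rms exceeds unity when ${\sigma }_{\mathrm{int}}=0$) solves for the ${\sigma }_{\mathrm{int}}$ that makes the rms of the distance pulls equal to one:

import numpy as np
from scipy.optimize import brentq

def fit_sigma_int(mu_res, sigma_mu_meas):
    """mu_res = mu_i - mu_true for low-z (high-S/N) events, built with the
    true alpha and beta; sigma_mu_meas is the measurement-only uncertainty
    on mu_i, i.e., without any sigma_int contribution."""
    def pull_rms_minus_one(sig_int):
        pulls = mu_res / np.sqrt(sigma_mu_meas ** 2 + sig_int ** 2)
        return np.std(pulls) - 1.0
    # bracket assumes rms(pulls) > 1 at sig_int = 0 and < 1 at sig_int = 1 mag
    return brentq(pull_rms_minus_one, 0.0, 1.0)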

8. Comparison with Other Bias-correction Methods

Here we compare the BBC method with other HD-fitting methods. First, we show in Section 8.1 the impact of using incorrect fitting methods: leaving out the Gaussian normalization term and/or the distance-bias corrections. Next, we discuss in Section 8.2 previous analyses that used a redshift-dependent distance-bias correction.

8.1. Incorrect Fitting Methods

Several recent SN Ia analyses are based on the traditional ${\chi }_{\mathrm{HD}}^{2}$ approach (Equation (3)) to HD fitting (Conley et al. 2011; Betoule et al. 2014; Rest et al. 2014; Scolnic et al. 2014a). While some approaches have included a redshift-dependent bias correction (e.g., Section 8.2), the Gaussian normalization term, $-2\mathrm{ln}(\sigma )$, has not been included because it results in large biases (e.g., see Appendix B of Conley et al. 2011). The issue is explained in March et al. (2011): the uncertainties on the color and stretch are comparable to the natural width of the parent distributions, particularly at higher redshifts, and therefore the Gaussian-error assumption in the ${\chi }^{2}$ likelihood is not valid. Their solution is to model the color and stretch distributions within the BHM framework. Our BBC method uses bias corrections to model the correct mean in small 5D bins.
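
Schematically (in our notation, not the exact form used in those analyses), the traditional ${\chi }_{\mathrm{HD}}^{2}$ and the corresponding full Gaussian negative log-likelihood differ only by the normalization term:

\[
\chi^2_{\rm HD} \;=\; \sum_i \frac{\big[\mu_i - \mu_{\rm model}(z_i)\big]^2}{\sigma_{\mu,i}^2},
\qquad
-2\ln\mathcal{L} \;=\; \sum_i \left\{ \frac{\big[\mu_i - \mu_{\rm model}(z_i)\big]^2}{\sigma_{\mu,i}^2} + 2\ln\sigma_{\mu,i} \right\} + {\rm const}.
\]

Because ${\sigma }_{\mu ,i}$ depends on α and β through the propagated SALT-II fit uncertainties, the $2\mathrm{ln}{\sigma }_{\mu ,i}$ term pulls directly on the nuisance parameters, consistent with the large α and β shifts seen in Table 6.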

Within the BBC framework, both the Gaussian normalization term and bias corrections are needed, and leaving out either results in a large bias, as shown in Table 6. The first two rows of Table 6 show the effect of including the Gaussian normalization term without bias corrections: the fitted α and β are ∼20% below their true values, and the w-bias is greater than +0.1. The next two rows show the effect of including bias corrections without the Gaussian normalization term: the fitted α and β are ∼30% above their true values, and the w-bias is almost −0.1. The last two rows show the effect of leaving out the Gaussian normalization and bias corrections: the fitted α and β are 5%–10% away from their true values and the w-bias is ∼0.04. It is interesting to note that leaving out either the Gaussian normalization or bias corrections results in large fit biases with opposite signs; leaving out both corrections results in some cancellation and significantly smaller biases. Using the BBC method results in much smaller biases, as shown in Table 4.

Table 6.  HD Fit Results from Incorrect Fitting Methods^a

Scatter Model | $\alpha /{\alpha }_{\mathrm{SIM}}$ | $\beta /{\beta }_{\mathrm{SIM}}$ | ${\sigma }_{\mathrm{int}}$ | w-bias
GN^b = yes  and  BiasCor = no
G10 | 0.787(3) | 0.798(1) | 0.103 | 0.118(4)
C11 | 0.787(3) | 0.653(1) | 0.120 | 0.126(4)
GN^b = no  and  BiasCor = yes
G10 | 1.281(4) | 1.269(2) | 0.066 | −0.091(4)
C11 | 0.931(4) | 1.332(3) | 0.088 | −0.085(4)
GN^b = no  and  BiasCor = no
G10 | 1.069(4) | 0.958(2) | 0.091 | 0.037(4)
C11 | 1.083(4) | 0.781(2) | 0.110 | 0.038(4)

Notes.

^a Averaged over $15$ simulated DES-SNIa samples without CC contamination. ^b GN = Gaussian normalization term in HD fit.


8.2. Redshift-dependent Bias Corrections

Some recent SN Ia cosmology results from large-survey teams (Betoule et al. 2014; Scolnic et al. 2014a) are based on the traditional ${\chi }_{\mathrm{HD}}^{2}$ fitting approach with no Gaussian normalization term, but they accounted for distance biases using simulations. Having recognized that this HD-fitting approach yields biased results, their strategy was to measure the average μ-bias (${\bar{\delta }}_{\mu }$) as a function of redshift by analyzing simulations with the same biased HD-fitting procedure (no Gaussian normalization term and no bias corrections). Each distance modulus in the data is then corrected as a function of redshift, $\mu \to \mu -{\bar{\delta }}_{\mu }(z)$, which we call a 1D BiasCor to distinguish it from the 5D BiasCor in the BBC method. The final cosmology fit is applied to the 1D-corrected distances.
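
A minimal sketch of such a 1D correction (Python; hypothetical array inputs, assuming every redshift bin is populated) measures ${\bar{\delta }}_{\mu }(z)$ from a simulation analyzed with the same biased procedure and subtracts it from the data:

import numpy as np

def make_1d_biascor(sim_z, sim_mu_fit, sim_mu_true, z_edges):
    # mean distance-modulus bias, fitted minus true, in each redshift bin
    dmu = sim_mu_fit - sim_mu_true
    ibin = np.digitize(sim_z, z_edges) - 1
    return np.array([dmu[ibin == k].mean() for k in range(len(z_edges) - 1)])

def apply_1d_biascor(data_z, data_mu, delta_mu, z_edges):
    # mu -> mu - \bar{delta}_mu(z), applied per redshift bin
    ibin = np.digitize(data_z, z_edges) - 1
    return data_mu - delta_mu[ibin]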

Under the unrealistic assumption that the correct α and β are used in the BiasCor simulation, the 1D BiasCor works as well as the 5D BBC method, but with 9% larger uncertainty on cosmological parameters. An important caveat is that previous analyses did not have a robust method for determining α and β, which are needed as inputs to the BiasCor simulation. Betoule et al. (2014) determined α and β from the traditional ${\chi }_{\mathrm{HD}}^{2}$ fit, and thus these values may be biased, as illustrated for DES in the last two rows of Table 6. Scolnic et al. (2014a, 2014b) used a similar approach and analyzed additional simulations to correct for the large β-bias in the C11 model.

To illustrate the impact of incorrect α and β, we generated a BiasCor simulation with $\alpha =0.15$ and $\beta =3.0$, which corresponds roughly to the biased values using the traditional ${\chi }_{\mathrm{HD}}^{2}$ approach (last two rows in Table 6). We use this BiasCor simulation to apply a 1D correction to the simulated data samples from Section 6 that were generated with the true parameter values ($\alpha =0.14$, $\beta =3.2$): the resulting w-bias is 0.016 ± 0.004, significantly larger than the 5D BBC result in the first row of Table 4.

The fundamental issue is not so much the 1D-versus-5D correction, but rather that previous analyses bias-corrected the distance ($\mu \to \mu -{\bar{\delta }}_{\mu }$) instead of correcting each fitted parameter (${m}_{B},{x}_{1},c$), as shown in Equation (11) for the BBC method. The ${\bar{\delta }}_{\mu }$ correction requires α and β to be predetermined, while the BBC method explicitly updates ${\bar{\delta }}_{\mu }$ at each minimization step as α and β are varied.
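
The distinction can be sketched as follows (Python; the bias_* interpolators over the BiasCor grid are hypothetical, and the full 5D grid is shown here as a function of $\{z,{x}_{1},c\}$ for brevity): the correction enters through the fitted SALT-II parameters in the Tripp relation, so the implied distance-bias correction changes whenever α and β change during the minimization.

def bias_corrected_mu(mB, x1, c, z, alpha, beta, M0, bias_mB, bias_x1, bias_c):
    # subtract the simulated fitted-minus-true bias from each SALT-II parameter
    mB_corr = mB - bias_mB(z, x1, c)
    x1_corr = x1 - bias_x1(z, x1, c)
    c_corr  = c  - bias_c(z, x1, c)
    # Tripp relation with the corrected parameters: as alpha and beta vary in
    # the fit, the effective distance-bias correction varies with them
    return mB_corr + alpha * x1_corr - beta * c_corr - M0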

It would be interesting to apply the BBC method to existing data samples in order to measure α and β, and to check for potential errors on the 1D distance-bias corrections and cosmological parameters. However, evaluating the $\alpha ,\beta $ bias in previous analyses is beyond the scope of this work.

9. Discussion and Conclusion

The BBC fit has been implemented within the existing SALT2mu program; it is publicly available in SNANA, and ready to use on real data samples for which reliable simulated samples are available. This new program has been rigorously tested on large simulated data samples (∼150,000 events) with different models of intrinsic scatter and different CC rates. The CPU time for each BBC fit is less than 10 minutes for 10,000 events, and this time scales with the number of events. A BiasCor simulation sample of 500,000 events ($z\lt 1.2$) takes about 50 CPU hours to create,17 and a few hours are needed to generate a Ia+CC sample and perform the NN training. Below we describe some unresolved issues and areas for improvement.

The first unresolved issue is how to determine subsamples for which the BiasCor simulation is run. The deep and shallow fields in DES-SN, with a 1 mag difference in search depth, are a rather obvious choice for separating the samples. However, for a survey with a fixed exposure time there are weather variations that change the search depth and edge effects that truncate light curves; it is thus not clear if or how the BiasCor simulation should be split into multiple samples. Our current method for splitting samples is essentially trial-and-error. In this analysis, for example, we started with shallow- and deep-field samples combined into a single BiasCor simulation and found w-biases in the range of 0.01–0.02; generating separate BiasCor simulations for shallow and deep fields reduced the w-bias to below 0.01.

Most previous cosmology likelihoods are based on summing a contribution from each event, where the size of the covariance matrix (${ \mathcal C }$) grows as ${N}_{\mathrm{data}}^{2}$. As data samples increase into the tens of thousands, this would likely pose computational challenges in both memory and fitting speed. The BBC approach results in a z-binned cosmology likelihood (Equation (15)) that sums over ${N}_{z}$ redshift bins. The corresponding covariance matrix (${ \mathcal C }$) has a manageable and fixed size of ${N}_{z}\times {N}_{z}$ and should not introduce computational problems for large data sets. A z-binned cosmology fit, including systematic uncertainties, has recently been demonstrated in Appendix E of Betoule et al. (2014). However, further testing is needed to validate a z-binned ${ \mathcal C }$ within the BBC framework.
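
A schematic of such a z-binned fit statistic (Python; not necessarily the exact form of Equation (15)) illustrates why the covariance stays small: the data vector has only ${N}_{z}$ entries regardless of the number of SNe.

import numpy as np

def binned_chi2(mu_bin, mu_model_bin, cov):
    # mu_bin and mu_model_bin have length N_z; cov is the N_z x N_z covariance
    resid = mu_bin - mu_model_bin
    return resid @ np.linalg.solve(cov, resid)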

Perhaps the most worrisome long-term issue is whether intrinsic scatter can be adequately described by the ${\sigma }_{\mathrm{int}}$ term, or even a more complex 3 × 3 intrinsic scatter matrix (${\rm{\Sigma }}$) in the space of $\{{m}_{B},{x}_{1},c\}$. The BBC and BHM methods can be enhanced to fit for additional ${\rm{\Sigma }}$ parameters, but the sensitivity is unclear, and we do not know what range of wavelength-dependent and time-dependent variations can reasonably be captured by ${\rm{\Sigma }}$. In our simulation tests, the G10 and C11 scatter models are implemented with spectral variations, with no analytical assumption about ${\rm{\Sigma }}$, and the ${\sigma }_{\mathrm{int}}$ description in the BBC fit works surprisingly well, resulting in a w-bias below 0.01. Continued tests of this nature will be essential to validate the simplified intrinsic scatter model in the Hubble fitting likelihood. It is worth noting here that the ABC method (Weyant et al. 2013; Jennings et al. 2016) has no likelihood and can therefore accommodate an arbitrarily complex model of intrinsic scatter.

Another issue is that uncertainties on the assumed cosmology in the BiasCor simulation introduce an additional w-uncertainty of ${\sigma }_{w}/7$ for the DES-SN sample (Section 6 and Figure 7). This finding raises the prospect that the highest-redshift events could add a BiasCor uncertainty that exceeds the statistical-uncertainty reduction they provide. This bias has been ignored in all previous analyses that use a BiasCor simulation, although our analysis suggests that the impact is small compared to the total uncertainty.

An obvious solution is to expand the dimensionality of the BiasCor grid to include cosmology parameters, and to include these cosmology parameters in the BBC fit. The downside of this solution is that external HD fitting codes cannot be used, and as explained above, a large (${N}_{\mathrm{data}}^{2}$) covariance matrix would be needed for systematic uncertainties. To preserve the feature of creating a binned HD for external fitting codes, another solution is to generate BiasCor samples for a grid of cosmological parameters (${\boldsymbol{C}}$) such as ${\boldsymbol{C}}=\{w,{{\rm{\Omega }}}_{{\rm{M}}}\}$ or ${\boldsymbol{C}}=\{{w}_{0},{w}_{a},{{\rm{\Omega }}}_{{\rm{M}}}\}$, where $w(a)={w}_{0}+{w}_{a}(1-a)$ and a is the scale factor for the expanding universe. Generating a BiasCor grid with two values per cosmology parameter requires ${2}^{2}=4$ and ${2}^{3}=8$ BiasCor simulations for the two models, respectively, which only requires a few hundred CPU hours. Arbitrary cosmology fitting programs could interpolate the BBC-fitted ${{\rm{\Delta }}}_{\mu ,z}$ as a function of ${\boldsymbol{C}}$, with the limitation of using only the cosmological model defined by ${\boldsymbol{C}}$. While a truly model-independent output from BBC is desired, it is not clear how to achieve this.
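
A sketch of this grid-based approach (Python; run_bbc_with_biascor is a hypothetical callable that generates a BiasCor sample at the given cosmology and returns the BBC binned HD, and the grid-node values are illustrative) produces one binned HD per node, which an external fitter could then interpolate as a function of ${\boldsymbol{C}}$:

import itertools
import numpy as np

def build_binned_hd_grid(run_bbc_with_biascor, grid=None):
    # two values per cosmology parameter, e.g., 2^2 = 4 nodes for {w, OM}
    if grid is None:
        grid = {'w': (-1.2, -0.8), 'OM': (0.25, 0.35)}
    nodes, binned_hds = [], []
    for values in itertools.product(*grid.values()):
        cosmo = dict(zip(grid.keys(), values))
        nodes.append(cosmo)
        binned_hds.append(run_bbc_with_biascor(cosmo))  # Delta_{mu,z} per z bin
    return nodes, np.array(binned_hds)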

We mention another possible solution to the cosmology-dependent bias correction, based on a BHM framework. In the UNITY method of Rubin et al. (2015), the authors avoid simulations and instead characterize the efficiency as an ad hoc function of the SALT-II parameters. The efficiency dependence on mB should include the cosmology dependence. It is not clear, however, what uncertainties are introduced by the efficiency function.

A number of enhancements to BBC are feasible with minor code changes. Additional fitted parameters in the CC component would add flexibility, such as including a redshift-dependent ${\sigma }_{\mathrm{int}}$ parametrization, and allowing the Gaussian mean and width in each redshift bin (Figure 6) to be adjusted by a polynomial function of redshift. Fitted polynomial coefficients different from zero would indicate a discrepancy between the data and CC simulation. Instead of using the simulated CC map, redshift-dependent parametrizations in Jones et al. (2016) can also be implemented. The SN Ia likelihood could be enhanced with additional parameters to describe host-galaxy correlations, redshift-dependent SN properties, and quadratic stretch and color terms in the Tripp equation. Last, the interpolated bias corrections are linear between the nearest 5D grid nodes where the biases are defined. Spline interpolation may work better, as long as the CPU time and memory are not significantly increased.
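
As one example of the added flexibility described above (a sketch under our own parametrization, not an existing SNANA option), the simulated CC Gaussian mean and width in each redshift bin could be shifted by low-order polynomials in redshift, with the coefficients as extra fit parameters that vanish when the simulation describes the data:

import numpy as np

def adjusted_cc_gaussian(z, mean_sim, width_sim, mean_coeffs, width_coeffs):
    # polynomial shifts (highest-order coefficient first); all-zero
    # coefficients recover the simulated CC map unchanged
    mean  = mean_sim  + np.polyval(mean_coeffs,  z)
    width = width_sim + np.polyval(width_coeffs, z)
    return mean, width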

The population parameters describing the asymmetric color and stretch distributions have been determined in a separate analysis (SK16). However, it may be possible to include these parameters in the BBC fit by multiplying the likelihood by a simulated probability characterizing the number of events in stretch and color bins. The downside of this enhancement is that the SALT-II parameter biases cannot be determined before the fit, which could result in a significant increase in CPU time. While a simultaneous fit of all parameters is an attractive goal, it may therefore be more computationally efficient to iteratively evaluate the population parameters separately from the BBC fit.

In summary, we have presented a new SN Ia cosmology-fitting method, BBC, which uses a large simulation to account for biases from sample selection, light-curve fitting, and CC contamination. Analyzing nearly a million simulated supernovae, we find that the BBC method introduces a w-bias below 0.01. A proper cosmology analysis, however, should characterize and account for uncertainties on the BiasCor and CC simulations.

This work was supported in part by the Kavli Institute for Cosmological Physics at the University of Chicago through grant NSF PHY-1125897 and an endowment from the Kavli Foundation and its founder Fred Kavli. R.K. is supported by DOE grant DE-AC02-76CH03000. D.S. is supported by NASA through Hubble Fellowship grant HST-HF2-51383.001 awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS 5-26555.

We gratefully acknowledge support from NASA grant 14-WPS14-0048. This manuscript is based upon work supported by the National Aeronautics and Space Administration under Contract No. NNG16PJ34C issued through the WFIRST Science Investigation Teams Program.

While thanking the anonymous referee is good common practice, we are exceptionally grateful to our referee and feel fortunate to have received valuable contributions.

Footnotes

  • BEAMS: "Bayesian Estimation Applied to Multiple Species" (Kunz et al. 2007).

  • $SNDATA_ROOT/sample_input_files/KS2016

  • See Table 18, column with $m\lt 24$ and ${\kappa }_{\mathrm{Ia}}=0.5$.

  • We use the term data here even though it is a simulated data sample, since the NN procedure is the same with real data.

  • 10  One of the training samples serves as an independent data sample to avoid statistical anomalies from self-training.

  • 11  CC contamination is defined as the fraction of events that are true CC.

  • 12  See Equation (2) in H12.

  • 13  Note that ${S}_{\mathrm{CC}}\ne 1$ implies that ${P}_{\mathrm{tot}}\ne 1$ (Equation (7)), but the overall normalization, $\int {{ \mathcal L }}_{i}d\mu =1$, is always satisfied.

  • 14  We note that BHM methods determine the ${\sigma }_{\mathrm{int}}$ uncertainty as part of the fit, and Betoule et al. (2014) used a restricted log-likelihood technique to include uncertainties in their measurement of ${\sigma }_{\mathrm{int}}$ as a function of redshift (see their REML discussion).

  • 15

  • 16  Be aware that ${\chi }_{\mathrm{red}}^{2}$ is not computed from the BBC likelihoods.

  • 17  The BiasCor CPU breakdown is 10 hr for SN Ia generation, and almost 40 hr for light-curve fitting.
