A&A, Volume 631, November 2019

Article Number A161
Number of page(s) 24
Section Cosmology (including clusters of galaxies)
DOI https://doi.org/10.1051/0004-6361/201935370
Published online 15 November 2019
Strongly lensed SNe Ia in the era of LSST: observing cadence for lens discoveries and time-delay measurements
^{1} Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, 85741 Garching, Germany
e-mail: shuber@mpa-garching.mpg.de
^{2} Physik-Department, Technische Universität München, James-Franck-Straße 1, 85748 Garching, Germany
^{3} Institute of Astronomy and Astrophysics, Academia Sinica, 11F of ASMAB, No. 1, Section 4, Roosevelt Road, Taipei 10617, Taiwan
^{4} Munich Re, IT1.6.1.1, Königinstraße 107, 80802 München, Germany
^{5} Institute of Physics, Laboratory of Astrophysics, École Polytechnique Fédérale de Lausanne (EPFL), Observatoire de Sauverny, 1290 Versoix, Switzerland
^{6} Department of Electrical Engineering and Computer Science, University of California, 253 Cory Hall, Berkeley, CA 94720-1770, USA
^{7} Department of Physics and Astronomy, Rutgers University, Piscataway, NJ 08854, USA
^{8} Zentrum für Astronomie der Universität Heidelberg, Institut für Theoretische Astrophysik, Philosophenweg 12, 69120 Heidelberg, Germany
^{9} Heidelberger Institut für Theoretische Studien, Schloss-Wolfsbrunnenweg 35, 69118 Heidelberg, Germany
^{10} Kavli Institute for Particle Astrophysics and Cosmology, PO Box 20450, MS 29, Stanford, CA 94309, USA
^{11} Research Center for the Early Universe, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan
^{12} Department of Physics, University of Tokyo, Tokyo 113-0033, Japan
^{13} Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU, WPI), The University of Tokyo, Chiba 277-8582, Japan
^{14} LSST, 933 N. Cherry Ave., Tucson, AZ 85721, USA
Received: 26 February 2019
Accepted: 26 August 2019
The upcoming Large Synoptic Survey Telescope (LSST) will detect many strongly lensed Type Ia supernovae (LSNe Ia) for time-delay cosmography. This will provide an independent and direct way of measuring the Hubble constant H_{0}, which is necessary to address the current 4.4σ tension in H_{0} between the local distance ladder and early-Universe measurements. We present a detailed analysis of different observing strategies (also referred to as cadence strategies) for the LSST, and quantify their impact on time-delay measurements between multiple images of LSNe Ia. For this, we simulated observations using mock LSNe Ia, for which we produced mock LSST light curves that account for microlensing. Furthermore, we used the free-knot splines estimator from the software PyCS to measure the time delays from the simulated observations. We find that using only LSST data for time-delay cosmography is not ideal. Instead, we advocate using LSST as a discovery machine for LSNe Ia, enabling time-delay measurements from follow-up observations with other instruments, which would increase the number of systems by a factor of 2−16 depending on the observing strategy. Furthermore, we find that LSST observing strategies that provide a good sampling frequency (a mean internight gap of around two days) and a long cumulative season length (ten seasons with a season length of around 170 days each) are favored. Rolling cadences subdivide the survey and focus on different parts in different years; these observing strategies trade the number of seasons for better sampling frequency. In our investigation this leads to half the number of systems compared with the best observing strategy. Rolling cadences are therefore disfavored, because the gain from the increased sampling frequency cannot compensate for the shortened cumulative season length.
We anticipate that the sample of lensed SNe Ia from our preferred LSST cadence strategies with rapid followup observations would yield an independent percentlevel constraint on H_{0}.
Key words: gravitational lensing: strong / gravitational lensing: micro / supernovae: general / surveys / cosmological parameters / cosmology: observations
© S. Huber et al. 2019
Open Access article, published by EDP Sciences, under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Open Access funding provided by Max Planck Society.
1. Introduction
The Hubble constant (H_{0}) is one of the key parameters describing the Universe. Current observations of the cosmic microwave background (CMB) imply H_{0} = 67.36 ± 0.54 km s^{−1} Mpc^{−1}, assuming a flat Λ cold dark matter (ΛCDM) cosmology and the standard model of particle physics (Planck Collaboration I 2019). This is in tension with H_{0} = 74.03 ± 1.42 km s^{−1} Mpc^{−1}, which is measured from the local distance ladder (Riess et al. 2016, 2018, 2019). In order to verify or refute this 4.4σ tension, independent methods are needed.
One such method is lensing time-delay cosmography, which can determine H_{0} in a single step. The basic idea is to measure the time delays between multiple images of a strongly lensed variable source (Refsdal 1964). This time delay, in combination with reconstructions of the lens mass distribution and line-of-sight mass structure, directly yields a "time-delay distance", which is inversely proportional to H_{0} (i.e., D_{Δt} ∝ H_{0}^{−1}). While the time-delay distance primarily constrains H_{0}, it also provides information on other cosmological parameters (e.g., Linder 2011; Jee et al. 2016; Shajib et al. 2018; Grillo et al. 2018). Applying this method to four lensed quasar systems, the H0LiCOW collaboration^{1} (Suyu et al. 2017) together with the COSMOGRAIL collaboration^{2} (Eigenbrod et al. 2005; Courbin et al. 2018; Bonvin et al. 2018) measured H_{0} = 73.3^{+1.7}_{−1.8} km s^{−1} Mpc^{−1} in flat ΛCDM (Birrer et al. 2019), which is in agreement with the measurements using the local distance ladder, but larger than the CMB measurements.
Another promising approach goes back to the initial idea in Refsdal (1964), which uses lensed supernovae (LSNe) instead of quasars for time-delay cosmography. So far, only two LSNe with resolved multiple images have been observed. The first one, called SN "Refsdal" and discovered by Kelly et al. (2016a,b), was a 1987A-like Type II SN strongly lensed by the galaxy cluster MACS J1149.5+2223. As shown in Grillo et al. (2018), with SN Refsdal one can measure H_{0} with a 1σ statistical error of 7%. The second LSN with resolved images is iPTF16geu, reported by Goobar et al. (2017) from the intermediate Palomar Transient Factory (iPTF). The system is a SN Ia at redshift 0.409, strongly lensed by an intervening galaxy at a redshift of 0.216. Strong lens mass models of the system from More et al. (2017) yield SN image fluxes that are discrepant with the observations, which might be partly an effect of microlensing (Yahalomi et al. 2017; Foxley-Marrable et al. 2018; Dhawan et al. 2019). Additionally, Mörtsell et al. (2019) show that the flux anomalies are within stellar microlensing predictions for certain values of the slope of the projected surface density of the lens galaxy. The models in More et al. (2017) and Goobar et al. (2017) also predict very short time delays (≈0.5 d) that can thus be significantly biased by a microlensing time delay (Bonvin et al. 2019a). It is therefore important to include microlensing in LSNe studies.
Even though the number of LSNe is a factor of approximately 60 (Oguri & Marshall 2010) lower than the number of lensed quasars, there are important advantages in using LSNe for measuring time delays. First, if they are observed before peak, the characteristic SN light curves make time-delay measurements easier and possible on shorter time scales in comparison to stochastically varying quasars. Second, SN images fade away with time, which facilitates measurements of the lens stellar kinematics and therefore enables the combination of dynamics (Barnabè et al. 2011; Yıldırım et al. 2017; Shajib et al. 2018) and lens mass modeling. This helps to overcome degeneracies like the mass-sheet degeneracy (Falco et al. 1985; Schneider & Sluse 2014). The intrinsic luminosity of the source provides another way of breaking the mass-sheet degeneracy. Since SNe Ia are standardizable candles, LSNe Ia are very promising for breaking the model degeneracies in two independent ways.
Even though only two LSNe with resolved images are currently known, the Large Synoptic Survey Telescope (LSST) will play a key role in detecting many more. From investigations by Oguri & Marshall (2010), assuming detections based on image multiplicity, we expect to find 45 LSNe Ia over the ten-year survey. A different approach, using strong-lensing magnification for detection (Goldstein & Nugent 2017; Goldstein et al. 2018), leads to 500−900 LSNe Ia in ten years (see also Quimby et al. 2014). The differences in the expected number of LSNe Ia arise from different assumptions about the limiting magnitude and cumulative season length, as pointed out by Wojtak et al. (2019). A remaining question, however, is how many of the detected systems are valuable for measuring time delays, and whether it will be possible to measure time delays with just the LSST data. The LSST cadence strategy (Marshall et al. 2017) will be defined soon, and the goal of this paper is to evaluate different cadences for our science case of measuring time delays in LSNe Ia. For this purpose, we have investigated 20 different observing strategies. We used mock LSNe Ia from the Oguri and Marshall (OM10) catalog (Oguri & Marshall 2010) to simulate observations, and produced the light curves of the mock SN images based on synthetic observables calculated with Applied Radiative Transfer In Supernovae (ARTIS; Kromer & Sim 2009) for the spherically symmetric SN Ia model W7 (Nomoto et al. 1984). Furthermore, we employed magnification maps from GERLUMPH (Vernardos et al. 2015) to include the effects of microlensing, similar to the approach followed by Goldstein et al. (2018). We then simulated data points for the light curves following the observational sequence of the different cadences, with uncertainties according to the LSST Science Book (LSST Science Collaboration 2009). We used the free-knot splines estimator from Python Curve Shifting (PyCS; Tewes et al. 2013; Bonvin et al. 2016) to measure the time delays from the simulated observations.
The structure of the paper is as follows. In Sect. 2 we present a theoretical calculation of microlensing on LSNe Ia. In Sect. 3 we introduce relevant information about LSST and the different observing strategies investigated in this work. In Sect. 4 mock light curves of LSNe Ia are simulated, and the time-delay measurement used to quantify the different LSST observing strategies is described in Sect. 5. The results are presented in Sect. 6 before we conclude in Sect. 7. Throughout this paper, magnitudes are given in the AB system.
2. Microlensing on Type Ia supernovae
In this section we describe the calculation of microlensed SN Ia light curves, combining magnification maps and a theoretical SN Ia model. The relevance of microlensing for LSNe Ia has been shown theoretically by Dobler & Keeton (2006), Goldstein et al. (2018), and Bonvin et al. (2019a), and, as mentioned before, the first detected LSN Ia shows discrepancies between models and observations, which might be partly due to microlensing (More et al. 2017; Yahalomi et al. 2017; Foxley-Marrable et al. 2018). Therefore, to simulate more realistic light curves of LSNe Ia, we included microlensing in our studies. In Sect. 2.1 magnification maps are described, and Sect. 2.2 explains the radiative transfer code ARTIS used to calculate synthetic observables. In addition, the projection of the 3D simulation output to 1D is discussed, including the geometrical delay as described by Bonvin et al. (2019a). In Sect. 2.3 a comprehensive derivation of microlensed light curves of SNe Ia is presented.
2.1. Magnification maps for microlensing
Microlensing is the effect of additional magnification or demagnification caused by stars, or other compact objects with comparable properties, in the lensing galaxy. We used magnification maps based on GERLUMPH (Vernardos et al. 2015; Chan et al., in prep.) to model the effect of microlensing on a SN Ia. These maps are created using the inverse ray-shooting technique (e.g., Kayser et al. 1986; Wambsganss et al. 1992; Vernardos & Fluke 2013) and are pixelated maps containing magnification factors μ at the source plane. The three main parameters of the maps are the convergence κ, the shear γ, and the smooth matter component s, which is defined as the ratio of the smooth matter convergence κ_{s} to the total convergence κ. For simplicity, we assumed s = 0.6 in our investigation. Estimated s values at the image positions of galaxy-scale lenses typically vary between 0.3 and 0.8 (e.g., Schechter et al. 2014; Chen et al. 2018; Bonvin et al. 2019b) and therefore cover a much broader range. Nevertheless, Goldstein et al. (2018) investigated a few different s values and found that the effect of microlensing on LSNe Ia depends more on the spatial distribution of the radiation than on the precise s value. Even though we slightly over- or underestimate the microlensing effect (depending on the mock lens system) by fixing s in our work, this is done in the same way for all cadence strategies investigated here, thus leaving the overall message unchanged. A further investigation of different s values will be presented in Huber et al. (in prep.). The Einstein radius R_{Ein} is the characteristic scale of the map at the source plane, defined as

R_{Ein} = (4G⟨M⟩/c^{2} × D_{s}D_{ds}/D_{d})^{1/2}. (1)
We assume a Salpeter initial mass function (IMF) with a mean mass of the point-mass microlenses of ⟨M⟩ = 0.35 M_{⊙}. The details of the IMF are not relevant for our studies (Chan et al., in prep.). The angular diameter distances D_{s}, D_{d}, and D_{ds} are measured from us to the source, from us to the lens, and between the lens and the source, respectively. If we assume a flat ΛCDM cosmology and neglect the contribution of radiation, we can calculate the angular diameter distance via

D_{A}(z_{1}, z_{2}) = c/(H_{0}(1 + z_{2})) ∫_{z_{1}}^{z_{2}} dz′ [Ω_{m}(1 + z′)^{3} + Ω_{Λ}]^{−1/2}. (2)
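As a minimal numerical sketch of Eqs. (1) and (2), the following snippet evaluates the Einstein radius for an iPTF16geu-like configuration (lens redshift 0.216, source redshift 0.409; see Sect. 1) and the map pixel size of Eq. (3). The cosmological parameters (H_{0} = 70 km s^{−1} Mpc^{−1}, Ω_{m} = 0.3) and all function names are our assumptions, not taken from the paper.

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458     # speed of light [km/s]
H0 = 70.0               # Hubble constant [km/s/Mpc] (assumed)
OM, OL = 0.3, 0.7       # flat LambdaCDM (assumed)
PC_PER_MPC = 1.0e6
KM_PER_PC = 3.0857e13

def E(z):
    """Dimensionless Hubble parameter; radiation neglected (Eq. 2)."""
    return np.sqrt(OM * (1.0 + z) ** 3 + OL)

def D_A(z1, z2):
    """Angular diameter distance between z1 and z2 [Mpc], flat universe."""
    integral, _ = quad(lambda z: 1.0 / E(z), z1, z2)
    return C_KM_S / H0 * integral / (1.0 + z2)

# iPTF16geu-like configuration (Sect. 1): lens z = 0.216, source z = 0.409.
zd, zs = 0.216, 0.409
Dd, Ds, Dds = D_A(0.0, zd), D_A(0.0, zs), D_A(zd, zs)

# Eq. (1): R_Ein = sqrt(4 G <M> / c^2 * Ds * Dds / Dd) with <M> = 0.35 Msun;
# 4 G Msun / c^2 = 5.906 km (twice the solar Schwarzschild radius).
r_grav_pc = 5.906 * 0.35 / KM_PER_PC
R_Ein_pc = np.sqrt(r_grav_pc * Ds * Dds / Dd * PC_PER_MPC)

# Eq. (3): pixel size of a 10 R_Ein x 10 R_Ein map with 20000^2 pixels.
delta_d_mag = 10.0 * R_Ein_pc / 20000.0

print(f"R_Ein = {R_Ein_pc:.1e} pc, map pixel = {delta_d_mag:.1e} pc")
```

With these assumed parameters the script reproduces R_{Ein} ≈ 7.2 × 10^{−3} pc, the value quoted for the example map in this section.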
Our maps have a resolution of 20 000 × 20 000 pixels and a total size of 10R_{Ein} × 10R_{Ein}. Therefore the size of one square pixel of the magnification map is

Δd_{mag} = 10R_{Ein}/20 000 = R_{Ein}/2000. (3)
For the simulated LSST LSNe Ia in Sect. 4, the size of these microlensing maps ranges from 4.12 × 10^{−2} pc to 2.70 × 10^{−1} pc, with a median of 1.02 × 10^{−1} pc. As an example, a magnification map for κ = 0.6 and γ = 0.6 is shown in Fig. 1, where R_{Ein} = 7.2 × 10^{−3} pc = 2.2 × 10^{16} cm, assuming an iPTF16geu-like configuration.
Fig. 1.
Example magnification map for κ = 0.6, γ = 0.6, and s = 0.6. The color scheme illustrates the different magnification factors μ at the source plane as a function of the x and y coordinates. Many micro-"caustics" are visible, separating regions of high and low magnification.

2.2. Theoretical SNe Ia model and the 1D projection
To combine magnification maps with SNe Ia, we adopt a similar approach to Goldstein et al. (2018), where the spherically symmetric W7 model (Nomoto et al. 1984) and the Monte Carlo-based radiative transfer code SEDONA (Kasen et al. 2006) were used.
For our analysis, we also rely on the W7 model, but calculate synthetic observables with the radiative transfer code ARTIS (Kromer & Sim 2009), which stands for Applied Radiative Transfer In Supernovae and is a Monte Carlo-based code that solves the frequency- and time-dependent radiative transfer problem in 3D. Thus, ARTIS is not a deterministic solution technique, where the radiative transfer equation is discretized and solved numerically, but a probabilistic approach in which the radiative transfer process is simulated by a large number of Monte Carlo packets, whose propagation is tracked based on the methods developed by Lucy (1999, 2002, 2003, 2005). In this procedure, γ-ray photon packets from the radioactive decay of ^{56}Ni to ^{56}Co and the successive decay of ^{56}Co to ^{56}Fe are converted into UVOIR (ultraviolet-optical-infrared radiation) packets, which are then treated with the full Monte Carlo radiative transport procedure. In the propagation of UVOIR packets, bound-free, free-free, and especially bound-bound processes are taken into account. Once a packet escapes from the SN ejecta and the computational domain (which we refer to as the simulation box), the position x where it escapes the simulation box, the time t_{e} when it leaves, and the propagation direction n are stored, in addition to the energy and frequency. For the spherically symmetric ejecta, the interaction of a photon packet stops after it leaves the ejecta surface, which is in general before it hits the simulation box. For an illustration of two photon packets leaving the simulation box in the same direction, see Fig. 2.
Fig. 2.
Slice through the spherically symmetric SN enclosed in the 3D simulation box, illustrating the 1D projection of the SN and the definition of the observer time in Eq. (5).

Typically one is interested in spectra and light curves to compare observations with theoretical models. To obtain this information from numerical simulations, all escaping packets have to be binned in frequency and time, and additionally in solid angle for asymmetric models. Since the microlensing effect depends on the location of the source, as shown in Fig. 1, spatial information on the SN is needed as well. Therefore, we have to project the 3D SN onto a 2D plane perpendicular to the observer's line of sight and obtain the specific intensity as a function of wavelength, time, and spatial coordinates x and y. Throughout this work we assume that SNe Ia can be treated as spherically symmetric, so no binning in solid angle is necessary. While this is exact for an inherently 1D model like W7, and a good approximation for multidimensional simulations that lead to nearly spherically symmetric ejecta, like some delayed detonations (Seitenzahl et al. 2013) and sub-Chandrasekhar detonations (Sim et al. 2010), it is questionable for models that lead to strongly asymmetric ejecta, like the violent merger (Pakmor et al. 2011, 2012).
In the 1D case, the spatial dependency of the specific intensity reduces to a dependency on the impact parameter p, that is, the projected distance from the ejecta center. To construct this, we consider a plane containing the position x, where a photon packet has left the 3D simulation box, and the propagation direction n. This is illustrated in Fig. 2 for two packets leaving at different positions but propagating in the same direction. Because of the vast distance of the SN, the observer is defined as a plane perpendicular to n. The radial coordinate where the photon leaves the box is r = |x|, and the angle θ between the position vector x and the propagation direction n is given by cos θ = x · n/(|x||n|). Then, the impact parameter is defined as

p = r sin θ. (4)
From Fig. 2 we see that different photon packets, leaving the box at different positions but at the same time after explosion t_{e}, will reach the observer at different times. If we assume that the orange packet from Fig. 2 reaches the observer at time t′ and the blue packet at time t, we can relate both times via t = t′ + (d′ − d)/c, where d = r cos θ and d′ = |x′|. The time when the orange packet reaches the observer can be expressed as t′ = t_{e} + C, where C is a constant defining the light travel time from the simulation box to the observer for the orange packet. From this we can write t = t_{e} + C + (d′ − d)/c. Since the comparison to real observations is always performed relative to a maximum in a chosen band, we are only interested in relative times. Therefore we can simplify the equation for t by defining a reference plane at the center of the SN, perpendicular to the propagation direction n (red dashed line). For this reference plane the constant terms drop out, which leads to the observer time

t = t_{e} − (r cos θ)/c, (5)

as defined in Lucy (2005), which accounts for the geometrical delay described in Bonvin et al. (2019a). We refer to the observer time t as the time since explosion. With the definition of the time t and the impact parameter p, the energy is binned in these two quantities^{3} as well as in wavelength λ. The emitted specific intensity can then be calculated via

I_{λj, e}(t_{i}, p_{k}) = ΔE_{ijk}/(4π Δt_{i} Δλ_{j} ΔA_{k}), with ΔA_{k} = π(p_{k+1}^{2} − p_{k}^{2}), (6)
where the factor 4π is needed as a normalization over the unit sphere.
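The projection of Eqs. (4) and (5) can be sketched as follows. The helper name `project_packet` and the packet positions are illustrative assumptions, not actual ARTIS output.

```python
import numpy as np

C_CM_S = 2.99792458e10  # speed of light [cm/s]

def project_packet(x, n, t_e):
    """Return (p, t) for a packet leaving position x [cm] at time t_e [s]
    after explosion, propagating along direction n."""
    x = np.asarray(x, dtype=float)
    n = np.asarray(n, dtype=float)
    n /= np.linalg.norm(n)                 # make n a unit vector
    r = np.linalg.norm(x)                  # radial escape coordinate r = |x|
    cos_theta = np.dot(x, n) / r           # angle between x and n
    p = r * np.sqrt(1.0 - cos_theta**2)    # Eq. (4): p = r sin(theta)
    t = t_e - r * cos_theta / C_CM_S       # Eq. (5): geometrical delay
    return p, t

t_e = 10 * 86400.0  # 10 days after explosion [s]
# Packet 1 escapes along n, ahead of the reference plane -> arrives early.
p1, t1 = project_packet([1e15, 0.0, 0.0], [1.0, 0.0, 0.0], t_e)
# Packet 2 escapes perpendicular to n -> p = r and no geometrical delay.
p2, t2 = project_packet([0.0, 1e15, 0.0], [1.0, 0.0, 0.0], t_e)
print(p1, t1 - t_e)   # p = 0, negative time offset
print(p2, t2 - t_e)   # p = 1e15 cm, zero offset
```

The two test packets mirror the configuration of Fig. 2: equal escape times t_{e}, but different projections onto the propagation direction.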
2.3. Microlensed flux of SNe Ia
To calculate microlensed light curves, one first has to determine the observed spectral flux of a SN, which can be calculated for a source of angular size Ω_{0} on the sky as

F_{λ, o} = ∫_{Ω_{0}} dΩ I_{λ, o} cos θ_{p}. (7)
Here I_{λ, o} is the specific intensity at the position of the observer. In Fig. 3 a spherical source (gray disk) is placed perpendicular to the line of sight at θ_{p} = 0. The disk represents the projected emitted SN specific intensity I_{λ, e}. Since the source size is much smaller than the angular diameter distance D_{A} to the source, we can use the small-angle approximations tan θ_{p} ≈ θ_{p} = p/D_{A} and cos θ_{p} ≈ 1, which means that we assume parallel light rays. Therefore dΩ = p dp dϕ/D_{A}^{2}, and the spectral flux can be expressed as

F_{λ, o} = (1/D_{A}^{2}) ∫_{0}^{2π} dϕ ∫_{0}^{p_{S}} dp p I_{λ, o}, (8)
Fig. 3.
SN projected onto disk perpendicular to line of sight to observer. The center of the disk with radius p_{S} is placed at θ_{p} = 0 at an angular diameter distance of D_{A} from the observer. 

where p_{S} is the source radius of the projected disk. The next step is to relate the specific intensity at the observer's position to that at the source position. Hereby, we have to take into account that the specific intensity is redshift dependent. According to Liouville's theorem, I_{ν}/ν^{3} is Lorentz invariant (Mihalas & Mihalas 1984, p. 414) and therefore I_{λ} ∝ λ^{−5}. Since the emitted wavelength λ_{e} can be related to the observed one, λ_{o}, via λ_{o} = λ_{e}(1 + z), we find that I_{λ, o} = I_{λ, e}/(1 + z)^{5}. Therefore, by using D_{L} = (1 + z)^{2}D_{A}, the spectral flux reduces to

F_{λ, o} = 1/((1 + z)D_{L}^{2}) ∫_{0}^{2π} dϕ ∫_{0}^{p_{S}} dp p I_{λ, e}. (9)
To add the effect of microlensing, I_{λ, e} has to be replaced with μI_{λ, e}, which is possible since lensing conserves surface brightness. The value μ is the microlensing magnification^{4} as a function of ϕ and p. Therefore we get

F_{λ, o} = 1/((1 + z)D_{L}^{2}) ∫_{0}^{2π} dϕ ∫_{0}^{p_{S}} dp p μ(p, ϕ) I_{λ, e}. (10)
We note that this equation is in agreement with Hogg et al. (2002) and Goldstein et al. (2018), where in the latter the flux is calculated in the supernova frame (F_{λ, e}) instead of the observer frame (F_{λ, o}).
The projected specific intensity inferred from simulations is a discrete function in time, wavelength, and impact parameter, denoted as I_{λj, e}(t_{i}, p_{k}). Because of the spherical symmetry of W7, it has just a 1D radial dependency, whereas the magnification map is obtained on a 2D cartesian grid. To combine both quantities as needed in Eq. (10), it is necessary to transform one of the two discrete quantities into the other coordinate system. We choose to interpolate the specific intensity onto a 2D cartesian grid:

I_{λj, e}(t_{i}, p_{k}) → I_{λj, e}(t_{i}, x_{l}, y_{m}). (11)
For this, we construct a cartesian grid with a pixel size Δx = Δy ≡ Δd_{mag}. To get accurate results, Δp ≳ Δd_{mag} is required, but to save computational memory we restrict ourselves to

Δd_{mag} = Δp. (12)
As the SN Ia ejecta expand, Δp grows. Since Δd_{mag} is a fixed quantity defined by Eq. (3), we interpolate the magnification map to a finer or coarser grid to fulfill the criterion in Eq. (12), using the Python library scipy^{5} (Jones et al. 2001). To get I_{λj, e}(t_{i}, x_{l}, y_{m}) for a given time t_{i}, we interpolate I_{λj, e}(t_{i}, p_{k}) in p and evaluate it at all grid points (x_{l}, y_{m}). Therefore the spectral flux at time t_{i} after explosion can be calculated via

F_{λj, o}(t_{i}) = ΔxΔy/((1 + z)D_{L}^{2}) Σ_{l, m} μ(x_{l}, y_{m}) I_{λj, e}(t_{i}, x_{l}, y_{m}). (13)
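A toy version of this interpolation and flux summation might look as follows. The Gaussian intensity profile and the uniform (μ = 1) magnification map are illustrative stand-ins for the actual ARTIS output and GERLUMPH maps, and the constant prefactor 1/((1 + z)D_{L}^{2}) is omitted; the cartesian sum is cross-checked against the direct radial integral.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.interpolate import interp1d

p_S = 1.0e15                           # source radius [cm] (toy value)
n_pix = 401
d_mag = 2.0 * p_S / (n_pix - 1)        # pixel size, Delta x = Delta y

# Discrete radial profile I(p_k): a toy Gaussian in the impact parameter.
p_k = np.linspace(0.0, p_S, 200)
I_p = np.exp(-0.5 * (p_k / (0.4 * p_S)) ** 2)

# Interpolate I(p) onto the cartesian grid (x_l, y_m).
xs = np.linspace(-p_S, p_S, n_pix)
X, Y = np.meshgrid(xs, xs)
R = np.hypot(X, Y)                     # radius of each pixel
I_xy = interp1d(p_k, I_p, bounds_error=False, fill_value=0.0)(R)

mu = np.ones_like(I_xy)                # no microlensing in this toy map

# Flux sum without the constant prefactor: F ~ dx*dy * sum(mu * I).
F_micro = d_mag ** 2 * np.sum(mu * I_xy)

# Cross-check against the direct radial integral 2*pi * int I(p) p dp.
F_radial = 2.0 * np.pi * trapezoid(I_p * p_k, p_k)
print(F_micro / F_radial)   # close to 1
```

For μ = 1 the pixel sum must reproduce the radial integral, which provides a quick consistency test of the grid projection before a real magnification map is inserted.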
For the calculation of fluxes and light curves of astronomical sources at redshift z, we have the redshifted observed times and wavelengths

t_{o, i} = t_{i}(1 + z) and λ_{o, j} = λ_{j}(1 + z). (14)
To calculate microlensed light curves in the six LSST filters (details about LSST in Sect. 3), we combine Eq. (13) with the transmission function S_{X}(λ) for LSST filter X. We calculate AB magnitudes as described by Bessell & Murphy (2012), such that

m_{AB, X}(t_{i}) = −2.5 log_{10} [∫ dλ_{o} λ_{o} F_{λo, o}(t_{i}) S_{X}(λ_{o}) / ∫ dλ_{o} c λ_{o}^{−1} S_{X}(λ_{o})] − 48.60 (15)

for the magnitude at the ith time bin for filter X.
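The AB-magnitude convention of Bessell & Murphy (2012) can be sketched for a tabulated spectrum as below. The flat-F_{ν} test source and the box filter are illustrative assumptions, chosen because a constant F_{ν} = 3631 Jy source has m_{AB} = 0 in any band by definition of the AB system.

```python
import numpy as np

C_AA_S = 2.99792458e18  # speed of light [Angstrom/s]

def trapz(y, x):
    """Simple trapezoidal integration."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def ab_magnitude(lam, F_lam, S):
    """AB magnitude of F_lam [erg/s/cm^2/AA] tabulated on lam [AA]
    through the dimensionless transmission S (photon-counting form)."""
    num = trapz(F_lam * S * lam, lam)
    den = trapz(S * C_AA_S / lam, lam)
    return -2.5 * np.log10(num / den) - 48.60

# Sanity check with a flat-F_nu source of 3631 Jy through a box filter.
lam = np.linspace(4000.0, 5500.0, 500)      # toy g-like box filter [AA]
F_nu = 3631.0e-23                           # [erg/s/cm^2/Hz]
F_lam = F_nu * C_AA_S / lam**2              # F_lam = F_nu * c / lam^2
m_ab = ab_magnitude(lam, F_lam, np.ones_like(lam))
print(m_ab)   # ~ 0
```

Replacing the test spectrum with the redshifted microlensed flux of Eq. (13) and S with an LSST filter curve yields the mock light-curve points used in Sect. 4.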
Light curves in absolute magnitudes are shown in Fig. 4 for the g and z bands. To measure time delays, it is important to catch the light-curve peaks of the different images of a LSN Ia. While the rest-frame u and g light curves have a single peak, we find a secondary peak in the redder bands, where we could ideally catch both peaks for delay measurements. In addition to the non-microlensed case (dotted black), light curves with microlensing (solid cyan and dashed violet) for two different positions (see left panel) in the magnification map of Fig. 1 are shown. The microlensed light curves are highly distorted and their peaks are shifted, which adds a large uncertainty to the time-delay measurement between images that undergo different microlensing.
Fig. 4.
Influence of microlensing on the g-band and z-band light curves for two different positions (solid cyan and dashed violet), shown in the left panel at 21 days after explosion, in the magnification map of Fig. 1, where R_{Ein} = 7.2 × 10^{−3} pc. The case of no microlensing is shown as the black dotted line in the middle and right panels. We see that microlensing can distort light curves and shift their peaks, and therefore adds uncertainties to time-delay measurements between images undergoing different microlensing.

A more detailed investigation of microlensing is presented in Appendix A, where spectra and color curves are also discussed. For the investigated magnification map (Fig. 1) we find an achromatic phase for some color curves up to approximately 25−30 days, as reported in Goldstein et al. (2018); however, other color curves show a shorter or nonexistent achromatic phase. Our investigation also indicates that the achromatic phase depends strongly on the specific intensity profiles, and therefore the investigation of different explosion models is necessary to explore this further (Huber et al., in prep.). Furthermore, some color curves from ARTIS differ in shape from those of SEDONA, which is important since features like peaks are necessary to measure time delays. Even though color curves seem to be more promising for measuring time delays (as suggested by Goldstein et al. 2018, and discussed in Appendix A), we use light curves for our further investigation, because the sparse sampling of LSST does not directly provide color curves. Since color information is easier to obtain with triggered follow-up observations, it is promising to develop color-curve fitting methods in the future.
3. Large Synoptic Survey Telescope (LSST)
The LSST will target about 20 000 deg^{2} of the southern hemisphere with a field of view of 9.6 deg^{2}. Observations will be taken in six broad photometric bands, ugrizy, and each position in the survey area will be repeatedly observed over time, where each visit is composed of one or two back-to-back exposures in the observing strategies currently under consideration. About 90% of the observing time will be spent on the 18 000 deg^{2} wide-fast-deep survey (WFD), where the internight gap between visits in any filter is about three days (LSST Science Collaboration 2009). The rest of the time will be used for other regions like the northern Ecliptic, the southern Celestial Pole, the Galactic Center, and a few "deep drilling fields" (DDFs), where single fields (9.6 deg^{2}) will be observed to a greater depth in individual visits.
The scientific goals of LSST include exploring the nature of dark energy and dark matter, exploring the outer regions of the solar system, and completing the inventory of small bodies in the solar system. These science goals constrain the cadence strategy but still leave a certain amount of freedom. For example, to detect fast-moving transients like asteroids, a revisit of an observed field within an hour is usually necessary. Such a revisit is planned if the first observation was taken in one of the bands g, r, i, or z, and is done in the same filter as the first observation for most of the cadence strategies under investigation in this work. For more details, see LSST Science Collaboration (2009).
As the LSST Project is in the process of finalizing the cadence strategy, this paper investigates how different cadence strategies will influence the possibility of measuring time delays for LSNe Ia. We specifically look at what is termed a "rolling cadence", where the overall idea is to subdivide the WFD and focus on different subdivided parts in different years, with the final ten-year static survey performance being the same as for the nominal ten-year survey. This strategy is one way to provide better sampling, but it reduces the number of seasons. A specific case of a rolling cadence is one with two declination bands, which subdivides the WFD (with declination from 0 to −60 deg) into a northern region covering declinations from 0 to −30 deg and a southern one with declinations from −30 to −60 deg. The idea is then to visit the northern part only in odd years (years one, three, five, seven, and nine) and the southern part in even years (years two, four, six, eight, and ten), or vice versa.
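The two-declination-band rolling cadence described above can be summarized in a few lines. This is a schematic sketch, not an actual scheduler rule; the function name and the assignment of the −30 deg boundary to the northern band are our assumptions.

```python
def rolling_band_active(dec_deg, year):
    """Return True if a WFD field at declination dec_deg [deg] is visited
    in the given survey year (1-10) under the two-declination-band
    rolling cadence: the northern band (0 to -30 deg) in odd years,
    the southern band (-30 to -60 deg) in even years."""
    if not -60.0 <= dec_deg <= 0.0:
        raise ValueError("declination outside the WFD footprint")
    northern = dec_deg >= -30.0   # boundary assigned to the north (assumed)
    return (year % 2 == 1) if northern else (year % 2 == 0)

# A northern field is observed in years 1, 3, 5, ...; a southern one
# in years 2, 4, 6, ... -- each field gets five seasons instead of ten.
print([rolling_band_active(-10.0, y) for y in range(1, 5)])
print([rolling_band_active(-45.0, y) for y in range(1, 5)])
```

This makes explicit why rolling cadences halve the cumulative season length per field while roughly doubling the sampling rate during active years.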
We investigate 20 different observing strategies which are potential LSST cadences or of special interest for our science case. In Sect. 3.1 we present the different observing strategies. Readers who are more interested in the overall conclusions instead of specific details about the cadence strategies might directly jump to Sect. 3.2.
3.1. Specifications of observing strategies
Sixteen of the 20 investigated cadence strategies are implemented with the OpSim scheduler^{6}, and the remaining four are produced by alt_sched^{7} and the feature-based scheduler^{8}. Both the OpSim and feature-based schedulers use a greedy algorithm, where the sky location of the next visit is determined by optimizing different parameters such as the seeing, the time elapsed since the last visit at that location, and so on. In contrast, alt_sched employs a non-greedy algorithm, observing at minimum airmass and relaxing on that only to increase the season length. The following key points describe the different observing strategies very briefly, where strategies with a capital letter have a larger than nominal 18 000 deg^{2} WFD footprint (the color scheme is explained in Sect. 3.2)^{9}:
3.2. Categorization of observing strategies
From our investigation (in Sect. 6), we find that the main parameters relevant for measuring time delays in LSNe Ia are the cumulative season length (t_{eff}), mostly in terms of the total number of LSNe Ia, and the mean internight gap (t_{gap}; also referred to as sampling frequency or sampling), concerning the quality of the light curves. These two parameters are defined later in this section. To categorize the different observing strategies, t_{gap} and t_{eff} are shown in Fig. 5 for 20 LSST observing strategies, and from this we can separate them into three categories with respect to the current LSST baseline cadence strategy (baseline2018a):
Fig. 5.
Mean internight gap (upper panel) and mean cumulative season length (lower panel) for 20 different observing strategies, used to define the three categories "higher cadence and fewer seasons", "higher cadence", and "baseline like" described in Sect. 3.2.

Readers interested in the general properties of the strategies should focus on these three categories, which are highlighted by the category names and their corresponding colors. The observing strategies in blue ("higher cadence and fewer seasons") are all rolling cadences. The alternating observation pattern in different years leads to a shorter cumulative season length and hence improved sampling. The magenta strategies ("higher cadence") provide a better mean internight gap than the baseline cadence by reducing the exposure time, doing the revisits of the same field within an hour in different filters, or doing only single visits of a field within a night. For this reason, these strategies provide sampling similar to the rolling cadences, but they leave the cumulative season length close to that of the baseline cadence. Rolling cadences which keep the WFD at a 25% reward level have a cumulative season length similar to the baseline cadence but do not provide a better mean internight gap, and are therefore listed in the category "baseline like"^{10}.
The mean cumulative season length and mean inter-night gap for a simulation of a given observing strategy are calculated by taking the mean over all fields under consideration. We look at two different cases. The first case considers 719 LSST fields from the WFD survey^{11}, shown as the black solid line in Fig. 5, with the shaded region marking 99% of the fields. In the second case, we consider for comparison all 5292 LSST fields covering the entire sky, taking into account only those fields where observations are taken; this is shown as the blue dashed line. In the upper panel, cadences whose black solid line lies below the black dot-dashed line are those with a significantly better inter-night gap than the baseline cadence (i.e., magenta "higher cadence" and blue "higher cadence and fewer seasons" strategies), whereas the others are baseline-like (orange "baseline like"). From the lower panel we distinguish between strategies with a cumulative season length similar to the baseline cadence (magenta "higher cadence" and orange "baseline like") and those with a significantly worse cumulative season length (blue "higher cadence and fewer seasons"). The area of the WFD footprint is not plotted explicitly because relative differences in the area are smaller than those in the cumulative season length. Nevertheless, cadence strategies starting with a capital (lowercase) letter have a nominal WFD footprint of 24 700 (18 000) deg^{2}.
The cumulative season length is the sum of the season lengths over all seasons. A season gap for an LSST field is defined as a period of at least 85 days without any observation in any filter^{12}. The mean cumulative season length of all fields under consideration is shown in the lower panel of Fig. 5. For the inter-night gap, shown in the upper panel of Fig. 5, revisits of a field within hours in the same filter are combined into a single visit: since SNe typically do not change over such a short time scale, the data points are merged into a single detection with reduced uncertainty. For some of the observing strategies, the mean inter-night gap of the picked WFD fields deviates significantly from that of all fields, which is due to time spent on other surveys such as the northern hemisphere, the South Celestial Pole, and the Galactic Center.
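These two quantities can be computed from the list of observation epochs of a field roughly as follows (a minimal sketch in Python; the function name and input format are ours and not part of the LSST software stack):

```python
import numpy as np

def inter_night_gap_and_seasons(mjd, season_gap=85.0):
    """Mean inter-night gap and cumulative season length for one field.

    `mjd` is an array of observation epochs (any filter); revisits within
    the same night are merged first, and a season break is declared
    whenever no observation is taken for `season_gap` days.
    """
    nights = np.unique(np.floor(np.sort(np.asarray(mjd, dtype=float))))
    gaps = np.diff(nights)
    # Gaps spanning a season break are excluded from the mean inter-night gap
    mean_gap = gaps[gaps < season_gap].mean()
    # Split the observing nights into seasons at gaps of >= `season_gap` days
    season_edges = np.where(gaps >= season_gap)[0]
    seasons = np.split(nights, season_edges + 1)
    cumulative_season_length = sum(s[-1] - s[0] for s in seasons)
    return mean_gap, cumulative_season_length
```

For example, a field observed every two nights for 170 days in each of two seasons yields a mean inter-night gap of 2 days and a cumulative season length of twice the per-season span.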
4. Generating realistic LSST mock light curves of LSNe Ia
The goal of this section is to describe how mock LSST light curves of LSNe Ia are obtained for different cadence strategies. We used mock LSNe Ia from the OM10 catalog (Oguri & Marshall 2010), where we assumed the spherically symmetric SN Ia W7 model (Nomoto et al. 1984) for each image to simulate observations randomly. Synthetic light curves were produced with the radiative transfer code ARTIS (Kromer & Sim 2009), and we included the effect of microlensing via magnification maps from GERLUMPH (Vernardos et al. 2015; Chan, in prep.) following Sect. 2.3. We then simulated data points for the light curves, following the observation pattern of the different cadences, with uncertainties according to the LSST science book (LSST Science Collaboration 2009). In Sect. 4.1 we describe the OM10 mock catalog of strong lenses, and Sect. 4.2 illustrates how we simulated mock light curves for mock LSNe Ia from OM10.
4.1. Mock LSNe Ia from the OM10 catalog
The OM10 catalog (Oguri & Marshall 2010) is a mock lens catalog of strongly lensed quasars and supernovae for LSST. For our purpose, we focus on the LSNe Ia in the catalog. We expect about 45 spatially resolved LSNe Ia for the ten-year LSST survey under the assumptions of OM10, namely a survey area of Ω_{OM10} = 20 000 deg^{2}, a season length of three months, and a 10σ point-source limiting magnitude in the i band of 23.3 for a single visit. The catalog contains LSNe Ia with two images (doubles) and four images (quads), but includes only those systems where the multiple images are resolved (minimum image separation of 0.5 arcsec) and the peak of the i-band magnitude (of the fainter image for a double, or the third-brightest image for a quad) falls in an observing season and is 0.7 mag brighter than the 10σ point-source limiting magnitude. Since we used the W7 model for our mock light curves and drew random microlensing magnifications, we automatically allowed for fainter systems, up to 25 mag in the i band^{13}, instead of applying the sharp OM10 cut of 22.6 mag. Applying the cut as in OM10 is not necessary, because we used the 5σ depth from simulations of the LSST observing strategies to create realistic light curves with uncertainties. Systems that are too faint will simply yield worse time-delay measurements than bright ones, making it unnecessary to exclude them in advance. Furthermore, applying no magnitude cut allows us to draw conclusions about fainter systems not in the OM10 catalog, which are also relevant for time-delay measurements.
The mock catalog assumes as a lens mass model a Singular Isothermal Ellipsoid (SIE; Kormann et al. 1994), whose convergence is given in Oguri & Marshall (2010) via

κ(θ_{1}, θ_{2}) = (λ(e)/2) · θ_{Ein} / [θ_{1}^{2} + θ_{2}^{2}/(1 − e)^{2}]^{1/2},

where (θ_{1}, θ_{2}) are the lens coordinates, θ_{Ein} is the Einstein radius in arcsec, e is the ellipticity, and λ(e) is the dynamical normalization defined in Oguri et al. (2012). The lens mass distribution is then rotated by its position angle.
The OM10 catalog is composed of two parts. The first part is the input for the SIE model containing properties of the source and the lens, such as redshift, velocity dispersion, source positions, and so on. This first part is used to calculate mock images using GLAFIC (Oguri 2010) and therefore predict image positions, magnifications, and time delays, which is the second part of the OM10 catalog. Furthermore, a microlensing map like the one in Fig. 1 is needed to get the macro and microlensing magnification for different images, and therefore κ and γ have to be known for each of the mock images^{14}. We calculated these parameters analytically for the SIE model following equations from Kormann et al. (1994), Oguri & Marshall (2010), and Oguri et al. (2012), and checked the consistency by comparing to magnification factors predicted by GLAFIC.
The distributions of the source redshift and of the time delay for all OM10 mock systems are shown in Fig. 6. For quad systems, the maximum of the six possible time delays (between pairs of images) is shown. All 417 LSNe Ia from OM10 correspond to the blue line. To reduce the computational effort for the investigations in Sect. 6, we restrict ourselves to a subsample of 202 mock LSNe Ia (101 mock quads and 101 mock doubles), represented by the orange line. We find LSNe Ia at source redshifts of 0.2–1.4, with most of them around 0.8. In terms of time delays, most of the systems have a maximum delay shorter than 20 days; there are only a few systems with very long time delays (greater than 80 days).
Fig. 6.
Source redshift (upper panel) and time-delay (lower panel) distributions of LSNe Ia from the OM10 catalog. The blue line shows the whole catalog (417 mock systems). The orange line shows the subsample of 202 mock systems (101 randomly picked quads and 101 randomly picked doubles) under investigation in Sect. 6. For the time-delay distribution, the maximum time delay is shown (relevant only for quads); three systems with time delays greater than 140 days are not in the plot. The highest delay of an LSN Ia in the OM10 catalog is 290 days. 
4.2. Sampling of the light curves for various LSST observing strategies
To simulate observations, we randomly picked 202 mock LSNe Ia from the OM10 catalog (see orange curves in Fig. 6) and produced synthetic microlensed light curves for the mock SN images following Sect. 2.3. As an example, a mock quad system and the corresponding light curves (each image at a random position in its corresponding microlensing map) are shown in Fig. 7. Image A arrives first, followed by C, D, and B. In the simulated light curves of image D (red solid line), an ongoing microlensing event is visible as an additional brightening about 80 d after the peak, which is not visible in the other three images.
Fig. 7.
Synthetic i-band light curves (lower panel) of a mock quad LSN Ia (upper panel) to illustrate simulated observations. The redshift of the source is 0.71 and is taken into account. The observation sequence is for a random field in the WFD survey for the baseline2018a cadence. 
To get simulated data points from theoretical light curves as shown in Fig. 7, we combined the light curves with an observing sequence of visits. This is illustrated for the baseline2018a cadence in Fig. 8, where all observations within the 10-year survey are shown for one field in the WFD. For this purpose, we picked ten fields in the WFD survey, which are listed in Table 1^{15}. Figure 9 shows that these ten fields are representative of the WFD survey: the mean inter-night gap (top left panel), mean cumulative season length (bottom left panel), and mean 5σ depth in bands g (top right panel) and r (bottom right panel) are shown for our ten fields (orange), the WFD fields (black), and all fields (blue), with the shaded regions enclosing the 99th percentile.
Fig. 8.
Illustration of the Modified Julian Dates (MJD) and filters of the observations taken over the 10-year survey for field number four from Table 1 for observing strategy baseline2018a. The y axis shows the six LSST filters and the number of observations taken in each filter. 
Fig. 9.
Comparison of the inter-night gap, cumulative season length, and 5σ depth of the ten fields under investigation (orange) to the sample of 719 WFD fields (black). In addition, all 5292 LSST fields where observations are taken (blue) are shown. The lines indicate the mean and the shaded area includes everything up to the 99th percentile. We see that the ten chosen fields are representative of the WFD survey but not of the whole survey. 
For each of the ten fields for a given cadence, we considered the following for each visit of the field: date (MJD), filter(s) observed, and 5σ point-source depth m_{5}. The depth is needed to calculate the photometric uncertainty σ_{1} according to the LSST Science Collaboration (2009) (see Appendix B). The magnitude of each data point can then be calculated via

m_{data} = m_{W7} + r_{norm} σ_{1}, (17)

where r_{norm} is a random number drawn from the normal distribution and m_{W7} is the magnitude of the data point from the theoretical W7 model. By placing the synthetic light curves (shown as solid lines in Fig. 7) randomly in one of the fields in Table 1, randomly in time following the detection criteria from the OM10 catalog, and using Eq. (17), we created simulated data points as illustrated in Fig. 7. If two or more data points are taken within one hour in the same filter, we combined them into a single measurement, because SNe typically do not change on such time scales. Specifically, two data points m_{data, 1} ± σ_{1} and m_{data, 2} ± σ_{2} observed at times t_{1} and t_{2}, where t_{1} ≤ t_{2} ≤ t_{1} + 1 h, were combined into a single one as

m_{data, combined} = (m_{data, 1}/σ_{1}^{2} + m_{data, 2}/σ_{2}^{2}) / (1/σ_{1}^{2} + 1/σ_{2}^{2}),

where

σ_{combined} = (1/σ_{1}^{2} + 1/σ_{2}^{2})^{−1/2}.

We assigned to the combined data point the time t_{combined} = (t_{1} + t_{2})/2.
5. Timedelay measurements
In this section we describe how we estimate time delays from the simulated observations in order to quantify the different observing strategies. For sufficient statistics, we investigate for each cadence strategy the 202 mock LSNe Ia already mentioned in Sect. 4, of which 50% are doubles and 50% are quads. We define a system with a "good" time-delay measurement as a system where the accuracy is below 1% and the precision is below 5%. To estimate accuracy and precision, we investigate 100 random starting configurations for each of the mock systems. A starting configuration corresponds to a random position in the microlensing map and a random field from Table 1, where the system is placed randomly in one of the observing seasons such that the peak of the i-band magnitude of the fainter image for a double, or the third-brightest image for a quad, falls in the observing season. We used the same random positions in the microlensing map for each mock image across all observing strategies investigated here, to avoid uncertainties due to different microlensing patterns. For each of these starting configurations, we then drew 1000 different noise realizations of the light curves following Eq. (17). For each of these realizations, we estimate the time delay and compare it to the true value.
To get a measured time delay from the mock data we used the freeknot splines estimator from PyCS (Python Curve Shifting; Tewes et al. 2013; Bonvin et al. 2016). As a spline, a piecewise polynomial function of degree three is used. The polynomial pieces are connected by knots, where for the optimization process, the initial number of knots has to be specified. The polynomial coefficients and the knot positions are free variables to optimize. To avoid clustering of the knots a minimum knot separation is also defined in advance (Molinari et al. 2004). The basic idea of the optimizer is to fit a single intrinsic spline to two light curves from different images and shift the data iteratively in time and magnitude, and modify the spline parameters, to get a timedelay measurement. We show in Fig. 10 an example of the fitting of the spline to two light curves, with one light curve timeshifted by the time delay to increase overlap with the other. Both the spline parameters and the time delay between the two curves are optimized by reducing the residuals in the fit of the spline to the two light curves. Even with noiseless data, we would get a spread of delays from PyCS due to the range of splines that could fit to the data equally well. Densely sampled light curves with little microlensing would restrict the range of delays. We do not explicitly include additional spline components to model the microlensing variation. An analysis that models separately the intrinsic and microlensing variability is deferred to future work.
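The core idea of the spline estimator can be illustrated with a deliberately simplified sketch: shift one curve over a grid of trial delays, fit a single cubic spline to the merged data, and keep the delay that minimizes the residuals. This is not PyCS itself, which iteratively optimizes knot positions as well as time and magnitude shifts; the function and parameter names here are ours:

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def estimate_delay(t1, m1, t2, m2, delays, n_knots=8):
    """Minimal spline-based delay estimator for two light curves.

    For each trial delay, curve 2 is shifted back in time, a single cubic
    spline with fixed, evenly spaced interior knots is fit to the merged
    data, and the delay with the smallest sum of squared residuals wins.
    """
    mshift = np.median(m1) - np.median(m2)   # crude magnitude alignment
    best_dt, best_resid = None, np.inf
    for dt in delays:
        t = np.concatenate([t1, t2 - dt])
        m = np.concatenate([m1, m2 + mshift])
        order = np.argsort(t)
        t, m = t[order], m[order]
        knots = np.linspace(t[0], t[-1], n_knots + 2)[1:-1]  # interior knots
        resid = LSQUnivariateSpline(t, m, knots, k=3).get_residual()
        if resid < best_resid:
            best_dt, best_resid = dt, resid
    return best_dt
```

With noiseless, well-sampled curves this recovers the input delay; with sparse sampling and microlensing distortions the residual minimum broadens, which is the spread discussed above.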
Fig. 10.
Illustration of the spline fitting technique for the i-band light curves of a mock double LSN Ia at redshift 0.27. The black line corresponds to the spline fit of the data (blue and orange), where the knot positions (small vertical ticks on the black lines) as well as the magnitude and time shifts have been iteratively optimized to minimize a chi-square term, resulting in the measured delay indicated in the top right. 
PyCS was initially developed to measure time delays in strongly lensed quasars, and is not yet optimized for LSNe Ia, such as fitting simultaneously multiple filters and using SN template light curves. Nonetheless, Rodney et al. (2016) used the tools of PyCS to measure the time delays between the multiple images of SN Refsdal as one of the approaches, and also fit SN templates to the light curves as another approach. The resulting delays from both approaches were consistent with each other. While both methods did not explicitly include the effects of microlensing, the residuals of the light curves of SN Refsdal suggested that no major microlensing event occurred in the case of SN Refsdal (Rodney et al. 2016). The templatefitting approach was also used by Goldstein et al. (2018) to fit to mock light curves and color curves, although in an idealized scenario without noise and highcadence sampling. Goldstein et al. (2018) found the fitting of templates to light curves yielded timedelay uncertainties of approximately 4%, limited by microlensing distortion of light curves, whereas the fitting to color curves in the achromatic phase provided approximately 1% uncertainties in the delays. For our LSST light curves, we opt to use PyCS on light curves given that (1) color curves are not available from LSST data given the sampling cadence, and (2) there is currently no publicly available templatefitting software accounting for microlensing, an effect that can significantly distort the light curves as shown in Sect. 2.
Applying PyCS to the light curves of individual filters, we get an independent time delay for each filter. This means that for a given LSST filter f, the jth starting configuration, and the kth noise realization, we have a deviation from the true time delay:

τ_{d, f, j, k} = (Δt_{f, j, k} − Δt_{true}) / Δt_{true}, (20)

where Δt_{f, j, k} is the measured delay.
For each observing strategy and each double LSN Ia, we thus have 1 (delay for the one pair of images) × 6 (filters) × 100 (starting configurations) × 1000 (noise realizations) time-delay deviations as in Eq. (20). For the six pairs of images of a quad system, we have a sample of 6 × 6 × 100 × 1000.
To exclude starting configurations which are completely wrong in comparison to most of the investigated systems, we calculated separately for each starting configuration the median τ_{d, 50, f, j} and the error δ_{f, j} = (τ_{d, 84, f, j} − τ_{d, 16, f, j})/2, where τ_{d, 50, f, j}, τ_{d, 84, f, j}, and τ_{d, 16, f, j} are the 50th, 84th, and 16th percentiles over the 1000 noise realizations. Furthermore, we combined the six filters via the weighted mean into a single time-delay deviation τ_{d, 50, j} ± δ_{j}, where

τ_{d, 50, j} = [Σ_{f} τ_{d, 50, f, j}/δ_{f, j}^{2}] / [Σ_{f} 1/δ_{f, j}^{2}]  and  δ_{j} = [Σ_{f} 1/δ_{f, j}^{2}]^{−1/2}.
This is possible since the distribution of the time-delay deviation for each filter is approximately Gaussian. From this we exclude "catastrophic failures", which are starting configurations whose τ_{d, 50, j} deviates strongly from the mean value or whose δ_{j} is much larger than the mean error; these occur for about 10% of the starting configurations, independent of the observing strategy. The bar indicates the mean over the 100 starting configurations, that is,

\bar{δ} = (1/100) Σ_{j} δ_{j},

and analogously for \bar{τ}_{d, 50}.
The failures are likely due to a bad starting time of the supernova in the season (such as at the beginning or end of season, where some of the light curves of the multiple images would be incomplete due to seasonal gap) and strong microlensing distortions. These effects could be easily identified in real lens systems, and provide advance warning of potentially problematic delay inference. In addition, simulations of light curves mimicking those of real lens systems could be used to identify catastrophic failures of problematic systems and avoid the use of their time delays for further analysis such as cosmography.
After excluding catastrophic failures, we are left with about 90 of the 100 initial starting configurations, leading to approximately 90 × 1000 ≈ 90 000 time-delay deviations τ_{d, f, j, k} for each filter f. From these we define the accuracy as the median τ_{d, 50, f} and the precision as δ_{f} = (τ_{d, 84, f} − τ_{d, 16, f})/2, where τ_{d, 84, f} is the 84th and τ_{d, 16, f} the 16th percentile over the 90 000 starting configurations and noise realizations, that is, over the j and k indices. Since the time-delay deviations of the six filters are independent, we combined them into a single time-delay deviation. This means that in the end we have, for one strategy and one mock LSN Ia, a single τ_{d, 50} ± δ per pair of images, where

τ_{d, 50} = [Σ_{f} τ_{d, 50, f}/δ_{f}^{2}] / [Σ_{f} 1/δ_{f}^{2}]  and  δ = [Σ_{f} 1/δ_{f}^{2}]^{−1/2}. (23)
Using the weighted mean is possible here since the time-delay distributions of the different filters are approximately Gaussian.
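The percentile-based definitions of accuracy and precision, and the weighted-mean combination over filters, can be sketched as follows (helper names are ours):

```python
import numpy as np

def accuracy_precision(tau):
    """Accuracy (median) and precision (half the 16th-to-84th percentile
    range) of a sample of time-delay deviations, as defined in the text."""
    p16, p50, p84 = np.percentile(tau, [16, 50, 84])
    return p50, 0.5 * (p84 - p16)

def combine_filters(tau50, delta):
    """Inverse-variance weighted combination of per-filter medians and
    errors into a single time-delay deviation tau_{d,50} +/- delta."""
    tau50, delta = np.asarray(tau50), np.asarray(delta)
    w = 1.0 / delta ** 2
    return np.sum(w * tau50) / np.sum(w), 1.0 / np.sqrt(np.sum(w))
```

A "good" system in our terminology is then one with |accuracy| < 1% and precision < 5% for the combined deviation.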
6. Results: cadence strategies for LSNe
In this section, we present the results of the investigation of the different cadence strategies presented in Sect. 3. We distinguish between two different cases: (1) using LSST data only for measuring time delays, and (2) using LSST just as a discovery machine for LSNe Ia and getting the time delay(s) from followup observations.
Given that H_{0} ∝ 1/Δt_{true}, where Δt_{true} is the true time delay between two images, we aim for an accuracy (τ_{d, 50} in Eq. (23)) smaller than 1% and a precision (δ in Eq. (23)) smaller than 5%. We refer to systems fulfilling these requirements as systems with good time delays. A quad system is counted as successful if at least one of the six delays fulfills these demands. The accuracy requirement is needed for measuring H_{0} with 1% uncertainty, and the precision requirement ensures that the delay uncertainty does not dominate the overall uncertainty on H_{0}, given typical mass-modeling uncertainties of about 5% (e.g., Suyu et al. 2018).
6.1. Number of LSNe Ia
Before comparing cadence strategies on the basis of the time-delay measurements, we first estimate the total number of LSNe Ia for the different observing strategies. Since different observing strategies have different survey areas and different cumulative season lengths, the number of LSNe Ia deviates from the number predicted by OM10. We approximate the total number of LSNe Ia as

N_{LSNe Ia, cad} = N_{LSNe Ia, OM10} · (Ω_{cad}/Ω_{OM10}) · (t_{eff, cad}/t_{eff, OM10}), (24)
where N_{LSNe Ia, OM10} = 45.7, Ω_{OM10} = 20 000 deg^{2}, and t_{eff, OM10} = 2.5 yr from Oguri & Marshall (2010). The effective, or cumulative, season length t_{eff, cad} for a given cadence strategy is obtained by averaging over the sample of 719 WFD fields. The survey area for a given observing strategy is Ω_{cad}. Instead of taking the nominal values (24 700 deg^{2} for large-footprint strategies and 18 000 deg^{2} for the rest), we calculated the area from the fields represented by our study, which are the fields with a mean cumulative season length and inter-night gap similar to or better than those of the 719 WFD fields, that is, a cumulative season length (t_{eff}) longer than the lower 99th percentile and an inter-night gap (t_{gap}) shorter than the upper 99th percentile. Additionally, we take into account the 5σ depth (m_{5}), where we consider only the main relevant bands g, r, i, and z: we keep all fields with (m_{5} + 0.2 mag) greater than the lower 99th percentile of the 719 WFD fields. The relaxed 5σ depth is necessary in order to represent the wider areas as suggested by the nominal values^{16}. The area can then be calculated from the number of fields fulfilling the above criteria (N_{cad, criteria}), multiplied by the field of view of 9.6 deg^{2} and by an overlap factor u of the fields:

Ω_{cad} = N_{cad, criteria} · 9.6 deg^{2} · u,

where

u = 41 253 deg^{2} / (5292 · 9.6 deg^{2}) ≈ 0.81.
The total number of fields is 5292, which cover the entire sky, as noted in Sect. 3, and the numerator corresponds to the surface area of a sphere in deg^{2}. Therefore, Ω_{cad} is equivalent to 4πN_{cad, criteria}/5292 in units of rad^{2}. The results from Eq. (24) for the 20 investigated cadences are shown in Table 2. We find that the ordering of the table is mainly set by the cumulative season length; therefore, for rolling cadences with a lower number of observing seasons (blue "higher cadence and fewer seasons" strategies), many LSNe Ia will not be detected because of the alternating observation scheme.
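The area and number estimates above amount to a simple rescaling, which can be sketched as follows (constants taken from the text; function names are ours):

```python
FULL_SKY_DEG2 = 41253.0   # surface area of the sphere in deg^2
N_FIELDS_ALL = 5292       # LSST fields tiling the entire sky
FOV_DEG2 = 9.6            # LSST field of view

def survey_area(n_fields_ok):
    """Area represented by the fields passing the season-length,
    inter-night-gap, and depth criteria, corrected for field overlap."""
    overlap = FULL_SKY_DEG2 / (N_FIELDS_ALL * FOV_DEG2)   # ~0.81
    return n_fields_ok * FOV_DEG2 * overlap

def n_lsne_ia(area_deg2, t_eff_yr,
              n_om10=45.7, area_om10=20000.0, t_eff_om10=2.5):
    """Eq. (24): rescale the OM10 forecast by survey area and
    cumulative season length."""
    return n_om10 * (area_deg2 / area_om10) * (t_eff_yr / t_eff_om10)
```

As a sanity check, all 5292 fields recover the full sky, and the OM10 reference values recover the OM10 forecast of 45.7 systems.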
6.2. LSST data only
Here, we quantify the 20 investigated cadences for the case of using LSST data only for measuring time delays. We investigated 101 randomly picked quads and 101 randomly picked doubles; the distributions of their source redshifts and time delays are shown as orange lines in Fig. 6. The 202 systems are used to determine the fraction f_{a} of systems with good time delays:

f_{a} = N_{Δt, a} / N_{a},

where N_{Δt, a} is the number of systems with good time delays and N_{a} = 101 for a = double, quad. Since we have picked equal numbers of doubles and quads, whereas the real ratio of doubles to quads in the OM10 catalog is 69:31, the total fraction can be calculated as

f_{total} = 0.69 f_{double} + 0.31 f_{quad}.
The fractions of doubles f_{double} and quads f_{quad}, as well as the total fraction f_{total}, are shown in Table 3. It becomes clear that the fraction of systems with good delays depends mostly on the inter-night gap: strategies with better sampling (blue "higher cadence and fewer seasons" and magenta "higher cadence" strategies) provide higher fractions.
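The doubles-to-quads weighting can be written as a one-line helper (name ours):

```python
def total_fraction(f_double, f_quad):
    """Weight the per-type fractions of good-delay systems by the OM10
    doubles-to-quads ratio of 69:31."""
    return 0.69 * f_double + 0.31 * f_quad
```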
Fraction of systems (in %) of the 202 investigated mock systems (101 doubles and 101 quads) where the time delay has been measured with accuracy smaller than 1% and precision smaller than 5%, using LSST data only.
We determined the value of a given cadence strategy for our science case by combining Tables 2 and 3. The results for the 10-year survey are shown in Fig. 11. One sees that the key to obtaining a high number of LSNe Ia with good delays is a short inter-night gap while keeping the cumulative season length baseline-like (magenta "higher cadence" strategies). Only for the strategy alt_sched_rolling can the much better sampling compensate for the short cumulative season length.
Fig. 11.
Number of LSNe Ia for the 10-year survey where the time delay has been measured with accuracy < 1% and precision < 5%, using only LSST data. 
From the upper panel of Fig. 12, it becomes clear that only nearby systems (z ≲ 0.9) with long time delays (Δt ≳ 25 d) are measured successfully. High-redshift systems are overall fainter, and the larger photometric errors make delay measurements more uncertain. Shorter time delays are not accessible because of the sparse sampling and microlensing uncertainties. Looking at the total numbers in Fig. 11, we find that even the best strategies provide just a handful of systems; therefore, using LSST data alone for measuring time delays is not ideal. We thus investigate the prospects of using follow-up observations in combination with LSST data.
Fig. 12.
Time-delay and source-redshift distributions of the 202 investigated mock LSNe Ia for "LSST only" (upper panel) and "LSST + follow-up" (lower panel) for observing strategy kraken_2044. For a quad system, just a single delay is shown: either the first successfully measured time delay or the maximum of the six possible time delays. The blue circles show all 202 investigated systems, and the orange filled dots correspond to systems where the time delay has been measured with accuracy better than 1% and precision better than 5%. Comparing the two panels, we see a significant improvement going from "LSST only" to "LSST + follow-up", which we find for most of the observing strategies, as suggested by Table 4. 
Fraction of systems (columns two, three, and four, in %) of the 202 investigated mock systems (101 doubles and 101 quads) where the time delay has been measured with accuracy smaller than 1% and precision smaller than 5%, using LSST as a discovery machine and measuring time delays from follow-up observations.
6.3. LSST and followup observation
Here, we investigate the 20 different LSST observing strategies for the case of using LSST just as a discovery machine. For the time-delay measurement, we assumed follow-up observations in the three filters g, r, and i, going to depths of m_{5, g} = 24.6 mag, m_{5, r} = 24.2 mag, and m_{5, i} = 23.7 mag, which are similar to the depths of the baseline cadence. These depths correspond to an observing time of approximately 6 min per filter and night on a 2 m telescope which, apart from the mirror diameter, is assumed to have properties identical to LSST (e.g., detector sensitivity). We adopt a realistic scenario where follow-up starts two days after the third data point exceeds the 5σ depth in any filter^{17}. The follow-up is assumed to take place every second night in all three filters. Alternative follow-up scenarios are investigated in Sect. 6.4.
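The trigger logic of this follow-up scenario can be sketched as follows (a simplified illustration; the function name and per-visit input format are ours):

```python
import numpy as np

def followup_epochs(obs_mjd, obs_mag, m5, end_of_season, step=2.0):
    """Follow-up scenario sketch: trigger two days after the third LSST
    epoch whose magnitude exceeds (i.e., is brighter than) the 5-sigma
    depth, then observe every `step` nights until the end of the season.
    Inputs are per LSST visit, in any filter."""
    detected = np.asarray(obs_mag, dtype=float) < np.asarray(m5, dtype=float)
    idx = np.flatnonzero(detected)
    if len(idx) < 3:
        return np.array([])                            # never triggered
    start = np.asarray(obs_mjd, dtype=float)[idx[2]] + 2.0
    return np.arange(start, end_of_season, step)
```

Each returned epoch would then receive one visit in each of g, r, and i at the assumed follow-up depths.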
Assuming a 2 m telescope is a conservative assessment of the follow-up resources. Observing with larger telescopes would be quite reasonable and would significantly reduce the exposure time or enable greater depth. The prospects of deeper follow-up are discussed in Sect. 6.4.
The fraction of systems with well-measured time delays is calculated as in Sect. 6.2 and summarized in Table 4 for the 20 investigated observing strategies. Applying only the accuracy requirement (τ_{d, 50} < 1%) would yield, for all cadence strategies, about 30% fewer systems from the 202 investigated ones, with a slight trend toward more accurate systems for cadence strategies with improved sampling. Since for the case of "LSST + follow-up" the accuracy depends only weakly on the cadence strategy, the precision requirement (δ < 5%) mostly sets the order of Table 4. Since blue ("higher cadence and fewer seasons") and magenta ("higher cadence") strategies perform better than orange ("baseline like") strategies in Tables 3 and 4, we see that a short inter-night gap is important for good precision. Even though the light curves for Table 4 are created via follow-up resources, a better inter-night gap is still important for detecting systems earlier and obtaining better-sampled light curves, although it is less important than for "LSST only": the ratio between the best and worst cadence strategies is about 12 for "LSST only" instead of approximately 2 for "LSST + follow-up". This makes clear that the sampling of the LSST observing strategy still matters for the fraction of systems with good delays, but far less than if we relied on LSST data only. From Table 4 we see that, in comparison to using only LSST data, "LSST + follow-up" increases the fraction, and therefore the number, of LSNe Ia with good delays by a factor of 2–16, depending on the cadence strategy. For a strategy like alt_sched_rolling, the effort of triggering the above-defined follow-up observations is questionable, but for most other strategies the improvement is significant.
In practice, it is important to pick systems with good accuracy for a final cosmological sample in order to determine H_{0}. We find that the reduction due to our accuracy requirement is partly caused by microlensing, but the quality of the light curves also plays a role, since follow-up with greater depth provides more systems with accurate time delays. The prospects of greater depth are investigated in Sect. 6.4, and one way to mitigate the effect of microlensing is the use of color information, as discussed in Appendix A. From Fig. 13 we see that for "LSST + follow-up" nearly all time delays greater than 20 days yield an accuracy within one percent, whereas going for shorter delays is dangerous in terms of adding bias to a final cosmological sample.
Fig. 13.
Distribution of all 707 possible time delays (blue) and of time delays with accuracy better than 1% (orange) from the 202 investigated systems, for "LSST + follow-up" and observing strategy colossus_2667. Nearly all time delays are accurate for pairs of images with a time delay greater than 20 days. 
In the lower panel of Fig. 12, we see that, similar to the case of using only LSST data, we are limited to nearby systems (z ≲ 0.9). In terms of time delays, we can reach lower values thanks to the much better quality of the light curves, but still, most of the short time delays are not accessible because of microlensing and our cut on precision.
By combining Tables 2 and 4, we get the total number of LSNe Ia with good time delays, as shown in Fig. 14. We note that the presented results have errors within 10% due to uncertainties in the calculated area and sampling. Another point is that we do not apply the sharp OM10 cut of 22.6 mag, as mentioned in Sect. 4.1. We find that we are also able to get good time delays for fainter systems (> 22.6 mag), although in number they are at least a factor of 1.7 fewer than bright ones (≤ 22.6 mag). This means that the numbers presented in Table 2, and therefore also the numbers in Fig. 14, are a conservative estimate, and in reality we can expect even more systems with well-measured time delays. An overly optimistic version of Fig. 14 is presented in Appendix C. While these sources of uncertainty might slightly change the ordering presented in Fig. 14, they do not influence our overall conclusions, which are presented in the following.
Fig. 14.
Number of LSNe Ia for the 10-year survey where the time delay has been measured with accuracy < 1% and precision < 5%, using LSST as a discovery machine in combination with follow-up observations for measuring time delays (black bars), and using only LSST data (gray bars, see also Fig. 11). Follow-up is every second night in filters g, r, and i, starting two nights after the third LSST detection (with brightness exceeding the 5σ depth in any filter). With follow-up observations, we get a substantial increase in the number of LSNe Ia systems with well-measured delays. The numbers shown in this figure are a conservative estimate; an optimistic approach is discussed in Appendix C, leading to the same overall conclusion about the categories of cadence strategies (magenta, orange, and blue) but providing about 3.5 times more LSNe Ia with well-measured delays. 
We see that for the current baseline strategy we would expect about 17 LSNe Ia with good delays over the 10-year survey. To increase this number, the most promising strategies are those with a baseline-like cumulative season length and an enhanced sampling (magenta "higher cadence" strategies). The most efficient way to achieve this would be to drop the revisit within the same night (compare colossus_2667 to baseline2018a). Because this would make the science case of fast-moving objects impossible, we think a reasonable compromise is to do the revisit within the same night in a different filter (Lochner et al. 2018). This performs worse than doing single visits but still better than doing the revisit in the same filter (compare pontus_2506 to colossus_2667 and baseline2018a). In terms of the cumulative season length, it seems appropriate to stay with a baseline-like season length of about 170 days and ten seasons. Further improvement can be achieved by replacing the 2 × 15 s exposures by a single 30 s exposure to improve efficiency (compare kraken_2042 to baseline2018a).
Although our numbers for a WFD area extended by 6700 deg^{2} are increased (compare kraken_2044 to colossus_2667, and pontus_2002 to baseline2018a), we only find this for "LSST + follow-up"; for "LSST only", strategies with a smaller WFD footprint perform better. We therefore suggest staying with the WFD footprint of 18 000 deg^{2}, as used for 16 of the 20 investigated observing strategies, although an extension to 24 700 deg^{2} is also acceptable. Concerning depth, most of the investigated strategies provide a similar 5σ depth as the baseline cadence (see right panels of Fig. 9). Those strategies with a slightly lower 5σ depth (alt_sched, alt_sched_rolling, and pontus_2489) show no significant deviations in the results, which is related to their enhanced sampling in comparison to the baseline cadence. Another interesting scenario to investigate is the redistribution of visits from the y band to bands more useful for LSNe Ia, as done in alt_sched. This means going from a distribution of visits in ugrizy of (6.8, 9.4, 21.5, 21.6, 20.2, 20.4)% to (8.2, 11.0, 27.6, 18.1, 25.6, 9.5)%. Because of the many differences between alt_sched and baseline2018a, a direct comparison is impossible, but we expect some improvement. A simulation implementing the redistribution with the greedy algorithm used for baseline2018a would be helpful to quantify this.
Furthermore, a very important result is that most rolling cadence strategies are disfavored for our LSNe Ia science case. For these cadence strategies, the shortened cumulative season lengths have an overall more negative impact on the number of LSNe Ia with delays than the gain from the increased sampling frequency.
6.4. Different follow-up scenarios
In this section, we discuss the prospects of increasing the number of LSNe Ia by assuming different follow-up scenarios. For this purpose, we investigated a sample of 100 mock LSNe Ia (50 mock quads and 50 mock doubles). The result for the standard follow-up case is shown in the first row of Table 5 for the two cadence strategies baseline2018a and alt_sched. To clarify, the standard follow-up scenario assumes observations in the three filters g, r, and i, going to a depth of m_{5, g} = 24.6 mag, m_{5, r} = 24.2 mag, and m_{5, i} = 23.7 mag. Follow-up is assumed every second night in all three filters, starting two days after the third data point exceeds the 5σ depth in any filter.
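The trigger logic of this standard scenario can be sketched in a few lines. The function name and light-curve representation below are hypothetical, chosen only to illustrate the criterion (third epoch brighter than the 5σ depth, follow-up starting two nights later, then every second night):

```python
# Sketch of the follow-up trigger described above (illustrative, not the
# actual pipeline): follow-up starts two nights after the third LSST
# epoch whose magnitude exceeds the 5-sigma depth in any filter, and is
# then repeated every second night in g, r, and i.

def followup_epochs(detections, n_trigger=3, delay=2.0, cadence=2.0, n_epochs=10):
    """detections: time-ordered list of (mjd, mag, m5) LSST epochs.
    Returns the list of follow-up nights (MJD), or [] if the trigger
    criterion is never met. A point counts as detected if mag < m5."""
    n_detected = 0
    for mjd, mag, m5 in detections:
        if mag < m5:  # brighter than the 5-sigma point-source depth
            n_detected += 1
            if n_detected == n_trigger:
                start = mjd + delay
                return [start + i * cadence for i in range(n_epochs)]
    return []

# Example: three detections on nights 0, 3, and 5 -> follow-up from night 7.
epochs = [(0.0, 23.0, 24.0), (3.0, 22.5, 24.0), (5.0, 22.0, 24.0)]
print(followup_epochs(epochs, n_epochs=4))  # [7.0, 9.0, 11.0, 13.0]
```

The more aggressive scenario of row 3 corresponds simply to `n_trigger=2`, and the cadence variations of rows 4 and 5 to `cadence=1.0` and `cadence=3.0`.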
Table 5. Summary of different follow-up strategies and prospects of an improved analysis technique concerning the modeling of microlensing.
An alternative follow-up scenario would be to observe in the r, i, and z bands. The numbers in the second row are slightly worse than those for following up in g, r, and i, even though high-redshift SNe are well visible in the z band. The reason is that we assumed a baseline-like 5σ depth for the follow-up observations, with m_{5, z} = 22.8 mag, which is 1.8 mag shallower than the 5σ depth in the g band.
A more aggressive approach is to trigger follow-up after the second data point exceeds the 5σ depth (see row 3). The improvement of 10–21% might look promising, but many more false positives would also be detected, and therefore some observing time would likely be wasted on them.
Of further interest is the cadence of the follow-up observations. We therefore consider two additional cases where we follow up daily (see row 4) and every third day (see row 5), instead of the standard follow-up every second day. While going to observations every three days decreases the number of LSNe Ia with good delays by about 18%, daily visits improve it by 11–18%. Going from a two-day to a one-day cadence thus increases the follow-up effort significantly while increasing the number of LSNe Ia only slightly.
A more promising approach is to keep the follow-up observations every two days but increase the depth. To go one magnitude deeper than the average baseline depth (see row 6), a total observing time of approximately 45 min per night is needed for a 2 m telescope as in Sect. 6.3, which is feasible. For alt_sched, this leads to an improvement of 29% in comparison to the standard follow-up scenario, slightly better than the daily follow-up case. For baseline2018a, the improvement is 71% and therefore definitely worth the effort (compare the upper two panels of Fig. 15).
Fig. 15.
Time-delay and source-redshift distributions for the 100 investigated mock LSNe for baseline2018a, similar to Fig. 12. Upper two panels: standard follow-up observation (first panel) and the option of going one magnitude deeper (second panel). Lower two panels: the same follow-up scenarios, hypothetically without microlensing. The distributions vary slightly because for a quad system just a single time delay is shown, either the first successfully measured delay or the maximum of the six possible delays. 

Another possibility is to go two magnitudes deeper, but this requires observing approximately 2 h per night to obtain observations in three filters. This seems feasible only for a two-meter telescope that can observe simultaneously in three filters, or for a telescope with a larger diameter. For alt_sched, this means an improvement of 62% in comparison to the standard follow-up scenario, and for baseline2018a an improvement of 125%. Going yet another two magnitudes deeper does not increase the number of LSNe Ia significantly, and therefore going beyond two magnitudes deeper is, in our investigation, not worth the effort (compare rows seven and eight in Table 5).
A limiting factor of our analysis is the microlensing effect, which is not taken into account in our time-delay measurement with PyCS. As a consequence, we are not able to accurately measure short time delays (see Fig. 12 and the upper two panels of Fig. 15), because we do not model the bias due to microlensing magnification, which is an absolute bias in time, whereas the accuracy is relative to the length of the delay. In rows nine and ten of Table 5, we see that in the best-case scenario, where we imagine a perfect correction for microlensing deviations, we could increase the number of LSNe Ia with good delays by 60–120%. This would give us access to short time delays, as visible in the comparison of the upper two and lower two panels of Fig. 15, and therefore encourages the use of color curves instead of light curves to reduce the impact of microlensing on the delay measurement, as suggested by Goldstein et al. (2018) and discussed in Appendix A. The approach of using SN Ia templates to fit the intrinsic light-curve shape, including effects of microlensing, might also be reasonable and produce a higher fraction of good delays. Some of these options are currently being explored (Pierel & Rodney 2019; Collett et al., in prep.; Collett, priv. comm.).
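For intuition on what the delay measurement itself involves, the following toy estimator recovers a delay from two noiseless light curves by a brute-force grid search. It is a deliberately simplified stand-in for the free-knot spline estimator of PyCS used in this work, and it ignores microlensing, noise, and magnitude offsets entirely; all names are ours:

```python
# Illustrative only: the paper uses the free-knot spline estimator of
# PyCS. This toy estimator grid-searches the time shift that minimizes
# the squared magnitude difference between image A and a time-shifted,
# linearly interpolated image B.

def interp(t, times, mags):
    """Linear interpolation of (times, mags) at t; times must be sorted.
    Values outside the range are clamped to the endpoints."""
    if t <= times[0]:
        return mags[0]
    if t >= times[-1]:
        return mags[-1]
    for i in range(1, len(times)):
        if t <= times[i]:
            w = (t - times[i - 1]) / (times[i] - times[i - 1])
            return (1 - w) * mags[i - 1] + w * mags[i]

def grid_delay(tA, mA, tB, mB, shifts):
    """Return the trial shift minimizing the sum of squared residuals
    between curve A and curve B evaluated at tA + shift."""
    def cost(s):
        return sum((ma - interp(t + s, tB, mB)) ** 2 for t, ma in zip(tA, mA))
    return min(shifts, key=cost)

# Toy example: image B is image A delayed by 8 days.
tA = [float(i) for i in range(0, 40, 2)]
mA = [22.0 + 0.02 * (t - 20.0) ** 2 / 10.0 for t in tA]  # parabolic "light curve"
tB = [t + 8.0 for t in tA]
mB = list(mA)
shifts = [0.5 * k for k in range(0, 41)]                 # 0 to 20 d in 0.5 d steps
print(grid_delay(tA, mA, tB, mB, shifts))                # 8.0
```

With microlensing, the two curves are no longer shifted copies of each other, which is exactly why such a residual-minimizing estimator becomes biased for short delays.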
7. Discussion and summary
In this work, we explored different LSST cadence strategies for measuring time delays in strongly lensed SNe Ia. As illustrated in Fig. 14, we found that using LSST just as a discovery machine, in combination with high-cadence follow-up observations for the delay measurement, is the best way to increase the number of LSNe Ia with good time delays. In contrast, using only LSST data is not ideal.
To estimate the resulting H_{0} constraint from a sample of LSST LSNe Ia, we assume that each LSN Ia system with good delays typically yields an H_{0} measurement with approximately 5% uncertainty in flat ΛCDM (including all sources of uncertainty, such as the time-delay uncertainty investigated in this paper and the lens mass modeling uncertainties). This is currently achieved with the best lensed quasar systems of the H0LiCOW sample and serves as a reference, given that we expect LSNe Ia to yield similar or better constraints than lensed quasars. While focusing only on LSNe Ia with good delays could potentially introduce selection bias, we suspect such biases to be small and, if present, correctable (e.g., Collett & Cunnington 2016). Thus, for a sample of N lenses, the uncertainty on H_{0} would scale approximately as 5%/sqrt(N), assuming Gaussian uncertainties. With LSST data only, the number of lensed SNe Ia from our investigation (Fig. 14) ranges from approximately 1 to 8, depending on the strategy. This would yield an H_{0} constraint with about 2–5% uncertainty from the sample. In the case of LSST with follow-up, the number of lensed SNe increases substantially, varying from approximately 10 to 28 and translating to an H_{0} constraint with about 1–2% uncertainty. Therefore, with an optimal LSST observing strategy and fast-response follow-up, we would reach a percent-level constraint on H_{0}, a factor of two to five lower in uncertainty compared to the LSST-only scenario.
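The scaling argument above can be written out explicitly; this is only the back-of-the-envelope forecast, not a full cosmological analysis:

```python
# Back-of-the-envelope H0 forecast used above: each lens contributes a
# ~5% H0 measurement, and N independent lenses scale as 5%/sqrt(N)
# (Gaussian, uncorrelated uncertainties).
import math

def h0_uncertainty(n_lenses, per_lens=5.0):
    """Percent uncertainty on H0 from n_lenses systems."""
    return per_lens / math.sqrt(n_lenses)

# LSST-only: ~1-8 systems; LSST + follow-up: ~10-28 systems.
print(round(h0_uncertainty(8), 2))    # 1.77  (lower end of the 2-5% range)
print(round(h0_uncertainty(28), 2))   # 0.94  (lower end of the 1-2% range)
```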
From the investigated cadence strategies for the follow-up scenario, we found that an observing strategy with improved sampling that keeps everything else baseline-like is, in general, the best for our science case. An ideal strategy is characterized by the following key points:

- Ten seasons with a season length of 170 days or longer;
- A WFD footprint of 18 000 deg^{2}, up to 24 700 deg^{2};
- One revisit within a night, done in a different filter than the first visit;
- Replacement of the 2 × 15 s exposures by 1 × 30 s;
- A distribution of visits like alt_sched [ugrizy as ∼(8.2, 11.0, 27.6, 18.1, 25.6, 9.5)%].
Another very important point is that most of the suggested rolling cadences are clearly disfavored for our science case, because many LSNe Ia will not even be detected due to the reduced cumulative season length. The only rolling cadence that performs well is rolling_mix_10yrs_opsim, most likely because the WFD stays on in the background and revisits are additionally done in different filters, which can partly compensate for the suboptimal rolling feature.
We have assumed that follow-up observations start two days after the third LSST data point exceeds the 5σ depth. The follow-up is done every second night in the three filters g, r, and i to a depth of m_{5, g} = 24.6 mag, m_{5, r} = 24.2 mag, and m_{5, i} = 23.7 mag, which is feasible with a 2 m telescope. To improve on this, mainly a greater depth is of interest. Follow-up observations going one magnitude deeper than the baseline 5σ depth, or even two magnitudes deeper if feasible, will increase the number of LSNe Ia with good time delays significantly. Going beyond two magnitudes deeper is not worth the effort.
We would like to point out that we have only investigated LSNe Ia. Although a single lensed core-collapse (CC) SN is less valuable than an LSN Ia (given the standardizable light curves of SNe Ia), the larger sample of lensed CC SNe, primarily Type IIn (Goldstein et al. 2019; Wojtak et al. 2019), that will be detected by LSST makes them relevant for time-delay cosmography as well. Due to the different light-curve shapes and luminosities, the optimal cadence strategy for measuring time delays in CC SNe might differ from the one for LSNe Ia. At least in terms of the total number of lensed CC SNe, the strategies will be ordered in the same way as in Table 2, but the numbers will be a factor of 1.8 higher (Oguri & Marshall 2010). In terms of measuring time delays, the improved sampling suggested by our investigation of LSNe Ia will also be helpful for CC SNe. To investigate the prospects of measuring time delays in lensed CC SNe as done here for LSNe Ia, the specific intensity from a theoretical model is required.
In terms of analyzing the data, it seems promising to find ways to reduce the impact of microlensing. One possibility is the use of color curves instead of light curves; to this end, it might be worth implementing SN template fitting instead of splines in PyCS. With the recent discovery of the first LSN Ia system and the expected sample from LSST, our work demonstrates that time-delay cosmography as envisioned by Refsdal (1964) has bright prospects in the LSST era.
Technical detail: since the box expands with the SN over time, the impact parameter p is a function of time. To eliminate this time dependence, one can assume that the SN expands homologously (Roepke 2005) and simply divide the impact parameter by the observer time, as in Goldstein et al. (2018). The unit of this new impact parameter is therefore cm s^{−1} instead of cm, and the unit of the new specific intensity is erg s cm^{−1} instead of erg s^{−1} cm^{−3}.
https://cadencehackathon.readthedocs.io/en/latest/current_runs.html and in addition pontus_2506 from Tiago Ribeiro.
A discussion within the Dark Energy Science Collaboration revealed that the three rolling cadences kraken_2036, mothra_2045, and pontus_2502 seem to lack some observations. Nevertheless, we investigated these cadences like all others, because we are mainly interested in the dependence on different parameters. Our statement about rolling cadences would remain the same even if we removed these three strategies from our investigation.
This leads to a few percent overestimation of the total number of LSNe Ia with good time delays for large footprints in comparison to the 18 000 deg^{2} footprint. Nonetheless, since we find that the improvement due to the wider area is too small anyway, this does not affect the overall conclusions of our work.
Goldstein et al. (2019) suggest that follow-up after three data points might be optimistic, but we would like to point out that this relies on the applied classification scheme (Goldstein et al. 2018), which does not make use of all available lensing information that would help with identification.
Δd_{mag} = 1.1 × 10^{13} cm, or the value interpolated to fulfill Eq. (12), is used for Δx_{l} and Δy_{m}.
Acknowledgments
We thank W. Hillebrandt, S. Blondin, and D. A. Goldstein for useful discussions, and the internal LSST DESC reviewers S. Rodney, A. Goobar, and T. E. Collett for their feedback that improved the presentation of our paper. We also thank the anonymous referee for helpful comments. SH and SHS thank the Max Planck Society for support through the Max Planck Research Group for SHS. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 771776). This research was supported in part by Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported by the Government of Canada through the Department of Innovation, Science and Economic Development and by the Province of Ontario through the Ministry of Research, Innovation and Science. UMN has been supported by the Transregional Collaborative Research Center TRR33 "The Dark Universe" of the Deutsche Forschungsgemeinschaft. VB, JHHC, and FC acknowledge support from the Swiss National Science Foundation and through the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (COSMICLENS: grant agreement No. 787866). DR acknowledges support from the National Science Foundation Graduate Research Fellowship under Grant No. DGE 1752814. HA has been supported by the Rutgers Discovery Informatics Institute Fellowship of Excellence in Computational and Data Science for the academic years 2017–2018 and 2018–2019. M. K. acknowledges support from the Klaus Tschira Foundation. This work was supported in part by the World Premier International Research Center Initiative (WPI Initiative), MEXT, Japan, and JSPS KAKENHI Grant Numbers JP15H05892 and JP18K03693. 
The DESC acknowledges ongoing support from the Institut National de Physique Nucléaire et de Physique des Particules in France; the Science and Technology Facilities Council in the United Kingdom; and the Department of Energy, the National Science Foundation, and the LSST Corporation in the United States. DESC uses resources of the IN2P3 Computing Center (CC-IN2P3, Lyon/Villeurbanne, France) funded by the Centre National de la Recherche Scientifique; the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the US Department of Energy under Contract No. DE-AC02-05CH11231; STFC DiRAC HPC Facilities, funded by UK BIS National e-infrastructure capital grants; and the UK particle physics grid, supported by the GridPP Collaboration. This work was performed in part under DOE Contract DE-AC02-76SF00515.
References
 Barnabè, M., Czoske, O., Koopmans, L. V. E., Treu, T., & Bolton, A. S. 2011, MNRAS, 415, 2215
 Bessell, M., & Murphy, S. 2012, PASP, 124, 140
 Birrer, S., Treu, T., Rusu, C. E., et al. 2019, MNRAS, 484, 4726
 Bonvin, V., Tewes, M., Courbin, F., et al. 2016, A&A, 585, A88
 Bonvin, V., Courbin, F., Suyu, S. H., et al. 2018, A&A, 616, A183
 Bonvin, V., Tihhonova, O., Millon, M., et al. 2019a, A&A, 621, A55
 Bonvin, V., Millon, M., Chan, J. H. H., et al. 2019b, A&A, 629, A97
 Chen, G. C.-F., Chan, J. H. H., Bonvin, V., et al. 2018, MNRAS, 481, 1115
 Collett, T. E., & Cunnington, S. D. 2016, MNRAS, 462, 3255
 Courbin, F., Bonvin, V., Buckley-Geer, E., et al. 2018, A&A, 609, A71
 Dhawan, S., Johansson, J., Goobar, A., et al. 2019, MNRAS, submitted [arXiv:1907.06756]
 Dobler, G., & Keeton, C. R. 2006, ApJ, 653, 1391
 Eigenbrod, A., Courbin, F., Vuissoz, C., et al. 2005, A&A, 436, 25
 Falco, E. E., Gorenstein, M. V., & Shapiro, I. I. 1985, ApJ, 289, L1
 Foxley-Marrable, M., Collett, T. E., Vernardos, G., Goldstein, D. A., & Bacon, D. 2018, MNRAS, 478, 5081
 Goldstein, D. A., & Nugent, P. E. 2017, ApJ, 834, L5
 Goldstein, D. A., Nugent, P. E., Kasen, D. N., & Collett, T. E. 2018, ApJ, 855, 22
 Goldstein, D. A., Nugent, P. E., & Goobar, A. 2019, ApJS, 243, 6
 Goobar, A., Amanullah, R., Kulkarni, S. R., et al. 2017, Science, 356, 291
 Grillo, C., Rosati, P., Suyu, S. H., et al. 2018, ApJ, 860, 94
 Hogg, D. W., Baldry, I. K., Blanton, M. R., & Eisenstein, D. J. 2002, ArXiv e-prints [arXiv:astro-ph/0210394]
 Jee, I., Komatsu, E., Suyu, S. H., & Huterer, D. 2016, JCAP, 4, 031
 Jones, E., Oliphant, T., Peterson, P., et al. 2001, SciPy: Open Source Scientific Tools for Python
 Kasen, D., & Woosley, S. E. 2007, ApJ, 656, 661
 Kasen, D., Thomas, R. C., & Nugent, P. 2006, ApJ, 651, 366
 Kayser, R., Refsdal, S., & Stabell, R. 1986, A&A, 166, 36
 Kelly, P. L., Rodney, S. A., Treu, T., et al. 2016a, ApJ, 819, L8
 Kelly, P. L., Brammer, G., Selsing, J., et al. 2016b, ApJ, 831, 205
 Kormann, R., Schneider, P., & Bartelmann, M. 1994, A&A, 284, 285
 Kromer, M., & Sim, S. A. 2009, MNRAS, 398, 1809
 Linder, E. V. 2011, Phys. Rev. D, 84, 123529
 Lochner, M., Scolnic, D. M., Awan, H., et al. 2018, Optimizing the LSST Observing Strategy for Dark Energy Science: DESC Recommendations for the Wide-Fast-Deep Survey
 LSST Science Collaboration (Abell, P. A., et al.) 2009, ArXiv e-prints [arXiv:0912.0201]
 Lucy, L. B. 1999, A&A, 344, 282
 Lucy, L. B. 2002, A&A, 384, 725
 Lucy, L. B. 2003, A&A, 403, 261
 Lucy, L. B. 2005, A&A, 429, 19
 Marshall, P., Ambaum, M. H. P., Maddison, J. R., et al. 2017, Geophys. Res. Lett., 44, 286
 Mihalas, D., & Mihalas, B. W. 1984, Foundations of Radiation Hydrodynamics (New York: Oxford University Press)
 Molinari, N., Durand, J.-F., & Sabatier, R. 2004, Comput. Stat. Data Anal., 45, 159
 More, A., Suyu, S. H., Oguri, M., More, S., & Lee, C.-H. 2017, ApJ, 835, L25
 Mörtsell, E., Johansson, J., Dhawan, S., et al. 2019, MNRAS, submitted [arXiv:1907.06609]
 Nomoto, K., Thielemann, F.-K., & Yokoi, K. 1984, ApJ, 286, 644
 Oguri, M. 2010, PASJ, 62, 1017
 Oguri, M., & Marshall, P. J. 2010, MNRAS, 405, 2579
 Oguri, M., Inada, N., Strauss, M. A., et al. 2012, AJ, 143, 120
 Pakmor, R., Hachinger, S., Röpke, F. K., & Hillebrandt, W. 2011, A&A, 528, A117
 Pakmor, R., Kromer, M., Taubenberger, S., et al. 2012, ApJ, 747, L10
 Pierel, J. R., & Rodney, S. A. 2019, ApJ, 876, 107
 Planck Collaboration I. 2019, A&A, in press, https://doi.org/10.1051/0004-6361/201833880
 Quimby, R. M., Oguri, M., More, A., et al. 2014, Science, 344, 396
 Refsdal, S. 1964, MNRAS, 128, 307
 Riess, A. G., Macri, L. M., Hoffmann, S. L., et al. 2016, ApJ, 826, 56
 Riess, A. G., Casertano, S., Yuan, W., et al. 2018, ApJ, 861, 126
 Riess, A. G., Casertano, S., Yuan, W., Macri, L. M., & Scolnic, D. 2019, ApJ, 876, 85
 Rodney, S. A., Strolger, L.-G., Kelly, P. L., et al. 2016, ApJ, 820, 50
 Roepke, F. K. 2005, A&A, 432, 969
 Schechter, P. L., Pooley, D., Blackburne, J. A., & Wambsganss, J. 2014, ApJ, 793, 96
 Schneider, P., & Sluse, D. 2014, A&A, 564, A103
 Seitenzahl, I. R., Ciaraldi-Schoolmann, F., Röpke, F. K., et al. 2013, MNRAS, 429, 1156
 Shajib, A. J., Treu, T., & Agnello, A. 2018, MNRAS, 473, 210
 Sim, S. A., Röpke, F. K., Hillebrandt, W., et al. 2010, ApJ, 714, L52
 Suyu, S. H., Bonvin, V., Courbin, F., et al. 2017, MNRAS, 468, 2590
 Suyu, S. H., Chang, T.-C., Courbin, F., & Okumura, T. 2018, Space Sci. Rev., 214, 91
 Tewes, M., Courbin, F., & Meylan, G. 2013, A&A, 553, A120
 Vernardos, G., & Fluke, C. J. 2013, MNRAS, 434, 832
 Vernardos, G., Fluke, C. J., Bate, N. F., Croton, D., & Vohl, D. 2015, ApJS, 217, 23
 Wambsganss, J., Witt, H. J., & Schneider, P. 1992, A&A, 258, 591
 Wojtak, R., Hjorth, J., & Gall, C. 2019, MNRAS, 487, 3342
 Yahalomi, D. A., Schechter, P. L., & Wambsganss, J. 2017, ArXiv e-prints [arXiv:1711.07919]
 Yıldırım, A., van den Bosch, R. C. E., van de Ven, G., et al. 2017, MNRAS, 468, 4216
Appendix A: Case study
To illustrate the effect of microlensing on SNe Ia in detail, we calculated microlensed spectra and light curves. For this we assumed an iPTF16geu-like (Goobar et al. 2017) configuration, that is, a source redshift z_{S} = 0.409 and a lens redshift z_{L} = 0.216. The redshifts are needed to calculate the pixel size Δd_{mag} = 3.6 × 10^{−6} pc = 1.1 × 10^{13} cm of the magnification map, which corresponds to an Einstein radius of R_{Ein} = 7.2 × 10^{−3} pc = 2.2 × 10^{16} cm and an angular scale of R_{Ein}/D_{s} = 6.5 × 10^{−12} rad = 1.3 × 10^{−6} arcsec. Since we only determine absolute magnitudes and rest-frame fluxes in this section, we set z = 0 and D_{L} = D_{A} = 10 pc in Eqs. (13) and (14).
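The quoted length and angular scales can be cross-checked with plain unit conversions (the values of R_{Ein} and Δd_{mag} themselves come from the lens model and map resolution, and are taken as given here):

```python
# Consistency check of the scales quoted above (pure unit conversion).
# Constants: 1 pc = 3.086e18 cm, 1 rad = 206265 arcsec.
PC_CM = 3.086e18
RAD_ARCSEC = 206265.0

r_ein_pc = 7.2e-3        # Einstein radius in pc (from the lens model)
pixel_pc = 3.6e-6        # magnification-map pixel size in pc

print(f"{r_ein_pc * PC_CM:.1e} cm")      # ~2.2e+16 cm
print(f"{pixel_pc * PC_CM:.1e} cm")      # ~1.1e+13 cm
print(f"{6.5e-12 * RAD_ARCSEC:.1e} as")  # ~1.3e-06 arcsec
```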
For this case study, we looked at two specific example realizations, placing an SN Ia at two different positions in the magnification map from Fig. 1, corresponding to image A of iPTF16geu (More et al. 2017).
First we considered the position (x, y) = (6.0, 4.3) R_{Ein} and compared the non-microlensed flux F_{λj, o, cart, μ = 1}(t_{i}) with the microlensed one, F_{λj, o, cart}(t_{i}), for two different instances in time, as illustrated in Fig. A.1. Panels a–d correspond to t = 14.9 d and panels e–h to t = 39.8 d. For both times, the zoomed-in magnification map (panels a and e) from Fig. 1 is provided, with the position and radius of the SN shown by a cyan circle. The radius is defined via the area of the SN that contains 99.9% of the total projected specific intensity ∑_{j, l, m}I_{λj, e}(t_{i}, x_{l}, y_{m}). In addition, the normalized specific intensity profiles (panels b and f) are shown, where the vertical cyan line corresponds to the radius of the SN and the dashed black line marks the distance between the center of the SN and the caustic in the magnification map, which separates low- and high-magnification regions. The normalized specific intensity of filter band X is defined as
Fig. A.1.
Effect of microlensing on the rest-frame spectrum of an SN Ia centered at (x, y) = (6.0, 4.3) R_{Ein} in the microlensing map from Fig. 1, for two different rest-frame times since explosion t. The Einstein radius defined in Eq. (1) is R_{Ein} = 7.2 × 10^{−3} pc = 2.2 × 10^{16} cm. For a discussion, see Appendix A. 

which corresponds to a radial radiation distribution for a given filter X. Furthermore, the fluxes for the cases with and without microlensing (panels c and g) are shown, together with their relative strength (panels d and h).
For t = 14.9 d, the SN lies completely within a homogeneous region of demagnification, as shown in panel a of Fig. A.1, and therefore the flux is demagnified by the same amount at all wavelengths, as can be seen in panel c and more clearly in panel d, independent of the specific intensity profiles. At the later time, t = 39.8 d, the SN has expanded further and crosses a caustic, as visible in panel e, such that the outer region of the SN is partly in a region of high magnification. From the specific intensity profiles in panel f, we see that the outer ejecta region emits relatively more strongly in the bluer bands (u and g) than in the red ones (r, i, z, and y), because at later times most of the Fe III has recombined to Fe II, which is less transparent in the bluer bands than in the red ones (Kasen & Woosley 2007; Goldstein et al. 2018). This explains the overall trend that the blue part of the spectrum is more magnified than the red part, as indeed seen in panels g and h.
For the case constructed in Fig. A.1, we see a significant impact of microlensing on the light curves, which are highly distorted, as shown in red in Fig. A.2. For the u − r color curve, the effect of microlensing cancels out up to day 25. Afterward, the crossing of the micro caustics, separating regions of low and high magnification, in combination with the different spatial distributions of the radiation in the u and r bands becomes important. This is an example of the so-called "achromatic phase" reported by Goldstein et al. (2018), who find that color curves up to day 20 after explosion are nearly independent of microlensing. They attribute this to the similar specific intensity profiles at early times and more different ones at later times, as we also see for our case when comparing panels b and f of Fig. A.1.
Fig. A.2.
Influence of microlensing on two light curves and the corresponding color curve for the SN shown in Fig. A.1. As long as the demagnification due to microlensing is similar in the u and r bands, it cancels out in the color curve. 

For further investigation, we construct another test case in which the caustic of the magnification map is crossed during the achromatic phase, as shown in Fig. A.3. Here the microlensing effect is clearly visible in the flux ratio, even though the specific intensity profiles are more similar than at later times (compare panels b and f of Fig. A.1). The influence on the light curves is also visible earlier and is more drastic, as shown in Fig. A.4. The light curves are highly distorted and peaks are shifted, which adds a large uncertainty to the time-delay measurement between images whose light curves undergo different microlensing. Even though the u − r color curve compensates for microlensing quite well in the early phases and is therefore promising for measuring time delays, this is not true for all color curves, as shown for the case of g − z. Here, the microlensed and non-microlensed curves deviate from each other even though they are in the achromatic phase.
Fig. A.3.
Effect of microlensing on the SN Ia spectrum at rest-frame time t = 14.9 d after explosion, similar to Fig. A.1 but with the SN at a slightly different position: (x, y) = (6, 4.25) R_{Ein}, where R_{Ein} = 7.2 × 10^{−3} pc = 2.2 × 10^{16} cm. For better comparison, the case from Fig. A.1 is shown as a gray dashed line in the upper-left and lower-right panels. 

Fig. A.4.
Influence of microlensing on light curves and color curves for the SN shown in Fig. A.3. Whereas the light curves are strongly affected by microlensing, the u − r color curve is very similar with and without microlensing. This does not hold for all color curves, as shown for example by g − z. 

To explore this further, we consider a large sample of 10 000 random SN positions in the magnification map shown in Fig. 1. For each position, we calculate the light curves using Eq. (15) and then the color curves. For each time bin t_{i}, we calculate from the sample the 50th percentile as well as the 1σ and 2σ spreads. The results for all rest-frame LSST color curves are shown in Figs. A.5 and A.6, where the vertical black line marks the time when the 2σ spread first exceeds 0.1 mag. We find the general trend that the achromatic phase in the color curves becomes shorter the further apart the bands are. As in Goldstein et al. (2018), we find an achromatic-phase-like behavior until 25–30 days after explosion, but only for rest-frame color curves containing combinations of the u, g, r, or i bands (except u − i) or the color curve z − y (Fig. A.5). As soon as we combine one of the bands u, g, r, or i with z or y, we see the influence of microlensing earlier (Fig. A.6). This behavior can be explained by looking at the normalized specific intensity profiles at early times, as shown in panel b of Fig. A.1: the profiles in the outer region (pixels 150–200) are similar for the z and y filters, but different from those for u, g, r, and i. Since the achromatic phase depends strongly on the specific intensity profiles, the investigation of different explosion models is necessary to explore this further (Huber et al., in prep.).
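A minimal sketch of this spread criterion follows, applied to synthetic Gaussian color scatter in place of the actual ARTIS-based curves; the function names and toy data are ours:

```python
# Sketch of the criterion above: for each time bin, take the 2-sigma
# range (central ~95.4% interval) of the microlensed color over the
# random SN positions, and report the first time it exceeds 0.1 mag.
import random

def two_sigma_spread(values):
    """Width of the central ~95.4% interval of a sample."""
    v = sorted(values)
    lo = v[int(0.023 * (len(v) - 1))]
    hi = v[int(0.977 * (len(v) - 1))]
    return hi - lo

def end_of_achromatic_phase(times, samples_per_bin, threshold=0.1):
    """First time at which the 2-sigma color spread exceeds threshold."""
    for t, sample in zip(times, samples_per_bin):
        if two_sigma_spread(sample) > threshold:
            return t
    return None

# Toy data: Gaussian scatter whose width grows linearly with time, so
# the 2-sigma spread (~4 sigma wide) crosses 0.1 mag between days 15 and 20.
random.seed(1)
times = list(range(0, 50, 5))
samples = [[random.gauss(0.0, 0.0015 * t) for _ in range(10000)] for t in times]
print(end_of_achromatic_phase(times, samples))  # 20 for this toy data
```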
Fig. A.5.
Comparison of non-microlensed color curves (dotted black) to microlensed ones (median in solid red, 1σ and 2σ ranges in different shades) for 10 000 random SN positions in the magnification map. The vertical black line indicates the first time the 2σ spread of the microlensed color curves exceeds 0.1 mag. The panels show all rest-frame LSST color curves for a saddle image (κ = 0.6, γ = 0.6, and s = 0.6, see Fig. 1) that exhibit an achromatic phase similar to the one reported by Goldstein et al. (2018); we find the achromatic phase only for combinations of the bands u, g, r, and i (except u − i) and for the color curve z − y, lasting up to approximately 25–30 days after explosion. 

Fig. A.6.
Same as in Fig. A.5, except these color curves exhibit a shorter or nonexistent achromatic phase compared to those in Fig. A.5. 

In addition to the different durations of the achromatic phase for the various color curves, we note that some of our color curves from ARTIS are different in shape from those of SEDONA in Goldstein et al. (2018). It is also very important to emphasize that our results in this section are for restframe color curves, which means that different color curves will be more or less useful depending on the redshift of the source.
Appendix B: Photometric uncertainty of LSST
The photometric uncertainty σ_{1} from Eq. (17) is defined as
σ_{1}^{2} = σ_{sys}^{2} + σ_{rand}^{2},
with the systematic floor σ_{sys} = 0.005 mag, and
σ_{rand}^{2} = (0.04 − γ^{c}) x + γ^{c} x^{2} mag^{2}.
The parameter γ^{c} varies from 0.037 to 0.040 for the different filters and x = 10^{0.4(m − m_{5})}, where m is the magnitude of the SN data point and m_{5} is the 5σ pointsource depth (for more details see LSST Science Collaboration 2009, Sect. 3.5, p. 67).
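The LSST photometric error model above can be sketched as a small helper. The function name and the default γ = 0.039 are illustrative (γ is filter dependent, within the quoted 0.037–0.040 range), and the commonly quoted systematic floor of 0.005 mag is assumed:

```python
import math

def sigma_photometric(m, m5, gamma=0.039, sigma_sys=0.005):
    """Total photometric uncertainty sigma_1 in mag:
    sigma_1^2 = sigma_sys^2 + sigma_rand^2, where
    sigma_rand^2 = (0.04 - gamma) * x + gamma * x^2 and
    x = 10^(0.4 * (m - m5)), with m5 the 5-sigma point-source depth."""
    x = 10.0 ** (0.4 * (m - m5))
    sigma_rand_sq = (0.04 - gamma) * x + gamma * x * x
    return math.sqrt(sigma_sys ** 2 + sigma_rand_sq)
```

As a sanity check, at m = m_{5} we have x = 1 and σ_{rand} = 0.2 mag, consistent with a signal-to-noise ratio of 5 at the 5σ point-source depth; for data points much brighter than m_{5} the uncertainty approaches the systematic floor.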
Appendix C: Optimistic estimate of the number of LSNe Ia
The numbers presented in Table 2 are based on the prediction of OM10, which relies on a sharp magnitude cut: only systems where the iband peak magnitude of the fainter image (for a double) or of the thirdbrightest image (for a quad) is brighter than 22.6 mag are included. In our investigation, assuming the W7 model with random microlensing magnifications, we also investigated fainter systems, and we find that well measured time delays are possible for these as well, although the fraction of successful measurements is reduced by at least a factor of 1.7. This fraction for faint systems is an overestimate, because bright systems yield better timedelay measurements than faint ones, and our investigated sample peaks around the cut applied in OM10; we therefore probe mostly the bright end of the systems fainter than 22.6 mag.
However, results presented by Wojtak et al. (2019) suggest that approximately 440 LSNe Ia can be found over the 10year survey with the same approach as in OM10 but also allowing for fainter systems. If about 80 of those systems are bright ones, as suggested by Table 2, we expect to find 4.5 times more faint (> 22.6 mag) systems than bright (≤ 22.6 mag) ones. Further, we calculate with Eq. (28) the fraction of systems with well measured delay, separately for faint and bright systems, leading to f_{total, faint} and f_{total, bright}. An optimistic estimate of the number of LSNe Ia with well measured time delay for a given cadence can therefore be calculated as
N_{LSNe Ia, optimistic, cad} = N_{LSNe Ia, cad} (f_{total, bright} + 4.5 f_{total, faint}),
where N_{LSNe Ia, cad} are the numbers from Table 2. The results are shown in Fig. C.1. Although these numbers are a factor of approximately 3.5 higher than in Fig. 14, we see the same general trend for the magenta, orange, and blue cadence strategies, leaving the overall conclusions on cadence strategies unchanged.
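The optimistic estimate above can be written as a small helper. The function name and the placeholder success fractions in the example are illustrative only; the factors 4.5 (faint-to-bright abundance ratio) and 1.7 (reduction of the success fraction for faint systems) are taken from the text:

```python
def n_optimistic(n_lsne_cad, f_total_bright, f_total_faint, faint_excess=4.5):
    """Optimistic LSNe Ia count for one cadence: the Table 2 number times
    (f_total_bright + 4.5 * f_total_faint)."""
    return n_lsne_cad * (f_total_bright + faint_excess * f_total_faint)

# Illustrative placeholder values: ~80 bright systems for some cadence and a
# faint-system success fraction reduced by the quoted factor of 1.7.
n_bright, f_bright = 80, 0.30
f_faint = f_bright / 1.7
conservative = n_bright * f_bright                 # bright systems only
optimistic = n_optimistic(n_bright, f_bright, f_faint)
boost = optimistic / conservative                  # 1 + 4.5/1.7, about 3.6
```

The boost factor of roughly 1 + 4.5/1.7 ≈ 3.6 is consistent with the statement that the optimistic numbers are about 3.5 times higher than those in Fig. 14.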
Fig. C.1.
Number of LSNe Ia with well measured time delay for 10year survey including faint systems with iband peak magnitude (of fainter image in doubles, or third brightest image in quads) fainter than 22.6 mag (black bars), in comparison to results presented in Fig. 14 (gray bars). We see that the optimistic (black bars) and conservative (gray bars) estimates show the same trend for magenta, orange and blue cadence strategies. 

All Tables
Fraction of systems (in %), out of 202 investigated mock systems (101 doubles and 101 quads), where the time delay has been measured with accuracy better than 1% and precision better than 5%, using LSST data only.
Fraction of systems (columns two, three, and four, in %), out of 202 investigated mock systems (101 doubles and 101 quads), where the time delay has been measured with accuracy better than 1% and precision better than 5%, using LSST as a discovery machine and measuring time delays from followup observations.
Summary of different followup strategies and prospects of an improved analysis technique concerning modeling of microlensing.
All Figures
Fig. 1.
Example magnification map for κ = 0.6, γ = 0.6, and s = 0.6. The color scheme illustrates the different magnification factors μ at the source plane as a function of the x and y coordinates. Many micro “caustics” are visible, separating regions of high and low magnification. 

Fig. 2.
Slice through a spherically symmetric SN enclosed in the 3D simulation box, illustrating the 1D projection of the SN and the observer time defined in Eq. (5). 

Fig. 3.
SN projected onto disk perpendicular to line of sight to observer. The center of the disk with radius p_{S} is placed at θ_{p} = 0 at an angular diameter distance of D_{A} from the observer. 

Fig. 4.
Influence of microlensing on the g and zband light curves for two different positions (solid cyan and dashed violet), shown in the left panel at 21 days after explosion, for the magnification map of Fig. 1 where R_{Ein} = 7.2 × 10^{−3} pc. The case of no microlensing is shown as a black dotted line in the middle and right panels. We see that microlensing can distort light curves and shift their peaks, and therefore adds uncertainty to timedelay measurements between images undergoing different microlensing. 

Fig. 5.
Mean internight gap (upper panel) and mean cumulative season length (lower panel) for 20 different observing strategies to define the three categories “higher cadence and fewer seasons”, “higher cadence”, and “baseline like” as described in Sect. 3.2. 

Fig. 6.
Source redshift (upper panel) and timedelay (lower panel) distributions of LSNe Ia from the OM10 catalog. The blue line shows the whole catalog (417 mock systems). The orange line shows the subsample of 202 mock systems (101 randomly picked quads and 101 randomly picked doubles) under investigation in Sect. 6. For the timedelay distribution, the maximum time delay is shown (relevant only for quads), and three systems with time delays greater than 140 days are not in the plot. The highest delay of an LSN Ia in the OM10 catalog is 290 days. 

Fig. 7.
Synthetic iband light curves (lower panel) of a mock quad LSN Ia (upper panel) to illustrate simulated observations. The source redshift of 0.71 is taken into account. The observation sequence is for a random field in the WFD survey for the baseline2018a cadence. 

Fig. 8.
Illustration of Modified Julian Date (MJD) and filters when observations are taken over 10year survey for field number four from Table 1 for observing strategy baseline2018a. The y axis shows the six LSST filters and the number of observations taken in that filter. 

Fig. 9.
Comparison of the internight gap, cumulative season length, and 5σ depth of the ten fields under investigation (orange) to the sample of 719 WFD fields (black). In addition, all 5292 LSST fields where observations are taken (blue) are shown. The lines indicate the mean, and the shaded areas include everything up to the 99th percentile. We see that the ten chosen fields are representative of the WFD survey but not of the whole survey. 

Fig. 10.
Illustration of the spline fitting technique for the iband light curve of a double mock LSN Ia at redshift 0.27. The black line corresponds to the spline fit of the data (blue and orange), where the knot positions (small vertical ticks on the black lines) as well as the magnitude and time shifts have been iteratively optimized to minimize a chisquare term, resulting in the measured delay indicated at the top right. 

Fig. 11.
Number of LSNe Ia for 10year survey where time delay has been measured with accuracy < 1% and precision < 5% for using only LSST data. 

Fig. 12.
Timedelay and sourceredshift distribution of the 202 investigated mock LSNe Ia for “LSST only” (upper panel) and “LSST + followup” (lower panel) for observing strategy kraken_2044. For a quad system, just a single delay is shown: either the first successfully measured time delay or the maximum of the six possible time delays. The blue circles show all 202 investigated systems, and the orange filled dots correspond to systems where the time delay has been measured with accuracy better than 1% and precision better than 5%. Comparing the two panels, we see a significant improvement going from “LSST only” to “LSST + followup”, which we find for most of the observing strategies, as suggested by Table 4. 

Fig. 13.
Duration distribution for all 707 possible time delays (blue) and time delays with accuracy better than 1% (orange) from 202 investigated systems for “LSST + followup” and observing strategy colossus_2667. Nearly all time delays are accurate for pairs of images which yield a time delay greater than 20 days. 

Fig. 14.
Number of LSNe Ia for 10year survey where time delay has been measured with accuracy < 1% and precision < 5% by using LSST as discovery machine in combination with followup observations for measuring time delays (black bars) and using only LSST data (gray bars, see also Fig. 11). Followup is every second night in filters g, r, and i, starting two nights after third LSST detection (with brightness exceeding 5σ depth in any filter). With followup observations, we get a substantial increase in the number of LSNe Ia systems with good measured delays. The numbers shown in this figure are a conservative estimate. An optimistic approach is discussed in Appendix C, leading to the same overall conclusion about the categories of cadence strategies (magenta, orange, and blue) but providing about 3.5 times more LSNe Ia with wellmeasured delays. 

Fig. 15.
Timedelay and sourceredshift distribution for 100 investigated mock LSNe for baseline2018a, similar to Fig. 12. Upper two panels: standard followup observation (first panel) and the option going one magnitude deeper (second panel). Lower two panels: same followup scenarios hypothetically without microlensing. The distributions vary slightly because for a quad system just a single time delay is shown, either the first successfully measured delay or the maximum of the six possible delays. 

Fig. A.1.
Effect of microlensing on restframe spectrum of SN Ia centered at (x, y) = (6.0, 4.3) R_{Ein} in microlensing map from Fig. 1 for two different restframe times since explosion t. The Einstein Radius defined in Eq. (1) is R_{Ein} = 7.2 × 10^{−3} pc = 2.2 × 10^{16} cm. For a discussion see Appendix A. 

Fig. A.2.
Influence of microlensing on two light curves and corresponding color curve for the SN shown in Fig. A.1. As long as demagnification due to microlensing is similar in u and r bands, it cancels out in the color curve. 

Fig. A.3.
Effect of microlensing on the SN Ia spectrum at restframe time t = 14.9 d after explosion, similar to Fig. A.1 but at a slightly different SN position: (x, y) = (6, 4.25) R_{Ein}, where R_{Ein} = 7.2 × 10^{−3} pc = 2.2 × 10^{16} cm. For better comparison, the case from Fig. A.1 is shown as a gray dashed line in the upperleft and lowerright panels. 

Fig. A.4.
Influence of microlensing on light curves and color curves for SN shown in Fig. A.3. Whereas the light curves are highly influenced by microlensing, the color curve u − r is very similar for the case of microlensing and nonmicrolensing. This is not the case for all color curves, as shown for example by g − z. 
