Issue: A&A, Volume 594, October 2016 (Planck 2015 results)
Article Number: A2
Number of pages: 35
Section: Astronomical instrumentation
DOI: https://doi.org/10.1051/0004-6361/201525818
Published online: 20 September 2016
Planck 2015 results
II. Low Frequency Instrument data processing
1 APC, AstroParticule et Cosmologie, Université Paris Diderot, CNRS/IN2P3, CEA/Irfu, Observatoire de Paris, Sorbonne Paris Cité, 10 rue Alice Domon et Léonie Duquet, 75205 Paris Cedex 13, France
2 Aalto University Metsähovi Radio Observatory and Dept. of Radio Science and Engineering, PO Box 13000, 00076 Aalto, Finland
3 African Institute for Mathematical Sciences, 6-8 Melrose Road, Muizenberg, Cape Town, South Africa
4 Agenzia Spaziale Italiana Science Data Center, via del Politecnico snc, 00133 Roma, Italy
5 Aix-Marseille Université, CNRS, LAM (Laboratoire d’Astrophysique de Marseille) UMR 7326, 13388 Marseille, France
6 Astrophysics Group, Cavendish Laboratory, University of Cambridge, J J Thomson Avenue, Cambridge CB3 0HE, UK
7 CGEE, SCS Qd 9, Lote C, Torre C, 4° andar, Ed. Parque Cidade Corporate, CEP 70308-200 Brasília, DF, Brazil
8 CITA, University of Toronto, 60 St. George St., Toronto, ON M5S 3H8, Canada
9 CNRS, IRAP, 9 Av. colonel Roche, BP 44346, 31028 Toulouse Cedex 4, France
10 CRANN, Trinity College, Dublin, Ireland
11 California Institute of Technology, 1200 E California Blvd, Pasadena, California 91125, USA
12 Centre for Theoretical Cosmology, DAMTP, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA, UK
13 Computational Cosmology Center, Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 94720, USA
14 Consejo Superior de Investigaciones Científicas (CSIC), Calle Serrano 117, 28006 Madrid, Spain
15 DSM/Irfu/SPP, CEA-Saclay, 91191 Gif-sur-Yvette Cedex, France
16 DTU Space, National Space Institute, Technical University of Denmark, Elektrovej 327, 2800 Kgs Lyngby, Denmark
17 Département de Physique Théorique, Université de Genève, 24 quai E. Ansermet, 1211 Genève 4, Switzerland
18 Departamento de Astrofísica, Universidad de La Laguna (ULL), 38206 La Laguna, Tenerife, Spain
19 Departamento de Física, Universidad de Oviedo, Avda. Calvo Sotelo s/n, 33007 Oviedo, Spain
20 Department of Astronomy and Astrophysics, University of Toronto, 50 Saint George Street, Toronto, ON M5S 3H4, Canada
21 Department of Astrophysics/IMAPP, Radboud University Nijmegen, PO Box 9010, 6500 GL Nijmegen, The Netherlands
22 Department of Physics & Astronomy, University of British Columbia, 6224 Agricultural Road, Vancouver, BC V6T 1Z1, Canada
23 Department of Physics and Astronomy, Dana and David Dornsife College of Letters, Arts and Sciences, University of Southern California, Los Angeles, CA 90089, USA
24 Department of Physics and Astronomy, University College London, London WC1E 6BT, UK
25 Department of Physics, Florida State University, Keen Physics Building, 77 Chieftan Way, Tallahassee, FL 32306, USA
26 Department of Physics, Gustaf Hällströmin katu 2a, University of Helsinki, 00014 Helsinki, Finland
27 Department of Physics, Princeton University, Princeton, NJ 08544-0708, USA
28 Department of Physics, University of California, Santa Barbara, CA 93106-9530, USA
29 Department of Physics, University of Illinois at Urbana-Champaign, 1110 West Green Street, Urbana, IL 61801-3080, USA
30 Dipartimento di Fisica e Astronomia G. Galilei, Università degli Studi di Padova, via Marzolo 8, 35131 Padova, Italy
31 Dipartimento di Fisica e Astronomia, ALMA MATER STUDIORUM, Università degli Studi di Bologna, viale Berti Pichat 6/2, 40127 Bologna, Italy
32 Dipartimento di Fisica e Scienze della Terra, Università di Ferrara, via Saragat 1, 44122 Ferrara, Italy
33 Dipartimento di Fisica, Università La Sapienza, P.le A. Moro 2, 00185 Roma, Italy
34 Dipartimento di Fisica, Università degli Studi di Milano, via Celoria 16, 20133 Milano, Italy
35 Dipartimento di Fisica, Università degli Studi di Trieste, via A. Valerio 2, 34127 Trieste, Italy
36 Dipartimento di Fisica, Università di Roma Tor Vergata, via della Ricerca Scientifica 1, 00133 Roma, Italy
37 Dipartimento di Matematica, Università di Roma Tor Vergata, via della Ricerca Scientifica 1, 00133 Roma, Italy
38 Discovery Center, Niels Bohr Institute, Blegdamsvej 17, 2100 Copenhagen, Denmark
39 Discovery Center, Niels Bohr Institute, Copenhagen University, Blegdamsvej 17, 2100 Copenhagen, Denmark
40 European Space Agency, ESAC, Planck Science Office, Camino bajo del Castillo, s/n, Urbanización Villafranca del Castillo, Villanueva de la Cañada, 28692 Madrid, Spain
41 European Space Agency, ESTEC, Keplerlaan 1, 2201 AZ Noordwijk, The Netherlands
42 Gran Sasso Science Institute, INFN, viale F. Crispi 7, 67100 L'Aquila, Italy
43 HGSFP and University of Heidelberg, Theoretical Physics Department, Philosophenweg 16, 69120 Heidelberg, Germany
44 Haverford College Astronomy Department, 370 Lancaster Avenue, Haverford, PA 19041, USA
45 Helsinki Institute of Physics, Gustaf Hällströmin katu 2, University of Helsinki, 00014 Helsinki, Finland
46 INAF–Osservatorio Astrofisico di Catania, via S. Sofia 78, 95123 Catania, Italy
47 INAF–Osservatorio Astronomico di Padova, Vicolo dell’Osservatorio 5, 35122 Padova, Italy
48 INAF–Osservatorio Astronomico di Roma, via di Frascati 33, 00078 Monte Porzio Catone, Italy
49 INAF–Osservatorio Astronomico di Trieste, via G.B. Tiepolo 11, 34143 Trieste, Italy
50 INAF/IASF Bologna, via Gobetti 101, 40129 Bologna, Italy
51 INAF/IASF Milano, via E. Bassini 15, 20133 Milano, Italy
52 INFN, Sezione di Bologna, via Irnerio 46, 40126 Bologna, Italy
53 INFN, Sezione di Roma 1, Università di Roma Sapienza, P.le Aldo Moro 2, 00185 Roma, Italy
54 INFN, Sezione di Roma 2, Università di Roma Tor Vergata, via della Ricerca Scientifica 1, 00133 Roma, Italy
55 INFN/National Institute for Nuclear Physics, via Valerio 2, 34127 Trieste, Italy
56 ISDC, Department of Astronomy, University of Geneva, Ch. d’Écogia 16, 1290 Versoix, Switzerland
57 IUCAA, Post Bag 4, Ganeshkhind, Pune University Campus, 411 007 Pune, India
58 Imperial College London, Astrophysics group, Blackett Laboratory, Prince Consort Road, London, SW7 2AZ, UK
59 Infrared Processing and Analysis Center, California Institute of Technology, Pasadena, CA 91125, USA
60 Institut Néel, CNRS, Université Joseph Fourier Grenoble I, 25 rue des Martyrs, 38042 Grenoble, France
61 Institut Universitaire de France, 103 bd Saint-Michel, 75005 Paris, France
62 Institut d’Astrophysique Spatiale, CNRS, Univ. Paris-Sud, Université Paris-Saclay, Bât. 121, 91405 Orsay Cedex, France
63 Institut d’Astrophysique de Paris, CNRS (UMR 7095), 98bis boulevard Arago, 75014 Paris, France
64 Institut für Theoretische Teilchenphysik und Kosmologie, RWTH Aachen University, 52056 Aachen, Germany
65 Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA, UK
66 Institute of Theoretical Astrophysics, University of Oslo, PO box 1029 Blindern, 0315 Oslo, Norway
67 Instituto de Astrofísica de Canarias, C/Vía Láctea s/n, 38205 La Laguna, Tenerife, Spain
68 Instituto de Física de Cantabria (CSIC-Universidad de Cantabria), Avda. de los Castros s/n, 39005 Santander, Spain
69 Istituto Nazionale di Fisica Nucleare, Sezione di Padova, via Marzolo 8, 35131 Padova, Italy
70 Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, CA 91109, USA
71 Jodrell Bank Centre for Astrophysics, Alan Turing Building, School of Physics and Astronomy, The University of Manchester, Oxford Road, Manchester, M13 9PL, UK
72 Kavli Institute for Cosmological Physics, University of Chicago, Chicago, IL 60637, USA
73 Kavli Institute for Cosmology Cambridge, Madingley Road, Cambridge, CB3 0HA, UK
74 Kazan Federal University, 18 Kremlyovskaya St., 420008 Kazan, Russia
75 LAL, Université Paris-Sud, CNRS/IN2P3, 91898 Orsay, France
76 LERMA, CNRS, Observatoire de Paris, 61 avenue de l’Observatoire, 75014 Paris, France
77 Laboratoire AIM, IRFU/Service d’Astrophysique – CEA/DSM – CNRS – Université Paris Diderot, Bât. 709, CEA-Saclay, 91191 Gif-sur-Yvette Cedex, France
78 Laboratoire Traitement et Communication de l’Information, CNRS (UMR 5141) and Télécom ParisTech, 46 rue Barrault, 75634 Paris Cedex 13, France
79 Laboratoire de Physique Subatomique et Cosmologie, Université Grenoble-Alpes, CNRS/IN2P3, 53 rue des Martyrs, 38026 Grenoble Cedex, France
80 Laboratoire de Physique Théorique, Université Paris-Sud 11 & CNRS, Bâtiment 210, 91405 Orsay, France
81 Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
82 Lebedev Physical Institute of the Russian Academy of Sciences, Astro Space Centre, 84/32 Profsoyuznaya st., Moscow, GSP-7, 117997, Russia
83 Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, 85741 Garching, Germany
84 McGill Physics, Ernest Rutherford Physics Building, McGill University, 3600 rue University, Montréal, QC, H3A 2T8, Canada
85 National University of Ireland, Department of Experimental Physics, Maynooth, Co. Kildare, Ireland
86 Nicolaus Copernicus Astronomical Center, Bartycka 18, 00-716 Warsaw, Poland
87 Niels Bohr Institute, Blegdamsvej 17, 2100 Copenhagen, Denmark
88 Niels Bohr Institute, Copenhagen University, Blegdamsvej 17, 2100 Copenhagen, Denmark
89 SISSA, Astrophysics Sector, via Bonomea 265, 34136, Trieste, Italy
90 SMARTEST Research Centre, Università degli Studi e-Campus, via Isimbardi 10, 22060 Novedrate (CO), Italy
91 School of Physics and Astronomy, Cardiff University, Queens Buildings, The Parade, Cardiff, CF24 3AA, UK
92 School of Physics and Astronomy, University of Nottingham, Nottingham NG7 2RD, UK
93 Sorbonne Université-UPMC, UMR 7095, Institut d’Astrophysique de Paris, 98bis boulevard Arago, 75014, Paris, France
94 Space Sciences Laboratory, University of California, Berkeley, CA 94720, USA
95 Special Astrophysical Observatory, Russian Academy of Sciences, Nizhnij Arkhyz, Zelenchukskiy region, 369167 Karachai-Cherkessian Republic, Russia
96 Sub-Department of Astrophysics, University of Oxford, Keble Road, Oxford OX1 3RH, UK
97 Theory Division, PH-TH, CERN, 1211 Geneva 23, Switzerland
98 UPMC Univ Paris 06, UMR 7095, 98bis boulevard Arago, 75014 Paris, France
99 Université de Toulouse, UPS-OMP, IRAP, 31028 Toulouse Cedex 4, France
100 University of Granada, Departamento de Física Teórica y del Cosmos, Facultad de Ciencias, 18003 Granada, Spain
101 University of Granada, Instituto Carlos I de Física Teórica y Computacional, 18071 Granada, Spain
102 Warsaw University Observatory, Aleje Ujazdowskie 4, 00-478 Warszawa, Poland
⋆ Corresponding author: A. Zacchei, e-mail: zacchei@oats.inaf.it
Received: 5 February 2015
Accepted: 4 February 2016
We present an updated description of the Planck Low Frequency Instrument (LFI) data processing pipeline, associated with the 2015 data release. We point out the places where our results and methods have remained unchanged since the 2013 paper and we highlight the changes made for the 2015 release, describing the products (especially timelines) and the ways in which they were obtained. We demonstrate that the pipeline is self-consistent (principally based on simulations) and report all null tests. For the first time, we present LFI maps in Stokes Q and U polarization. We refer to other related papers where more detailed descriptions of the LFI data processing pipeline may be found if needed.
Key words: space vehicles: instruments / methods: data analysis / cosmic background radiation
© ESO, 2016
1. Introduction
This paper, one of a set associated with the 2015 release of data from the Planck1 mission (Planck Collaboration I 2016), describes the Low Frequency Instrument (LFI) data processing that supports the second Planck cosmological release. Following the nominal mission of 15.5 months, the LFI in-flight operation was extended to fully exploit the lifetime of the Planck 20 K to 4 K cryogenic system, leading to a total of 48 months of observation (or eight full-sky surveys) with essentially unchanged instrument performance. This paper is an updated description of the LFI data processing (Planck Collaboration II 2014) that was part of the second wave of astrophysical results published in early 2014 (Planck Collaboration VIII–XXVI 2014), now incorporating the analysis of the full mission data, both in temperature and in polarization. This work describes the overall data flow of the pipeline implemented at the LFI data processing centre (DPC), including scientific telemetry from the instrument, housekeeping data, and frequency maps, as well as the tests applied to validate the data products. Detailed descriptions of critical aspects of the data analysis and products, including improvements in some of the algorithms used in the pipeline, are given in four companion papers. These discuss, respectively: systematic effects and the overall error budget (Planck Collaboration III 2016); the determination of the LFI main beams and window functions from in-flight planet-crossing measurements and optical modelling (Planck Collaboration IV 2016); photometric calibration, including methods adopted and related uncertainties (Planck Collaboration V 2016); and mapmaking, including the process used to obtain the low-resolution maps and their associated full noise covariance matrices (Planck Collaboration VI 2016). The main results and reference tables on all these topics are summarized in this paper.
This paper is organized as follows. We summarize the overall data processing pipeline in Sect. 3. Processing of the time-ordered information (TOI) is described in Sect. 4, with an emphasis on changes since Planck Collaboration II (2014). Section 6 describes important changes to our calculations of the LFI beams, which in turn have an effect on calibration, described in Sect. 7. LFI noise properties are described in Sect. 8. Sections 9 and 10 present Planck maps at 30, 44, and 70 GHz, both in temperature and in Q and U polarization, including the low-multipole maps needed to construct the Planck likelihood (Planck Collaboration XI 2016). Section 11 presents the major new results for this release, the LFI polarization maps, together with an analysis of systematic effects peculiar to polarization. Validation of the LFI products, especially by means of null tests, is discussed in Sect. 12, and the particular issue of data selection for low-ℓ analysis is considered in Sect. 13. Section 15 summarizes the LFI data products (for further details, see the Explanatory Supplement2 that accompanies the release of products and provides a detailed description of them). We conclude briefly in Sect. 16.
2. In-flight behaviour and operations
The Planck LFI instrument is described in Bersanelli et al. (2010) and Mennella et al. (2010). It comprises 11 radiometer chain assemblies (RCAs), two at 30 GHz, three at 44 GHz, and six at 70 GHz, each composed of two independent pseudo-correlation radiometers sensitive to orthogonal linear polarization modes. Each radiometer has two independent square-law diodes for detection, integration, and conversion from radio frequency signals to DC voltages. The focal plane is cryogenically cooled to 20 K, while the pseudo-correlation design uses internal, blackbody reference loads cooled to 4.5 K. The radiometer timelines are produced by taking differences between the signals from the sky, Vsky, and from the reference loads, Vref. Radiometer balance is optimized by introducing a gain modulation factor, typically stable within 0.02% throughout the entire mission, which greatly reduces 1/f noise and improves immunity to a wide class of systematic effects (Mennella et al. 2011). During the full operation period (ignoring a brief, less stable thermal period due to the sorption cooler switchover), the behaviour of all 22 LFI radiometers was stable, with 1/f knee frequencies unchanging within 9% and white noise levels within 0.5%. These results are in line with those found for the 15.5 month nominal mission period (Planck Collaboration II 2014).
Table 1. LFI performance parameters.
Fig. 1 Schematic representation of the Level 2 and pointing pipelines of the LFI DPC; elements in red identify those modified or augmented with respect to Planck Collaboration II (2014).
2.1. Operations
The data set released together with this paper was acquired from 12 August 2009 to 3 August 2013, roughly four years of observations. The first two years of data (from Survey 1 to Survey 4) were acquired scanning the sky with a phase angle of 340°, whereas for the last two years (from Survey 5 to Survey 8) the phase angle was shifted to 250° (see Planck Collaboration I 2016, for details). This shift has allowed for a more thorough investigation of systematic effects, including better characterization of the beam and the related Galactic straylight (see Sect. 7.4) using null tests based on survey differences. During the last three Jupiter crossings, the scanning strategy was optimized to obtain a better beam determination (see Sect. 6). The period from 3 August 2013 to 3 October 2013 was used to perform deep scanning of the Crab Nebula and of the regions near the minima of the cosmic microwave background (CMB) dipole, with the aim of determining the dipole direction with an alternative approach. These data are not included in this release, since they require specialized analysis, which is not yet complete.
2.2. Instrument performance update
In Table 1 we present a top-level summary of the instrument performance parameters measured in flight during the four years of operation of LFI. Optical properties have been reconstructed from Jupiter transits (Planck Collaboration IV 2016) and are in agreement with the estimates made for the 2013 release (Planck Collaboration IV 2014). The white noise sensitivity and the parameters describing the 1/f noise component are in line with the 2013 values (Planck Collaboration II 2014), demonstrating that cryogenic operation of the low-noise amplifiers and phase switches does not result in any significant aging effects over a period of four years. The overall calibration uncertainty, determined as the sum of the absolute and relative calibration uncertainties, is 0.35%, 0.26%, and 0.20% at 30, 44, and 70 GHz, respectively, improving by more than a factor of 2 over the LFI 2013 calibration (Planck Collaboration V 2014). The residual systematic uncertainty was computed for both temperature and polarization; it varies between 1 and 3 μKCMB in temperature and polarization (Planck Collaboration III 2016). It should be noted that the uncertainty arising from systematic effects is lower than in the previous release (Planck Collaboration IV 2014); this is principally due to the straylight removal and the new iterative calibration algorithm now used.
3. Data processing overview
As in Planck Collaboration II (2014), the processing of LFI data is divided into three levels, shown schematically in Fig. 1. The main changes compared to the earlier release are related to the way in which we take into account the beam information in the pipeline processing, as well as an entire overhaul of the iterative algorithm used to calibrate the raw data. According to the LFI scheme, processing starts at Level 1, which retrieves all the necessary information from data packets and auxiliary data received from the Mission Operation Centre, and transforms the scientific packets and housekeeping data into a form manageable by Level 2. Level 2 uses scientific and housekeeping information to:
- build the LFI reduced instrument model (RIMO), which contains the main characteristics of the instrument;
- remove analogue-to-digital converter (ADC) non-linearities and 1 Hz spikes diode by diode (see Sects. 4.2 and 4.3);
- compute and apply the gain modulation factor to minimize 1/f noise (see Sect. 4.4);
- combine signals from the two diodes of each radiometer (see Sect. 4.5);
- compute the appropriate detector pointing for each sample, based on auxiliary data and beam information corrected by a model (PTCOR) built using solar distance and radiometer electronics box assembly (REBA) temperature information (see Sect. 5);
- calibrate the scientific timelines to physical units (KCMB), fitting the total CMB dipole convolved with the 4π beam representation (see Sect. 7), without taking into account the signature due to Galactic straylight (see Sect. 7.4);
- remove from the calibrated scientific timelines the solar and orbital dipole convolved with the 4π beam representation, and the Galactic emission convolved with the beam sidelobes (see Sect. 7.4);
- combine the calibrated time-ordered information (TOI) into aggregate products, such as maps at each frequency (see Sect. 9).
Level 3 collects Level 2 outputs from both HFI (Planck Collaboration VIII 2016) and LFI and derives various products, such as component-separated maps of astrophysical foregrounds, catalogues of different classes of sources, and the likelihood of cosmological and astrophysical models given the maps.
4. Time-ordered information (TOI) processing
The Level 1 pipeline, which is responsible for receiving telemetry data and sorting them into a form manageable by the Level 2 pipeline, has not changed with respect to the 2013 release; we therefore refer to Planck Collaboration II (2014) for its description. In this section, we move directly to a discussion of the Level 2 pipeline.
4.1. Input flags
The flagging procedure used was exactly the same as described in Planck Collaboration II (2014). In Table 2 we give the percentage of usable and unused data for the full mission. It should be noted that, compared with the same table in Planck Collaboration II (2014), the amount of missing data (where by “missing” we mean packets that were not received on the ground) is larger, owing to two technical problems experienced with the spacecraft that prevented data from being downloaded for 2 days of observation. On the other hand, the data lost to anomalies decreased, thanks to better control of the instrument's temperature stability. The percentages of time spent on spacecraft manoeuvres are the same for the three frequencies, and as a consequence the fraction of data used in the science analysis is similar (more than 90%) at each frequency.
Table 2. Percentage of LFI observation time lost due to missing or unusable data, and to manoeuvres.
4.2. ADC non-linearity correction
The ADCs convert the analogue detector voltages to numbers; their linearity is as important as that of the receivers and detectors, with any departure appearing as a distortion in the system power response curve. While the algorithm for determining the ADC corrections remains the same as described in Planck Collaboration IV (2014), some changes were made in its implementation and execution. First, the full mission data are now used, improving the signal-to-noise ratio wherever detector voltages are revisited (although in the particular case of radiometer 21M, some of the voltages were still too poorly sampled to generate an adequate solution). Second, instead of determining the white noise amplitude via a Fourier transform, we now use the difference between the sum of the variances and twice the covariance of adjacent paired points in the timestream, i.e. the white noise variance is estimated as Var(Xo) + Var(Xe) − 2 Cov(Xo, Xe), where Xo and Xe are data points with odd and even indices, respectively. This not only increased the speed of calculating the noise amplitude, but also avoided the iteration steps, since these can be done analytically from the initial variance-covariance estimates. Finally, data acquisition electronics (DAE) offset changes made on operational day (OD) 953 to avoid saturation also shifted the apparent ADC voltage relative to the true detector voltage. A separate ADC correction had to be generated and applied to radiometers 22M and 23S using only the post-OD 953 data.
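As an illustration of this variance-covariance estimator, the following is a minimal sketch (not DPC code) that computes the quantity described above from a plain NumPy array of detector voltages; the function name and normalization convention are illustrative assumptions.

```python
import numpy as np

def white_noise_estimate(timestream):
    """Estimate the white-noise level from adjacent paired samples.

    Splits the timestream into odd- and even-indexed points X_o, X_e and
    returns Var(X_o) + Var(X_e) - 2 Cov(X_o, X_e): the covariance term
    removes the low-frequency (1/f) component shared by neighbouring
    samples.  For purely white noise of per-sample variance sigma^2 this
    quantity equals 2 * sigma^2; the pipeline's exact normalization
    convention is not reproduced here.
    """
    n = timestream.size // 2 * 2           # use an even number of samples
    x_e = timestream[0:n:2]                # even-indexed points
    x_o = timestream[1:n:2]                # odd-indexed points
    cov = np.cov(x_o, x_e)                 # 2x2 variance-covariance matrix
    return cov[0, 0] + cov[1, 1] - 2.0 * cov[0, 1]

# Example: white noise of rms 1 mV gives a value close to 2e-6 V^2.
rng = np.random.default_rng(0)
print(white_noise_estimate(rng.normal(0.0, 1e-3, 100_000)))
```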
The ability to recover the correct ADC solution, and the level of the residuals, was assessed by simulating time-ordered data with the same noise statistics, voltage drift, gain fluctuations, and sky signal as the flight data, but with a known ADC error. Since the correction in the DPC pipeline is a lookup table of input to output detector voltages, to which a spline is fitted to interpolate the TOI voltages, we introduced the ADC error as the spline curve with the input and output voltages swapped, thereby generating the inverse of the measured ADC effect. Comparing the input spline curves with the recovered ones showed agreement at the level of a few percent, leading to rms errors on the residual simulated frequency maps of ≈0.1 μKCMB at 30 and 44 GHz and ≈0.4 μKCMB at 70 GHz, in both temperature and polarization. These simulations and results are summarized in more detail in Planck Collaboration III (2016).
4.3. Corrections for electronic spikes
Electronic spikes are caused by the interaction between the electronics clock and the scientific data lines. They occur in the data acquisition electronics (DAE) after the detector diodes and before the analogue-to-digital converters (ADC; Meinhold et al. 2009; Mennella et al. 2010, 2011). The signal is detected in the time-domain output of all the LFI radiometers as a 1 s square wave with a rising edge near 0.5 s and a falling edge near 0.75 s, synchronous with the on-board time signal. In the frequency domain it appears as a spike signal at multiples of 1 Hz. The 44 GHz channels are the only ones significantly affected by this effect; consequently, the spike signal is removed from the data only in these channels. The procedure consists of the subtraction of a fitted square-wave template from the time-domain data, as described in Planck Collaboration III (2016) (a schematic sketch of this subtraction is given after the list below). We are evaluating the possibility of further reducing the residual effect of the spike signal at the map level, as described in Planck Collaboration III (2016), for the next Planck data release, by adopting one or more of the following approaches:
- increasing the resolution of the square-wave template, at the moment 80 Hz;
- using a time-varying template instead of a single fixed template for the whole mission;
- removing the spike signal from the 30 GHz and 70 GHz channels as well.
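The following is a minimal sketch of the kind of template subtraction applied to the 44 GHz timelines: it estimates a 1 s periodic template by phase-binning the data (rather than fitting the pipeline's parametrized square wave) and subtracts it. The sampling rate, bin count, and function name are illustrative assumptions.

```python
import numpy as np

def remove_1hz_template(data, fsamp, nbins=80):
    """Estimate and subtract a 1 s periodic (square-wave-like) template.

    data  : 1-D array of detector samples covering many 1 s periods
    fsamp : sampling frequency in Hz
    nbins : phase bins per 1 s period (80 matches the template resolution
            quoted in the text)
    """
    t = np.arange(data.size) / fsamp                    # time in seconds
    phase_bin = np.floor((t % 1.0) * nbins).astype(int)
    # Template = mean signal in each phase bin, with the overall mean removed
    # so that only the periodic (spike) component is subtracted.
    template = np.array([data[phase_bin == b].mean() for b in range(nbins)])
    template -= template.mean()
    return data - template[phase_bin]
```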
4.4. Demodulation: gain modulation factor estimation and application
Each Planck LFI diode switches at 4096 Hz (Mennella et al. 2010) between the sky and the 4 K reference load. The data acquired in this way are dominated by 1/f noise that is highly correlated between the two streams (Bersanelli et al. 2010); differencing those streams results in a strong reduction of the 1/f noise. The procedure applied differs from that discussed in Planck Collaboration II (2014) in only one way: the Galaxy and point sources are masked from the time-ordered data used in the computation of the gain modulation factor R (GMF in Fig. 1). The overall variation of R over the whole mission is less than 0.02% for every LFI channel. A full description of the theory of this correction can be found in Mennella et al. (2011).
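As a rough illustration of this step, the sketch below estimates R as the ratio of the mean (masked) sky and reference voltages and forms the differenced stream; this is a simplified stand-in for the procedure detailed in Mennella et al. (2011), and the function and variable names are ours.

```python
import numpy as np

def difference_streams(v_sky, v_ref, mask=None):
    """Difference sky and reference-load streams with a gain modulation factor.

    The strongly correlated 1/f fluctuations largely cancel in
    v_sky - R * v_ref.  `mask` flags samples (Galaxy, point sources) to be
    excluded from the estimate of R, as described in the text.
    """
    use = np.ones(v_sky.size, dtype=bool) if mask is None else ~mask
    r = np.mean(v_sky[use]) / np.mean(v_ref[use])   # gain modulation factor R
    return v_sky - r * v_ref, r
```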
4.5. Combining diodes
Two detector diodes provide the output for each Planck LFI receiver channel. To minimize the impact of imperfect isolation between the two diodes, we perform a weighted average of the time-ordered data from the two diodes of each receiver just before differencing. The procedure applied is the same as that described in Planck Collaboration II (2014); for the sake of completeness, we report in Table 3 the values of the weights used, where the receiver channels are indicated with either M (main) or S (side). The weights are kept fixed for the whole mission.
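A minimal sketch of this combination is given below; inverse-variance weighting is shown only as an illustrative way of choosing the weights, whereas the pipeline uses the fixed values of Table 3.

```python
import numpy as np

def diode_weights(sigma0, sigma1):
    """Illustrative inverse-variance weights, normalized so that w0 + w1 = 1."""
    w0 = sigma0**-2 / (sigma0**-2 + sigma1**-2)
    return w0, 1.0 - w0

def combine_diodes(d0, d1, w0, w1):
    """Weighted average of the two diode timestreams of one radiometer."""
    return w0 * np.asarray(d0) + w1 * np.asarray(d1)
```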
Table 3. Weights used in combining diodes.
Table 4. Approximate dates of the Jupiter observations.
5. Pointing
The long time scale pointing correction, PTCOR, has been modified, and is now based on the solar distance and radiometer electronics box assembly (REBA) thermometry. Unlike in 2013, the reconstructed satellite attitude is now uniform across both of the Planck instruments and is discussed in detail in the mission overview paper, Planck Collaboration I (2016).
Table 5. Focal plane geometry.
Table 6. Main beam descriptive parameters of the scanning beams, with ±1σ uncertainties.
6. Main beams and the geometrical calibration of the focal plane
The in-flight assessment of the LFI main beams relies on measurements performed during seven Jupiter crossings: the first four transits (“J1” to “J4”) occurred in nominal scan mode (spin shift 2 arcmin, 1 deg per day), and the last three (“J5” to “J7”) in a deeper coverage mode (spin shift 0.5 arcmin, 15 arcmin per day). The period of time corresponding to each Jupiter observation is reported in Table 4. By stacking data from the seven scans, we measure the main beam profiles down to −25 dB at 30 and 44 GHz, and down to −30 dB at 70 GHz. Fitting the main beam shapes with elliptical Gaussian profiles allows the uncertainties of the measured scanning beams to be expressed in terms of statistical errors on the Gaussian parameters. With respect to the 2013 release, the improvement in the signal-to-noise ratio, due to the larger number of samples and the better sky coverage, is about a factor of 2. The beam full width at half maximum (FWHM) is determined with a typical uncertainty of 0.2% at 30 and 44 GHz, and 0.1% at 70 GHz, approximately a factor of 2 better than the value achieved in 2013. The fitting procedure also returns the main beam pointing directions in the Planck field of view (i.e. the focal plane geometry), centred along the nominal line of sight as defined in Tauber et al. (2010).
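A schematic version of such an elliptical Gaussian fit to a gridded planet map is sketched below; the data arrays, initial guess, and signal-to-noise threshold are placeholders, not the DPC values.

```python
import numpy as np
from scipy.optimize import curve_fit

def elliptical_gaussian(xy, amp, x0, y0, fwhm_maj, fwhm_min, psi):
    """Elliptical Gaussian beam model; xy is a (2, N) array of coordinates."""
    x, y = xy
    s_maj = fwhm_maj / np.sqrt(8.0 * np.log(2.0))
    s_min = fwhm_min / np.sqrt(8.0 * np.log(2.0))
    xr = (x - x0) * np.cos(psi) + (y - y0) * np.sin(psi)
    yr = -(x - x0) * np.sin(psi) + (y - y0) * np.cos(psi)
    return amp * np.exp(-0.5 * ((xr / s_maj) ** 2 + (yr / s_min) ** 2))

# Hypothetical usage on a gridded planet map, keeping only pixels with
# signal-to-noise ratio above 3 (cf. Sect. 6.2):
# good = planet_map.ravel() > 3.0 * noise_rms
# popt, pcov = curve_fit(elliptical_gaussian, coords[:, good],
#                        planet_map.ravel()[good], p0=initial_guess)
```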
We determined the focal plane geometry of LFI independently for each Jupiter crossing (Planck Collaboration IV 2016), using the same procedure as adopted in the 2013 release. The solutions for the seven crossings agree within 4 arcsec at 70 GHz, and 7 arcsec at 30 and 44 GHz. The uncertainty in the determination of the main beam pointing directions evaluated from the single scans is about 4 arcsec for the nominal scans, and 2.5 arcsec for the deep scans at 70 GHz (27 arcsec for the nominal scan and 19 arcsec for the deep scan, at 30 and 44 GHz). Stacking the seven Jupiter transits, the uncertainty in the reconstructed main beam pointing directions becomes 0.6 arcsec at 70 GHz and 2 arcsec at 30 and 44 GHz. With respect to the 2013 release, we have found a difference in the main beam pointing directions of about 5 arcsec in the cross-scan direction and 0.6 arcsec in the in-scan direction. The beam centres and polarization orientation are defined by four parameters, θuv and φuv, which define the beam pointing reconstructed using the stacked Jupiter transits; and ψuv and ψpol defining the polarization orientation of the beam (see Planck Collaboration IV 2016; Planck Collaboration 2013 for the definitions of these angles); their values for all the LFI radiometers are reported in Table 5. Only θuv and φuv, which are the beam pointing in spherical coordinates referred to the line of sight, can be determined with Jupiter observations. The polarization orientation of the beams, defined by ψuv + ψpol, is estimated based on the geometry of the waveguide components in the LFI focal plane (which for coherent detectors defines the polarization planes to high precision), reprojected in the sky through our GRASP model. As discussed in Planck Collaboration III (2016), direct measurements of bright polarized sources (such as the Crab Nebula) provide only loose constraints, and our final uncertainties on the polarization angles have been evaluated through simulations.
Details of the LFI main beam reconstruction and focal plane geometry evaluation are reported in Planck Collaboration IV (2016).
6.1. Scanning beams
The “scanning beams” used in the LFI pipeline (affecting calibration, effective beams, and beam window functions; see Table 6 for the main beam descriptive parameters) are very similar to those presented in Planck Collaboration IV (2014): they are GRASP beams appropriately smeared to take into account the satellite motion. They come from a tuned optical model and represent the most realistic fit to the available measurements of the LFI main beams. These beams have now been validated using seven Jupiter transits. The Jupiter scans allow us to measure the total field, that is, the co- and cross-polar components combined in quadrature. The adopted beam model has the added advantage that it allows the co- and cross-polar patterns to be defined separately; it also permits us to properly consider the beam cross-polarization in every step of the LFI pipeline. The scanning beams reconstructed from the Jupiter transits are shown in Fig. 2.
Unlike in Planck Collaboration IV (2014), where the main beams were full-power main beams and the resulting beam window functions were normalized to unity (because the calibration was performed assuming a pencil beam), a different beam normalization is introduced here to properly take into account the power entering the main beam (typically about 99% of the total power). Indeed, as described in Planck Collaboration V (2016), the current LFI calibration takes into account the full 4π beam (i.e. the main beam, as well as near and far sidelobes). Consequently, in the calculation of the window function, the beams are not normalized to unity; instead, their normalization takes into account the real efficiency calculated by considering the variation across the band of the optical response (coupling between feedhorn pattern and telescope) and the radiometric response (band shape). This affects flux densities derived from the maps (see Sect. 7.2).
Table 7. Mean and rms variation across the sky of the FWHM, ellipticity, orientation, and solid angle of the FEBeCoP effective beams, computed with the GRASP beam-fitted scanning beams.
In addition, “hybrid beams” have been created using planet measurements above 20 dB from the main beam power peak and GRASP beams below this threshold. The hybrid beams have been normalized to match the GRASP beams (i.e. the main beam efficiency is set to be the same). Hybrid beams have been used to perform a further check on the consistency between the GRASP model and the planet data, in terms of window functions. Further details are reported in Planck Collaboration IV (2016).
Fig. 2 Scanning beams reconstructed from Jupiter observations. The beams are plotted in logarithmic contours of −3, −10, −20, and −30 dB from the peak for the 70 GHz channel (horns 18–23), and of −3, −10, −20, and −25 dB from the peak for the 30 and 44 GHz channels (horns 27 and 28, and 24–26, respectively). The main and side arms are indicated with black and blue lines, respectively.
6.2. Effective beams
The GRASP combined co- and cross-polar main beams are used to calculate the “effective beams”, which take into account the specific scanning strategy and pointing information in order to include any smearing and orientation effects on the beams themselves. We compute the effective beam at each LFI frequency in real space from the scanning beam and the scan history, using the FEBeCoP method (Mitra et al. 2011). Effective beams are used to calculate the effective beam window function, as reported in Planck Collaboration IV (2016), and in the source detection pipeline used to generate the PCCS catalogue (Planck Collaboration XXVI 2016). Table 7 lists the mean and rms variation across the sky of the main parameters computed with FEBeCoP. Note that the FWHM and ellipticity in Table 7 differ slightly from the values reported in Table 6. This results from the different ways in which the Gaussian fits were performed: the scanning beam fit is determined by fitting the profile of Jupiter on the timelines, limited to data with a signal-to-noise ratio greater than 3, while the fit of the effective beam is computed on GRASP maps projected at several positions on the sky (Planck Collaboration IV 2016); the latter are less affected by noise.
6.3. Window functions
Window functions based on the LFI beams are needed for the production of the LFI likelihoods and power spectra. They are based on the revised FEBeCoP (effective) beams discussed earlier in this section, and account for the renormalization of the beams described in Sect. 7.2. The derivation of the 2015 window functions is fully described in Planck Collaboration IV (2016), as are the uncertainties in the window functions. The uncertainties are sharply reduced from the previous release and are: 0.7% for the 30 GHz band (evaluated at ℓ = 600); 1.0% at 44 GHz (also evaluated at ℓ = 600); and 0.5% in the 70 GHz window function at ℓ = 1000.
7. Photometric calibration
With the term “photometric calibration”, we indicate the process that converts the raw voltages V measured by the LFI radiometers into a thermodynamic temperature. The response of an LFI radiometer to a change in the temperature coming from the sky can be modelled by the following equation:

V(t) = G(t) [ B ∗ T + T0 ] ,   (1)

where B is the beam response, the temperature T = D + TCMB + Tsky is decomposed into the sum of three terms (the dipole induced by the motion of the Solar System plus the Planck spacecraft, the CMB, and any other foregrounds), and T0 is a constant offset, which includes both instrumental offsets and the CMB monopole. The quantity G is the unknown term in the calibration problem, and its inverse K = G⁻¹, the “calibration constant”, is used to convert the timestream of voltages V(t) into temperatures.
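The inversion of this model is straightforward once G and T0 are known; the minimal sketch below (our own illustration, not pipeline code) applies K = 1/G sample by sample, assuming the form of Eq. (1) given above.

```python
import numpy as np

def calibrate_timeline(voltages, gains, t_offset):
    """Convert raw voltages to temperatures, inverting V = G * (B*T + T0).

    voltages : raw detector voltages V(t)
    gains    : gain G(t) estimated by the calibration pipeline, per sample
    t_offset : the constant term T0 (instrumental offset plus CMB monopole)
    Returns the beam-convolved temperature B*T for each sample, in K_CMB.
    """
    k = 1.0 / np.asarray(gains)            # calibration constant K = 1/G
    return k * np.asarray(voltages) - t_offset
```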
Planck’s calibration source has always been the dipole term, D. However, since the previous Planck data release (Planck Collaboration V 2014) we have implemented a number of important changes in the pipeline used to calibrate the voltages measured by the LFI radiometers. In this section we provide an overview of the most important results; we refer the reader interested in further details to Planck Collaboration V (2016).
We use as a calibration source the signal B ∗ D in Eq. (1), which is induced by the combined motion of the spacecraft and the Solar System with respect to the CMB rest frame. We have characterized the dipole by means of Planck data, estimating its amplitude to be (3364.5 ± 2.0) μKCMB, with the direction in Galactic coordinates given in Planck Collaboration I (2016). This represents an approximately 0.3% increase in the amplitude with respect to the dipole used in the 2013 data release, which was based on the results of Hinshaw et al. (2009).
7.1. 4π calibration
When we apply Eq. (1) to solve the calibration problem, we compute the value of B ∗ D by means of a full 4π convolution over the sphere between the dipole signal (plus the relativistic quadrupole component) and the beam response. This differs from what other experiments have done when using the dipole as a calibrator; WMAP and HFI, for example, assume the beam to be a Dirac delta function. Our approach allows us to properly take into account the asymmetric effect of the sidelobes and the efficiency of the main beam during the calibration, which is critical for polarization, especially at low multipoles. Indeed, as discussed in Planck Collaboration V (2014), the introduction of 4π calibration resulted in a significant improvement in the self-consistency of survey maps, as demonstrated by null-test analysis.
It can be demonstrated (Planck Collaboration V 2014; Planck Collaboration V 2016) that the average level of the power spectrum, before convolving it with the beam window function, changes with respect to the Dirac delta case by a multiplicative factor that depends on fsl, the sidelobe fraction of the beam, and on the quantities φD ≲ 0.2% and φsky ≈ 0.01%, which are defined and discussed in Planck Collaboration V (2016); they depend on the beam and the scanning strategy, and are therefore radiometer-dependent. For the LFI radiometers the typical value of this factor deviates from unity by less than 1%.
The solution of Eq. (1) is provided by an iterative destriper, DaCapo, which supersedes the dipole-fitting code used in the 2013 data release. At each step the iterative procedure determines the radiometer gains by fitting D to the data, at the same time extracting the contribution from the sky signal. Because of the degeneracy between the overall gain level and the signal D, it makes sense to constrain the map dipole to the model; for this to work, the contribution of foregrounds to the dipole on the sky must be included in the dipole model.
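The toy sketch below conveys the idea of such an alternating fit for a single gain and a binned sky estimate; it is our own simplification and omits the destriping baselines, multiple gain periods, noise weighting, and dipole constraints handled by DaCapo.

```python
import numpy as np

def toy_gain_and_sky(tod, dipole, pix, npix, niter=10):
    """Alternately fit one gain against a dipole+sky model and rebin the sky.

    tod    : uncalibrated timeline (volts)
    dipole : beam-convolved dipole template (K_CMB) for each sample
    pix    : sky pixel index observed by each sample
    """
    gain = np.dot(tod, dipole) / np.dot(dipole, dipole)   # first guess
    hits = np.bincount(pix, minlength=npix).clip(min=1)
    sky = np.zeros(npix)
    for _ in range(niter):
        # Bin the current calibrated, dipole-removed data into a sky map.
        sky = np.bincount(pix, weights=tod / gain - dipole,
                          minlength=npix) / hits
        # Refit the gain against the full (dipole + sky) model.
        model = dipole + sky[pix]
        gain = np.dot(tod, model) / np.dot(model, model)
    return gain, sky
```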
The 4π beam model used in the calibration was created taking into account the bandpass of each radiometer (measured before flight). For each radiometer, about 25 realizations of the main beam, intermediate beam, and sidelobes were produced at fixed frequencies, chosen to fully sample the shape of the bandpass (as shown for the LFI 18M bandpass and selected frequencies in Fig. 3). These realizations were then used to construct a weighted 4π beam for each radiometer.
Fig. 3 Illustration of the method used to produce LFI synthesized beams weighted by the radiometer response (in this case LFI 18M). The vertical lines identify the frequencies at which the beam has been simulated within the radiometer bandpass T(ν). The results are then used to construct a weighted 4π beam for each radiometer. Details on the bandpass measurements can be found in Villa et al. (2010).
7.2. Impact of 4π calibration on beam functions and source fluxes
The mapping procedure assumes a pencil beam (Planck Collaboration VI 2016), which, in the ideal case of a circularly-symmetric beam, would yield a map of the beam-convolved sky; therefore a fraction of the signal from any source appears in the far sidelobes, and would be missed by integration of the map over the main beam alone. By the same token, bright resolved features in the map have temperatures fractionally lower than in the sky, due to signal lost in the sidelobes. In essence this description remains true even given the highly asymmetric sidelobes of the Planck beam: the main difference is that the far sidelobe contribution to a given pixel varies according to the orientation of the satellite at the time of observation. For LFI beams, roughly 1% of the signal is in the sidelobes and this must be accounted for in any analysis of the maps. In particular, the flux densities of compact sources measured from the maps must be scaled up by the multiplicative factors fsour reported in Table 8. These values have been computed from:
- the main beam efficiencies (Planck Collaboration IV 2016);
- a re-normalization factor introduced by the calibration pipeline to compensate for the missing power in the 4π beam (Planck Collaboration V 2016); the re-normalized beam efficiencies ηnorm are also reported in Table 8;
- a factor that takes into account the uniform horn weights applied during the mapmaking process (Planck Collaboration VI 2016).
The re-normalization factor was introduced to take into account the “missing power” due to the first-order approximation adopted in the computation carried out with the GRASP Multireflector Geometrical Theory of Diffraction (MrGTD) software package (Planck Collaboration IV 2016). The missing power was distributed proportionally between the main, intermediate, and sidelobe parts of the beam; this procedure has an effect on the previously computed beam functions, which have therefore been scaled by the factor fBl reported in Table 8.
Table 8. Multiplicative factors that should be used to determine the correct flux densities from compact sources.
In practice, in order to make consistent comparisons with external data, it is essential that (a minimal sketch of both rescalings is given after this list):
- users interested in CMB and diffuse component analysis use the official LFI beam functions (in the LFI RIMO available in the Planck Legacy Archive interface3), which already include the rescaling factor fBl; alternatively, users who wish to perform their own beam deconvolution should multiply their beam functions by the factor fBl;
- users interested in point sources apply the recalibration factors fsour to obtain proper flux densities for sources extracted directly from LFI maps.
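The sketch below illustrates how these two factors enter in practice; the numerical values of fBl and fsour come from Table 8 (not reproduced here) and the function names are ours.

```python
import numpy as np

def rescale_beam_function(b_ell, f_bl):
    """Scale a user-supplied beam window function by f_Bl (Table 8)."""
    return np.asarray(b_ell) * f_bl

def rescale_flux_density(flux_from_map, f_sour):
    """Scale a flux density measured directly on an LFI map by f_sour (Table 8)."""
    return flux_from_map * f_sour
```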
7.3. Smoothing algorithm
The uncertainty in the calibration constants increases significantly when the Planck spacecraft is aligned such that the observed scan circle measures a low dipole component (“minimum dipole”). This problem was particularly severe in Surveys 2 and 4, as shown in Fig. 4.
Fig. 4 Raw gain from radiometer 27M throughout the 4 yr mission. Pid is a counter for pointings of the spin axis, which had an average duration of about 45 min (Planck Collaboration I 2014). The increase of noise corresponding to the periods of “minimum dipole” (see text) is clearly visible in each of the eight surveys. Survey 2 (Pid range approximately 5200−10 000) and Survey 4 (Pid approximately 15 700−20 600) exhibit significantly higher noise, as expected from the unfavourable alignment of the spacecraft spin axis with the solar dipole in those two surveys.
Fig. 5 High frequency fluctuations of the raw gain from radiometer 27M throughout the 4 yr mission. The major decrease in high frequency variations occurs after the transponder was left continuously on (at Pid = 5000). Subsequently the high frequency variations are generally ≪1%. |
To reduce the noise, we apply an adaptive smoothing algorithm that is also designed to preserve the discontinuities caused by abrupt changes in the working configuration of the radiometers (e.g. sudden temperature changes in the focal plane). Moreover, we apply an additive, zero-mean correction to the calibration constants derived from measurements of the emission of an internal load kept at a stable temperature of approximately 4.5 K, plus the measurement of a set of temperature sensors mounted on the focal plane of LFI. The amplitude of this correction is quite small (≪1%, see Fig. 5), but its purpose is to account for two phenomena.
1. During the first survey, the transponder used to download data to Earth was repeatedly turned on and off with a 24 h duty cycle. This caused periodic fluctuations in the temperature of the back-end amplifiers, which were clearly traceable in the signal of the 4.5 K load (Mennella et al. 2011), but are not visible in the calibration constants computed using the dipole, because of statistical noise (this is particularly true during dipole minima).
2. In general, during a dipole minimum, we are not able to keep track of variations in the gain of the radiometers. However, knowledge of the internal 4.5 K signal allows us to estimate an additive correction factor that mitigates the problem.
Fig. 6 Simulated Galactic straylight in total intensity for representative LFI radiometers over the full mission period. Top: 70 GHz radiometers 18M (right) and 18S (left). Middle: 44 GHz radiometers 24M (right) and 24S (left). Bottom: 30 GHz radiometers 27M (right) and 27S (left). The faint stripes paralleling the scanning direction are due to the different coverage of the sky during different surveys.
7.4. Galactic straylight removal
Light incident on the focal plane that does not reflect directly off the primary mirror (straylight) is a major source of systematic effects, especially when the Galactic plane intersects the direction of the main spillover. This effect is now corrected by removing the estimated straylight signal from the timelines. To do this, the term Bsl ∗ Tsky of Eq. (1) is removed from the calibrated timelines (here Bsl represents the sidelobe contribution to the beam). This term was computed for each radiometer by convolving both Galactic and extragalactic emission with the antenna pattern in the sidelobe region (at angles θ > 5° from the main beam pointing direction). Here Tsky was estimated using simulated temperature and polarization maps. These included the main diffuse Galactic components (synchrotron, free-free, thermal dust, and anomalous dust emission), as well as the contributions from faint and strong radio sources and from the thermal and kinetic Sunyaev-Zeldovich effects (although the last is barely relevant at LFI frequencies), as described in Planck Collaboration IX (2016) and Planck Collaboration X (2016). These maps are weighted across the band using the transmission function specific to each radiometer and then summed together. For polarization, the contributions from both synchrotron and thermal dust have been considered.
The convolution was performed by transforming both the sky and the sidelobe pattern into spherical harmonic coefficients up to multipole ℓ = 2048. These coefficients are then multiplied appropriately to produce an object containing the convolution results for each position on the sky (θ,φ) and beam orientation angle ψ. For each sample in the timeline, the straylight contribution was then evaluated by polynomial interpolation. Figure 6 shows the expected Galactic straylight contribution in total intensity for a sample of LFI radiometers (both main and side arms), one pair at each frequency, covering the full mission period.
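A highly simplified version of this convolution, assuming an axisymmetric sidelobe window and ignoring the beam orientation angle ψ, can be written with healpy as below; the real pipeline uses the full orientation-dependent convolution and a polynomial interpolation in (θ, φ, ψ), so this is only a schematic illustration with hypothetical inputs.

```python
import numpy as np
import healpy as hp

def straylight_timeline(sky_map, sidelobe_bl, theta, phi, lmax=2048):
    """Crude straylight estimate along the scan.

    sky_map     : simulated foreground map (temperature only here)
    sidelobe_bl : axisymmetric window function approximating the sidelobes
    theta, phi  : pointing of each sample, in radians
    """
    nside = hp.get_nside(sky_map)
    alm = hp.map2alm(sky_map, lmax=lmax)
    alm = hp.almxfl(alm, sidelobe_bl)              # apply the sidelobe window
    convolved = hp.alm2map(alm, nside, lmax=lmax)  # sidelobe-convolved sky
    return convolved[hp.ang2pix(nside, theta, phi)]
```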
7.5. Colour correction
Colour corrections are required to adjust LFI measurements for sources or foregrounds that do not have a thermal spectrum. Our initial estimates were listed in Planck Collaboration V (2014) for each LFI radiometer and frequency band. For power-law spectra they can be well approximated by a quadratic relationship between flux density and spectral index α (or equivalently temperature spectral index β = α−2), where the quadratic coefficient is proportional to the square of the fractional bandwidth, and the linear term mainly depends on the value of the chosen reference frequency (Leahy & Foley 2006). The constant component is constrained by the requirement of zero colour correction for the CMB spectrum, so there are two free parameters in the model. Accurate quadratic fits are used in the fastcc IDL code included in the Planck unit conversion and colour correction software package.
The more detailed component-separation analysis for the 2015 release (Planck Collaboration X 2016) has allowed us to further constrain the colour corrections, which in the 2013 release were based purely on ground-based measurements and modelling of the radiometer bandpasses. In the more recent analyses we used separate maps from each of the three co-scanning pairs of 70 GHz horns. The analysis uses maps from LFI, HFI, and WMAP, which include several pairs of channels spaced closely in frequency. Using the nominal colour corrections for the three instruments, highly significant and systematic residuals, resembling gain errors, were found with respect to our best-fit models for the strong Galactic emission; however, gain errors can be ruled out, because there were no detectable residuals correlated with the CMB emission. We therefore assume that the previous colour corrections caused the residuals, and have tried to improve them.
A first attempt has been made to derive improved colour corrections by fitting for a frequency shift in the bandpass as part of the component separation analysis. This minimal model was adopted to avoid a strong degeneracy between the bandpass recalibration and the foreground spectral models; it is certainly an oversimplification. The resulting fractional change of frequency is 1.0 ± 0.3% at 30 GHz, 0.2 ± 0.2% at 44 GHz, and −0.6%, 1.6%, and 0.7% (all ± 1.4%) for the three 70 GHz horn pairs (18 and 23, 19 and 22, 20 and 21, respectively). The uncertainties quoted here are the absolute ones. For convenience, Table 9 lists the parameters of our parabolic fit to the colour corrections derived from the shifted bandpasses: for a map thermodynamic temperature ΔT, the Rayleigh-Jeans brightness temperature at the reference frequency ν0 is given by

TRJ(ν0) = ηΔT(ν0) ΔT / C(α) ,   (3)

where ηΔT(ν) = ∂TRJ/∂T | TCMB, and the coefficients in Table 9 give the colour correction as C(α) = c0 + c1α + c2α². Because they are based on a simplified analysis, these values should be treated with some caution; the revised colour corrections have only been tested for spectral indices near those of the dominant foregrounds at each frequency, namely −1 ≲ α ≲ 0 at 30 and 44 GHz, and 0 ≲ α ≲ 2.5 at 70 GHz. We plot the old and new corrections in Fig. 7.
Table 9. Coefficients for parabolic fits to the LFI colour corrections C(α), revised from the 2013 values, based on the bandpass shifts derived by the Commander component separation code (Planck Collaboration X 2016).
Fig. 7 Colour corrections C(α) versus intensity spectral index α. Solid lines are the current corrections given by Table 9, while dashed lines are the 2013 values. Red curves are for the 30 GHz band, green for 44 GHz, and blue for 70 GHz. Note that the corrections have only been validated for α ≲ 0 at 30 and 44 GHz, and for 0 ≲ α ≲ 2.5 at 70 GHz. |
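A minimal sketch of how such a parabolic colour correction is evaluated and applied is given below; the coefficients shown are placeholders, not the Table 9 values, and the application step assumes the relation given in Eq. (3).

```python
def colour_correction(alpha, c0, c1, c2):
    """Parabolic colour correction C(alpha) = c0 + c1*alpha + c2*alpha**2.

    By construction C = 1 for emission with the spectrum of the CMB.
    The per-channel coefficients are tabulated in Table 9 (not shown here).
    """
    return c0 + c1 * alpha + c2 * alpha**2

# Hypothetical usage with placeholder coefficients (NOT the Table 9 values):
# t_rj = eta_delta_t * t_map / colour_correction(alpha, 1.0, 0.01, 0.002)
```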
In Planck Collaboration V (2014) we gave a rough indirect estimate of the expected uncertainties in the colour corrections, assuming that the errors for the individual radiometer bandpasses were uncorrelated. The revised corrections at 30 GHz differ from our original ones by 2–3% for α ≈ −1, and this change is almost an order of magnitude larger than our original error estimate. In retrospect, our assumption of uncorrelated errors was flawed for this particular channel, since our ground-based estimates of the bandpass shape were particularly sensitive to modelling assumptions. This arose because the bands still had significant response at the low-frequency end of the directly measured range. As a result, it seems likely that the actual difference between our 2013 model and the true 30 GHz bandpasses is better described by an upward revision of the low-frequency cutoff than by a uniform shift to higher frequency; if this is the case, then our 2015 estimate of C(α) will still show too much curvature in this band.
The estimated bandpass shifts at 44 and 70 GHz are not significant, but they correspond to colour-correction changes of 0.5% and 1%, respectively, consistent with our original error estimates; moreover, as explained in Planck Collaboration X (2016), the relative shifts between the 70 GHz horn pairs are known much more accurately than their absolute values, and are certainly important. We therefore recommend the use of the revised colour corrections listed here. The uncertainties in the correction should be taken to be approximately | β | × 0.3% for all channels, as long as the spectral index is close to the well-sampled range, −3 ≲ β ≲ 1. We note that, by construction, the colour correction tends to unity for emission having the colour of the CMB, and so remains accurately equal to unity when β = βCMB.
Since it is possible to make total intensity sky maps from the data for each individual LFI feed horn (averaging the data from the M and S radiometers), it will be possible to improve the colour corrections individually for each horn, and we plan to do that for the next release.
Fig. 8 Noise spectra throughout the mission lifetime for radiometers 18M (70 GHz; left), 25S (44 GHz; middle), and 27M (30 GHz; right). Spectra are shown for the range from OD 100 (blue) to OD 1526 (red), spaced about 20 ODs apart. White noise is stable at the level of 0.3%, while the low-frequency noise shows variations in both slope and knee frequency, with different amplitudes for different radiometers.
7.6. Summary of changes in LFI calibration
In this subsection, we summarize the changes in the overall calibration of the Planck LFI channels that have resulted from different procedures adopted since Planck Collaboration II (2014) and from our deeper understanding of instrumental systematics and their effect on calibration.
- Overall calibration. Improved accounting for beam effects and other changes discussed in Sects. 6 and 7 produce a small upward shift in the calibration for the three LFI channels. In addition, our current use of the orbital dipole for the determination of the solar dipole used for calibration has shown that the previous calibration, based on the WMAP solar dipole, was 0.28% low for all frequencies. Combining these effects, we find the following upward shifts in the LFI calibration: 0.83%, 0.72%, and 0.95% for 30 GHz, 44 GHz, and 70 GHz, respectively.
- Uncertainties in calibration. Improved understanding and assessment of the impact of various systematic effects on calibration have allowed us to refine our estimates of the overall calibration uncertainty. The uncertainties are 0.35%, 0.26%, and 0.20% for 30, 44, and 70 GHz, respectively.
- Window function. We now use 4π beams, rather than a pencil-beam approximation. The LFI window functions properly take account of the small amount of missing power in the sidelobes (a roughly 0.4% effect at most; see Table 8).
- Flux densities of compact sources. Our current use of 4π beams also means that flux densities of compact sources need to be boosted by a small factor if they are derived from the LFI maps (again, see Table 8). Flux densities in the PCCS2, on the other hand, are already corrected for this factor.
Table 10. White noise sensitivities for the LFI radiometers.
Table 11. Knee frequencies and slopes for the LFI radiometers.
8. LFI noise estimation
8.1. Radiometer noise model
A detailed knowledge of the instrumental noise properties is fundamental for several stages of the data analysis. First of all, the evolution in time of the basic noise properties (e.g. the white noise variance) throughout the entire mission lifetime is an important and simple way to track possible variations, and even anomalies, in the instrument behaviour. In addition, the noise properties serve as inputs for the Monte Carlo noise simulations (used, e.g. for power spectrum estimation) and also provide the correct weights for properly combining different detectors.
We proceed as already shown in Planck Collaboration II (2014), using an implementation of a Markov chain Monte Carlo (MCMC) approach to estimate the basic noise properties. As before, the noise model is

P(f) = σ² [ 1 + (f / f_knee)^β ], (4)

where σ² is the white noise level, and f_knee and β describe the non-white (1/f) component of the instrumental noise. To evaluate σ², we take the mean of the noise spectrum over the last few (typically 10%) bins at the highest frequencies, which exhibit the flat, high-frequency tail shown in Fig. 8. At 30 GHz the knee frequency is f_knee ≈ 100 mHz, and therefore a smaller fraction of the data is used to compute σ². The resulting values of σ are given in Table 10. With σ² fixed, we then evaluate the other two parameters. After discarding a burn-in period from our chains, we obtain the best-fit values and variances reported in Table 11.
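To illustrate the procedure, the following Python sketch evaluates the noise model of Eq. (4) and estimates σ² from the flat high-frequency tail of a binned noise spectrum; the function names, the 10% tail fraction, and the Whittle-style likelihood used for the parameter fit are illustrative assumptions, not the actual LFI implementation.

```python
import numpy as np

def noise_model(f, sigma2, f_knee, beta):
    """Noise model of Eq. (4): P(f) = sigma^2 * [1 + (f / f_knee)^beta]."""
    return sigma2 * (1.0 + (f / f_knee) ** beta)

def estimate_white_noise(freqs, spectrum, tail_fraction=0.10):
    """Estimate sigma^2 as the mean of the flat, high-frequency tail of the spectrum."""
    n_tail = max(1, int(tail_fraction * len(freqs)))
    return np.mean(spectrum[-n_tail:])

def log_likelihood(params, freqs, spectrum, sigma2):
    """Whittle-style log-likelihood for the binned spectrum (an assumption),
    used to sample f_knee and beta with sigma^2 held fixed."""
    f_knee, beta = params
    model = noise_model(freqs, sigma2, f_knee, beta)
    return -np.sum(np.log(model) + spectrum / model)
```

In an MCMC run, log_likelihood would be evaluated at each proposed (f_knee, β) pair, with σ² kept at the value returned by estimate_white_noise.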
8.2. Updated noise properties
We estimate noise properties at the radiometer level using the MCMC approach. As for the previous data release, we work with calibrated data, select chunks of data 5 days long, and process them with the ROMA generalized least-squares mapmaking algorithm (de Gasperis et al. 2005). The outputs are frequency spectra that are then fitted for the basic noise parameters. Results are summarized in Tables 10 and 11, for the white noise sensitivity and 1/f noise parameters, respectively. These numbers are the medians, computed from the fit results throughout the whole mission lifetime.
Time variations of the noise properties are a good indicator of possible changes in instrument behaviour. There are known events that caused such variations, such as the sorption cooler switchover at OD 460 (Planck Collaboration I 2014). Indeed, variations in noise properties due to changes in temperature are expected as the performance of the first cooler degraded, as well as when the second cooler came in and took time to stabilize the temperature. Figure 8 shows a sample of noise spectra for radiometers LFI27M, LFI25S, and LFI18S, spanning the whole mission lifetime. The white noise is stable at the level of 0.3%. As already noted in the previous release, knee-frequencies and slopes are stable until OD 326 and show significant variations afterwards, altering the simple “one slope, one knee” model. This is due to the progressive degradation of the first sorption cooler and the insertion of the second one. Once the environment became thermally stable, the spectra moved back towards their initial shape. Of course this is evident at different levels in the individual radiometers, depending on their frequency, position on the focal-plane, and susceptibility to thermal instabilities.
9. Mapmaking
Mapmaking is the last step in the LFI pipeline, after calibration and dipole removal, and before bandpass correction and component separation. Mapmaking takes as its input the calibrated timelines, from which the 4π convolved dipole and Galactic straylight signal has been removed. Output consists of sky maps of temperature and Q and U polarization, and a description of the residual noise in them.
An important part of the mapmaking step is the removal of correlated 1 /f noise. An optimal mapmaking method will remove the noise as accurately as possible, while simultaneously keeping systematics at an acceptable level.
LFI maps were produced by the Madam mapmaking code (Keihänen et al. 2005). The code is the same as used in the 2013 release. In the following we give a short overview, and point out aspects relevant to polarization (see Planck Collaboration VI 2016 for details).
Madam removes the correlated noise using a destriping technique. A noise prior is used to improve the map quality further. The correlated noise component is modelled by a sequence of baseline offsets. The choice of the baseline length is a trade-off between computational burden and optimal noise removal. We have chosen to use 1 s long baselines for 44 and 70 GHz, and 0.25 s for 30 GHz where the typical knee frequencies are higher.
Fig. 9 LFI full-mission low-resolution maps at Nside = 16. From left to right: 30, 44, and 70 GHz. From top to bottom: intensity I, polarization Q component, and polarization U component. Units are μKCMB.
The full time-ordered data stream is modelled as

y = F a + P m + n. (5)

Here the vector a represents the baselines, and F is formally a matrix that spreads the baselines into the time-ordered data. The vector n represents white noise, and P is a pointing matrix that picks a time-ordered data stream from the sky map m. The map m has three columns, corresponding to the three Stokes components I, Q, and U.
The noise prior describes the expected correlation between baseline amplitudes,

C_a = ⟨ a aᵀ ⟩. (6)

The prior is constructed from the known noise parameters presented in the previous section (knee frequency, white noise σ, and spectral slope). The noise prior provides an extra constraint which makes it possible to extend the destriping technique to very short baseline lengths, allowing for more accurate removal of noise.
With the assumptions above, the baseline vector a can be solved from the linear system of equations

(Fᵀ C_w⁻¹ Z F + C_a⁻¹) a = Fᵀ C_w⁻¹ Z y, (7)

where

Z = I − P (Pᵀ C_w⁻¹ P)⁻¹ Pᵀ C_w⁻¹. (8)

Here C_w is a diagonal weighting matrix. The final map is constructed as

m = (Pᵀ C_w⁻¹ P)⁻¹ Pᵀ C_w⁻¹ (y − F a). (9)

The destriping technique constructs the final map through a procedure in which one first solves for the baselines, and then bins the map from the data stream from which the baselines have been removed. This two-step procedure provides a way of reducing systematics. We control the “signal error” by applying a mask in the destriping phase, while still binning the final map to cover the whole sky. Signal error is the uncertainty in baseline determination that arises from deviations of the actual sky signal from the model Pm. The main sources of signal error are signal variations within a pixel, differences in radiometer frequency responses (bandpass mismatch), and beam shape mismatch. The error arises mainly at low Galactic latitudes, where signal gradients are strong.
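The following toy Python destriper illustrates Eqs. (5)–(9) for an unpolarized map with uniform white-noise weights and without the noise prior C_a; it builds the destriping matrix by brute force, so it is only a sketch for small simulated data sets, not a stand-in for Madam.

```python
import numpy as np

def destripe(tod, pixels, npix, baseline_len):
    """Toy unpolarized destriper following Eqs. (5)-(9): uniform white-noise
    weights, no noise prior C_a, dense algebra (suitable for toy data only)."""
    nsamp = len(tod)
    nbase = int(np.ceil(nsamp / baseline_len))
    base_idx = np.arange(nsamp) // baseline_len          # the matrix F as an index map

    def bin_map(y):
        # (P^T P)^-1 P^T y : hit-weighted binning of a TOD onto the sky
        hits = np.bincount(pixels, minlength=npix)
        m = np.bincount(pixels, weights=y, minlength=npix)
        return np.where(hits > 0, m / np.maximum(hits, 1), 0.0)

    def apply_Z(y):
        # Z y = y - P (P^T P)^-1 P^T y : remove the binned sky estimate (Eq. (8))
        return y - bin_map(y)[pixels]

    # Build and solve (F^T Z F) a = F^T Z y (Eq. (7) without the C_a^-1 prior)
    FtZF = np.zeros((nbase, nbase))
    for b in range(nbase):
        e = (base_idx == b).astype(float)                # F applied to a unit baseline
        FtZF[:, b] = np.bincount(base_idx, weights=apply_Z(e), minlength=nbase)
    rhs = np.bincount(base_idx, weights=apply_Z(tod), minlength=nbase)
    a = np.linalg.lstsq(FtZF, rhs, rcond=None)[0]        # lstsq absorbs the offset degeneracy

    # Final map: bin the baseline-subtracted TOD (Eq. (9))
    return bin_map(tod - a[base_idx])
```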
The choice of the destriping mask is a trade-off between acceptable signal error level and noise removal. A mask that is too wide may lead to a situation where there are not enough crossing points between scanning rings to reliably determine the noise baselines.
It can be shown that residual noise is minimized when Cw equals the variance of white noise in time-ordered data. In order to reduce leakage from temperature to polarization, however, we apply horn-uniform weighting, which differs from this ideal case. We replace the white noise variance by the average of the variances of the two radiometers of the same horn. This has the effect that the systematic error related to beam shape mismatch, which is strongly correlated between the radiometers, largely cancels out in polarization analysis. Thus we are reducing the leakage from temperature to polarization.
Along with maps of the sky, Madam provides a covariance matrix for residual white noise in the maps. This consists of a 3 × 3 matrix for each pixel, describing the correlations between I, Q, and U components in the pixel. White noise is uncorrelated between pixels. Correlated noise residuals are captured by the low-resolution noise covariance matrix described in Sect. 9.3 below.
Madam produces its output maps in HEALPix format (Górski et al. 2005). For the bulk of the products we used resolution Nside = 1024, and the same resolution was used when solving the destriping equation. Maps at 70 GHz were also produced with Nside = 2048.
To accurately decompose the map into I, Q, and U components it is necessary to have several measurements of the same sky pixel, with different parallactic angles. If this is not the case, the pixel in question is eliminated from the analysis. Madam uses as rejection criterion the reciprocal condition number of the 3 × 3 pixel matrix Pᵀ C_w⁻¹ P.
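A minimal sketch of this rejection criterion, assuming uniform weights and using the polarizer orientations ψ of the samples falling in a pixel, might look as follows; the function name and interface are illustrative.

```python
import numpy as np

def pixel_rcond(psi, weights=None):
    """Reciprocal condition number of the per-pixel 3x3 matrix P^T C_w^-1 P,
    built from the orientations psi (radians) of the samples hitting the pixel."""
    w = np.ones_like(psi) if weights is None else weights
    c, s = np.cos(2.0 * psi), np.sin(2.0 * psi)
    M = np.array([[np.sum(w),     np.sum(w * c),     np.sum(w * s)],
                  [np.sum(w * c), np.sum(w * c * c), np.sum(w * c * s)],
                  [np.sum(w * s), np.sum(w * c * s), np.sum(w * s * s)]])
    eig = np.linalg.eigvalsh(M)
    return eig.min() / eig.max() if eig.max() > 0 else 0.0
```

A pixel is kept only if this value exceeds the chosen threshold.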
9.1. Low-resolution data set
Low-resolution products are an integral part of the low-ℓ likelihood. To fully exploit the information contained in the largest structures of the microwave sky, a full statistical description of the residual noise present in the maps is required. This information is provided in the form of pixel-pixel noise covariance matrices (NCVMs). However, due to resource limitations they are impossible to employ at native map resolution. Therefore a low-resolution data set is needed for the low-ℓ analysis; this data set consists of low-resolution maps and corresponding noise covariance matrices. At present, the low-resolution data set can be efficiently used only at resolution Nside = 16, or lower. All the low-resolution products are produced at this target resolution.
9.2. Low-resolution maps
The low-resolution maps, shown in Fig. 9, are constructed by downgrading the high-resolution maps (described in the previous section) to the target resolution. We chose to downgrade the maps using a “noise-weighted” scheme.
The noise-weighted scheme has also been used in previous studies (see, e.g. Planck Collaboration II 2014). The noise-weighted map corresponds to a map that is first destriped at the high resolution, and the destriped TOI is directly binned onto the low target resolution. This approach gives adequate control over signal and noise in the resulting maps. However, concerns have been raised that the noise-weighted scheme transfers signal from one pixel to another. As a consequence we employ Gaussian smoothing to minimize this effect, at the cost of some increase in noise. After downgrading, the temperature component is smoothed with a Gaussian window function with FWHM = 440′. We will re-examine this choice in the next release.
In practice the high-resolution maps are noise-weighted to an intermediate resolution of Nside = 32. The Stokes I part of the map is expanded in spherical harmonics, the expansion is treated with the smoothing beam, and the final map is then synthesized at the target resolution. The last step of resolution downgrading for Stokes Q and U maps, however, is performed by carrying out naive averaging of higher resolution pixels.
Due to the chosen downgrading scheme the resulting NCVM will be singular. We regularize the problem by adding some white noise to both the maps and the matrices. Specifically, we add 2 μK for I and 0.02 μK for Q and U at Nside = 16 resolution.
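The downgrading and regularization steps described above could be sketched with healpy as follows, assuming input (I, Q, U) maps in μK that have already been noise-weighted to the intermediate resolution Nside = 32; the function signature is illustrative.

```python
import numpy as np
import healpy as hp

def downgrade_lowres(I, Q, U, nside_mid=32, nside_out=16, fwhm_arcmin=440.0,
                     reg_I_uK=2.0, reg_QU_uK=0.02, seed=0):
    """Final steps of the low-resolution downgrading: smooth I in harmonic space,
    naively average Q and U, and add regularizing white noise."""
    # Temperature: 440-arcmin Gaussian smoothing in harmonic space, synthesized at Nside=16
    alm = hp.map2alm(I, lmax=3 * nside_mid - 1)
    I_out = hp.alm2map(hp.smoothalm(alm, fwhm=np.radians(fwhm_arcmin / 60.0)), nside_out)
    # Polarization: naive averaging of the higher-resolution pixels
    Q_out, U_out = hp.ud_grade(Q, nside_out), hp.ud_grade(U, nside_out)
    # Regularization: the same white noise is also added to the covariance matrices
    rng = np.random.default_rng(seed)
    I_out = I_out + rng.normal(0.0, reg_I_uK, I_out.size)
    Q_out = Q_out + rng.normal(0.0, reg_QU_uK, Q_out.size)
    U_out = U_out + rng.normal(0.0, reg_QU_uK, U_out.size)
    return I_out, Q_out, U_out
```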
9.3. Noise covariance matrices
The statistical description of the residual noise present in a low-resolution map is given in the form of a pixel-pixel noise covariance matrix, as described in Keskitalo et al. (2010). The NCVM formalism describes the noise correlations of a map produced at the same resolution as the noise covariance matrix. Therefore, for an exact description we should construct the matrices at resolution Nside = 1024 and subsequently downgrade to the target resolution. This is computationally impractical. Therefore the matrices are computed at the highest possible initial resolution, and then downgraded to the target resolution. For consistency the noise covariance matrices must go through the same processing steps as applied to the low-resolution maps.
The Madam/TOAST code, a port of Madam to the Time Ordered Astrophysics Scalable Tools (TOAST) framework, was used to produce the pixel-pixel noise covariance matrices (Keihänen et al. 2010)4. The TOAST interface was chosen on the basis of added flexibility and speed; see Planck Collaboration VI (2016).
The outputs of the Madam/TOAST software are inverse noise covariance matrices, specifically one inverse matrix per radiometer for a given time period. Because inverse NCVMs are additive, the individual inverse matrices are merged together to form the actual inverse NCVM. To obtain the noise covariance matrix from its inverse, the matrices are inverted through an eigen-decomposition. These intermediate-resolution matrices are then downgraded using the same downgrading scheme as applied to the maps. The matrices are regularized by adding the same level of white noise to the diagonal elements of the covariance matrix as to the low-resolution maps.
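A schematic of the merging, eigen-decomposition inversion, and diagonal regularization of the inverse NCVMs is given below; the eigenvalue floor and the interface are assumptions made for illustration.

```python
import numpy as np

def combine_and_invert_ncvm(inverse_ncvms, reg_diag=None, eig_floor=1e-12):
    """Merge per-radiometer inverse NCVMs (which are additive), invert the sum
    through an eigen-decomposition, and optionally regularize the diagonal."""
    inv_total = np.sum(inverse_ncvms, axis=0)
    eigval, eigvec = np.linalg.eigh(inv_total)             # symmetric eigen-decomposition
    inv_eigval = np.where(eigval > eig_floor * eigval.max(), 1.0 / eigval, 0.0)
    ncvm = (eigvec * inv_eigval) @ eigvec.T                # V diag(1/lambda) V^T
    if reg_diag is not None:
        # e.g. the variance of the regularizing white noise added to the maps
        ncvm[np.diag_indices_from(ncvm)] += reg_diag
    return ncvm
```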
The noise covariance matrix computation takes two inputs: the detector pointing; and noise estimates. Since the matrices are calculated with Madam/TOAST, we use the pointing solution provided by TOAST. For more details see Planck Collaboration XII (2016). We also use the most representative noise model available, namely the FFP8 (full focal plane 8 simulations) noise estimates (Planck Collaboration XII 2016). The noise model comprises daily 1 /f model parameters.
The key parameter in the NCVM production is the baseline length. We have demonstrated in an earlier study (Planck Collaboration II 2014) that using shorter baseline lengths in the noise covariance matrix production better models the residual noise. Therefore we chose to use 0.25 s baselines for the 30 GHz LFI frequency channel; we show in Planck Collaboration VI (2016) that 1.0 s is adequate for the 44 GHz and 70 GHz channels. Reducing the baseline length still further gives only a marginal improvement, while the resource requirements increase rapidly.
Previous studies (Planck Collaboration II 2014) have also shown that matrices should be calculated at the highest computationally feasible resolution. For the current release the initial resolution is Nside = 64. Increasing the initial resolution beyond Nside = 64 is likely to improve results, but the matrix size will be 16 times larger, i.e. 2.5 TB. Inverting such a matrix is a formidable task.
The noise covariance computation makes two further deviations from the high-resolution mapmaking: it does not take into account the destriping mask; and the horns are not uniformly weighted. The effect of these differences is much smaller than is obtained by either decreasing the baseline length or increasing the destriping resolution in the production. For more details see Planck Collaboration VI (2016).
10. Overview of LFI map properties
Figures 10 to 12 show the 30, 44, and 70 GHz frequency maps created from LFI data. The top panel in each figure is the temperature (I) map, based on the full observation period at native resolution and HEALPix Nside = 1024. The middle and bottom panels are the Q and U polarization components, respectively, at Nside = 256 and smoothed to 1° resolution. In Fig. 13 the eight surveys at 30, 44, and 70 GHz are shown; the grey areas identify the regions of the sky not observed in each survey. Table 12 reports the main parameters used in the mapmaking process.
The delivered maps have been processed in order to remove any spurious zero-level (or monopole term). To do this we implemented the following procedure. Using LFI data only, we derived an estimate of the CMB signal by processing 1° smoothed maps with an ILC (internal linear combination) method, as described in Eriksen et al. (2004). We then smoothed the single-frequency LFI maps to the same resolution and subtracted the CMB estimate. For each map we used the variation with Galactic latitude of the remaining Galactic emission signal to estimate the zero-level. We assumed a simple plane-parallel model for the Galactic emission and fit the data with the functional form T = A csc b + B in the range −90° < b < −15°, using the same mask as employed in the mapmaking procedure. The value of B is the zero-level we are looking for, which has to be subtracted from the maps in order to obtain an overall “null” zero-level. This value is reported in Table 12.
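A possible implementation of the zero-level fit, assuming a CMB-subtracted and smoothed map in Galactic coordinates together with the mapmaking mask, is sketched below; the helper name and the simple linear fit in csc b are illustrative.

```python
import numpy as np
import healpy as hp

def estimate_zero_level(gal_map, mask, nside, b_min=-90.0, b_max=-15.0):
    """Fit T = A csc(b) + B to the CMB-subtracted, smoothed map (assumed to be in
    Galactic coordinates) over the southern cap and return B, the zero-level."""
    theta, _ = hp.pix2ang(nside, np.arange(hp.nside2npix(nside)))
    b = 90.0 - np.degrees(theta)                          # Galactic latitude in degrees
    sel = (b > b_min) & (b < b_max) & (mask > 0)
    csc_b = 1.0 / np.sin(np.radians(b[sel]))
    A, B = np.polyfit(csc_b, gal_map[sel], 1)             # linear fit in csc(b)
    return B
```

The returned B would then be subtracted from the corresponding frequency map.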
Finally Table 13 lists the delivered maps along with the data period used to create them. All have HEALPix resolution Nside = 1024; in the case of 70 GHz we also provide maps at a higher resolution, Nside = 2048.
Frequency-specific mapmaking parameters and related information.
Periods covered by the released maps.
Fig. 10 LFI maps at 30 GHz. Top: intensity I. Middle: polarization Q component. Bottom: polarization U component. The polarization components are at Nside = 256 and smoothed to 1°, while the intensity is left at the native Nside = 1024. Units are μKCMB. The polarization components have been corrected for the bandpass leakage effect (see Sect. 11).
Fig. 13 Individual survey temperature maps. Left: 30 GHz. Middle: 44 GHz. Right: 70 GHz. From top to bottom are Surveys 1 to 8. The grey areas identify the regions of the sky not observed in each survey, which depend on the spin-axis orientation.
11. Polarization
The most important new results in this release are polarization measurements. The maps of Stokes Q and U at each LFI frequency are shown in Figs. 10–12 at 30, 44, and 70 GHz, respectively. The 70 GHz polarized data play a critical role in the construction of the Planck low-ℓ likelihood, as described in Planck Collaboration XI (2016). Given the small amplitude of CMB polarization, we have paid careful attention to systematic effects that could bias our polarization results. The dominant effect is leakage of unpolarized emission into polarization (Leahy et al. 2010), which we describe in detail in Sect. 11.2. An overview of systematics impacting both temperature and polarization data is provided in Sect. 12.6, while a full account of the 2015 LFI systematic error budget is given in Planck Collaboration III (2016).
11.1. Bandpass mismatch leakage
Any difference in gain between the two arms of an LFI radiometer will result in leakage of unpolarized emission into the polarization signal. Since gains are calibrated by observations of the CMB dipole, exact gain calibration would ensure that unpolarized, well-resolved CMB emission perfectly cancels in the polarization signal.
However, because the bandpasses of the two arms are not identical, unpolarized foreground emission, if it has a different spectrum from the CMB, will still appear with different amplitudes in the two arms and therefore leak into polarization. This is “bandpass mismatch” leakage, which was discussed extensively in Leahy et al. (2010).
In principle, two approaches can be used to correct for it. The first exploits the fact that the bandpass leakage is independent of the polarizer orientation, and performs a “blind” separation using observations of a given pixel with multiple orientations of the same radiometer. With the second method, we can characterize both the instrumental bandpass mismatch, and the foreground spectrum and intensity, and hence predict the leakage explicitly.
The blind approach was used by WMAP (Page et al. 2007), but for most sky pixels it is not effective for Planck, because only a relatively small range of detector orientations is available; this causes very large covariances between the leakage and the true Q and U values, effectively increasing the Q and U noise by a large factor. Hence we use the predictive method to calculate the leakage in our Q and U maps, and subtract it. We discuss in turn the determination of the foreground model, the derivation of the instrumental term, and the algorithm for making the correction.
11.2. Leakage maps
The spectra of all important LFI foregrounds are very smooth continua, and so to a good approximation can be modelled as a power law within the bandpass at each LFI frequency band. As described by Leahy et al. (2010), the leakage into the polarization signal recorded by radiometer k can be written as

S_k = a_k β T_F(ν₀) ≡ a_k L, (10)

where the a-factor characterizes the bandpass mismatch (see next subsection), β = d ln T_F / d ln ν |_ν₀ is the spectral index of the foreground within the band, and T_F(ν₀) is the foreground Rayleigh-Jeans brightness temperature at the band fiducial frequency ν₀. We separate this into the instrumental a-factor and an astrophysical leakage term L ≡ β T_F(ν₀). We derive L from our Bayesian component separation analysis, as described in Planck Collaboration X (2016). The analysis incorporates the Planck full-mission data, along with the WMAP 9-year maps and the Haslam et al. (1982) 408 MHz map, to give 15 data points at each pixel. This was an earlier run than the one described in Planck Collaboration X (2016): the Planck maps were from a slightly earlier version of the calibration pipeline; the original bandpass models were used to make colour corrections; only a single spinning dust component was included in the model, not two; and the synchrotron template from the Galprop code was scaled only in amplitude, not in frequency.
This analysis produces numerous Gibbs-sampled realizations of the astrophysical component parameters, from which TF and β can be reconstructed at any given frequency, for each pixel in each realization j. In practice, we evaluate these individually for each component i, to find a leakage map L_i^(j) for each component, and then sum the components to give L^(j) = Σ_i L_i^(j). This is not only more straightforward to evaluate, but also automatically corrects for any in-band spectral curvature caused by the superposition of foregrounds with similar amplitudes but very different spectral indices. The final leakage map is then simply the average over the realizations, L = ⟨ L^(j) ⟩_j. In practice we use 1000 realizations taken after the sampling chains have successfully burnt in.
The uncertainty in L at each pixel is based on the variance over the Gibbs realizations, σL. However, we also have a measure of the goodness-of-fit of the model, χ², measured per pixel for each realization. Because our MCMC chains are well burnt in, we work with the average over all realizations (but still separately for each pixel). In regions of strong foreground emission the component separation is limited not by noise but by a mismatch between the assumed form of the model and the actual spectrum, signalled by high χ². Because the model is non-linear and many of the model parameters are subject to strong prior constraints, the χ² statistic is not expected to follow a χ² distribution with the number of degrees of freedom equal to the number of data points. We therefore define a fiducial χ² equal to the median χ² over the whole sky, which of course is dominated by the high-latitude regions, where the foregrounds are weak, and therefore the component separation residuals are dominated by noise. We adopt an empirical correction to the uncertainty by multiplying σL by the square root of the ratio of the mean χ² to our fiducial value wherever this ratio exceeds unity.
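The averaging of the Gibbs realizations and the χ²-based inflation of the leakage uncertainty can be summarized in a few lines; the array layout assumed here (realizations × pixels) is for illustration only.

```python
import numpy as np

def leakage_map_and_error(L_realizations, chi2_realizations):
    """Average the Gibbs realizations L^(j) and inflate the per-pixel uncertainty
    where the mean chi^2 exceeds the all-sky median."""
    L = L_realizations.mean(axis=0)
    sigma_L = L_realizations.std(axis=0)
    chi2_mean = chi2_realizations.mean(axis=0)             # per-pixel mean over realizations
    chi2_fid = np.median(chi2_mean)                        # fiducial chi^2 (all-sky median)
    inflation = np.sqrt(np.maximum(chi2_mean / chi2_fid, 1.0))
    return L, sigma_L * inflation
```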
The component separation analysis must be done at identical resolution for all frequency channels, and this was chosen as 1° FWHM to allow use of the 408 MHz survey. Consequently, polarization maps corrected for bandpass mismatch leakage are only available at this or lower resolution. Since the full-resolution polarization maps have a signal-to-noise ratio of much less than unity for nearly all pixels, most scientific analysis must in any case be done with smoothed maps, or equivalently with only the low multipoles in harmonic space, so the low resolution of the leakage maps is not a problem for most purposes. However, full-resolution data are needed to give the most accurate polarimetry of point sources. A special procedure was therefore used to correct the polarization of sources, as described in Planck Collaboration XXVI (2016).
11.3. a-factors
Fig. 14 IQUSS solution maps at 30 GHz. Top left: Stokes Q. Top right: Stokes U. Bottom left: spurious signal from the first RCA, S1. Bottom right: spurious signal from the second RCA, S2. The polarization maps are noisier than the usual mapmaking solution, since S1 and S2 have to be extracted from the same data.
Ground-based measurements of the LFI instrumental bandpasses (Zonca et al. 2009) are not accurate enough for our purpose. Fortunately, to a good approximation, the bandpass mismatch can be characterized by a single parameter, the a-factor, which quantifies the difference in effective frequency between the two bandpasses:

a = (ν_eff,s − ν_eff,m) / (2 ν₀), (11)

where “s” and “m” refer to the side and main arms of the radiometer, respectively, and ν₀ = (ν_eff,s + ν_eff,m) / 2. We determine the a-factors from flight data by first using the blind approach at each frequency to estimate (I, Q, U, S1, S2, ...; hereafter IQUSS) at each pixel, where S_k is the spurious signal from each RCA. Taking the 30 GHz data as an example, the data stream of each radiometer, including the S_k maps, can be written as

d = I + Q cos 2ψ + U sin 2ψ ± S_k + n, (12)

where the spurious term S_k of RCA k enters with opposite signs in its two arms. This can also be written in the more compact form

d = I + Q cos 2ψ + U sin 2ψ + α₁ S₁ + α₂ S₂ + n, (13)

where α₁ and α₂ take the values −1, 0, or +1, depending on the radiometer. To estimate m = [I, Q, U, S1, S2] we need to solve a problem similar to mapmaking, where the noise covariance matrix per pixel, Mp, is given by the usual 3 × 3 matrix block from Madam, with two more columns (and rows)
built from sums, over the samples falling in each pixel, of α_k, α_k cos 2ψ, and α_k sin 2ψ, together with the α_j α_k cross-terms (Eq. (14)). To ameliorate the limited range of orientations, we perform a joint solution for all the RCAs at each frequency, in contrast to the WMAP approach of solving for each radiometer independently. In Fig. 14 we show output maps from the IQUSS approach at 30 GHz: Q and U maps (top row); and S1 and S2 maps (bottom row). Note that the Q and U maps are noisier than for the nominal mapmaking solution. Over most of the sky the resulting maps of spurious signals are still noisy, and therefore we chose a conservative approach to estimating the a-factor for each RCA. This is done with a weighted least-squares fit of the leakage map L to the spurious signal S_k (S_k = a_k L in the absence of errors), using only those pixels with | b | < 15°, since at higher Galactic latitudes the foregrounds, and hence the spurious signals, are weak and mainly contribute noise to the solutions. Our code removes pixels where the reciprocal condition number of the noise covariance matrix Mp is less than a given threshold. This now has a negligible effect: thanks to the modification of the Planck scanning strategy after Survey 5, the matrix Mp is very well-behaved; even with our conservative limit of 8 × 10⁻⁵ for the reciprocal condition number, fewer than 200 pixels are excluded at 44 GHz and none at 30 and 70 GHz. Our derived values for the a-factors are listed in Table 14.
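The weighted least-squares estimate of a_k from the leakage and spurious maps could be sketched as follows, assuming maps in Galactic coordinates and a per-pixel variance for S_k; the Galactic-latitude cut and the reciprocal-condition-number threshold follow the values quoted above, while the function interface is illustrative.

```python
import numpy as np
import healpy as hp

def fit_a_factor(S_k, L, var_S, nside, b_cut=15.0, rcond=None, rcond_min=8e-5):
    """Weighted least-squares fit of S_k = a_k * L over the Galactic plane
    (|b| < 15 deg), excluding badly conditioned pixels."""
    theta, _ = hp.pix2ang(nside, np.arange(hp.nside2npix(nside)))
    b = 90.0 - np.degrees(theta)                           # Galactic latitude in degrees
    sel = np.abs(b) < b_cut
    if rcond is not None:
        sel &= rcond > rcond_min                           # keep only well-conditioned pixels
    w = 1.0 / var_S[sel]
    a_k = np.sum(w * L[sel] * S_k[sel]) / np.sum(w * L[sel] ** 2)
    sigma_a = 1.0 / np.sqrt(np.sum(w * L[sel] ** 2))
    return a_k, sigma_a
```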
Bandpass mismatch a-factors from fitting the leakage model map to the spurious maps.
Beam-shape mismatch between radiometer arms can also lead to polarization leakage when there are strong intensity gradients. We therefore examined the effect of excluding compact sources using the WMAP 7-year point source mask. However, this made no significant changes, apart from a dramatically increased uncertainty, so our final values do not use such masking.
We compared these values with an independent derivation based on aperture photometry of bright sources in the IQUSS maps, including the Tarantula Nebula in the LMC. This source lies in the “deep” region around the Ecliptic pole, which is scanned with multiple polarimeter orientations covering a wide range of angles, and hence allows a particularly accurate blind separation of the spurious signal. The other calibrators were bright H II regions at relatively high Ecliptic latitude, since the range of polarization orientations observed increases towards the Ecliptic poles. H II regions were chosen because they have minimal intrinsic polarization, but we did not force Q and U to zero in the analysis. The a-factors derived from the calibrators were consistent with our preferred values derived from the large-area fit, but somewhat less precise.
11.4. Production of correction maps
The polarization data from a given radiometer constrain one Stokes parameter (say QH) in a frame of reference tied to the specific feed horn (or RCA). This is projected onto the sky according to the sky orientation of the horn frame. Hence the contribution of the spurious signal from each radiometer is modulated into the Q and U sky pixels by geometric projection factors. This modulation can be derived by re-scanning, in a mapmaking fashion, the estimated spurious map Ŝ = a_k L. Instead of an actual re-scanning, which is time consuming, we create projection maps A_Q[U] by solving the mapmaking system

Mp (A_I, A_Q, A_U)ᵀ = m_p, (15)

where m_p are the maps obtained by binning a stream of −1 for the “side” arm and +1 for the “main” arm of each radiometer separately. Finally, the correction maps are

ΔQ = L Σ_k a_k A_k,Q,   ΔU = L Σ_k a_k A_k,U. (16)

One of the main drawbacks of deriving our L maps at 1° resolution emerges at this stage. The correction must be applied to Q and U maps matched in resolution, and so the raw Q and U maps are smoothed to the required 1° FWHM Gaussian beam. We can regard the raw maps as the true Q and U sky, smoothed with the instrumental beam, plus the leakage term, plus noise. The leakage term in the raw maps can be thought of as an infinite-resolution leakage sky convolved with the instrumental beam, and then multiplied by the leakage projection maps P_Q[U] = Σ_k a_k A_k,Q[U], which are defined at the pixel level according to Eq. (15). When we smooth this raw map, we smooth the product PL, but when we construct our correction map, we have only the smoothed L map. The smoothed product is not equal to the smoothed L map multiplied by the full-resolution P map, which contains fine-scale structure induced by caustics, lost data, and abrupt changes in the survey strategy; nor is it equal to the smoothed L map multiplied by the smoothed P map, which is over-smoothed in regions where both L and P vary rapidly. In practice we used the smoothed P maps, since P only varies rapidly near a small subset of pixels, and only a few of these will also have rapidly varying L. The issue is most significant for compact sources, for which we recommend analysis of the raw maps, followed by a leakage correction using the derived IQU fluxes, as described in Planck Collaboration XXVI (2016).
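A sketch of the construction of the correction maps from the smoothed projection and leakage maps, following Eq. (16) as reconstructed above, is given below; the array shapes and function name are assumptions.

```python
import numpy as np
import healpy as hp

def leakage_correction(L_smooth, A_Q, A_U, a_factors, fwhm_deg=1.0):
    """Correction maps Delta Q = P_Q * L and Delta U = P_U * L, with the projection
    maps P_Q[U] = sum_k a_k A_k,Q[U] smoothed to the resolution of the leakage map.

    L_smooth   : leakage map already smoothed to 1 degree
    A_Q, A_U   : arrays of shape (n_rca, n_pix) with the per-RCA projection maps
    a_factors  : per-RCA bandpass-mismatch a-factors
    """
    fwhm = np.radians(fwhm_deg)
    P_Q = hp.smoothing(np.tensordot(a_factors, A_Q, axes=1), fwhm=fwhm)
    P_U = hp.smoothing(np.tensordot(a_factors, A_U, axes=1), fwhm=fwhm)
    return P_Q * L_smooth, P_U * L_smooth
```

The returned ΔQ and ΔU maps would then be subtracted from the smoothed raw Q and U maps.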
Although our a-factor estimates are relatively stable, our fit of the leakage maps to the spurious maps showed significant residuals at the level of 18 μK, 24 μK, and 16 μK at 30, 44, and 70 GHz, respectively. Contributing factors may include: errors in the leakage maps caused by errors in the component separation; residual beam ellipticity after smoothing to 1°; and any variation with time of the bandpass, which would cause corresponding changes in the a-factors. As a check on our results, we used the IQUSP procedure, in which we create a prior for each S map using our component-separation L map and our best a-factor estimates. This process returns the prior S map essentially unchanged over most of the sky, and hence gives Q and U maps indistinguishable from our corrected versions. However, it prefers the IQUSS solution when it differs significantly from the prior (essentially in regions of the brightest foreground emission, where the limitations of our simplified emission models become apparent). An advantage of the method is that the maps are returned at full resolution wherever the data can constrain the resolution to be higher than in the prior. These maps confirm that most details of the structure along the Galactic plane in our corrected LFI polarization maps are consistent with the data, i.e. are reproduced in the IQUSP images, including the most significant discrepancies with WMAP. Although not fully validated, and therefore not included in the current release, the IQUSP images are likely to form the baseline for our final-release polarization maps.
12. Data validation
We verify the quality of the LFI data with a suite of null tests, as well as with a set of simulations reproducing the main instrumental systematic effects and the calibration process. In this section we summarize the main results of our analysis, and refer to Planck Collaboration III (2016) and Planck Collaboration V (2016) for more details.
Null tests are performed on blocks of data covering different time scales (from the pointing period to surveys and years) and considering different instrument combinations (radiometer, horn, horn-pairs, and frequency) both in total intensity and polarization (when applicable).
Such null tests can probe different systematic effects depending on the time and instrument selection considered. Differences at horn level between odd and even surveys may show effects due to the sidelobe contribution, since the relative orientation of the horns with respect to the sky is changed. Furthermore, the comparison of power spectra at the frequency level may reveal the impact of calibration uncertainties related to the relative orientation of the scans and the CMB dipole, our main calibration source as discussed in Sect. 7.
12.1. Null test results
In order to assess null test results, it is fundamental to define a clear figure of merit as a pass-fail criterion. Failure of a specific test is an indication of a data problem and/or issues in data processing that should be studied further. As we already did for the previous release, we take the noise level as derived from “half-ring” difference maps, made of the first and second half of each stable pointing period (half-ring maps) weighted by the hit count, as the figure of merit. This quantity traces the actual properties of the data, including white noise, as well as un-modelled and un-corrected effects. Figure 15 shows results at the frequency level for both TT and EE power spectra when we compare survey differences to the noise level derived from the corresponding half-ring maps. For simplicity we show here only a subset of survey differences that are illustrative of the general trend.
Fig. 15 Null test results comparing power spectra from survey differences to those from the half-ring maps. The differences are: left, Survey 1 − Survey 2; middle, Survey 1 − Survey 3; and right, Survey 1 − Survey 4. These are shown for 30 GHz (top), 44 GHz (middle), and 70 GHz (bottom), for both TT and EE power spectra.
When interpreting these results it is important to note that we have substantially improved the quality of data at 30 GHz by using the new 4π calibration (Planck Collaboration V 2016), which accounts for the impact of the full beam during calibration, and by removing at the TOD level the modelled sidelobe signals of both the CMB dipole and Galactic emission (derived from the FFP8 simulation runs). This is particularly evident from TT spectra, where the null test data match the level of the half-ring differences. However, there is still an issue in polarization when considering differences involving Surveys 2 and 4. At 44 GHz, which has the lowest sidelobes among LFI channels, the agreement with the half-ring noise is extremely good. We have almost the same situation at 70 GHz, although at very low multipoles (ℓ< 10) there are discrepancies between survey difference and half-ring noise; again this is particularly evident when Surveys 2 and 4 are considered.
To be more quantitative about these results at 70 GHz, we compute the deviation from the half-ring noise at each multipole as

χ²_ℓ = [ (C_ℓ^SS − C_ℓ^HR) / σ_ℓ ]², (17)

where C_ℓ^SS is the survey-difference spectrum, C_ℓ^HR the corresponding half-ring noise spectrum, and σ_ℓ its uncertainty. We sum the single-ℓ values over the range ℓ = 2–50 and, from the resulting χ² and Ndof, derive p-values. While in principle proper noise simulations should be used, for our purposes it is sufficient to consider the simple half-ring noise, which is already able to reveal interesting features in the data. Table 15 reports χ² and p-values for the three survey differences shown in Fig. 15, which show that Surveys 2 and 4 clearly yield poor χ² and problematic p-values.
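Under the assumptions used to reconstruct Eq. (17), the χ² and p-value computation could be sketched as follows; treating the summed statistic as χ²-distributed with Ndof = 49 degrees of freedom is itself an assumption.

```python
import numpy as np
from scipy import stats

def survey_diff_pvalue(cl_survey_diff, cl_halfring, sigma_cl, lmin=2, lmax=50):
    """Sum the standardized deviations of a survey-difference spectrum from the
    half-ring noise spectrum over ell = 2-50 and convert to a p-value."""
    sl = slice(lmin, lmax + 1)
    chi2 = np.sum(((cl_survey_diff[sl] - cl_halfring[sl]) / sigma_cl[sl]) ** 2)
    ndof = lmax - lmin + 1
    return chi2, stats.chi2.sf(chi2, ndof)                 # p-value under chi^2 statistics
```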
Survey difference χ2 and p-values.
As discussed further in Sect. 13, on the basis of these and other results, we have discarded these two surveys from the released likelihood.
12.2. Half-ring test
As already pointed out, the half-ring difference maps are the best direct information about the actual noise in the LFI data. A proper characterization of the noise is fundamental for the creation of the realistic NCVMs and noise MCs required for the following steps in the data analysis. In this respect, such noise modelling has to be validated against the half-ring maps. For the current analysis we followed the same procedure used for the previous data release. We computed auto-spectra in temperature and polarization with anafast, for both the half-ring difference maps and 10 000 noise Monte Carlo simulated maps taken from FFP8. We compared the half-ring spectra with the distribution of the noise MC simulations and with the white noise derived from the white noise covariance matrices (WNCVM) calculated by Madam mapmaking.
Figure 16 gives a flavour of this comparison for the three LFI frequencies and for both total intensity TT and polarization (EE and BB) power spectra. Note that the half-ring noise spectra are binned over a range of Δℓ = 25 for ℓ ≥ 75. The agreement between half-ring noise spectra and noise MC distribution is remarkable, and gives us confidence that the LFI noise properties are accurately characterized.
Fig. 16 Consistency of the noise angular power spectra from the half-ring difference maps (red), white noise covariance matrix (black dash-dotted line), and 10 000 full-noise MCs (grey band showing 50% quantiles, black solid line, and limits at 16% and 84% quantiles). From top to bottom we have TT, EE, and BB spectra for 30 (left), 44 (centre) and 70 GHz (right). Half-ring spectra are binned with Δℓ = 75 for ℓ ≥ 75. |
We further inspect this comparison by computing the mean Cℓ for the high-ℓ tail of the spectrum (1150 ≤ ℓ ≤ 1800) and comparing it with the WNCVM (white noise covariance matrix) estimate (Fig. 17). It is clear that there is some residual 1/f noise also at high ℓ, as was already pointed out for the 2013 release (Planck Collaboration II 2014). This means that both the data and the noise MCs predict a slightly higher noise than the WNCVM. The residual is at most of the order of 1.6% (TT) at 30 GHz, 1.3% (BB) at 44 GHz, and 1.0% (EE) at 70 GHz. On the other hand, the agreement between the actual data and the full noise MCs is extremely good, being of the order of 0.5% at 30 GHz, 0.4% at 44 GHz, and 0.2% at 70 GHz.
12.3. End-to-end test results
The LFI calibration pipeline is necessarily quite complex, since it includes iterative mapmaking, sidelobe removal, Galactic masking, map-domain fits to the 4π beam-convolved dipole, and filtering. While the accuracy of the mean calibration constant is important, particularly for inter-frequency validation and foreground modelling, we are mostly concerned with quantifying the level of systematic errors in our estimation of the calibration over time. The gain of the LFI radiometers typically varied by a few percent over the four-year mission lifetime, with changes on time scales from single pointing periods to the full mission. Null tests on survey and year time scales set useful limits on systematic effects, including incorrect calibration estimation, but it is still important to develop a “bottom up” estimate of possible errors. Consequently we have carried out several parallel efforts to simulate our calibration procedure, each using different software and detailed choices for inputs, but following the same general approach, which we now summarize.
-
Start with a fiducial sky map (in kelvins), either from the frequency maps of the data, FFP8, or some other simulation. This map includes CMB anisotropies, foregrounds, and possibly some systematics, but no dipole signals, and can be either temperature only or Q and U.
-
“Unwrap” or rescan the map to a time-ordered signal data set, in “ring” basis (still in kelvins). This is done using actual flight pointing data.
Fig. 17 Ratio of the mean noise angular power spectrum in the high-ℓ (1150 ≤ ℓ ≤ 1800) tail to the white noise as derived from the white noise covariance matrices from Madam.
-
Add dipole signals, including the solar dipole and the orbital dipole. We can choose here whether to use a “pencil beam” model, where the dipole signal that is added has been sampled from the sky model with a Dirac delta function, or a 4π model consisting of an all-sky convolution of the detector beam model with the dipole model.
-
Add instrument noise, either white noise or full 1 /f noise (only white noise turns out to be relevant).
-
All the steps described so far assume a timeline in kelvins. Next we “decalibrate” these simulated data streams using a fiducial model for the actual detector gain, and produce timelines in volts. A standard choice here is to use the so-called “Delta V” gain, which is a radiometer gain estimated directly from the DC-coupled detector data. While we know from detailed tests that this gain does not track the actual gain fluctuations better than about 0.5%, it has the advantage of being a gain estimate with no smoothing applied, and should reflect closely the true statistics of the radiometer gain.
-
From this simulated timestream, we proceed with our nominal calibration pipeline to recover the input gain. In this way, we can compare the recovered time domain gain estimate to the fiducial input, as well as the final calibrated maps to the fiducial input maps. The results of such comparisons are shown for two radiometers in Figs. 18 and 19.
These simulations are designed to test the impact of our procedures on the results. They also provide a mechanism for quantitatively determining the impact that errors in the inputs, such as beam shape or far-sidelobe contribution, have on our output maps and other scientific products. They do not provide a way to estimate what those input errors are; these must be determined by dedicated investigations of the optical model or instrument-specific simulations. Starting from reasonable estimates of the systematics affecting our instrument, however, we can introduce changes in the inputs within the expected range and then test for deviations in the recovered calibration. We thus obtain both the sensitivity of the calibration to that effect and an estimate of the error it is likely to induce, assuming either extreme values (conservative) or the expected 1σ (typical). Similarly, we can use this approach to determine the sensitivity of the calibration process to Galactic masking.
Fig. 18 Relative variations between input and output of the end-to-end test for radiometer 27S at 30 GHz. In general, we recover the input gain to better than 0.1%, except for some larger excursions introduced by sudden changes in the instrument configuration, to which the 30 GHz radiometers are particularly sensitive.
Fig. 19 Relative variations between input and output of the end-to-end test for radiometer 22S at 70 GHz. The overall recovery is under 0.1%, with some spikes in the longest pointing periods.
The basic results of such end-to-end tests of the effects of systematics are summarized in Table 16. Comparison of the difference between input and output gains shows a typical bias of order 0.2%.
Figures 18 and 19 show input and output gain constants and the relative variations between the two, at 30 GHz and 70 GHz, respectively. The 30 GHz channels are the most difficult to calibrate, because they are more sensitive to changes in the instrument configuration, which cause a larger number of jumps, while the 70 GHz radiometers are more sensitive to instrument noise. Table 16 shows the mean and standard deviation of the relative variations between input and output gain constants for all the radiometers. The resulting photometric calibration is recovered to within 0.2%, validating the calibration algorithm.
Mean and associated error of the percentage variation between input and output of the end-to-end tests.
12.4. Intra-frequency consistency check
We tested consistency between 30, 44, and 70 GHz maps by means of power spectra, as already done in the previous release (Planck Collaboration III 2014). In order to avoid the need to estimate the noise bias, we simply took the cross-spectra between half-ring maps at the three LFI frequencies. As in the 2013 data release, we used the cROMAster code which extends the pseudo-Cℓ approach of Hivon et al. (2002) to cross-power spectrum estimation (Polenta et al. 2005). Although suboptimal with respect to the maximum likelihood approach, this method provides accurate results, and is at the same time computationally quick and light. Consequently, this method is widely used within the CMB community (see e.g. Molinari et al. 2014 and references therein for a comparison between different power spectrum estimators).
These spectra are computed using a mask that is the combination of the G040, G060, and G070 Planck masks at 30, 44, and 70 GHz, respectively, together with the appropriate frequency-dependent point source mask. In Fig. 20 cross-spectra from the 30, 44, and 70 GHz half-ring maps are presented, showing very good agreement among these maps (especially as we did not apply any component separation to the maps). All three data sets show strong consistency with the Planck best-fit TT spectrum (black points), to which a contribution from unmasked point sources has been properly added.
Another, more quantitative, way of assessing data consistency is to build scatter plots (TT plots) for the three frequency pairs. In order to do this we have to subtract the contribution of point sources below the mask threshold at each individual frequency. After that we perform a linear fit, accounting for errors on both the x- and y-axes, to quantify the level of agreement between pairs. Results are presented in Fig. 21, where we compare spectra in the multipole range around the first acoustic peak. The agreement is extremely good, and the slopes are consistent with unity within the errors (deviations range from 0.1 to 0.9%). This in turn implies calibration consistency between the maps at the sub-percent level. This is very significant considering that we did not take into account foreground removal or uncertainties in the window function and calibration; we may therefore expect the agreement to improve when these issues are taken into account.
Fig. 20 Temperature cross-power spectra (from half-ring maps) at 30, 44, and 70 GHz, binned in multipole space. Foreground emission is excluded only by means of a Galactic sky mask, without further component separation. Best-fit Planck temperature spectra plus contributions from unresolved point sources are shown as dashed lines for each LFI band.
Fig. 21 Consistency between cross-power spectra at LFI frequencies: left, 70 GHz versus 30 GHz; middle, 70 GHz versus 44 GHz; and right, 44 GHz versus 30 GHz. The solid red line is the linear regression, accounting for errors on both axes. The slopes are found to be consistent with unity within the uncertainties.
12.5. Internal consistency check
In order to assess the internal consistency of the 70 GHz data, we build three flavours of cross-power spectra that use different kinds of data split, namely the half-ring maps, the detector-set (quadruplet) maps, and the year 1–3 and year 2–4 maps. In Fig. 22 we show residuals of the three estimates compared to the expected deviations computed by running the same procedure on the realistic FFP8 Monte Carlo simulations. A simple χ² analysis shows that the residuals are compatible with the null hypothesis.
We then apply the Hausman test (Polenta et al. 2005) to further verify the consistency of the three power-spectrum estimates. We define the statistic

H_ℓ = (Ĉ_ℓ − C̃_ℓ) / √Var(Ĉ_ℓ − C̃_ℓ), (18)

where Ĉ_ℓ and C̃_ℓ represent two different cross-spectra, and we combine the information from different multipoles through the quantity

B_L(r) = (1/√L) Σ_{ℓ=2}^{[Lr]} H_ℓ,  r ∈ [0, 1], (19)

where [.] denotes the integer part. It can be shown that the distribution of B_L(r) converges to a Brownian motion process, which can be studied using three test statistics, defined as s1 = sup_r B_L(r), s2 = sup_r | B_L(r) |, and s3 = ∫₀¹ B_L²(r) dr. Results for the comparison of detector-set (DS) and year-based (YR) cross-spectra are shown in Fig. 23. Vertical lines represent the values of the test statistics computed from the Planck maps, compared to the empirical distribution of the test statistics derived from FFP8 simulations. The application of the Hausman test to the other cross-spectra combinations produces similar results, supporting the strong internal consistency of the LFI 70 GHz data.
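A sketch of these statistics, following the forms of Eqs. (18) and (19) as reconstructed above, is given below; the variance of the cross-spectrum difference is taken as an input rather than estimated internally.

```python
import numpy as np

def hausman_statistics(cl_1, cl_2, var_diff, lmax):
    """Hausman-test statistics for two cross-spectrum estimates of the same sky:
    standardize the difference (Eq. (18)), build the partial-sum process B_L(r)
    (Eq. (19)), and return s1, s2, s3."""
    ells = np.arange(2, lmax + 1)
    H = (cl_1[ells] - cl_2[ells]) / np.sqrt(var_diff[ells])
    L = len(ells)
    B = np.cumsum(H) / np.sqrt(L)                          # B_L(r) on the grid r = l/L
    s1 = B.max()
    s2 = np.abs(B).max()
    s3 = np.trapz(B ** 2, dx=1.0 / L)                      # integral of B_L^2 over r
    return s1, s2, s3
```

In practice the same function would be run on the FFP8 simulations to build the empirical distributions against which the data values are compared.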
In this second Planck data release, the calibration pipeline considers the full convolution between the beam response B and the calibration signal D. This is a novel approach, which allows us to better control the impact of optical systematic effects on the calibration and to improve the self-consistency of the data. Note that in the first data release, the dipole-fitting routines used to measure the calibration constants assumed a pencil-like beam, and the mismatch in power was fixed by applying a beam window function to the power spectra.
As Planck Collaboration V (2014) has shown, the convolution B ∗ D retains the same dipole shape as D, but there are two effects of particular relevance for this discussion:
-
1.
the finite width of the main beam and the presence of sidelobes reduce the peak-to-peak amplitude of the dipole itself (i.e. the peak-to-peak variation in B ∗ D is smaller than the variation in D);
-
2.
the lack of perfect axial symmetry (particularly in the region which is far from the main beam) induces a tilt in the dipole axis.
The first point implies that using the B ∗ D signal as a calibration source reduces the average value of the calibration constant K (measured in K V⁻¹). Planck Collaboration V (2014) and Planck Collaboration V (2016) quantify the amount of this variation in terms of its effect on the measured power spectra Cℓ (Eq. (20)), where fsl is the fraction of B that falls outside 5° of the main beam (the “sidelobes”), φD = ∂t(Bsl ∗ D) / ∂t(Bmain ∗ D) is the ratio between the variation of the dipole signal entering the sidelobes and the variation of the same signal entering the main beam, and φsky and φCMB are defined similarly to φD, but in terms of the amount of Galactic signal plus CMB (φsky) and of the CMB alone (φCMB)5.
Fig. 22 Residuals between three different cross-power spectra computed from 70 GHz data: half-ring (HR) maps, quadruplet (detector-set, DS) maps, and year 1–3/year 2–4 (YR) maps. Error bars are derived from the realistic FFP8 simulations.
We have verified the consistency of this approach by producing a set of maps using data from the current release, but calibrated using the pencil-beam approximation. By comparing the raw power spectra of these maps with the official LFI power spectra of the second release, we have measured excellent agreement (better than 0.03%) with the estimate provided by Eq. (20), apart from four out of six 44 GHz radiometers. In the 44 GHz case, however, because of the small level of the sidelobes, the resulting change in the power spectra is still smaller (<0.4%) than for the other two LFI bands.
12.6. Updated systematic effects assessment
Known instrumental systematics affecting LFI maps are discussed in detail in Planck Collaboration III (2016) and are listed in Table 17, along with short descriptions of their causes and strategies for their removal. In Tables 18–20 we list both the rms and the difference between the 99% and the 1% quantiles in the pixel value distribution for the I, Q, and U maps, at 30, 44 and 70 GHz respectively. We refer to the latter as the peak-to-peak (p-p) difference, even though it neglects outliers, since it effectively approximates the peak-to-peak variation of the effect on the map.
Detailed analysis reported in Planck Collaboration III (2016) shows that systematic uncertainties are at least two orders of magnitude below the CMB TT power spectrum and are not significantly contaminating the EE and BB spectra.
Fig. 23 From left to right, the empirical distribution (estimated via FFP8 simulations) of the s1, s2, and s3 statistics of the Hausman test (see text). Vertical lines represent the values obtained from Planck 70 GHz data.
List of known instrumental systematic effects in Planck-LFI.
13. Low-ℓ data selection
The 70 GHz polarization data are of special importance since the Planck low-ℓ likelihood (Planck Collaboration XI 2016) used to determine cosmological parameters is based on them. In order to provide the best data possible for the construction of the likelihood, we perform several tests at survey level in order to choose the most reliable data combination. For this purpose we focus on the very low multipoles, especially ℓ = 2–4, which are the most susceptible to systematic errors.
We compare results from actual data and from noise-only Monte Carlo realizations made for the FFP8 simulations. Specifically, we take differences between the full data set (over the entire mission lifetime) and some specific combinations of surveys, for both noise simulations and real data. We then compute the angular power spectra of these differences to look for anomalies.
The analysis at the level of surveys is very informative: as a consequence of the scanning strategy and payload geometry, Survey 1 and Survey 3 share the same beam orientation with respect to the sky. The same is true for Surveys 2/4, 5/7, and 6/8. For this reason we consider these combinations jointly for the null tests, thus maximizing signal-to-noise. Figure 24 shows the distribution of angular power for E- and B-modes for each survey pair, as derived from the Monte Carlo simulations, with results from the actual LFI data indicated by vertical lines. Evidently, Survey 2 and Survey 4 are quite anomalous with respect to the rest of the surveys. We will offer some possible explanations below. First, however, we can be more quantitative and compute the probability to exceed (PTE) of our data, based on simulations. Results are reported in Table 21. These probability values seem to indicate that Surveys 2 and 4 show systematic effects. Guided by these findings, we report the PTE values for the differences between the full mission and the survey combinations in Table 22.
Summary of systematic effect uncertainties on 30 GHz maps in μKCMB.
Summary of systematic effect uncertainties on 44 GHz maps in μKCMB.
Summary of systematic effect uncertainties on 70 GHz maps in μKCMB.
We can also combine the PTE results from the survey null tests across these multipoles. In Table 23 we report results from a test of uniformity of the PTEs, simply counting how many entries are lower than a given threshold. The p-values for these tests are computed assuming binomial statistics. We report results for different values of the threshold to show their robustness and stability with respect to the thresholds.
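The counting test described above can be written compactly; the 5% threshold shown is just one of the values scanned, and the helper name is illustrative.

```python
import numpy as np
from scipy import stats

def pte_uniformity_test(ptes, threshold=0.05):
    """Count the survey null-test PTEs below a threshold and compute the binomial
    probability of seeing at least that many if the PTEs were uniform."""
    ptes = np.asarray(ptes)
    n_low = int(np.sum(ptes < threshold))
    p_value = stats.binom.sf(n_low - 1, ptes.size, threshold)   # P(X >= n_low)
    return n_low, p_value
```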
Quantitatively Surveys 2 and 4 again stand out as anomalous at roughly the 3σ level. Currently the reason for this is not fully understood, but we note that this particular survey pair has a scanning strategy that produces larger uncertainties in gain, as demonstrated in Fig. 4. The geometry for these two surveys also increases the sensitivity of the very low-ℓ results to small errors in estimates of Galactic contamination of the far sidelobes. These issues are under investigation and will be addressed further in the next data release, but for the moment we choose to be conservative and remove Surveys 2 and 4 from the default likelihood developed in Planck Collaboration XI (2016). The default likelihood is used in Planck Collaboration XIII (2016) to derive cosmological parameters. The optical depth to reionization, τ, is the parameter most affected by this choice: removing Surveys 2 and 4 changes the value of this parameter by about 0.5σ.
14. The low-ℓ likelihood
The baseline 2015 Planck low-ℓ likelihood is described in depth in Planck Collaboration XI (2016). Here we briefly discuss its polarization content, largely based on data from the Planck 70 GHz channel. As noted in the previous sections, Surveys 2 and 4 are excluded from the data set to reduce the chance of systematic contamination. We do not discuss here the low-ℓ temperature block of the likelihood developed in Planck Collaboration XI (2016), which is based on a CMB map derived with the Commander algorithm using all Planck channels from 30 to 353 GHz (Planck Collaboration IX 2016).
At multipoles ℓ < 30, we model the likelihood assuming that the maps are Gaussian distributed with known covariance (Planck Collaboration XV 2014):

L(Cℓ) = P(m | Cℓ) = (2π)^(−n/2) |M(Cℓ)|^(−1/2) exp[ −(1/2) mᵀ M(Cℓ)⁻¹ m ], (21)

where n is the total number of observed pixels, M(Cℓ) is the covariance matrix of m = [T, Q, U], and T, Q, and U are the pixel-space intensity and linear polarization Stokes parameter maps. Note that the covariance matrix depends on the CMB model angular power spectra, Cℓ, only through the CMB signal covariance matrix:
Fig. 24 Measured LFI 70 GHz EE (top) and BB (bottom) null power spectra for ℓ = 2, 3, and 4 (vertical lines), compared to the distribution derived from noise-only Monte Carlo simulations. The null spectra are from the difference between the full data set and specific survey combinations: left, Surveys 1 and 3; middle, Surveys 2 and 4; and right, Surveys 5 and 7. It is clear that the Survey 2/Survey 4 combination stands out with respect to the others.
PTE for EE and BB low multipoles, for the differences between full mission and individual surveys.
PTE for EE and BB low multipoles, for the differences between full mission and survey combinations.
M(Cℓ) = S(Cℓ) + N, (22)

where S(Cℓ) is the signal covariance computed from the model power spectra and N is the noise covariance matrix. In order to clean the 70 GHz Q and U maps, we perform a template-fitting procedure, using the Planck 30 GHz channel as a tracer of polarized synchrotron emission and the Planck 353 GHz channel as a tracer of polarized dust emission. Restricting m from now on to the Q and U maps (i.e. m ≡ [Q, U]), we write

m = (m70 − α m30 − β m353) / (1 − α − β), (23)

where m70, m30, and m353 are bandpass-corrected versions of the 70, 30, and 353 GHz maps (Planck Collaboration III 2016; Planck Collaboration VII 2016), whereas α and β are the scaling coefficients for synchrotron and dust emission, respectively. The latter are fitted by minimizing the quantity

χ²(α, β) = (1 − α − β)² mᵀ [ (1 − α − β)² S(Cℓ) + N70 ]⁻¹ m, (24)

where N70 is the pure polarization part of the 70 GHz noise covariance matrix6 (Planck Collaboration VI 2016), and Cℓ is taken as the Planck 2013 fiducial model (Planck Collaboration XVI 2014). We have verified that changing this model does not significantly affect the results. We find α = 0.063 and β = 0.0077, with 3σ uncertainties σα = 0.025 and σβ = 0.0022. The best-fit values quoted correspond to a polarization mask that allows 47% of the sky to pass through. In fact, we repeated this procedure for a set of 24 masks, allowing sky fractions from 80% to 29%. These masks were constructed by rescaling the templates m30 and m353 to 70 GHz assuming fiducial spectral indices, computing the polarized intensity, and thresholding the latter. For each mask, we evaluate the probability to exceed the measured χ². The 47% analysis mask is chosen as the tightest mask for which this probability remains acceptable.
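The template cleaning and χ² evaluation, following Eqs. (23) and (24) as reconstructed above, could be sketched as follows; in practice the minimization over (α, β) would be done on a grid or with a standard optimizer.

```python
import numpy as np

def clean_and_chi2(m70, m30, m353, alpha, beta, S_fid, N70):
    """Template-clean the 70 GHz polarization vector (Eq. (23)) and evaluate the
    chi^2 of Eq. (24) for a given pair of scalings (alpha, beta)."""
    r = 1.0 - alpha - beta
    m = (m70 - alpha * m30 - beta * m353) / r              # cleaned [Q, U] vector
    C = S_fid + N70 / r ** 2                               # signal plus rescaled 70 GHz noise
    chi2 = m @ np.linalg.solve(C, m)
    return m, chi2
```

The best-fit (α, β) would then be the pair that minimizes the returned chi2 over the trial values.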
Uniformity of the PTEs for survey null tests based on the number of entries lower than a given threshold (p-values are from the binomial distribution).
We define the final polarization noise covariance matrix used in Eq. (22) by propagating the noise of the input maps through the cleaning of Eq. (23) (Eq. (25)). We have verified that the outer (column-by-row) products involving the foreground templates are subdominant corrections. We do not include further correction terms associated with the bandpass leakage error budget, since they are completely negligible.
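A corresponding sketch of the noise propagation, under the assumption that the cleaned-map covariance follows from straightforward error propagation of Eq. (23); the exact composition of the released matrix is defined by Eq. (25) in the article, and N30 and N353 here denote the assumed polarization noise covariances of the template channels.

```python
def cleaned_noise_cov(N70, N30, N353, alpha, beta):
    """Noise covariance of the cleaned [Q, U] map obtained by propagating
    the input-map noise through Eq. (23); the outer-product terms built
    from the foreground template maps are omitted, as in the text."""
    return (N70 + alpha**2 * N30 + beta**2 * N353) / (1.0 - alpha - beta)**2
```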
15. Product description
We now give a list and brief description of the released Planck LFI products, which can be freely accessed via the Planck Legacy Archive interface and are based on all the data acquired during routine operations from 12 August 2009 to 23 October 2013; the full format is reported in the Explanatory Supplement.
- Pointing timelines: one FITS file per OD and per frequency; each file contains the onboard time (OBT) and the three angles, θ, φ, and ψ, that identify each sample on the sky.
- Time timelines: one FITS file per OD and per frequency; each file contains the OBT and the corresponding TAI (International Atomic Time) value (with no leap seconds) in modified Julian day format, allowing the user to cross-correlate OBT with UTC.
- Housekeeping timelines: all the housekeeping parameters with their raw and calibrated values, separated by housekeeping source and provided for each OD.
- Timelines in volts: raw scientific data in engineering units (volts) for each detector at 30, 44, and 70 GHz and for each OD, prior to calibration and to the removal of instrumental systematic effects.
- Cleaned and calibrated timelines: provided in KCMB for each detector at 30, 44, and 70 GHz and for each OD, after scientific calibration and with the convolved dipole and convolved Galactic straylight removed.
- Scanning beam: the 4π beam representation used in the calibration pipeline.
- Effective beam: the beam representation on the sky, obtained as the projection of the scanning beam onto the maps.
- Full-sky maps at each frequency: maps of the sky at 30, 44, and 70 GHz in temperature and polarization at Nside = 1024 and, in the case of 70 GHz, also at Nside = 2048. Maps are provided for the different data periods detailed in Table 13. Note that the polarization convention used for the Planck maps is “COSMO” rather than “IAU” (a conversion sketch is given after this list); see the Explanatory Supplement for details.
- Baseline timelines for the full-mission and half-ring periods: these contain the baseline offsets (whose length is specified in Table 12) removed during the mapmaking process.
- Low-resolution maps: maps at Nside = 16, together with their associated full noise covariance matrices.
- RIMO (reduced instrument model): a model containing all the parameters that describe the main instrument characteristics, from noise properties to bandpasses and beam window functions.
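As a convenience for users of the released maps, here is a minimal healpy sketch of the COSMO-to-IAU polarization-convention change mentioned in the full-sky-maps item above; it relies only on the fact that the two conventions differ by the sign of Stokes U, and the file name is illustrative rather than the exact released file name.

```python
import healpy as hp

# Read I, Q, U from a released LFI frequency map (file name is illustrative).
I, Q, U = hp.read_map("LFI_SkyMap_070_1024_full.fits", field=(0, 1, 2))

# The released maps use the HEALPix/"COSMO" polarization convention;
# converting to the IAU convention amounts to flipping the sign of U.
U_iau = -U
```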
16. Discussion and conclusions
We have summarized in this paper all the steps taken to assemble, calibrate, and map the data gathered by the Planck LFI instrument. While the focus is on the changes in data and methods since our previous release in 2013 (Planck Collaboration II 2014), this paper provides a complete, if brief, description of LFI data processing, and of the resulting temperature and polarization maps at 30, 44, and 70 GHz. Many supporting details are provided in four additional papers accompanying this release, Planck Collaboration III (2016), Planck Collaboration IV (2016), Planck Collaboration V (2016), and Planck Collaboration VI (2016), which treat systematic effects, beams, calibration and mapmaking, respectively. We note that Planck Collaboration VII (2016) and Planck Collaboration VIII (2016) cover the same set of topics for the Planck HFI instrument.
16.1. Operations, TOI, and beams
LFI operated stably for all four years of observations (eight sky surveys). The last four surveys were performed with a different phase angle (see Sect. 2), allowing us to investigate some systematic effects (and also reducing Galactic straylight). The most significant change in LFI operations was the gradual degradation of the sorption cooler and its replacement by a second cooler (on OD 460). For the current release, the construction of the satellite attitude and pointing takes into account two additional variables: the distance to the Sun and the temperature of the REBA (Planck Collaboration I 2016).
Routine spacecraft manoeuvres made approximately 8% of the data unusable; other losses of TOI data were <1% for all three LFI bands.
The TOI required several small corrections described in Sect. 4. These include corrections for ADC non-linearity and for electronic spikes. Residual effects in the LFI maps are at the µK level or below (see Planck Collaboration IV 2016 for a fuller discussion).
Measurements of LFI beam properties (Sect. 6) have improved substantially since the earlier release, based on repeated scans of Jupiter and on better modelling of the sidelobes. The effective beam solid angles at 30, 44, and 70 GHz are 1190.06, 832.00, and 200.90 arcmin², respectively; see Table 7 for details. The sidelobe power remaining outside the main beam is very small: 0.808%, 0.117%, and 0.646% for the three LFI bands.
16.2. Noise and calibration
Calibration of the TOI (i.e., conversion of the raw voltage outputs to units of KCMB) has improved in several ways since the previous release (Planck Collaboration II 2014). Firstly, Planck calibration is now based on the dipole signal induced by the annual motion of the satellite around the Sun (the orbital dipole). The calibration thus does not depend on WMAP measurements of the larger solar dipole, and it is also absolute, in the sense that it depends only on well-measured properties of the solar system and on fundamental constants.
Secondly, LFI calibration is now based on full 4π convolution of the beam with the dipole (see Sect. 7.1). While the calibration is based on the dipole, the dipole signal is removed from the TOI before mapmaking.
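To give a feeling for the size of the calibration signal, a back-of-the-envelope estimate (not a pipeline computation) of the orbital-dipole amplitude from the spacecraft's orbital velocity, using only the CMB temperature and the non-relativistic Doppler formula:

```python
# Peak amplitude of the orbital dipole, Delta_T ~ T_CMB * v / c.
T_CMB = 2.7255      # K
v_orbit = 30.0e3    # m/s, approximate speed of Planck (at L2) around the Sun
c = 2.998e8         # m/s

delta_T_uK = 1e6 * T_CMB * v_orbit / c
print(f"orbital dipole amplitude ~ {delta_T_uK:.0f} microK_CMB")  # roughly 270 microK
```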
A major source of potential systematic error in calibration is Galactic straylight (Galactic emission leaking into the LFI horns). We model this effect, and correct the TOI accordingly (Sect. 7.4). Straylight (if not corrected) produces evident rings centred on the Galactic centre (see Fig. 6).
As noted, Planck calibration is carried out on a large-scale source, namely the orbital dipole, which has a thermal spectrum. When assessing the brightness temperature or flux density of other astronomical objects with non-thermal spectra, small colour corrections are necessary; these are provided in Sect. 7.5. For compact sources, the small amount of power missing from the main beams, listed above, must be taken into account. As an example, the flux density of a compact source with spectral index −0.5 extracted from the 30 GHz map requires a 1.00808 multiplicative correction for missing power and a multiplicative colour correction of 0.997.
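Written out explicitly, the worked example above amounts to multiplying the two quoted factors; the measured flux density below is a made-up placeholder.

```python
# Compact source with spectral index -0.5, extracted from the 30 GHz map.
flux_measured = 1.0        # Jy, hypothetical value read off the map
missing_power = 1.00808    # correction for power lost outside the main beam
colour_corr = 0.997        # colour correction for spectral index -0.5

flux_corrected = flux_measured * missing_power * colour_corr
print(f"corrected flux density: {flux_corrected:.4f} Jy")  # 1.0051 Jy for a 1 Jy measurement
```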
The noise properties (white noise levels and knee frequencies) of the LFI receivers are discussed in Sect. 8. The white noise was stable over the four-year mission for all receivers.
16.3. Maps
LFI produces full-sky maps in Stokes parameters I, Q, and U at all three frequencies; the map properties are listed in Sect. 10. Calibrated TOI data are destriped and maps are constructed using the Madam mapmaking code (see Sect. 9 for a description and Planck Collaboration VI 2016 for full details). In the destriping, a mask is employed to limit errors introduced by strong Galactic emission; the final maps, however, cover the entire sky (at Nside = 1024 resolution). Madam also produces the noise covariance matrix (NCVM) of the maps.
We also provide maps at lower resolution (Nside = 16; Fig. 9) for use in the construction of the low-ℓ likelihood (fully described in Planck Collaboration XI 2016). The downgrading scheme to smooth the maps from Nside = 1024 to 32 and then to 16 is described in Sects. 9.1 and 9.2. Section 9.3 describes the NCVM for these low-resolution products.
16.4. Polarization
The major new feature of this release is the set of polarized maps and products. The low-resolution polarization maps at 70 GHz, in particular, play a crucial role in the construction of the Planck low-ℓ likelihood (Planck Collaboration XI 2016), and consequently in the Planck values of the cosmological parameters (Planck Collaboration XIII 2016). We therefore devote considerable attention to investigating potential systematic errors in these maps (detailed in Sect. 11). The largest source of uncertainty in LFI polarization measurements is leakage from temperature to polarization, largely caused by differences in the frequency responses, or bandpasses, of the two arms of a given LFI radiometer (“bandpass mismatch”). This mismatch can be quantified by a single parameter; for the 70 GHz radiometers, it varies between 0.18% and 1.24%. The bandpass mismatch correction maps are provided in this release at Nside = 256 and should be applied to the LFI Q and U maps.
16.5. Validation
We employ suites of both null tests and simulations to assess the quality of LFI maps and of other products derived from them (see Sect. 12). The null tests exploit the many ways in which the data can be divided: survey by survey, year by year, and on the much shorter time scale of half-ring differences. The results of some of these null tests are shown in Fig. 15; further details appear in Planck Collaboration III (2016) and Planck Collaboration V (2016). We call attention to the substantially lower residuals (and cleaner maps) with respect to the 2013 release, resulting mainly from improved calibration. The null tests do, however, reveal larger-than-average residual signals in the polarized maps made from Survey 2 and Survey 4 data (we return to this issue below).
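The logic of the simplest of these null tests can be sketched in a few lines of healpy (file names are illustrative; the DPC analysis also involves noise Monte Carlos and masking, see Sect. 12):

```python
import healpy as hp
import numpy as np

# Half-ring difference map: the sky signal cancels, leaving (half of) the noise
# plus any residual systematic effect.
m1 = hp.read_map("map_ringhalf_1.fits", field=(0, 1, 2))
m2 = hp.read_map("map_ringhalf_2.fits", field=(0, 1, 2))
null = [0.5 * (a - b) for a, b in zip(m1, m2)]

# Pseudo power spectra of the null map; an EE or BB excess over the noise
# expectation flags residual systematics.
cl_tt, cl_ee, cl_bb, cl_te, cl_eb, cl_tb = hp.anafast(null, lmax=64)
print("mean EE over ell = 2-10:", np.mean(cl_ee[2:11]))
```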
Another type of null test is to compare the CMB power spectra derived from different frequencies. This topic is discussed for the entire mission in Planck Collaboration I (2016). Here, we point out that Fig. 17 shows good agreement among the three LFI bands. In the ℓ range 40–300 (which covers the first peak of the CMB power spectrum), the three LFI power spectra agree to better than 1%. This agreement extends to measurements of compact sources (which involve both a wider ℓ range and the values of the beam solid angles; see Planck Collaboration XXVI 2016).
We validated the LFI polarization maps by comparing our bandpass-mismatch-corrected maps to maps constructed using the IQUSS procedure (Sect. 11.4), and found that the Stokes Q and U maps are indistinguishable. In particular, the polarized structure along the Galactic plane is reproduced, including the most significant discrepancies with the WMAP maps.
Simulations based on FFP8 (Planck Collaboration XII 2016) are also used to validate LFI results. We perform end-to-end simulations primarily to test the impact of systematic errors, and of the various steps in our calibration and mapmaking procedures, on the final results. Section 12.6 and Tables 18–20 summarize the sources of systematic error and their effects on the LFI maps. The far sidelobes of the LFI beams are the dominant source of uncertainty in the 30 GHz maps; at 44 and 70 GHz, other instrumental effects dominate, particularly 1 Hz electronic spikes and ADC non-linearity, respectively. The overall systematic-effects uncertainty is estimated to be 0.88, 1.97, and 1.87 μKCMB in the I component; 1.11, 1.14, and 2.25 μKCMB in the Q component; and 0.95, 1.20, and 2.22 μKCMB in the U component, at 30, 44, and 70 GHz, respectively.
As mentioned above, null tests show that the polarized data from Surveys 2 and 4 contain residual signals (possibly due to contamination from Galactic emission). As a consequence, we choose to be conservative and omit these two surveys (approximately 1/4 of the data) from the low-ℓ likelihood. The studies supporting this decision are described in Sect. 13. The optical depth, τ, is the cosmological parameter most affected; including or omitting Surveys 2 and 4 changes τ by about 0.5σ.
Planck (http://www.esa.int/Planck) is a project of the European Space Agency (ESA) with instruments provided by two scientific consortia funded by ESA member states and led by Principal Investigators from France and Italy, telescope reflectors provided through a collaboration between ESA and a scientific consortium led and funded by Denmark, and additional contributions from NASA (USA).
Refer to Planck Collaboration V (2016) for a mathematical derivation and a discussion of the formula.
Acknowledgments
The Planck Collaboration acknowledges the support of: ESA; CNES and CNRS/INSU-IN2P3-INP (France); ASI, CNR, and INAF (Italy); NASA and DoE (USA); STFC and UKSA (UK); CSIC, MINECO, JA, and RES (Spain); Tekes, AoF, and CSC (Finland); DLR and MPG (Germany); CSA (Canada); DTU Space (Denmark); SER/SSO (Switzerland); RCN (Norway); SFI (Ireland); FCT/MCTES (Portugal); ERC and PRACE (EU). A description of the Planck Collaboration and a list of its members, indicating which technical or scientific activities they have been involved in, can be found at http://www.cosmos.esa.int/web/planck/. Finally, we thank Benjamin Walter for a careful reading of the manuscript.
References
- Bersanelli, M., Mandolesi, N., Butler, R. C., et al. 2010, A&A, 520, A4
- de Gasperis, G., Balbi, A., Cabella, P., Natoli, P., & Vittorio, N. 2005, A&A, 436, 1159
- Eriksen, H. K., Banday, A. J., Górski, K. M., & Lilje, P. B. 2004, ApJ, 612, 633
- Górski, K. M., Hivon, E., Banday, A. J., et al. 2005, ApJ, 622, 759
- Haslam, C. G. T., Salter, C. J., Stoffel, H., & Wilson, W. E. 1982, A&AS, 47, 1
- Hinshaw, G., Weiland, J. L., Hill, R. S., et al. 2009, ApJS, 180, 225
- Hivon, E., Górski, K. M., Netterfield, C. B., et al. 2002, ApJ, 567, 2
- Keihänen, E., Kurki-Suonio, H., & Poutanen, T. 2005, MNRAS, 360, 390
- Keihänen, E., Keskitalo, R., Kurki-Suonio, H., Poutanen, T., & Sirviö, A. 2010, A&A, 510, A57
- Keskitalo, R., Ashdown, M., Cabella, P., et al. 2010, A&A, 522, A94
- Leahy, J. P., & Foley, K. 2006, in CMB and Physics of the Early Universe, 43
- Leahy, J. P., Bersanelli, M., D’Arcangelo, O., et al. 2010, A&A, 520, A8
- Meinhold, P., Leonardi, R., Aja, B., et al. 2009, J. Instrumentation, 4, 2009
- Mennella, A., Bersanelli, M., Butler, R. C., et al. 2010, A&A, 520, A5
- Mennella, A., Butler, R. C., Curto, A., et al. 2011, A&A, 536, A3
- Mitra, S., Rocha, G., Górski, K. M., et al. 2011, ApJS, 193, 5
- Molinari, D., Gruppuso, A., Polenta, G., et al. 2014, MNRAS, 440, 957
- Page, L., Hinshaw, G., Komatsu, E., et al. 2007, ApJS, 170, 335
- Planck Collaboration ES. 2013, The Explanatory Supplement to the Planck 2013 results, http://pla.esac.esa.int/pla/index.html (ESA)
- Planck Collaboration I. 2014, A&A, 571, A1
- Planck Collaboration II. 2014, A&A, 571, A2
- Planck Collaboration III. 2014, A&A, 571, A3
- Planck Collaboration IV. 2014, A&A, 571, A4
- Planck Collaboration V. 2014, A&A, 571, A5
- Planck Collaboration XV. 2014, A&A, 571, A15
- Planck Collaboration XVI. 2014, A&A, 571, A16
- Planck Collaboration I. 2016, A&A, 594, A1
- Planck Collaboration II. 2016, A&A, 594, A2
- Planck Collaboration III. 2016, A&A, 594, A3
- Planck Collaboration IV. 2016, A&A, 594, A4
- Planck Collaboration V. 2016, A&A, 594, A5
- Planck Collaboration VI. 2016, A&A, 594, A6
- Planck Collaboration VII. 2016, A&A, 594, A7
- Planck Collaboration VIII. 2016, A&A, 594, A8
- Planck Collaboration IX. 2016, A&A, 594, A9
- Planck Collaboration X. 2016, A&A, 594, A10
- Planck Collaboration XI. 2016, A&A, 594, A11
- Planck Collaboration XII. 2016, A&A, 594, A12
- Planck Collaboration XIII. 2016, A&A, 594, A13
- Planck Collaboration XIV. 2016, A&A, 594, A14
- Planck Collaboration XV. 2016, A&A, 594, A15
- Planck Collaboration XVI. 2016, A&A, 594, A16
- Planck Collaboration XVII. 2016, A&A, 594, A17
- Planck Collaboration XVIII. 2016, A&A, 594, A18
- Planck Collaboration XIX. 2016, A&A, 594, A19
- Planck Collaboration XX. 2016, A&A, 594, A20
- Planck Collaboration XXI. 2016, A&A, 594, A21
- Planck Collaboration XXII. 2016, A&A, 594, A22
- Planck Collaboration XXIII. 2016, A&A, 594, A23
- Planck Collaboration XXIV. 2016, A&A, 594, A24
- Planck Collaboration XXV. 2016, A&A, 594, A25
- Planck Collaboration XXVI. 2016, A&A, 594, A26
- Planck Collaboration XXVII. 2016, A&A, 594, A27
- Planck Collaboration XXVIII. 2016, A&A, 594, A28
- Polenta, G., Marinucci, D., Balbi, A., et al. 2005, J. Cosmol. Astropart. Phys., 11, 1
- Tauber, J. A., Norgaard-Nielsen, H. U., Ade, P. A. R., et al. 2010, A&A, 520, A2
- Villa, F., Terenzi, L., Sandri, M., et al. 2010, A&A, 520, A6
- Zonca, A., Franceschet, C., Battaglia, P., et al. 2009, J. Instrumentation, 4, 2010