A&A 445, 373 (2006)
DOI: 10.1051/0004-6361:20053413
M. Reinecke1 - K. Dolag1 - R. Hell1 - M. Bartelmann2,1 - T. A. Enßlin1
1 - Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, 85741 Garching, Germany
2 - Institut für Theoretische Astrophysik, Universität Heidelberg,
Albert-Überle-Str. 2, 69120 Heidelberg, Germany
Received 12 May 2005 / Accepted 23 August 2005
Abstract
We describe an assembly of numerical tools to model the output data
of the Planck satellite (Tauber 2004, Adv. Space Res., 34, 491). These start with
the generation of a CMB sky in a chosen cosmology, add in various foreground
sources, convolve the sky signal with arbitrary, even non-symmetric and
polarised beam patterns, derive the time-ordered data streams measured by the
detectors depending on the chosen satellite-scanning strategy, and include
noise signals for the individual detectors and electronic systems. The
simulation products are needed to develop, verify, optimise, and characterise
the accuracy and performance of all data processing and scientific analysis
steps of the Planck mission, including data handling, data integrity checking,
calibration, map making, physical component separation, and power spectrum
estimation. In addition, the simulations allow detailed studies of the impact
of many stochastic and systematic effects on the scientific results.
Efficient implementation of the simulation allows extended
statistics of signal variances and covariances to be built up.
Although developed specifically for the Planck mission, the framework employed, as well as most of the simulation tools, is expected to be of use for other experiments and for CMB-related science in general.
Key words: cosmology: cosmic microwave background - cosmology: large-scale structure of Universe - methods: numerical
The simulation of realistic data streams is a very important task in
the preparation for the Planck satellite mission. Scientifically,
simulated data are required for a realistic assessment of the mission
goals, and for planning how these goals can best be achieved. The
performance of data-analysis algorithms and software needs to be
tested against the scientific requirements of the mission. This
pertains to methods for removing systematic noise from the maps,
separating the multi-frequency data into different physical emission
processes, deriving both
power spectra and higher-order statistical measures for CMB
temperature and polarisation fluctuations, identifying "point"
sources like distant galaxies and galaxy clusters, and much
more. In order to develop, test, and calibrate methods for pursuing
these scientific goals, full-sky maps need to be produced containing
the CMB and all known components of foreground emission in the nine
Planck frequency channels between 30 and 857 GHz, at the
resolution of 5' and the sensitivity to be achieved by the Planck instruments. These maps need first to be "observed" following the scanning law foreseen for Planck and
using realistic beam shapes, then filtered with the frequency and time
response of Planck's detectors, and finally deteriorated by random and
systematic instrumental noise before realistic scientific assessments
can be made.
Also important is the use of simulated data for developing
data-analysis techniques for the mission. At a resolution of 5',
and assuming sufficient oversampling of the beams, the full sky has approximately $5\times10^{7}$ pixels. Full-sky maps in all frequency bands and
physical components (like foregrounds) need to be efficiently handled and
stored. There
will be 74 detectors on-board Planck, 52 on the High-Frequency
Instrument (HFI), and 22 on the Low-Frequency Instrument (LFI). During
one year of the mission, these detectors will produce something on the order of
$10^{11}$ measurements along the slowly precessing, stepwise-circular
scanning path from which the frequency maps need to be
produced. Storing the data efficiently, preparing them for fast
access, and handling them within the data-analysis software pipelines
needs to be thoroughly tested using data simulated at the correct
temporal and spatial sampling rate.
Simulated data are also important for assessing mission operations and the technical planning. Depending on the scanning law, i.e. the path along which the satellite moves and scans the sky, the noise characteristics of the maps will be more or less anisotropic, because some regions on the sky will be observed much more often than others. At a fixed scanning law, the position of the detectors in the focal plane, their orientation relative to the optical axis, and the orientation of the optical axis relative to the spin axis of the satellite, all further affect the scientific goals by changing the noise characteristics, the coverage pattern on the sky, the quality of polarisation measurements, and more.
Finally, it is an important side-aspect of simulated data that they can be used to investigate the advantages and disadvantages of different models for data organisation and storage. With single data sets exceeding the size of multiple GBytes, efficient data handling and processing requires that these data be stored in highly sophisticated ways, e.g. in data bases, such that sequential and random access to parts of the data is possible. As a European-based mission with many participating countries, Planck data analysis will operate in a distributed fashion (i.e. in several geographically separate data processing centres), which places further and stricter requirements on the way data are archived and retrieved.
The arguments above demonstrate that the simulation pipeline is essential for, and an integral part of, both the HFI and LFI Data Processing Centres (Pasian & Sygnet 2002). Even before the Announcement of Opportunity for the Planck instruments was answered, and actually during the phase-A studies for the mission, several simulations were made to understand the possible scientific outcome from Planck and to define the instrumental setup. The need was felt to put these simulation efforts on a common footing and to extend them within a coordinated environment.
These considerations led to the construction of the Planck simulation pipeline, which began quite early in the project, approximately six years ago. At that time there were numerous simulation activities within many groups involved in the project, but those simulations were incomplete and aimed at a multitude of specific and different goals. They were widely used by different groups throughout the Planck consortia, but produced data in non-standardised and non-portable formats, and were not optimised with respect to their consumption of computer resources.
Starting with such existing simulation programmes, the Planck simulation pipeline was built according to the following design guidelines:
Following these criteria, construction of the Planck simulation pipeline proceeded in four steps:
After giving a brief overview of the Planck mission in Sect. 2, in Sect. 3 we describe the layout of the pipeline, i.e. the decomposition of its simulation tasks into modules, and then in Sect. 4 the data it produces. Sections 5 and 6 list the pipeline components common to all modules and the individual simulation modules themselves. A summary is presented in Sect. 7.
Here we briefly summarise some characteristics of the Planck mission that are crucial to the design and construction of the simulation pipeline and to understanding its function.
Planck will observe the microwave sky from a Lissajous orbit centred on the outer (and unstable) Lagrangian point L2 of the Sun-Earth/Moon system, about 1.5 million km from Earth. This
enables the spacecraft to always have the Earth and the Sun at its
back, and have the Moon at a small angle relative to its backward
direction. Planck will be spin-stabilised, rotating at
1 r.p.m. The optical axis of its Gregorian telescope points at
an almost right angle with the rotation axis, thus scanning the sky
along circles. The rotation axis will be kept fixed for 60 min
and then repointed by 2.5 arcmin such that
Planck progresses as the Earth moves around the
Sun. Planck will thus cover the full sky in half a year.
The full mission (after reaching L2) is expected to last 18 months.
The rotation is governed by the inertial tensor of the spacecraft, which is time-dependent due to fuel and coolant consumption. Repointing the spacecraft every hour causes precession, which needs to be damped away at the beginning of each pointing period.
The angle between the optical and rotation axes is slightly less than 90°.
For Planck to cover the full sky, its rotation axis
will make slow excursions from the Ecliptic plane with an
amplitude of several degrees. This scanning pattern implies that the sky
will be covered inhomogeneously, less densely near the Ecliptic and
more densely along lozenge-shaped curves centred on the Ecliptic
poles. Rings on the sky scanned in consecutive pointing periods will
intersect, allowing cross calibration. The scanning pattern also
implies that contact with the receiving antenna in Perth (Australia)
will be limited to two hours per day, during which data down- and
uploads will have to be completed.
Two instruments will receive the incoming microwave radiation. The low-frequency instrument (LFI) has high electron mobility transistors (HEMTs) as detectors, allowing measurement of amplitudes and phases of incoming waves, and thus of their intensity and polarisation. It has three channels at 30, 44, and 70 GHz. The high-frequency instrument (HFI) uses bolometers instead. Four of its channels at 100, 143, 217, and 353 GHz are polarisation-sensitive, while the highest-frequency ones at 545 and 857 GHz are not. Passive cooling is insufficient for these instruments. LFI will operate at 20 K, while HFI needs to be cooled down to 100 mK. This will be achieved by a four-stage cooling system. Residual thermal coupling between the cooling system, in particular the sorption cooler, and the detector electronics, the instruments, and the telescope gives rise to a noise component that varies with the cyclic operation of the cooler.
The beam width decreases from about 30 arcmin in the 30 GHz channel to about 5 arcmin in the 857 GHz channel. While the cores of the beams have approximately Gaussian profiles with slightly elliptical cross sections, their wings are extended and can receive weak signals from regions of the sky that are quite far away from the direction of the beam core. These so-called far-side beam lobes thus add a weak noise contribution. The feed horns of the individual detectors are slightly inclined from their positions in the focal plane towards the optical axis of the telescope, which implies that at any given time the detectors observe different points on the sky.
The channel width is approximately 20% of the central frequency for LFI and 30% for HFI. During scans, the signal received by the detectors will be integrated over certain time intervals and read out at frequencies varying between 30 and 200 Hz. The sampling frequency needs to be high enough that the beam advances by substantially less than its own width along the scanning path during one sampling period.
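As a rough illustration (the numbers below are our own estimate for a 5 arcmin beam at the nominal spin rate, not values taken from mission documentation): at 1 r.p.m. the beam sweeps across the sky at

$\omega \simeq 360^{\circ}/60\,\mathrm{s} = 360'\,\mathrm{s}^{-1},$

so sampling at 200 Hz lets the beam advance by only

$\Delta\theta = \omega / f_{\rm sample} \simeq 360'\,\mathrm{s}^{-1} / 200\,\mathrm{Hz} = 1.8' \approx 0.36\;\mathrm{FWHM}$

per sample, i.e. the 5 arcmin beams of the highest-frequency channels are crossed by roughly three samples.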
Slow gain drifts in the detector sensitivity cause Planck's sensitivity to vary slowly along the circular scan paths. The frequency power spectrum of these gain drifts falls approximately as a power law in frequency and is thus called 1/f noise. This noise component gives rise to stripe patterns in the measurements that follow the scan circles on the sky.
Figure 1 illustrates a typical pipeline setup used for simulating detector timelines and pointings starting from basic inputs like cosmological parameters, microwave emission models for galactic and non-galactic components, and a scanning strategy. A short description of the individual steps is given below:
Figure 1: Schematic data flow in a typical Planck simulation pipeline. Rectangular components denote parameters or data products (see Sect. 4), whereas elliptic shapes represent modules (see Sect. 6).
Mainly for diagnostic purposes, multimod can also generate co-added sky maps (produced by adding every simulated sample to the pixel containing the corresponding pointing and subsequently averaging) and coverage maps in
HEALPix format, which can in turn
be converted to Targa
image files by the map2tga programme.
Conceptually, the simulation process can be subdivided into two parts. All modules up to and including the totalconvolver and beam-generation modules produce whole-sky data without time dependence; the subsequent modules generate time-dependent, localised output.
It is important to note that the layout described here is by no means fixed, but can be adjusted to whatever is needed to perform a particular simulation task. This can be achieved by using the pipeline editor code, which is part of the Planck process coordinator package.
The following sections give a more in-depth description of the data types and codes constituting the simulation pipeline. While this documentation should suffice to decide in which situations a given module or data type can be used, it does not give detailed instructions, such as lists of mandatory parameters. This topic is covered by the document "Compilation and Usage of the Planck Simulation Modules", available in Planck's LiveLink repository or from the authors, which serves as a detailed manual for practical use of the simulation package.
This section documents the various types of data sets produced by the Planck simulation package, which are intended for further use by the Planck Data Processing Centres.
Unless otherwise stated, all data products of the Planck simulation package containing intensities are stored as antenna temperatures measured in Kelvin. Angles are generally stored as radians. The simulation pipeline produces maps and pointing information in ecliptic coordinates.
A sky map $T(\vartheta,\varphi)$ and its set of spherical-harmonic coefficients $a_{\ell m}$ are related by

$T(\vartheta,\varphi) \;=\; \sum_{\ell=0}^{\ell_{\rm max}} \sum_{m=-\ell}^{\ell} a_{\ell m}\, Y_{\ell m}(\vartheta,\varphi). \qquad (1)$
For polarisation information, which can also be stored in the form of spherical-harmonic coefficients, the relations between real-space and spherical-harmonic space are more involved because the Q and U Stokes parameters are quantities of spin $\pm 2$, and must therefore be expanded in spin-weighted spherical harmonics ${}_{\pm 2}Y_{\ell m}$.
Their exact definition can be found in
Zaldarriaga & Seljak (1997).
$(Q + {\rm i}U)(\vartheta,\varphi) \;=\; \sum_{\ell m} {}_{2}a_{\ell m}\; {}_{2}Y_{\ell m}(\vartheta,\varphi) \qquad (2)$

$(Q - {\rm i}U)(\vartheta,\varphi) \;=\; \sum_{\ell m} {}_{-2}a_{\ell m}\; {}_{-2}Y_{\ell m}(\vartheta,\varphi) \qquad (3)$

$a_{E,\ell m} = -\tfrac{1}{2}\left({}_{2}a_{\ell m} + {}_{-2}a_{\ell m}\right), \qquad a_{B,\ell m} = \tfrac{\rm i}{2}\left({}_{2}a_{\ell m} - {}_{-2}a_{\ell m}\right) \qquad (4)$
For real-valued skies and beams, the condition $a_{\ell,-m} = (-1)^{m}\, a_{\ell m}^{*}$ is fulfilled, so the coefficients with negative $m$ and the imaginary part of the $a_{\ell 0}$ need not be stored.
Since all modules presently communicate via FITS files (NOST 1999), the cfitsio library (Pence 1999) is required for the pipeline to operate. Currently, version 2.510 of the library is used. In the future, it is planned to store the data in a database system.
Several modules need to perform fast Fourier transformations, which can be divided into two different usage scenarios:
Apart from using externally developed general-purpose software such as libcfitsio, the codes have many additional commonalities: they perform file input and output in a very similar way (using only a small subset of the cfitsio functions), they deal with a common set of data types (see Sect. 4), and they need to operate on them in similar ways. Keeping a separate implementation of these facilities for each module would cause unnecessary code duplication, leading to software that is both error-prone and hard to maintain. For this reason, common functions are implemented exactly once and linked to all programmes that need them. However, since some modules in the Planck simulation package are written in Fortran 90 and others in C++, a few common components (most notably the support functions concerned with data I/O) had to be implemented in both languages to allow convenient usage.
Several modules, most notably the 1/f-noise generator and the syn_alm_cxx module, which creates sets of spherical-harmonic coefficients from a power spectrum, require streams of random numbers. Using the default random-number generators supplied with the programming language is not appropriate in our situation, since the period length and overall quality of these random numbers depend on the operating system and the compiler, whereas the data products created with the Planck simulation package should be identical on all supported platforms (for identical random seeds).
Currently, the Planck simulation modules use a generator of the xorshift type (Marsaglia 2003) with a period of $2^{128}-1$, which is more than sufficient for the necessary calculations. In addition to its excellent randomness properties, the algorithm is also very efficient. Since most codes require random numbers with a Gaussian distribution instead of uniform ones, the simulation package also contains an implementation of the Box-Muller method (Box & Muller 1958) for generating the former from the latter.
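For illustration, a minimal sketch of this combination (a Marsaglia xorshift generator with period $2^{128}-1$ feeding a Box-Muller transform); this is not the actual Planck implementation, and the seed values are arbitrary:

#include <cmath>
#include <cstdint>
#include <cstdio>

// Sketch only -- not the Planck code. Marsaglia (2003) xorshift generator
// with period 2^128-1; the state must never be all zero.
struct Xorshift128 {
  std::uint32_t x, y, z, w;
  Xorshift128() : x(123456789u), y(362436069u), z(521288629u), w(88675123u) {}
  std::uint32_t next() {
    std::uint32_t t = x ^ (x << 11);
    x = y; y = z; z = w;
    return w = w ^ (w >> 19) ^ (t ^ (t >> 8));
  }
  // Uniform deviate in (0,1]; never exactly 0, so log() below is safe.
  double uniform() { return (next() + 1.0) / 4294967296.0; }
};

// Box & Muller (1958): two uniform deviates -> one Gaussian deviate.
double gaussianDeviate(Xorshift128 &rng) {
  const double twopi = 6.283185307179586;
  double u1 = rng.uniform(), u2 = rng.uniform();
  return std::sqrt(-2.0 * std::log(u1)) * std::cos(twopi * u2);
}

int main() {
  Xorshift128 rng;
  for (int i = 0; i < 5; ++i)
    std::printf("%f\n", gaussianDeviate(rng));
  return 0;
}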
All Planck simulation modules follow a single convention for
obtaining simulation parameters (like cosmological parameters, map resolution,
detector names, names of input and output files, etc.) from the environment.
This is done via plain ASCII files obeying a simple format. Each parameter
is defined in a separate line using the form
[name] = [value]
Empty lines are skipped. Lines starting with a hash (#) sign are
ignored and can thus be used as comment lines.
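A hypothetical parameter file in this format might look as follows (all names and values are invented for illustration and do not correspond to actual module parameters):

# example parameter file (entries are purely illustrative)
nside          = 1024
lmax           = 2048
detector_id    = LFI-28a
focalplane_db  = /data/planck/fpdb.fits
output_map     = cmb_plus_foregrounds.fits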
Data exchange between the various pipeline modules is performed exclusively via FITS files. Although the cfitsio library provides a comprehensive interface for creating and reading these files, calling it directly from the modules is in most cases more complicated and error-prone than necessary for the limited set of data types used within the Planck simulation package. To simplify data input and output for the module author, a powerful but fairly easy-to-use set of "wrapper" functions around cfitsio was written in both Fortran 90 and C++, which supports all input and output operations performed by the Planck simulation modules.
This abstraction from the interface of the underlying library has the additional advantage that the external data format can be changed with relatively little impact on the module code, simply by adjusting the implementation (without modifying the user-visible interface) of the common input-output functions mentioned above to the new format. The anticipated switch from FITS files to the Planck Data Management Component as a means for transporting data between modules will greatly benefit from this design.
On an even higher level, functions exist which read and write entire maps, spherical-harmonic coefficient sets and power spectrum objects; this code is also used by many modules.
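A minimal sketch of the underlying idea (class and function names invented; the actual Planck wrapper interface differs): modules are written against an abstract writer, so the storage backend can be exchanged without touching module code.

#include <map>
#include <string>
#include <vector>

// Sketch only -- not the actual Planck I/O interface.
// Modules call this abstract interface ...
class TableWriter {
public:
  virtual ~TableWriter() {}
  virtual void writeColumn(const std::string &name,
                           const std::vector<double> &data) = 0;
};

// ... so a FITS-backed, database-backed or in-memory implementation
// can be substituted without changing any module code.
class MemoryWriter : public TableWriter {
  std::map<std::string, std::vector<double> > columns_;
public:
  void writeColumn(const std::string &name,
                   const std::vector<double> &data) { columns_[name] = data; }
};

// A module stores a timeline without knowing about the backend.
void storeTimeline(TableWriter &out, const std::vector<double> &signal) {
  out.writeColumn("signal", signal);
}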
In the following subsections, we describe the function of all
modules in the Planck simulation package. Where possible, we
only give a short description of the underlying algorithms and cite
the original works introducing the numerical methods; if no such
publication exists, we provide a more detailed description. For the
most resource-consuming applications, we also specify their CPU, memory,
and disk-space requirements, as well as their scaling behaviour with
regard to relevant input parameters (such as the map resolution or the
maximum multipole order $\ell_{\rm max}$). Parallelisation of a code
is also mentioned where applicable.
For generating CMB power spectra, the November 2004 version of the
CAMB code
(Lewis et al. 2000) is used. This code is based on CMBfast
(Seljak & Zaldarriaga 1996), but is more portable and can be integrated
more easily into the Planck simulation package, mainly
because it does not rely on unformatted files for storage of
intermediate data products.
Memory and CPU requirements of the CAMB code are rather modest for the parameter sets typically used for Planck simulations (memory consumption <100 MB, runtime <1 min), hence it can be run without any problems on desktop machines.
The Planck simulation pipeline makes extensive use of the HEALPix pixelisation of the sphere (Górski et al. 2002,2005). It is implemented in a software package which contains many associated algorithms, among them some exploiting the efficiency of the HEALPix scheme for spherical-harmonic transforms. The main usage areas of the package in the simulation pipeline are:
The ported code (especially the conversions between maps and alm coefficients) was optimised with regard to memory and CPU time consumption. On shared-memory computers, OpenMP parallelisation is used.
The alm2grid programme is an addition to the existing HEALPix
facilities. This code converts alm coefficients to values on an
equidistant grid in the polar angles ($\vartheta$, $\varphi$), whose extent in $\vartheta$ can be chosen freely. Using this format, it is
easy to calculate a high-resolution real-space representation of a
beam pattern, which in turn is a necessary input for the point-source
convolver.
The C++ package also contains a testsuite to validate the correctness and
accuracy of the ported code.
Starting with version 2.0, the official HEALPix package contains the
described C++ modules, and is distributed under the terms of the GNU General
Public License.
The skymixer component serves two purposes:
The microwave sources currently supported by skymixer are:
A spectral-index map derived from WMAP data is also available, but has not yet been included in the extrapolation algorithm.
The employed template map was published by Schlegel et al. (1998).
Both codes allow the suppression of the CMB monopole in the output, which is usually not needed by any subsequent analysis and which would decrease the numerical accuracy of the output.
The runtime of both programmes is rather short and dominated by the input and output operations. The memory consumption of the skymixer is low (below 10 MBytes) and independent of the map size, since it processes the maps chunk by chunk. In contrast, the almmixer module holds two alm sets in memory, so its memory requirement grows quadratically with the maximum multipole order.
Several Planck simulation modules require information about the beam shapes of Planck's detectors, typically in the form of a set of spherical-harmonic coefficients.
One way to obtain these is the module gaussbeampol, which takes a detector name as input, looks up the detector's beam characteristics in the focal-plane database, and produces the corresponding alm coefficients. The gaussbeampol module can generate axisymmetric or elliptical Gaussian beams with and without polarisation, and allows arbitrary orientations for polarisation and ellipticity.
Realistic beam patterns are often delivered in the GRASP8 format,
which in turn consists of the GRD and CUT sub-formats. The beam2alm code can convert beam files of both sub-formats to alm coefficients, and also allows combining two GRASP8 files (containing the full and main beam, respectively) into a single alm set.
The point-source convolver requires a real-space representation of the beam, which can be obtained from the alm coefficients using the alm2grid or alm2map_cxx modules.
This module takes as input the spherical-harmonic coefficients of a
sky map and a beam, and computes the convolution of sky and beam for
all possible directions ($\vartheta$, $\varphi$) and orientations ($\psi$) of the beam relative to the sky. Both unpolarised and
polarised convolutions are supported. In the polarised case, three
sets of spherical harmonics are required for sky and beam,
respectively, for their Stokes-I, Q, and U parameters. The output then
consists of the sum of the three separate convolutions.
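For orientation, the tabulated quantity can be written schematically (up to convention-dependent factors such as complex conjugations and the Euler-angle ordering, in the spirit of Wandelt & Górski 2001) as

$\tilde{T}(\vartheta,\varphi,\psi) \;=\; \sum_{\ell}\sum_{m=-\ell}^{\ell}\sum_{m'=-\ell}^{\ell} s_{\ell m}\, b^{*}_{\ell m'}\, D^{\ell}_{m m'}(\varphi,\vartheta,\psi),$

where $s_{\ell m}$ and $b_{\ell m'}$ are the spherical-harmonic coefficients of sky and beam and $D^{\ell}_{m m'}$ are Wigner rotation matrices; in practice the sum over $m'$ is truncated at the beam's azimuthal band limit $m_{\rm max}$.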
Our implementation is analogous to the algorithm presented by
Wandelt & Górski (2001), including the extensions for polarisation
given by Challinor et al. (2000). However, the code was modified
to allow complete shared-memory parallelisation of all CPU-intensive
tasks (starting from the partial parallelisation contributed by
Stephane Colombi), and the symmetries inherent in the matrix $d^{\ell}_{m m'}$ (introduced in Eq. (6) of Wandelt & Górski 2001) were used for further CPU and memory savings. With the new code, convolutions up to the $\ell_{\rm max}$ and $m_{\rm max}$ required for realistic Planck simulations can be done in less than five minutes on a modern desktop computer.
The resource consumption of the totalconvolve_cxx module grows steeply with $\ell_{\rm max}$ and with the beam's $m_{\rm max}$, both for memory and for CPU time. Although the code is
fully parallelised, it will most likely not scale very well, since the
cache re-use of the algorithm is poor and a saturation of the memory
bandwidth will limit overall performance.
The output consists of a three-dimensional complex-valued array which contains $2\ell_{\rm max}+1+x$ points in the $\vartheta$-direction (x is a small number of additional points needed for interpolation purposes), $2\ell_{\rm max}+1$ points in the $\varphi$-direction, and $2m_{\rm max}+1$ values in the $m$-direction. In contrast to the original implementation by B. Wandelt, the final FFT is only carried out over the $\vartheta$- and $\varphi$-directions, because interpolation is easier in this case; see also Sect. 6.7.2.
To increase the spatial resolution of the output ring set, the working array can be zero-padded to any desired size before the FFT is performed.
The Planck simulation package makes use of the simmission code developed by Floor van Leeuwen and Daniel Mortlock (Challinor et al. 2002; van Leeuwen et al. 2002), which takes a wide variety of mission-related parameters as input, such as the exact orbit around the outer Lagrangian point L2 of the Sun-Earth/Moon system, the scanning strategy, and the satellite's inertial tensor.
The most important output of the simmission module is a table containing the position and orientation of the satellite at fixed time intervals during the entire mission. In addition, it optionally calculates the positions of the Sun and planets relative to the Planck spacecraft, which is important for the point-source convolver.
The multimod code comprises most of the functions offered by the Planck simulation package for creating and manipulating time-ordered information. This includes creation of detector pointings and signal time streams from various sources like the CMB dipole, point sources, and detector noise.
At first glance, this approach appears to contradict the goal of modularity stated at the beginning of this paper, which would suggest writing many small pieces of code implementing a single task each. This strategy, however, would have a negative impact on performance, since in most of the simulation codes working on time-ordered data, the disk input or output times are at least comparable to the computations themselves. Combining all the effects in one programme allows us to read and write the data only once and perform all manipulations without storing intermediate data products on disk.
Even though all the functions are linked together into one executable, the various components are still implemented in a very modular fashion, so that they can be easily used to build smaller modules with reduced generality. For example, the Planck simulation package contains a stand-alone point-source convolution code, which includes exactly the same C++ classes also used by the multimod code.
The multimod programme processes data in chunks of one pointing period at a time, which corresponds to an hour of measurement. This keeps the amount of main memory occupied by time-ordered data rather low. On the other hand, a pointing period still contains enough individual samples to allow efficient OpenMP-parallelisation of the code.
Schematically, multimod performs the following steps for each pointing period:
The parameter ringres can be chosen freely by the user; it determines the sampling rate used for producing the "ideal" detector signal (i.e. the signal before the modification by the detector electronics and the sampling process), with ringres samples taken during one 60 s rotation. In theory this sampling rate should be infinitely high; for practical purposes a value that comfortably oversamples the beam is sufficient for smooth signals (like the CMB and galactic foregrounds), while a considerably higher value should be chosen for the SZ and point-source signals.
Using the time stream $M_n$ of satellite orientations generated by the simmission module, the detector-pointing component produces time-ordered detector pointings at arbitrary sampling rates. For this purpose, it is necessary to determine the satellite orientation at any given point in time. This is achieved by extracting the rotation matrix $R_n$ between every $M_n$ and $M_{n+1}$. From this matrix, an
axis $\vec{a}_n$ and angle $\alpha_n$ of rotation can be computed. The satellite orientation at a time $t$ with $t_n \le t < t_{n+1}$ can then be approximated by

$M(t) \;=\; R\!\left(\vec{a}_n,\ \alpha_n\,\frac{t - t_n}{t_{n+1} - t_n}\right) M_n, \qquad (5)$

where $R(\vec{a},\alpha)$ denotes the rotation by the angle $\alpha$ about the axis $\vec{a}$.
After this rotation interpolation, another rotation matrix must be
applied, which describes the detector position and orientation relative to
the satellite coordinate frame.
This matrix depends on the angle between the optical axis and the nominal
spin axis, as well as on the detector-specific angles given in the focal-plane database, which specify the position and orientation of all detector horns relative to the coordinate frame of the focal plane.
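A minimal sketch of the axis-angle interpolation of Eq. (5) (our own illustration with simple 3x3-matrix helpers, not the actual pointing code; the degenerate case of a vanishing rotation angle is ignored):

#include <cmath>

// Sketch only -- not the Planck pointing code.
struct Mat3 { double m[3][3]; };

Mat3 multiply(const Mat3 &a, const Mat3 &b) {
  Mat3 c;
  for (int i = 0; i < 3; ++i)
    for (int j = 0; j < 3; ++j) {
      c.m[i][j] = 0.0;
      for (int k = 0; k < 3; ++k) c.m[i][j] += a.m[i][k] * b.m[k][j];
    }
  return c;
}

Mat3 transpose(const Mat3 &a) {
  Mat3 t;
  for (int i = 0; i < 3; ++i)
    for (int j = 0; j < 3; ++j) t.m[i][j] = a.m[j][i];
  return t;
}

// Rodrigues formula: rotation by 'angle' about the unit vector 'axis'.
Mat3 axisAngle(const double axis[3], double angle) {
  double c = std::cos(angle), s = std::sin(angle), C = 1.0 - c;
  double x = axis[0], y = axis[1], z = axis[2];
  Mat3 r = {{{c + x*x*C,   x*y*C - z*s, x*z*C + y*s},
             {y*x*C + z*s, c + y*y*C,   y*z*C - x*s},
             {z*x*C - y*s, z*y*C + x*s, c + z*z*C  }}};
  return r;
}

// Satellite orientation at fraction f in [0,1] between orientations Mn and Mn1.
Mat3 interpolateOrientation(const Mat3 &Mn, const Mat3 &Mn1, double f) {
  Mat3 Rn = multiply(Mn1, transpose(Mn));   // relative rotation R_n
  double angle = std::acos(0.5 * (Rn.m[0][0] + Rn.m[1][1] + Rn.m[2][2] - 1.0));
  double axis[3] = { Rn.m[2][1] - Rn.m[1][2],
                     Rn.m[0][2] - Rn.m[2][0],
                     Rn.m[1][0] - Rn.m[0][1] };
  double norm = 2.0 * std::sin(angle);      // assumes angle != 0
  for (int i = 0; i < 3; ++i) axis[i] /= norm;
  return multiply(axisAngle(axis, f * angle), Mn);   // Eq. (5)
}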
The interpolator module's task is to determine the radiation
intensity falling onto a detector for a given co-latitude $\vartheta$, longitude $\varphi$, and beam orientation $\psi$. To accomplish this, the output of the totalconvolve_cxx module for the desired sky signal and the detector's beam is used. As Wandelt & Górski (2001) point out, this dataset is sufficient (for a given $\ell_{\rm max}$ and $m_{\rm max}$ of the beam) to reconstruct the exact signal for any ($\vartheta$, $\varphi$, $\psi$)-triple. However, exact evaluation is prohibitively expensive in practice, since it would involve a sum over all ($\ell$, $m$, $m'$) combinations for every single data point.
The approach used in the interpolator is based on the assumption
that $\ell_{\rm max}$ will be rather large for realistic simulations, while the beam's $m_{\rm max}$ is usually not larger than 20; i.e. the beam pattern has no high-frequency azimuthal asymmetries. The high $\ell_{\rm max}$ implies a fairly high resolution in $\vartheta$ and $\varphi$ for the totalconvolver's output, so that polynomial interpolation, whose order can be chosen via a parameter, can be used for those two directions. In the $\psi$-direction, this approach is not accurate enough because of the sparse sampling, so that the Fourier sum of the $m$-components must be calculated:
$S(\vartheta,\varphi,\psi) \;=\; \sum_{m=-m_{\rm max}}^{m_{\rm max}} T_m(\vartheta,\varphi)\, {\rm e}^{{\rm i} m \psi}, \qquad (6)$

where $T_m(\vartheta,\varphi)$ denotes the interpolated value of the $m$-th component of the ring set.
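A compact sketch of this reconstruction step (our own illustration, not the Planck interpolator: bilinear rather than higher-order polynomial interpolation, and a real-valued sky and beam are assumed so that the $m<0$ components follow from $T_{-m} = T_m^{*}$):

#include <complex>
#include <vector>

typedef std::complex<double> cplx;

// Sketch only -- not the Planck interpolator.  ringset[m] holds an
// (ntheta x nphi) grid of the m-th Fourier component in psi, covering
// theta in [0,pi] and phi in [0,2pi).
double interpolateSignal(const std::vector<std::vector<cplx> > &ringset,
                         int ntheta, int nphi, int mmax,
                         double theta, double phi, double psi) {
  const double PI = 3.141592653589793;
  double ft = theta / PI * (ntheta - 1);          // fractional grid coordinates
  double fp = phi / (2.0 * PI) * nphi;
  int it = (int)ft, ip = ((int)fp) % nphi;
  double wt = ft - it, wp = fp - (int)fp;
  int it1 = (it + 1 < ntheta) ? it + 1 : it;
  int ip1 = (ip + 1) % nphi;                      // phi wraps around

  double signal = 0.0;
  for (int m = 0; m <= mmax; ++m) {
    const std::vector<cplx> &g = ringset[m];
    // bilinear interpolation in (theta, phi)
    cplx v = (1.0 - wt) * ((1.0 - wp) * g[it * nphi + ip] + wp * g[it * nphi + ip1])
           +        wt  * ((1.0 - wp) * g[it1 * nphi + ip] + wp * g[it1 * nphi + ip1]);
    // Fourier sum over the beam's azimuthal modes (Eq. (6));
    // m and -m are combined using T_{-m} = conj(T_m).
    cplx phase = std::polar(1.0, m * psi);
    signal += (m == 0) ? v.real() : 2.0 * (v * phase).real();
  }
  return signal;
}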
The memory consumption of the module is dominated by the ring set produced by the totalconvolver, and its runtime scales linearly with the number of samples to be interpolated. Since interpolations for different pointings are independent, parallelisation was trivial to implement.
By default, the interpolator module calculates the total intensity observed by the detector. Optionally, any of the individual Stokes parameters I, Q, and U, can also be extracted, but this calculation is only exact for axisymmetric polarised beams.
When using linear interpolation, it has been observed that the power
spectrum of the resulting co-added maps was suppressed in comparison to
the input maps at high $\ell$. This is a consequence of the finite resolution of the ring set, combined with the imperfect interpolation in $\vartheta$ and $\varphi$. Although this effect cannot be eliminated entirely, increasing the interpolation order and/or the resolution of the ring set produced by the totalconvolver reduces the problem dramatically.
This component calculates the time-dependent dipole signal observed by a given detector. The user can choose whether the kinematic dipole component (caused by the motion of the satellite in the Solar System, of the Solar System within the Galaxy, of the Galaxy within the Local Group, and so forth) should be included, whether relativistic effects should be calculated, whether only higher-order corrections should be returned, etc. The default output quantity is the dipole signal in antenna temperature, but thermodynamic temperature can also be calculated.
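For reference (a standard relation, quoted here for convenience rather than taken from the module documentation), a small thermodynamic temperature fluctuation $\Delta T_{\rm therm}$ corresponds at frequency $\nu$ to the antenna temperature

$\Delta T_{\rm ant} \;=\; \frac{x^{2}\,{\rm e}^{x}}{({\rm e}^{x}-1)^{2}}\;\Delta T_{\rm therm}, \qquad x = \frac{h\nu}{k_{\rm B} T_{\rm CMB}}.$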
The task of the point source convolver is to simulate time streams of the signal created by fixed and moving point sources. For fixed sources, this could also be achieved by adding another foreground to skymixer or almmixer, but this is not very practical, because the maximum multipole moment required in the totalconvolver to treat such strongly localised sources accurately would be very high. For moving sources, this solution is not viable at all, since all signal contributions entering the mixing modules must be constant over the mission time scale.
For these reasons, the point source convolver operates in real space, taking the detector pointings as input and convolving the sources close to a given pointing with the properly oriented beam pattern. Obviously, an efficient retrieval of sources in the vicinity of the beam is crucial for the performance of the module. This was achieved by pre-sorting the point source catalog into a HEALPix map with low resolution and by identifying, for all detector pointings, the nearby point sources using the query_disc() function of HEALPix with a radius of several times the beam FWHM.
While fixed point sources stay in this HEALPix lookup table during a whole simulation run, the positions of the moving point sources are calculated once per pointing period. These positions are then sorted into the lookup table accordingly and taken out again after the signal for the pointing period has been calculated.
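The retrieval idea can be sketched as follows (our own simplified illustration: a plain latitude/longitude grid stands in for the low-resolution HEALPix map and query_disc(), and a final exact per-source distance cut is omitted):

#include <algorithm>
#include <cmath>
#include <vector>

// Sketch only -- not the Planck point-source convolver.
struct Source { double theta, phi, flux; };  // colatitude, longitude in [0,2pi), flux

class SourceGrid {
  int nTheta_, nPhi_;
  std::vector<std::vector<Source> > cells_;
  int cellIndex(double theta, double phi) const {
    const double PI = 3.141592653589793;
    int it = std::min(nTheta_ - 1, (int)(theta / PI * nTheta_));
    int ip = ((int)(phi / (2.0 * PI) * nPhi_)) % nPhi_;
    return it * nPhi_ + ip;
  }
public:
  SourceGrid(int nTheta, int nPhi)
    : nTheta_(nTheta), nPhi_(nPhi), cells_(nTheta * nPhi) {}

  void insert(const Source &s) { cells_[cellIndex(s.theta, s.phi)].push_back(s); }

  // Collect sources from all cells overlapping a cap of angular radius
  // 'radius' around (theta, phi); only nearby cells are visited.
  std::vector<Source> query(double theta, double phi, double radius) const {
    const double PI = 3.141592653589793;
    std::vector<Source> result;
    int it0 = std::max(0, (int)((theta - radius) / PI * nTheta_));
    int it1 = std::min(nTheta_ - 1, (int)((theta + radius) / PI * nTheta_));
    double sinT = std::sin(theta);
    int dp = (sinT > 0.1) ? (int)(radius / (2.0 * PI * sinT) * nPhi_) + 1 : nPhi_;
    int span = std::min(2 * dp + 1, nPhi_);       // number of phi cells to visit
    int ip0 = ((int)(phi / (2.0 * PI) * nPhi_)) % nPhi_;
    for (int it = it0; it <= it1; ++it)
      for (int k = 0; k < span; ++k) {
        int ip = ((ip0 - dp + k) % nPhi_ + nPhi_) % nPhi_;
        const std::vector<Source> &cell = cells_[it * nPhi_ + ip];
        result.insert(result.end(), cell.begin(), cell.end());
      }
    return result;
  }
};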
As mentioned above, it is recommended to choose a high value for the ringres parameter when calculating the point source signal, because of their very localised nature.
For the fixed point sources, the radiation flux in each Planck frequency
band is taken from the point source catalog. For moving point sources
(i.e. planets and asteroids), the corresponding positions in time are
calculated using the JPL HORIZONS system, whereas the radiation intensity of
the moving point sources is calculated using an extended Wright & Odenwald
thermal emission model (Wright 1976; Neugebauer et al. 1971),
which has been adapted separately for rocky planets,
gaseous planets, and asteroids and which
accounts for effects like distance to the satellite, illumination
geometry, beaming, surface roughness, heating and cooling of the surface, etc.
This will be described in more detail in a separate publication
(R. Hell, in preparation; see also Schäfer et al. 2004b).
Temperature fluctuations in the detectors caused by the operation of Planck's sorption cooler constitute an important systematic effect and have to be included in realistic simulations of time-ordered data. To this end, the Planck simulation package contains the glissando code contributed by LFI, which models this effect and will be fully integrated with multimod in the future. Since the LFI and HFI detectors may react quite differently to temperature fluctuations, it is likely that a completely different code will ultimately be required for the HFI channels.
The sampler component takes the idealised signal impinging on the detector as input, which is produced by the components described above, and applies to it a model of the detector's measurement process. Two effects play an important role:
First, the incoming signal is integrated over the sampling interval $\Delta t$ before being read out at time $t_n$:

$s_n \;=\; \frac{1}{\Delta t} \int_{t_n - \Delta t}^{t_n} s(t)\,{\rm d}t. \qquad (7)$

Second, for the HFI bolometers the finite time response of the detector, characterised by a time constant $\tau$, acts as an additional low-pass filter:

$s'(t) \;=\; \frac{1}{\tau} \int_{-\infty}^{t} s(t')\, {\rm e}^{-(t-t')/\tau}\,{\rm d}t'. \qquad (8)$
It should be mentioned that both effects introduce a time delay into
the sampled time-ordered data, i.e. that the signal written with the time stamp $t_n$ was produced entirely from measurements at times $t \le t_n$,
which in the case of HFI reflect a signal that was seen even earlier,
because of the bolometer's time response. This must be taken into
account when time-ordered signal data and detector pointings are used
to produce sky maps.
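A toy version of this measurement model (our own illustration, not the actual sampler module: a boxcar average over the sampling interval followed by a simple first-order exponential response; all parameters are placeholders):

#include <cmath>
#include <vector>

// Sketch only -- not the Planck sampler module.
// 'ideal' is the high-rate signal from the preceding components;
// 'oversampling' is the number of input samples per output sample;
// 'tauInSamples' is the detector time constant in units of the output
// sampling interval (0 disables the time response).
std::vector<double> sampleSignal(const std::vector<double> &ideal,
                                 int oversampling, double tauInSamples) {
  std::vector<double> out;
  double filtered = 0.0;
  double a = (tauInSamples > 0.0) ? std::exp(-1.0 / tauInSamples) : 0.0;
  for (std::size_t n = 0; n + oversampling <= ideal.size(); n += oversampling) {
    double integrated = 0.0;                       // average over the interval
    for (int k = 0; k < oversampling; ++k) integrated += ideal[n + k];
    integrated /= oversampling;
    filtered = a * filtered + (1.0 - a) * integrated;  // exponential response
    out.push_back(filtered);
  }
  return out;
}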
The sampler code can be trivially parallelised and has perfect cache behaviour, so that it should scale very well on multiprocessor machines. As expected, the sampler module produces data at a rate equal to the detector's sampling frequency.
The electronics of both LFI and HFI detectors produces noise that has
a power-law spectrum with spectral index $\alpha$ between a lower frequency $f_{\rm min}$ and a higher (or knee) frequency $f_{\rm knee}$. Below and above those frequencies, the spectrum is white. The targeted spectral power is thus

$P(f) \;\propto\; \begin{cases} (f_{\rm knee}/f_{\rm min})^{\alpha} & f \le f_{\rm min} \\ (f_{\rm knee}/f)^{\alpha} & f_{\rm min} < f < f_{\rm knee} \\ 1 & f \ge f_{\rm knee}. \end{cases} \qquad (9)$
A noise time stream with this spectrum could be obtained by modelling it in Fourier space with random coefficients and then performing an FFT. However, considering that the coherence length for an HFI detector would be approximately $10^{8}$ samples, this method is rather expensive in terms of main memory consumption.
The noise is more efficiently generated by sampling and adding
numerical solutions of stochastic differential equations (SDEs) of the
form

$\dot{n}_i(t) \;=\; -2\pi f_i\, n_i(t) + \xi_i(t), \qquad (10)$

where $\xi_i(t)$ is a Gaussian white-noise process and $f_i$ is the relaxation frequency of the $i$-th process. In order to have good logarithmic frequency coverage, a logarithmic grid of relaxation frequencies $f_i$ between $f_{\rm min}$ and $f_{\rm knee}$ is used, together with an uncorrelated (purely white) process for the high-frequency white noise $n_0(t)$.
The resulting SDE spectra are Lorentzians centred on zero frequency with widths set by the relaxation frequencies $f_i$; their appropriately weighted sum approximates the targeted power-law spectrum (9) between $f_{\rm min}$ and $f_{\rm knee}$.
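The technique can be sketched as follows (our own illustration, not the actual noise module: exponentially correlated processes on a logarithmic frequency grid are discretised and summed with equal, ad hoc weights, plus a purely white component; the real code derives the weights from the target spectrum, and nproc >= 2 is assumed):

#include <cmath>
#include <cstdlib>
#include <vector>

// Sketch only -- not the Planck 1/f-noise generator.
double whiteDeviate() {                            // crude Box-Muller deviate
  double u1 = (std::rand() + 1.0) / (RAND_MAX + 2.0);
  double u2 = (std::rand() + 1.0) / (RAND_MAX + 2.0);
  return std::sqrt(-2.0 * std::log(u1)) * std::cos(6.283185307179586 * u2);
}

std::vector<double> oneOverFNoise(int nsamples, double fsample,
                                  double fmin, double fknee, int nproc) {
  const double twopi = 6.283185307179586;
  double dt = 1.0 / fsample;
  std::vector<double> f(nproc), state(nproc, 0.0);
  for (int i = 0; i < nproc; ++i)                  // log-spaced relaxation frequencies
    f[i] = fmin * std::pow(fknee / fmin, i / (nproc - 1.0));

  std::vector<double> out(nsamples);
  for (int n = 0; n < nsamples; ++n) {
    double sum = whiteDeviate();                   // high-frequency white part
    for (int i = 0; i < nproc; ++i) {
      // exact discretisation of the SDE:  x' = a x + sqrt(1 - a^2) xi
      double a = std::exp(-twopi * f[i] * dt);
      state[i] = a * state[i] + std::sqrt(1.0 - a * a) * whiteDeviate();
      sum += state[i];                             // equal weights: illustration only
    }
    out[n] = sum;
  }
  return out;
}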
Since the output of the noise generator is added to the output of the sampler module, both are naturally produced at the same sampling frequency.
Parallelisation of the algorithm is not straightforward, because every
noise sample can only be computed after all of its predecessors have
been calculated. This prohibits the most straightforward strategy of
dividing the requested samples between multiple processors. In our
approach, each SDE process calculates its associated time stream
separately, which can be done in parallel processes, and all of these
time streams are added at the end. While this solution can be
significantly faster than a scalar implementation, its scalability is
limited by the number of SDE processes. Increasing the
number of processors beyond this value achieves no further speedup.
We have described the current state of the Planck simulation pipeline in this paper. This package allows a complete simulation of the time-ordered data streams produced by the satellite's detectors, taking as input a set of cosmological parameters, various foreground emission models, a satellite scanning strategy, and parameters characterising the detectors and on-board electronics. The availability of such data sets allows extensive testing of the mission's data analysis software well before launch, which greatly decreases the time interval between the availability of raw data and the publication of first results. Furthermore, the simulation modules can be conveniently used to study the influence of assumed systematic effects or of changes in the mission parameters on the quality of the final data.
Obviously, many of the package's modules are still under development, and new modules are expected. Integration with the Planck Data Management Component and the mission's Integrated Data and Information System (IDIS) will be another important milestone. Any significant changes and enhancements will be presented in a follow-up paper.
The non-Planck-specific modules will hopefully be of use outside the Planck
collaboration as well. This covers almost all of the presented codes,
with the exception of the modules for the satellite dynamics and
sorption-cooler noise. A future public release of the non-Planck-specific parts
of the package is envisaged by the authors. Currently, a partial simulation
pipeline, which is hosted by the German Astrophysical Virtual Observatory, can be tested
interactively over the Internet.
Acknowledgements
The work presented in this article was done in the course of the Planck satellite mission. The authors are grateful for many productive interactions with the Planck team.
We thank Mark Ashdown for implementation and support of the Beam package, Benjamin Wandelt for providing an initial implementation of the oofnoise and totalconvolver modules, Carlo Burigana, Davide Maino, and Krzysztof Górski for early FFT-based noise generation modules, Daniel Mortlock and Floor van Leeuwen for the simmission module, Ian Grivell and Bob Mann for the original sampler module, Aniello Mennella and Michele Maris for the glissando package, Fabrizio Villa and Maura Sandri for instrumental information on LFI, Bob Mann for compiling the first focal plane database and Mark Ashdown for maintaining it. Our thanks go to Giovanna Giardino, Bjørn Schäfer, Christoph Pfrommer, Nabila Aghanim, and François Bouchet for the contribution of template maps, Carlo Baccigalupi for contributing his dust emission model, and to Antony Lewis for helping with the integration of CAMB.
Some of the codes presented in this paper are based on the HEALPix package (Górski et al. 2005), the use of which is gratefully acknowledged.