Volume 539, March 2012
Article Number A38
Number of pages: 16
Section: Astronomical instrumentation
Published online 23 February 2012

© ESO, 2012

1. Introduction

Map making is a critical step in the processing of astronomical data from various imaging instruments (interferometers, telescopes, spectro-imagers, etc.), and two special issues have recently been published on the subject (Leshem et al. 2008, 2010). Because the observed sky may contain structures of various scales, from extended emission to point sources, the challenge is to design reconstruction methods that deliver maps that are photometrically valid for the broadest range of spatial frequencies.

For long-wavelength instruments, be they ground based (SCUBA/JCMT, LABOCA/APEX, etc.), on-board balloons (Archeops, BLAST, etc.) or space borne (IRAS, ISO, Spitzer, WMAP, Planck, Herschel, etc.), the task is especially challenging for two reasons. First, the physical resolution is poor at these wavelengths. Second, the distance between the detectors of these instruments generally prevents a proper sampling of the focal plane, given the maximum spatial frequency allowed by the optical response. Therefore, specific scanning strategies have to be defined, which depend on the detector positions and need to be closely combined with a well-designed image reconstruction method.

The Herschel Space Observatory (Pilbratt et al. 2010) was launched in May 2009 together with the Planck satellite. It continuously covers the 55–672 μm spectral range with its very high spectral resolution spectrometer HIFI (de Graauw et al. 2010) and its two photometers / medium resolution spectrometers PACS (Poglitsch et al. 2010) and SPIRE (Griffin et al. 2010). With a 3.5 m primary mirror, Herschel is the largest space telescope launched to date. In order to take full advantage of the telescope size, the accurate representation and processing of the highest spatial frequencies presents a particular challenge. To this end, two step-by-step photometer pipelines have been developed by the instrument consortia by Griffin et al. (2008) for SPIRE and by Wieprecht et al. (2009) for PACS: they produce flux density timelines corrected for various effects, calibrated and associated with sky coordinates (level-1 products), then produce maps (level-2 products). An important step is the correction of the 1/f noise components, which can be correlated or uncorrelated between bolometers. For SPIRE, a significant fraction of the correlated component is processed using the signals delivered by blind bolometers. For PACS, it is currently processed using different kinds of filtering. The glitches caused by the deposit of thermal energy by ionizing cosmic radiation are flagged or corrected. Finally, the timeline outputs can be simply coadded on a spatial grid to produce “naive maps”, with a rounded pointing approximation. Maximum likelihood approaches with the same coaddition algorithm, namely MADmap  (Cantalupo et al. 2010) and SANEPIC  (Patanchon et al. 2008) have also been developed to compute maps, using the spatial redundancy to correct for the 1/f noise.
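For concreteness, the rounded-pointing coaddition used to produce such "naive maps" can be sketched as follows (a minimal illustration with invented shapes and names, not the actual pipeline code):

```python
import numpy as np

def naive_map(samples, pointings, grid_shape, pixel_size):
    """Coadd timeline samples onto a spatial grid (a "naive map").

    Each sample is assigned to the nearest pixel (rounded pointing
    approximation) and pixels are averaged over all hits.
    All names and shapes here are illustrative.
    """
    flux = np.zeros(grid_shape)
    hits = np.zeros(grid_shape)
    for s, (a, b) in zip(samples, pointings):
        i = int(round(a / pixel_size))
        j = int(round(b / pixel_size))
        if 0 <= i < grid_shape[0] and 0 <= j < grid_shape[1]:
            flux[i, j] += s
            hits[i, j] += 1
    with np.errstate(invalid="ignore"):
        m = flux / hits          # unobserved pixels end up as NaN
    return m, hits
```

Note that the instrument response plays no role here: the coadded map remains smoothed by the PSF, which is the limitation addressed in the rest of the paper.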

There are several drawbacks to these pipelines. First, because they work on a step-by-step basis, the performance of the whole process is limited by the step with the worst performance. Second, the ultimate performance of one step is out of reach because only a reduced part of the available information is handed over from the previous step. This means that better performance can be achieved by a more global approach. More importantly, the instrument and the telescope properties (mainly the diffraction) are not taken into account, which is why the maps are unavoidably smoothed by the Point Spread Function (PSF), whereas the scanning strategy allows higher spatial frequencies to be indirectly observed.

To overcome these limitations, we resorted to an inverse problem approach (Idier 2008) that is based on an instrument model and an inversion method.

  • It requires an instrument model that faithfully reflects the physical reality to distinguish in the observations between what is caused by the instrument and what is due to the actual sky. To this end, an important contribution of our paper is an analytical instrument model based on a physical description of the phenomena as functions of continuous variables. Moreover, it includes the scanning strategy, mirror, wavelength filter, feedhorns, bolometers and read-out electronics. The point for the resolution is the following. On the one hand, the field of view is covered by hexagonally packed feedhorn-coupled bolometers, and the sampling period is twice the PSF width, which potentially leads to spectral aliasing for wide-band objects. On the other hand, the scanning strategy, with a pointing increment lower than the bolometer spacing, introduces a higher equivalent sampling frequency. Therefore, it is crucial to properly take into account the scanning strategy and the whole instrument, including the irregular sampling, to obtain super-resolution (see also the analysis in Orieux et al. 2009). To the best of our knowledge, a physical model of the instrument this accurate has never been used in a map-making method.

  • The inversion of our instrument model constitutes an ill-posed problem (Idier 2008) because of the deficit of available information induced by convolution with the instrument PSF. Moreover, the ill-posedness becomes all the more marked as the resolution requirement increases. The inversion methods must therefore exploit other information by regularization to compensate for the deficits in the observations. Each reconstruction method is therefore specialized for a certain class of maps (point sources, diffuse emission, superposition of the two, etc.) according to the information that is included. From this standpoint, the present paper is essentially devoted to extended emission. The method is linear w.r.t. the data for the sake of simplicity and computational burden. From the methodological point of view, it is built within the framework of quadratic regularization (Tikhonov & Arsenin 1977; Andrews & Hunt 1977). It relies on a criterion involving a data-fidelity measure (observed data vs. model output) and a spatial smoothness measure. From a numerical standpoint, we resort to a gradient-based optimisation algorithm (Nocedal & Wright 2000) to compute the map. Moreover, inasmuch as it relies on two sources of information, the method is based on a trade-off tuned by means of a hyperparameter. It is empirically set in the present paper, and work in progress, based on Robert & Casella (2000) and Orieux et al. (2010), is devoted to the question of hyperparameter and instrument parameter auto-calibration (myopic and unsupervised inversion).

One of the most striking results of our research is the correct restoration of small-scale (wide-band) structures, which are not detectable on naive maps. This result is achieved thanks to the combination of the developed instrument model and the proposed inversion: together, they enable the method to reduce instrument effects, overcome instrument limitations, and restore high spatial frequencies.

In the image processing community, these capabilities are referred to as super-resolution (Park et al. 2003) and we were partly inspired by recent developments in this field. They are usually based on various (scene or camera) motions or scanning strategies. Some of them account for possible rotation (Elad & Feuer 1999) and/or a magnifying factor (Rochefort et al. 2006). Other approaches introduce an edge-preserving prior (Nguyen et al. 2001; Woods et al. 2006). These works rely on the description of the unknown object as a function of continuous variables that is decomposed on a pixel indicator basis (Hardie et al. 1997; Patti et al. 1997), on a truncated discrete Fourier basis (Vandewalle et al. 2007), on a family of regularly shifted Gaussian functions (Rodet et al. 2008), or on a spline family (Rochefort et al. 2006). Other approaches have been proposed, based on a shift-and-add step (Farsiu et al. 2004) followed by a deconvolution step (Molina & Ripley 1989). Finally, several contributions are devoted to the performance of super-resolution approaches (Champagnat et al. 2009; Orieux et al. 2009).

The paper is organized as follows. The instrument model describing the relationship between the measured data and the unknown sky is presented in Sect. 2. Section 3 details the method that we propose to invert the data and compute high-resolution maps. Finally, Sect. 4 presents experimental results, first on simulated data (Sect. 4.1), then on real data (Sect. 4.2).

2. Instrument model

The prime objective of the instrument model is the reproduction of observed data taking into account the physics of the acquisition. In addition, the reconstruction algorithms use the instrument model many times; it is therefore necessary to adopt hypotheses and approximations to reduce the computational burden. This is one of the differences to a simulator (Sibthorpe et al. 2009; Sibthorpe & Griffin 2006), which is designed to be run once per data set.

2.1. Physical models

2.1.1. Mode of observation

The sky, $\mathcal{X}(\alpha,\beta,\lambda)$, is characterized by two spatial dimensions (α,β) and one spectral dimension λ. To model telescope translations, we used a frame of reference defined by the instrument. The map at the input is time-dependent and can be written

$$\mathcal{X}(\alpha,\beta,\lambda,t) = \mathcal{X}\bigl(\alpha + p_\alpha(t),\ \beta + p_\beta(t),\ \lambda\bigr), \qquad (1)$$

where α and β define the central angular position of the observation and (pα(t), pβ(t)) the translations in the two directions as a function of time t.

Here, we present only the “Large map” protocol. Data were acquired over a complete observation sequence composed of two almost perpendicular directions and several scans back and forth for each of the two directions. The pointing acceleration and deceleration phases were not included in the zone of interest and there was no rotation during the observation sequence. The pointing functions are therefore written

$$p_\alpha(t) = v_\alpha\, t, \qquad p_\beta(t) = v_\beta\, t, \qquad (2)$$

for scanning at a constant velocity (vα, vβ). The pointing accuracy is of the order of a few seconds of arc. This protocol enables us to introduce spatial redundancy, which is an essential element for the reconstruction of a sky at a resolution greater than the detector spatial sampling period (Orieux et al. 2009; Champagnat et al. 2009).
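Under the assumptions above, the constant-velocity pointing model of Eq. (2) and the sub-detector cross-scan increments can be sketched as follows (all names and values are illustrative, not the actual protocol parameters):

```python
def scan_positions(n_samples, Ts, v, leg_offsets):
    """Sample positions along parallel constant-velocity scan legs,
    p(t) = v * t as in Eq. (2). `leg_offsets` are cross-scan pointing
    increments between legs; choosing them smaller than the detector
    spacing creates the spatial redundancy exploited for super-resolution.
    All parameter names are illustrative."""
    return [[(v * k * Ts, off) for k in range(n_samples)]
            for off in leg_offsets]

# Three legs, sampled every Ts seconds at velocity v (arbitrary units).
legs = scan_positions(n_samples=4, Ts=0.1, v=30.0, leg_offsets=[0.0, 20.0, 40.0])
```

Each leg samples finely along the scan direction, while the offsets between legs fill in the cross-scan direction.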

2.1.2. Optics

The Herschel Telescope is a classical Cassegrain instrument with a 3.5 m diameter primary mirror and a 308 mm diameter secondary mirror. The SPIRE photometer has three channels for a single field of view. The light is split by a combination of dichroics and flat-folding mirrors. The spectral channels are defined by a sequence of metal mesh filters and the reflection/transmission edges of the dichroics. They are centred at approximately 250, 350 and 500 μm (noted as PSW, PMW and PLW respectively). We assumed the overall transmission curves of the wavelength filter hk(λ), for k = 1,2,3, as given by the SPIRE Observers’ Manual (no analytical form is available).

The three detector arrays contain 139 (250 μm), 88 (350 μm) and 43 (500 μm) bolometers, each coupled to the telescope beam with hexagonally close-packed circular feedhorns. The beam solid angle is apodized by a bell-shaped weight whose width increases with λ. Efforts have been made to correctly integrate the feedhorns in the instrument model but the detailed coupling of feedhorns on incoming radiation is, to the best of our knowledge (Griffin et al. 2002), not fully understood at present.

Our final choice as an effective PSF for the telescope coupled with feedhorns was a Gaussian shape ho (α,β,λ). This choice has two advantages: (i) it allows a closed equation for the instrument model (see Sect. 2.2), and (ii) it agrees with the response measured from observations of Neptune (Griffin et al. 2010). As a first approach, we assumed isotropic Gaussians with standard deviations σo (λ) proportional to the wavelength, since the width of the beam varies almost linearly with the wavelength. The widths obtained are close to the FWHM measured on the sky, 18.1′′, 25.2′′, and 36.9′′ at 250 μm, 350 μm and 500 μm, respectively (Griffin et al. 2010). The feedhorn diameter is 2Fλ, which introduces a detector spatial sampling period of 2Fλ (50′′ for the 350 μm array, or equivalently a sampling frequency fs ≈ 0.02 arcsec-1).
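The conversion between the quoted beam FWHMs and the Gaussian standard deviations of the model uses the standard relation FWHM = 2√(2 ln 2) σ; a quick check with the numbers above:

```python
import math

def fwhm_to_sigma(fwhm):
    """Gaussian relation FWHM = 2 * sqrt(2 * ln 2) * sigma."""
    return fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))

# Beam FWHMs (arcsec) quoted in the text for 250, 350 and 500 um.
beams = {250: 18.1, 350: 25.2, 500: 36.9}
sigmas = {lam: fwhm_to_sigma(f) for lam, f in beams.items()}
# The beam width grows roughly linearly with wavelength, consistent with
# the assumption that sigma_o(lambda) is proportional to lambda.
```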

The output after each feedhorn is then written as a 2D convolution of the input with the effective PSF ho, combined with the hk wavelength filter (3), where (αlm,βlm) is the direction pointed at by the feedhorn (l,m), for l = 1,...,L and m = 1,...,M. The k subscript can be safely dropped in the following, since each spectral band is processed separately. Finally, the optics was modelled as a linear invariant system w.r.t. continuous variables.

2.1.3. Bolometers

To set up the bolometer model, we took the thermal model of Sudiwala et al. (2002), which was also used in the simulator developed by Sibthorpe et al. (2009). Bolometers absorb the entire received radiation (4), and this power provides the system excitation. The temperature Tlm(t) determines the system output. The link between the input P(t) and the response T(t) is described by the differential equation deduced from a thermal balance, where C is the heat capacity of the bolometer, R(T) is its resistivity, T0 is the temperature of the thermal bath, ν is a physical parameter that depends on the bolometer, G0 is the thermal conductance (at temperature T0) and Vp and Rc are the polarization voltage and charge. No explicit solution of this equation is available in the literature. Sudiwala’s approach (Sudiwala et al. 2002), which we adopted here, is to linearize this equation around an operating point. In the following, we consider only the variable part of the flux and exclude the constant part that defines the operating point. All constants are defined with respect to the operating point.

For SPIRE, most of the observations should be carried out in the linear regime (Griffin 2006, 2007). We therefore considered that a first-order development is sufficient to model the bolometer behaviour correctly. Then, knowing the variations of the resistivity R(T) with temperature, it is possible to determine the voltage at the terminals. This first-order development models the bolometer as a first-order, low-pass filter with an impulse response (5), where the gain S and the time constant τ depend on the physical parameters in the differential equation (Sudiwala et al. 2002). The values of these parameters are defined with respect to the operating point and correspond to the official SPIRE characteristics (Griffin 2006, 2007). The output voltage around the operating point can then be written as a function of the incident flux (6). Finally, downstream, we have the read-out electronics, composed of several stages (modulation, amplification, low-pass filter, demodulation, quantification). However, except for the low-pass filters, they have negligible effects compared with the other elements and are not included in our model. The equations are nevertheless available (Griffin 2007) and it is possible to integrate them into the model.
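A minimal sketch of the first-order low-pass behaviour described by Eq. (5): the step response reaches 1 − 1/e of its final value after one time constant. The values below are representative, not the official SPIRE parameters.

```python
import math

def step_response(t, S=1.0, tau=0.2):
    """Step response of a first-order low-pass filter with gain S and
    time constant tau (impulse response (S / tau) * exp(-t / tau))."""
    return S * (1.0 - math.exp(-t / tau))

tau = 0.2  # seconds, a representative combined value (see below)
level_at_tau = step_response(tau, tau=tau)  # 1 - 1/e of the final value
```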

The low-pass filters introduce a delay on the data with respect to the telescope position along the scan. As a trade-off between model accuracy and computational burden, we have chosen to model the combination of the low-pass filter and the bolometer as a unique first-order filter. The time constant value (0.2 s) is taken to be representative of the combination.

Finally, we accounted for a regular time sampling that takes the values at times t = nTs (with a sampling frequency Fs = 1/Ts ≈ 30 Hz), yielding the data samples for n = 1,...,N. Given the scanning speed of 30″ s-1, this induces a spatial sampling period of 2″ between two successive time samples for one bolometer, while the detector sampling period is 50″ for the 350 μm array.

2.1.4. Complete model equation

Adding these elements yields the equation of the acquisition chain. For a spectral channel k, the time signal at the bolometer (l,m) at time n is given by Eq. (7). This equation introduces four integrals: two from the optics (spatial convolution), one from the spectral integration, and one from the time convolution. This is the fundamental equation of the instrument model since it describes the data bolometer by bolometer at each instant as a function of the sky. It should be noted that this model includes the discretization process (and possible aliasing) in the sense that the data form a discrete set while the sky is a function of continuous variables.

2.1.5. Super-resolution sky model

The model of the sky is an important element for the reconstruction method. As stated in the introduction and presented in Sect. 2.1.1, the sub-pixel scanning strategy should make it possible to undo aliasing and to estimate a super-resolved sky (Orieux et al. 2009). The model of the sky must therefore be suitable for super-resolved reconstruction and, in particular, allow a fine description of the physical reality and integration with the instrument model.

Firstly, unlike conventional models of SPIRE (Sibthorpe et al. 2009; Cantalupo et al. 2010), we considered the sky spectrum within each channel. The emission observed by SPIRE is mainly produced by dust in thermal equilibrium (balance between emission and absorption of incident UV and visible photons), and the intensities can be written as in Eq. (8), where τλ0 is the optical depth at wavelength λ0, β is the spectral index, Bλ is the Planck function, and T the dust temperature. The SPIRE data alone do not allow the proper measurement of the dust temperature (the combination of SPIRE and PACS is mandatory, Abergel et al. 2010); consequently, we decided to exclude the dust temperature from our sky model and work in the Rayleigh-Jeans approximation, so that Bλ(T) ∝ λ-4. Moreover, we assumed β = 2, which is the “standard” value of the diffuse ISM (e.g., Boulanger et al. 1996). Finally, we have Eq. (9), with ϱ = 6. However, as we will see in Sect. 2.2, the wavelength integration of the acquisition model will be performed numerically. In other words, the spectrum profile can be set adequately with the available knowledge of the observed sky.

Secondly, the sky was decomposed onto a family of functions regularly shifted in space: ψij(α,β) = ψ(α − iδα, β − jδβ), where ψ is an elementary function and (δα,δβ) are the shifts between the ψij in (α,β). We then obtain

$$\mathcal{X}(\alpha,\beta) = \sum_{i,j} x_{ij}\, \psi_{ij}(\alpha,\beta), \qquad (10)$$

where ψ is the generating function and xij are the coefficients. In addition, the axis α is determined by the first scan of the observation.

One of the purposes of this model is to describe maps with arbitrarily fine details, that is to say, an arbitrarily wide band. Within this model, the usual function ψ is the cardinal sine with shift and width adapted to the target band. However, cardinal sines lead to analytical calculations that cannot be made explicit. To lighten the computational burden, we chose Gaussian ψ functions. These functions are parametrized by their spatial shifts (δα,δβ) and their standard deviations (σα,σβ). The parameters (δα,δβ) are chosen to be equal to the inverse of the target band width, as for the cardinal sines. In the numerical processing of Sect. 4, the values of (δα,δβ) are equal to the sampling period of 2′′ induced by the scanning scheme of the “Large map” protocol (Orieux 2009). For the Gaussian width parameters (σα,σβ), we determined the value that minimizes the difference between the width at half-maximum of the cardinal sine and that of the Gaussian: σα/β ≈ 0.6 δα/β, in a similar manner in α and β.
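A sketch of the sky synthesis of Eq. (10) with a Gaussian generating function (toy shifts and widths here; the actual processing uses δ = 2″ and σ ≈ 0.6 δ):

```python
import numpy as np

def synthesize_sky(coeffs, delta, sigma, alpha, beta):
    """Evaluate X(alpha, beta) = sum_ij x_ij * psi(alpha - i*delta, beta - j*delta)
    with psi an unnormalized (peak-one) isotropic Gaussian generating function."""
    I, J = coeffs.shape
    out = np.zeros_like(alpha, dtype=float)
    for i in range(I):
        for j in range(J):
            if coeffs[i, j] == 0.0:
                continue
            da, db = alpha - i * delta, beta - j * delta
            out += coeffs[i, j] * np.exp(-(da**2 + db**2) / (2.0 * sigma**2))
    return out
```

With a single nonzero coefficient, the synthesized sky at that coefficient's center equals the coefficient itself, since the neighbouring Gaussians are absent.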

2.2. Explicit calculation of the acquisition model

Given the linearity of the instrument model (7) and the sky model (9), (10), the instrument output for a given sky is given by Eq. (11). Thus, to obtain the contribution of a sky coefficient xij to a data item, it is necessary to calculate four integrals, whose discretization by brute force would result in time-consuming numerical computations.

Concerning the optics, the convolution of the function ψ with the optical response ho appears in Eq. (11) and, because these are Gaussians, the convolution can be made explicit (12), with, in a similar manner in α and β, $\Sigma_\alpha^2 = \sigma_o^2 + \sigma_\alpha^2$ (the variances of the convolved Gaussians add).
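The explicit convolution in Eq. (12) rests on the standard fact that the convolution of two Gaussians is again a Gaussian whose variance is the sum of the two variances; this can be checked numerically:

```python
import numpy as np

def normalized_gaussian(x, sigma):
    g = np.exp(-x**2 / (2.0 * sigma**2))
    return g / g.sum()  # unit sum on the discrete grid

# Fine grid, wide enough that the truncated tails are negligible.
dx = 0.01
x = np.arange(-30.0, 30.0, dx)
s1, s2 = 1.5, 2.0
conv = np.convolve(normalized_gaussian(x, s1),
                   normalized_gaussian(x, s2), mode="same")

# Empirical mean and variance of the convolution on the grid.
mu = np.sum(conv * x) / np.sum(conv)
var = np.sum(conv * (x - mu)**2) / np.sum(conv)
# var should be close to s1**2 + s2**2 = 6.25
```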

For the integral over time, only the constant-velocity phases can be explicitly described for the “Large map” protocol. To integrate over time in (11), we used the expressions of (2) for pα(t) and pβ(t), which gives Eq. (13).

It can be shown that explicit integration can be performed by including the Gaussians and the bolometer response (see details of the calculations in Appendix B), and the model becomes Eq. (14). In this equation, the angles oα and oβ are defined by oα = cα + iδα − αlm and oβ = cβ + jδβ − βlm.

The data point does not depend directly on the “scanning time” t because it is integrated. It depends on time through the sampling instant n, occurring only in the product nTs v{α,β}, i.e. a distance. In practice, the sampling period along the scans is much shorter than the sampling period of the detector array. Thus, this properly modelled sampling scheme is a key element for reconstruction at a higher resolution.

In addition, the time constant of the bolometer and the electronics τ appears only in the argument of the function erfcx. It is consequently through this function that the bolometers and the electronics influence the spatial response.

The dependence on the wavelength through σo (λ) precludes explicit integration with respect to λ. However, this integral depends neither on the data nor on the unknown object, but only on the protocol. Accordingly, for a given protocol, these integrals can be calculated once and for all. Finally, the work described above allows the explicit computation of three of the four integrals introduced by the initial model.

Equation (14) models the acquisition of the data item at time n by bolometer (l,m) from the coefficients xij. These equations can be written as (15), where ℋ is calculated from Eq. (14). The model is linear and we can therefore write

$$y = H x, \qquad (16)$$

where y and x are vectors of size LMN and IJ, and H is an LMN × IJ matrix, each row of which can be deduced from (14) by varying l,m,n for fixed i,j.

2.3. Invariant structure

Initially, the physical model (7) is based on convolutive (and therefore invariant) transforms w.r.t. continuous variables. However, the discretization operation is inhomogeneous; consequently, the invariance property no longer holds, which leads to long computation times. Nevertheless, the trace of this initial invariance can still be perceived because H is a sum of terms at different spatial positions of the Gaussians (cf. Eq. (14)). Because the problem is now discretized, we seek to bring out an invariance by quantified shifts in the α direction, and similarly for β: the continuous shifts are approximated by multiples of a common factor Δα. The pointed directions are therefore rounded to match the grid of the sky. The MADmap and SANEPIC methods use this idea but with a notable difference: they perform the operation on a low-resolution grid, which limits the map resolution. In contrast, the developments proposed here exploit the idea on a high-resolution grid, enabling super-resolution reconstruction. By acting in the same way in the β direction, we obtain (17) and, by computing the discrete convolution, (18). In these conditions, the data y for a given scanning direction are computed by the discrete convolution (18) followed by the (inhomogeneous) down-sampling defined by (19), (20), which is much more efficient than using a generic linear model (16). First of all, the decomposition into a convolution followed by a decimation is faster than the direct calculation and, what is more, the convolution can be computed by FFT. Finally, given that only the impulse response is necessary, there is no need to compute and store all elements of the matrix.
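The convolution-then-decimation structure described above can be sketched as follows: a circular convolution computed by FFT (the invariant part) followed by extraction of the rounded observed positions (the pointing part). All shapes and values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 32
sky = rng.standard_normal((N, N))       # high-resolution coefficient map x
kernel = np.zeros((N, N))
kernel[:3, :3] = rng.random((3, 3))     # compact invariant impulse response (Hc)
pointings = [(5, 7), (5, 8), (20, 13)]  # rounded pointed directions (P)

# Invariant part Hc: circular convolution computed by FFT.
conv = np.real(np.fft.ifft2(np.fft.fft2(sky) * np.fft.fft2(kernel)))

# Pointing part P: keep only the values actually observed.
data = np.array([conv[i, j] for i, j in pointings])

# Direct evaluation of one data item, summing over the kernel support.
i0, j0 = pointings[0]
direct = sum(kernel[a, b] * sky[(i0 - a) % N, (j0 - b) % N]
             for a in range(3) for b in range(3))
```

The direct sum and the FFT route agree, while the FFT route never forms the full matrix.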

In this form, some computations are superfluous because the convolution is performed for all indices, whereas only some of them are used. In practice, the excess computation is small because we chose shifts (δα,δβ) close to the sampling period induced by the scanning scheme. Almost all convolution results are observed, from 1 to 7 times for PSW, as illustrated in Fig. 1.

There is, however, the disadvantage that the bolometer positions are approximated. Yet these positions are important because they make it possible to best exploit the data and to properly manage the information needed to estimate high frequencies. We chose a step Δ that is close to the sampling period along the scan, i.e. Δ ≈ 2″. The error introduced is therefore small. This is all the more acceptable when we consider the expected level of noise and the telescope pointing errors, which are of the same order of magnitude, 2″.

Finally, the initial model (16) is decomposed into the discrete convolution defined by (18) followed by the (inhomogeneous) down-sampling defined by (19), (20); that is to say, H is factorised as

$$H = P H_c, \qquad (21)$$

where Hc is a convolution matrix and P a pointing matrix that picks the values observed after convolution. It has one, and only one, “1” per row because each data item can only come from one position. Some columns may be entirely zero because certain coefficients may not be observed. Conversely, some columns may contain several “1” because certain coefficients may be observed several times.

To summarize, using an approximation of the pointed direction, we have separated the model into two sub-models, H = P Hc, where Hc is invariant and P contains the non-invariant structure. This decomposition is broadly similar to the one generally found in super-resolution in the field of image processing (see references in the introduction).

Figure 1 presents this decomposition for the PSW detector with a velocity of 30′′/s towards the left: spatial redundancy contained in P (the blacker the pixel, the more often it was observed) and spatial impulse response (the time response of the bolometer and the electronics is clearly visible as the spatial extent of the Gaussian lobe).

Fig. 1

Factorised physical model (PSW detector, velocity of 30′′/s towards the left): map of spatial redundancies P (left) and spatial impulse response Hc (right). The spatial scales are different for better visualisation of the impulse response.

2.4. Conclusion

We have constructed a linear instrument model from the physical description of the phenomena involved during acquisition: scanning, optics, filters, bolometers, and electronics were taken into account, together with a description of the sky in continuous variables in the three dimensions. We next explicitly described certain calculations and approximated the model in a factorised form to lighten the numerical computational burden.

The proposed model differs from those currently used in SANEPIC (Patanchon et al. 2008) or MADmap (Cantalupo et al. 2010) in that it includes the physics of the acquisition. Moreover, unlike monochromatic models (Sibthorpe et al. 2009), the sky model extends spectrally across the whole channel. Again, unlike that of Sibthorpe et al. (2009), our bolometer model is linearized, which simplifies the developments and allows the bolometer time response to be made explicit.

Finally, the consistent, global definition of the acquisition allows the over-sampling to be directly exploited and a processing method to be designed that uses these properties to estimate the sky at higher resolution than the detector sampling period.

3. Data inversion for high-resolution maps

The previous section was dedicated to the instrument model and we deduced the relationship between the measured data z and the unknown sky, or its coefficients x, through

$$z = H x + o + n. \qquad (22)$$

The matrix H is relatively complex and high-dimensional, but the forward model (16) remains linear. The terms o and n account for measurement and modelling errors and quantify the data uncertainties. The term o is the noise mean (offset) and n is a zero-mean, white, stationary Gaussian noise. We assumed that each bolometer, denoted b, is affected by an unknown offset ob. Equation (16) can be rewritten for the bolometer b as (23), where z b contains the data from bolometer b, Hb is the corresponding part of the instrument model and (n b, ob) accounts for the errors of bolometer b. This section presents the method to estimate the unknown x and the offsets o from the data z.

We tackled the map-making question in an inverse problem framework. Abundant literature is available on the subject (Idier 2008; Demoment 1989; Tikhonov & Arsenin 1977; Twomey 1962). As presented in the previous section, the instrument model embeds convolutions and low-pass systems. The inverse problem is ill-posed (Park et al. 2003) and this is particularly true when super-resolution is intended. In this context, a naive inversion, such as a least-squares solution, would lead to an unacceptably noisy and unstable solution.

A usual class of solutions relies on regularization, i.e. the introduction of prior information on the unknown object x to compensate for the lack of information in the data. A consequence of regularization is that reconstruction methods are specific to a class of (sky) maps, according to the introduced information. From this standpoint, the present paper considers extended sources and relatively spatially regular maps.

Since the sky is defined as a function of continuous variables, its regularity can be measured by the squared energy of its derivatives. For first derivatives in both directions, it can be shown (see Appendix A) that relation (24) holds, where D = D α + D β is obtained from the sum of the autocorrelations of the derivatives of ψ with respect to α and β, and is similar to a discrete gradient operator. This relation illustrates the equivalence between the measure on the continuous function and the measure on the coefficients x, thanks to the use of a Gaussian generating function.

With the regularity measure (24) and the white Gaussian model hypothesis for n, the regularized least-squares criterion is given by (25). Another consequence of ill-posedness and regularization is the need to tune the compromise between the different sources of information: the hyperparameter μ tunes this trade-off. With Eqs. (22) and (24), we obtain a regularized least-squares criterion that depends only on the coefficients (26); the desired map is defined as its minimizer and, as a consequence, is the optimum of the criterion of Eq. (25).
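On a toy linear model, the minimizer of a criterion of the form ‖z − Hx‖² + μ xᵀDx is given by the normal equations (HᵀH + μD)x = Hᵀz; a sketch (all matrices below are small stand-ins for the actual operators):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
H = rng.standard_normal((30, n)) * 0.3            # toy underdetermined forward model
x_true = np.cumsum(rng.standard_normal(n)) * 0.1  # spatially smooth unknown
z = H @ x_true + 0.05 * rng.standard_normal(30)

# First-difference operator; D = G^T G plays the role of the smoothness matrix.
G = np.diff(np.eye(n), axis=0)
D = G.T @ G

def regularized_solution(mu):
    """Minimizer of ||z - H x||^2 + mu * x^T D x via the normal equations
    (H^T H + mu D) x = H^T z."""
    return np.linalg.solve(H.T @ H + mu * D, H.T @ z)

x_reg = regularized_solution(1.0)  # mu tunes the data/smoothness trade-off
```

On the full-size problem this direct solve is impossible, which is precisely why the paper resorts to a conjugate gradient algorithm.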

A Bayesian interpretation of criterion (26) is a Gaussian posterior law with a Gaussian iid likelihood, a correlated Gaussian prior, and a flat prior law for o. An advantage of the Bayesian interpretation is the ability to derive an uncertainty around the maximum through the variance of the posterior law (see Sect. 4). Another important advantage concerns the estimation of the hyperparameter and of instrument parameters (Orieux et al. 2010).

The proposed algorithm for the computation of the map and of the offsets is an alternating minimization scheme: after an initialization, the following two steps are iterated

  • 1.

    find the map estimate for fixed offsets o (27)

  • 2.

    find the offset estimates for fixed map x (28)

until a convergence criterion is met. For fixed x, the solution of (28) is straightforward: it is the empirical mean of the residual z b − Hbx for each bolometer separately. For fixed o, the solution of Eq. (27) is unique and explicit (29). The estimator is linear w.r.t. the data z. Unfortunately, since H is not circulant, the solution cannot be computed with a “brute force” algorithm: the practical inversion of the Hessian matrix HᵗH + μD is impossible (the size of this matrix is the square of the number of coefficients x). The proposed solution relies on an iterative conjugate gradient descent algorithm (Nocedal & Wright 2000; Shewchuk 1994). The most expensive part is the computation of the product between the matrix HᵗH and the current point x k, and it can be efficiently computed based on FFT, decimation, and zero-padding (see Appendix C).
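The alternating scheme can be sketched on the same kind of toy model, with the offsets updated as residual means and the map computed by a plain conjugate gradient (all shapes illustrative; the real implementation applies HᵀH by FFT rather than forming it):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 40, 2                           # map size, number of "bolometers"
Hs = [rng.standard_normal((25, n)) * 0.3 for _ in range(m)]
x_true = np.sin(np.linspace(0.0, 3.0, n))
true_offsets = [0.7, -0.4]
zs = [Hb @ x_true + ob + 0.01 * rng.standard_normal(25)
      for Hb, ob in zip(Hs, true_offsets)]

G = np.diff(np.eye(n), axis=0)
D = G.T @ G                            # smoothness matrix (gradient autocorrelation)
mu = 0.1

def cg(A, b, x0, iters=200, tol=1e-12):
    """Plain conjugate gradient for a symmetric positive-definite A."""
    x = x0.copy()
    r = b - A @ x
    p = r.copy()
    rs = float(r @ r)
    for _ in range(iters):
        Ap = A @ p
        alpha = rs / float(p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = float(r @ r)
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

A = sum(Hb.T @ Hb for Hb in Hs) + mu * D
x = np.zeros(n)
offsets = [0.0] * m
for _ in range(20):
    # Step 1 (Eq. (27)): map for fixed offsets, by conjugate gradient.
    b = sum(Hb.T @ (zb - ob) for Hb, zb, ob in zip(Hs, zs, offsets))
    x = cg(A, b, x)
    # Step 2 (Eq. (28)): offsets for fixed map, as residual means.
    offsets = [float(np.mean(zb - Hb @ x)) for Hb, zb in zip(Hs, zs)]
```

Each alternation can only decrease the joint criterion, which is what makes the scheme converge.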

4. Experimental results

This part illustrates the improvement that our approach can bring w.r.t. the standard approach based on coaddition, first using simulated data and then with actual data transmitted by Herschel.

4.1. Simulated data

4.1.1. Experimental protocol

We chose three 20′ × 20′ maps used by the SPIRE consortium to assess reconstruction methods (Clements et al. 2006): a map of galactic cirrus (Fig. 3) complying with the a priori regularity model, a map of galactic cirrus superimposed on point sources (Fig. 6), and a galaxy map (Fig. 7).

We studied the PMW channel and the “Large Map” protocol with three scans in each direction and a velocity of 30′′/s. The data were generated from a simulated map of coefficients x (Clements et al. 2006) and the instrument model (16), considering for this simulation part that the bolometers are not affected by any offset. We used a flat spectrum (ϱ = 0 in Eq. (9)) for the simulations and the inversions. The noise is zero-mean white Gaussian with three levels characterized by their standard deviations σn (“standard noise” hereafter), 10 σn (“high noise”) and 0.1 σn (“low noise”). The standard deviation is the same for all bolometers and, unless stated otherwise, all data sets were generated with the same noise realization.

The proposed reconstruction for the 20′ × 20′ maps was performed using δα = δβ = 2″, i.e. maps of 600 × 600 coefficients. We compare our results with the map obtained by coaddition, with a 6″ pixel size.

In agreement with Sect. 3, the map was reconstructed as the minimizer of criteria (25) and (26), and the minimization was performed by a conjugate gradient algorithm with optimal step size. The value of the criterion decreases at each iteration, and a few tens of iterations appear to be sufficient to reach the minimum.

In the simulated cases the original map (the “sky truth”) is known; accordingly, we can quantitatively assess the reconstruction through an error measure ℰ, defined by Eq. (30) from the coefficients of the true map and of the reconstructed map.
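The exact expression of Eq. (30) is lost in this version of the text; a normalized quadratic error of the following form (an assumption on our part, consistent with the reported values of order 0.1) can serve as a stand-in:

```python
import numpy as np

def reconstruction_error(x_true, x_hat):
    """Relative L2 error between true and reconstructed coefficients.
    The precise definition of Eq. (30) is not recoverable here; this
    normalized form is assumed for illustration."""
    return np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```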

Fig. 2

Reconstruction error ℰ vs. regularization parameter μ for the cirrus map with “standard noise”. The minimum error is ℰmin = 0.08 for the proposed method, while ℰ = 0.12 for μ = 0.

The estimate depends on the regularization parameter, as illustrated in Fig. 2. A non-zero optimum value μopt appears (here ~1012) for which ℰ is minimum, which confirms the interest of the regularization. A value lower than 1011 leads to an under-regularized map, and a value greater than 1013 to an over-regularized one. In the following, it is, of course, the optimal value that is used to reconstruct the maps. It also appears empirically that μ must vary by a factor of 2 around μopt to produce a noticeable modification of the map. This is confirmed in Fig. 2, where the minimum is shallow compared to the horizontal scale.

Figure 2 also illustrates the improvement provided by the regularization: the errors for the non-regularized and optimum-regularized maps are 0.12 and 0.08 respectively.
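The selection of μ illustrated in Fig. 2 amounts to a grid search over the regularization parameter. A sketch, where `error_fn` is a hypothetical stand-in for a full reconstruction followed by the error measure of Eq. (30):

```python
import numpy as np

def pick_mu(mus, error_fn):
    """Grid search for the regularization parameter: evaluate the
    reconstruction error for each candidate mu and keep the
    minimizer (error_fn is a placeholder, not the authors' code)."""
    errors = [error_fn(mu) for mu in mus]
    best = mus[int(np.argmin(errors))]
    return best, errors
```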

4.1.2. Restoration of galactic cirrus

Fig. 3

Comparison of results. a) Shows the true map; b) presents the proposed map and c) the coaddition. A horizontal profile is shown in d) and e) gives a zoom.

Figure 3 summarises the results concerning the cirrus in the “standard noise” case. The proposed map is very close to the true one. Our method restores details at small spatial scales (with spectral extension from low to high frequency) that are invisible in the coaddition but present in the true map (see the profiles in Figs. 3(d) and e, especially the fluctuations around pixels 250 and 350). In addition, our method correctly restores large-scale structures, which correspond to low frequencies down to the null frequency (the mean level of the map). We conclude that our method properly estimates the photometry.

Moreover, the reconstruction method is linear with respect to the data (see Sect. 2), which means that the use of arbitrary units is valid.

To quantify the gain in correctly restored bandwidth, we considered the power spectra of the maps (Fig. 4) for the true sky, the sky convolved with the PSF ho (see Sect. 2.1.2), the coaddition, and the proposed sky. As mentioned in Sect. 2.1.2, the sampling frequency of the detector is fs ≈ 0.02 arcsec-1. Consequently, the data acquired during one integration cannot correctly represent frequencies above fs/2 ≈ 0.01 arcsec-1. We have also seen in Sect. 2.1.2 that the FWHM of the PSF is 25.2′′ at 350 μm, i.e. a cutoff frequency of the optical transfer function of ≈0.04 arcsec-1. The attenuation effect of the convolution by the PSF on the true map is visible in the power spectra of the convolved and coadded maps for all frequencies above ≈0.008 arcsec-1 (Fig. 4).

The power spectrum of the proposed map perfectly follows the power spectrum of the true map, from the null frequency up to a limit frequency that depends on the noise level. In the “standard noise” case (Fig. 4(a)) this limit is 0.025 arcsec-1, that is to say, almost three times the limit frequency of each individual integration (fs/2 ≈ 0.01 arcsec-1). This illustrates that our method takes full advantage of the high-frequency temporal sampling. In any case, compared to the coaddition, we have multiplied the spectral bandwidth by a factor of ≈4 (starting from the null frequency), and the frequencies attenuated by the optical transfer function are accurately inverted.
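The circularly averaged power spectra of Figs. 4 and 12 can be computed by binning the 2-D periodogram in annuli of constant radial frequency. A minimal sketch (the binning details are our assumptions), with the 2″ coefficient grid of Sect. 4.1.1 as the default pixel size:

```python
import numpy as np

def circular_mean_power_spectrum(image, pixel_arcsec=2.0):
    """Azimuthal (circular) mean of the 2-D power spectrum of a
    square map, in annular bins of radial frequency (arcsec^-1)."""
    n = image.shape[0]
    power = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    freq = np.fft.fftshift(np.fft.fftfreq(n, d=pixel_arcsec))
    fx, fy = np.meshgrid(freq, freq)
    radius = np.hypot(fx, fy)          # radial frequency of each pixel
    edges = np.linspace(0.0, freq.max(), n // 4 + 1)
    which = np.digitize(radius.ravel(), edges)
    profile = np.array([power.ravel()[which == k].mean()
                        for k in range(1, len(edges))])
    centres = 0.5 * (edges[:-1] + edges[1:])
    return centres, profile
```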

Fig. 4

Circular means of power spectra for the three levels of noise (standard deviations: σn, 10 σn and 0.1 σn). The parameter μ is chosen to be optimal each time from the point of view of the error ℰ (Eq. (30)).

Fig. 5

Uncertainty provided by the a posteriori standard deviation. a) Shows the map of the standard deviation for each pixel and b) gives a profile. c) Shows a profile of the true map as a solid line, and the two dashed lines give an interval around the estimated map. d) Shows the power spectrum of the true map as a solid red line, and the two dashed lines give an interval around the estimated power spectrum, in the “standard noise” case.

Our method also yields uncertainties through the Bayesian interpretation (see remark 1), from the standard deviation of the a posteriori law (Figs. 5(a) and b). The uncertainties increase away from the centre of the map because the data contain less information there. Moreover, we see in Fig. 5(c) that the true map lies inside an interval around the estimated map. In Fourier space (Fig. 5(d)), up to 0.03 arcsec-1, the true power spectrum lies inside an interval around the estimated power spectrum.
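Under the Gaussian posterior of Sect. 3, the a posteriori covariance associated with the quadratic criterion is σn²(HᵗH + μD)⁻¹ (up to the exact scaling convention, which we assume here), and the pixelwise standard deviation shown in Fig. 5 is the square root of its diagonal. A dense toy sketch, feasible only for small problems:

```python
import numpy as np

def posterior_std(H, D, mu, sigma_n=1.0):
    """Pixelwise a posteriori standard deviation, assuming the
    Gaussian posterior covariance sigma_n^2 (H^T H + mu D)^{-1}.
    Dense inversion: for illustration on toy sizes only."""
    cov = sigma_n ** 2 * np.linalg.inv(H.T @ H + mu * D)
    return np.sqrt(np.diag(cov))
```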

The possibility of restoring frequencies obviously depends on the noise level, as illustrated by the spectra shown in Fig. 4. When the noise level is lower, slightly higher frequencies can be restored: up to 0.03 arcsec-1 for “low noise”, compared to 0.025 arcsec-1 for “standard noise”. Conversely, in the “high noise” case, our method no longer restores the frequencies attenuated by the optical transfer function (Fig. 4(b)). The deconvolution effect is reduced and the essential effect is one of denoising. Nevertheless, the proposed method gives better (or equivalent) results than coaddition in all cases.

4.1.3. Other types of sky

Fig. 6

Restoration of cirrus superimposed on point sources.

Our method is based on spatial regularity information; to assess its robustness as it stands, we tested it on two other types of sky in which the spatial regularity is less pronounced: galactic cirrus superimposed on point sources, and a galaxy image (Figs. 6 and 7).

Fig. 7

Restoration of galaxy.

The coaddition map (Fig. 6(c)) is smoother than the proposed one (Fig. 6(b)), and several point sources are visible on the proposed map but not on the coaddition one. The amplitude of point sources is underestimated but markedly less so by the proposed method than by coaddition (Figs. 6(d) and e). Rebounds also appear around the point sources, a feature characteristic of linear deconvolution (resulting from a quadratic criterion).

The galaxy does not contain any point sources but has spatial structures that are more complex than the galactic cirrus. These structures are considerably better restored by our method than by coaddition (Fig. 7); this is particularly clear around pixels 250 and 300.

In conclusion, the proposed method is flexible and shows a good restoration capacity for various types of maps. In particular, it possesses a certain robustness to input skies whose characteristics are poorly accounted for by the a priori model based on regularity information. It provides a sky that is closer to the real one than that obtained by coaddition, even in the least favourable cases.

4.2. Processing real data

Fig. 8

Central part (23′ ×  23′) of NGC 7023 in the three channels PSW, PMW and PLW (left, middle and right, respectively). Top panels: coadded maps; bottom panels: proposed maps.

Fig. 9

Profiles along the three sections shown in the top left panel of Fig. 8. Each panel shows the profiles within the PSW, PMW and PLW channels, offset for clarity from bottom to top, respectively. Left panel: horizontal profile; central and right panels: vertical profiles. Black: coadded maps, blue: proposed maps.

Fig. 10

85′ ×  110′ rectangle in the Polaris flare for the PSW channel (the total field observed during the science demonstration phase of Herschel is presented in Miville-Deschênes et al. 2010). Left panel: coadded map. Right panel: proposed result.

Fig. 11

Zoom on the 10′ ×  10′ green square seen in Fig. 10. Top panels and from left to right: coadded maps in the PSW, PMW and PLW channels, respectively; bottom panels: proposed maps in the three channels.

Fig. 12

Circular means of the power spectrum of the Polaris flare in the PSW (left panels), PMW (middle panels) and PLW (right panels) channels. The bottom panels present plots on the frequency range from 5 × 10-3 to 10-1 arcsec-1. The red lines show the power law P(k) ∝ kγ adjusted in a frequency range from 10-3 arcsec-1 to 3 × 10-2 arcsec-1, with γ = −2.7. The pink solid lines show the optical transfer functions (OTF) for each band.

We conducted tests with real data of the reflection nebula NGC 7023 and of the Polaris flare (a high Galactic latitude cirrus cloud), taken during the science demonstration phase of Herschel and already presented by Abergel et al. (2010) and Miville-Deschênes et al. (2010), respectively. In order to run our algorithm, we took the level-1 files processed using HIPE. The true sky is of course not known, so the value of the regularization parameter was fixed for each spectral channel by a trade-off between the gain in spatial resolution and the amplification of the noise.

Figures 8 to 12 illustrate the results for NGC 7023 and the Polaris flare. The gain in spatial resolution is spectacular in the three channels. It is interesting to note that the map of NGC 7023 obtained by our method in the longest wavelength channel (500 μm, PLW channel) shows spatial structures that are not visible in the coaddition but are real, since they are visible at shorter wavelengths (250 μm, PSW channel), as illustrated for instance in the right panel of Fig. 9. The same panel also shows that negative rebounds appear on the sharpest side of the brightest filament of NGC 7023. This filament is the narrowest structure of the map and its width is comparable to that of the PSF. Similar rebounds were also seen in our simulations with point sources (Fig. 6). The Polaris flare does not contain a comparably bright and narrow filament, so the proposed map does not present this kind of artefact. The zoom on a 10′ × 10′ square (Fig. 11) illustrates the gain in angular resolution for faint structures.

Figure 12 shows that the power spectra of the proposed maps of the Polaris flare in the three channels follow the expected power law typical of the infrared emission of high Galactic latitude cirrus, P(k) ∝ kγ with γ = −2.7 (e.g. Miville-Deschênes et al. 2010), over a frequency range from 10-3 to 3 × 10-2 arcsec-1. As for the simulated data of Sect. 4.1, the attenuation effect of the convolution by the PSF is accurately inverted up to the frequency where the noise dominates. Thanks to this correction, the contrast of the small-scale structures is enhanced w.r.t. the coaddition (Figs. 10 and 12), since the energy of each structure is put into a smaller number of pixels than for the coaddition. At lower frequencies, Miville-Deschênes et al. (2010) have shown that the SPIRE spectra are attenuated compared to IRAS, likely owing to the correction of 1/f noise attributed to thermal drifts in the preprocessing of the data.

5. Conclusion

We have proposed a new method for super-resolved image reconstruction for scanning instruments and its application to the SPIRE instrument of the Herschel observatory.

The first key element is an instrument model that describes the physical processes involved in the acquisition. To explain the data in a reliable way, our model combines the descriptions of three elements: (i) the sky as a function of continuous variables in three dimensions (two spatial and one spectral); (ii) the optics and the rest of the instrumentation (bolometer, electronics, etc.); and (iii) the scanning strategy. We thus arrived at a linear model in integral form (Eq. (7)). We then wrote it in matrix form (Eq. (16)) by making certain calculations explicit. Next, by approximating the pointed directions on a fine grid, we decomposed it into a convolution followed by inhomogeneous down-sampling (Eq. (21)). This model provides a faithful link between the data, the sky actually observed, and the instrument effects.

On the sole basis of this instrument model and the data, the inversion is an ill-posed problem, especially if resolution enhancement is desired. The lack of information brought by the data, considering the limitations of the instrument, leads to instability of the inversion, which is all the more noticeable when the target resolution is high. This difficulty is overcome by a standard regularization method that constitutes the second key element. The method relies on spatial regularity information introduced by quadratic penalisation and on a quadratic data attachment term, the trade-off being governed by a regularization parameter. Thus, the inversion is based on a relatively standard linear approach and its implementation uses standard numerical optimization tools (conjugate gradient with optimal step).

The presented results for the SPIRE instrument illustrate, for simulated and real data, the potential of our method. Through the use of the accurate instrument model and a priori regularity information, we restored spatial frequencies over a bandwidth ~4 times that obtained with coaddition. In all channels, the attenuation by the optical transfer function is accurately inverted up to the frequency where the noise is dominant. The photometry is also properly restored.

Future work will focus on the question of hyperparameter and instrument parameter estimation, that is to say, unsupervised and myopic problems. Work on this question is in progress: it is developed in a Bayesian framework and resorts to a Markov chain Monte Carlo algorithm. Moreover, an estimation of the correlation matrix parameters (cutoff frequency, attenuation coefficients, spectral profile, etc.) could be achieved for the object or the noise (typically for the 1/f noise).

From another perspective, the quadratic prior is known to possibly over-penalise sharp edges in the restored object. The use of a convex L2 − L1 penalisation (Künsch 1994; Charbonnier et al. 1997; Mugnier et al. 2004; Thiébaut 2008) can overcome this limitation, if needed. Moreover, the proposed method can be specialized to deal with the reconstruction/separation of two superimposed components: (i) an extended component together with (ii) a set of point sources (Giovannelli & Coulais 2005).

Finally, another relevant contribution could rely on the introduction of the spectral dependence between the different channels in the data inversion. The conjunction with a PACS direct model and the joint inversion of SPIRE and PACS data would greatly improve the map reconstruction.


For the illustration on real data in Sect. 4.2, the correction of the low-pass filter was performed using the Herschel Interactive Processing Environment (Ott 2010), and the time constant of the first-order low-pass filter was set to the time constant for the bolometer alone (5.7 ms).


As an alternative, a non-quadratic norm of the derivative, e.g. a convex penalty, could also be used. Its interest is a weaker penalization of high gradients in the map. Unfortunately, the measure on the coefficients is then no longer explicit.


The authors would like to thank M. Griffin (CSPA, Cardiff University), B. Sibthorpe (UKATC, Royal Observatory Edinburgh), G. Le Besnerais (DTIM, ONERA) and F. Champagnat (DTIM, ONERA) for fruitful discussions, and CNES and the ASTRONET consortium for funding. The authors also thank K. Dassas for her help in the preprocessing of SPIRE data.


  1. Abergel, A., Arab, H., Compiègne, M., et al. 2010, A&A, 518, L96
  2. Andrews, H. C., & Hunt, B. R. 1977, Digital Image Restoration (Englewood Cliffs, NJ: Prentice-Hall)
  3. Boulanger, F., Abergel, A., Bernard, J., et al. 1996, A&A, 312, 256
  4. Cantalupo, C. M., Borrill, J. D., Jaffe, A. H., Kisner, T. S., & Stompor, R. 2010, ApJS, 187, 212
  5. Champagnat, F., Le Besnerais, G., & Kulcsár, C. 2009, J. Opt. Soc. Am. A, 26, 1730
  6. Charbonnier, P., Blanc-Féraud, L., Aubert, G., & Barlaud, M. 1997, IEEE Trans. Image Processing, 6, 298
  7. Clements, D., Chanial, P., Bendo, G., et al. 2006, SPIRE Mapmaking Algorithm Review Report, Tech. Rep., Astrophysics Group, Imperial College London
  8. de Graauw, T., Helmich, F. P., Phillips, T. G., et al. 2010, A&A, 518, L6
  9. Demoment, G. 1989, IEEE Trans. Acoust. Speech Signal Processing, ASSP-37, 2024
  10. Elad, M., & Feuer, A. 1999, IEEE Trans. Image Processing, 8, 387
  11. Farsiu, S., Robinson, M., Elad, M., & Milanfar, P. 2004, IEEE Trans. Image Processing, 13, 1327
  12. Giovannelli, J.-F., & Coulais, A. 2005, A&A, 439, 401
  13. Griffin, M. J. 2006, Revised Photometer Sensitivity Model, working version after sensitivity review meeting
  14. Griffin, M. J. 2007, The SPIRE Analogue Signal Chain and Photometer Detector Data Processing Pipeline, Tech. Rep., University of Wales Cardiff
  15. Griffin, M. J., Bock, J. J., & Gear, W. K. 2002, Appl. Opt., 41, 6543
  16. Griffin, M., Swinyard, B., Vigroux, L., et al. 2008, in SPIE Conf., 7010
  17. Griffin, M. J., Abergel, A., Abreu, A., et al. 2010, A&A, 518, L3
  18. Hardie, R. C., Barnard, K. J., & Armstrong, E. E. 1997, IEEE Trans. Image Processing, 6, 1621
  19. Idier, J. 2008, Bayesian Approach to Inverse Problems (London: ISTE Ltd and John Wiley & Sons Inc.)
  20. Künsch, H. R. 1994, Ann. Inst. Stat. Math., 46, 1
  21. Leshem, A., Christou, J., Jeffs, B. D., Kuruoglu, E., & van der Veen, A. J. 2008, IEEE J. Selected Topics in Signal Processing, 2
  22. Leshem, A., Kamalabadi, F., Kuruoglu, E., & van der Veen, A.-J. 2010, IEEE Signal Processing Mag., 27
  23. Miville-Deschênes, M., Martin, P. G., Abergel, A., et al. 2010, A&A, 518, L104
  24. Molina, R., & Ripley, B. D. 1989, J. Appl. Stat., 16, 193
  25. Mugnier, L., Fusco, T., & Conan, J.-M. 2004, J. Opt. Soc. Am. A, 21, 1841
  26. Nguyen, N., Milanfar, P., & Golub, G. 2001, IEEE Trans. Image Processing, 10, 573
  27. Nocedal, J., & Wright, S. J. 2000, Numerical Optimization, Series in Operations Research (New York: Springer Verlag)
  28. Orieux, F. 2009, Ph.D. Thesis, Université Paris-Sud 11
  29. Orieux, F., Rodet, T., & Giovannelli, J.-F. 2009, in Proc. IEEE International Conference on Image Processing (ICIP 2009), Cairo, Egypt
  30. Orieux, F., Giovannelli, J.-F., & Rodet, T. 2010, J. Opt. Soc. Am. A, 27, 1593
  31. Ott, S. 2010, in Astronomical Data Analysis Software and Systems XIX, ed. Y. Mizumoto, K.-I. Morita, & M. Ohishi, ASP Conf. Ser., 434, 139
  32. Park, S. C., Park, M. K., & Kang, M. G. 2003, IEEE Signal Processing Mag., 21
  33. Patanchon, G., Ade, P. A. R., Bock, J. J., et al. 2008, ApJ, 681, 708
  34. Patti, A. J., Sezan, M. I., & Tekalp, A. M. 1997, IEEE Trans. Image Processing, 6, 1064
  35. Pilbratt, G. L., Riedinger, J. R., Passvogel, T., et al. 2010, A&A, 518, L1
  36. Poglitsch, A., Waelkens, C., Geis, N., et al. 2010, A&A, 518, L2
  37. Robert, C. P., & Casella, G. 2000, Monte-Carlo Statistical Methods, Springer Texts in Statistics (New York, NY: Springer)
  38. Rochefort, G., Champagnat, F., Le Besnerais, G., & Giovannelli, J.-F. 2006, IEEE Trans. Image Processing, 15, 3325
  39. Rodet, T., Orieux, F., Giovannelli, J.-F., & Abergel, A. 2008, IEEE J. Selected Topics in Signal Processing, 2, 802
  40. Shewchuk, J. R. 1994, An Introduction to the Conjugate Gradient Method Without the Agonizing Pain, Tech. Rep., Carnegie Mellon University
  41. Sibthorpe, B., & Griffin, M. J. 2006, SPIRE Photometer Simulator, Tech. Rep., University of Wales Cardiff
  42. Sibthorpe, B., Chanial, P., & Griffin, M. J. 2009, A&A, 503, 625
  43. Sudiwala, R. V., Griffin, M. J., & Woodcraft, A. L. 2002, Int. J. Infrared and Millimeter Waves, 23, 545
  44. Thiébaut, E. 2008, in Astronomical Telescopes and Instrumentation, Proc. SPIE, 7013, 70131I
  45. Tikhonov, A., & Arsenin, V. 1977, Solutions of Ill-Posed Problems (Washington, DC: Winston)
  46. Twomey, S. 1962, J. Assoc. Comp. Mach., 10, 97
  47. Vandewalle, P., Sbaiz, L., Vandewalle, J., & Vetterli, M. 2007, IEEE Trans. Signal Processing, 55, 3687
  48. Wieprecht, E., Schreiber, J., de Jong, J., et al. 2009, in ASP Conf. Ser., ed. D. A. Bohlender, D. Durand, & P. Dowler, 411, 531
  49. Woods, N. A., Galatsanos, N. P., & Katsaggelos, A. K. 2006, IEEE Trans. Image Processing, 15, 201

Appendix A: Energy spectral density

This appendix details the calculations concerning the regularity measure used in Sect. 3 and its frequency interpretation. Based on Eq. (10), the energy of the first derivative can be written in terms of the autocorrelation of the first derivative of the generating function, giving Eq. (A.1). As there is a finite number of coefficients xij, the measure can be put in the form of a quadratic norm, where the matrix Dα is obtained from Ψα. Considering the invariant structure of (A.1), the matrix Dα has a Toeplitz structure. The calculation is performed by discrete convolution and can be computed by FFT.
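In one dimension, the construction of the Toeplitz matrix Dα from the autocorrelation of the derivative of the generating function can be sketched as follows (the sampled derivative `psi_prime` and the absence of boundary correction are our assumptions; the autocorrelation is computed directly here, although the text notes it can be done by FFT):

```python
import numpy as np

def regularity_matrix_1d(psi_prime, n):
    """Toeplitz regularity matrix built from the autocorrelation of
    the sampled derivative of the generating function (1-D sketch)."""
    corr = np.correlate(psi_prime, psi_prime, mode="full")
    lag0 = len(psi_prime) - 1          # index of the zero lag
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            k = lag0 + (i - j)         # Toeplitz: depends on i - j only
            if 0 <= k < len(corr):
                D[i, j] = corr[k]
    return D
```

With a first-difference kernel, this recovers the classical tridiagonal second-difference regularization matrix.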

By introducing the dimension β, the quadratic regularity measure on the function with continuous variables is expressed through a quadratic regularity measure on the coefficients x.

The Fourier transform (FT) of the autocorrelation is the energy spectral density, i.e. the squared modulus of the FT of the derivative of ψ. When the dimension β is introduced, the a priori energy spectral density for the sky naturally has circular symmetry (A.2). This calculation brings out the frequency structure introduced a priori for the sky, according to the chosen function ψ. It is a high-pass structure, since the factor tends to cancel around the null frequency, which is consistent with a regularity measure.

Appendix B: Explicit calculation of the model

In order to integrate over time in (13), we use the expressions of (2) for pα(t) and pβ(t). With the bolometer response, we obtain Eq. (B.1), with oα = cα + αij − αlm and oβ = cβ + βij − βlm. This is the integration of a truncated Gaussian, since the argument of the exponential is a quadratic form w.r.t. t.

B.1. Calculation of the argument

Here, we express the quadratic form in question. Expanding and factorizing the numerator n(t) gives a t-quadratic form with associated constants. Putting this quadratic form into the integral, we obtain Eq. (B.2), where the function erf is defined by erf(x) = (2/√π) ∫0x e−t² dt. This expression can be simplified by using the function erfcx(x) = exp(x2)(1 − erf(x)).

B.2. Argument of the exponential

For the sake of notational simplicity, the argument of the exponential can be rearranged so that, by injecting this expression into (B.2), the function erfcx appears. The values of S, a and b can then be replaced. First of all, the argument of the exponential is (B.3), and the terms nTs/τ simplify. We then use the expression for Σ2 to obtain two perfect squares. Finally, the argument of the exponential (B.3) in (B.2) is (B.4), which is exactly the argument of a bivariate Gaussian. We again find the same standard deviations Σα and Σβ. However, the response of the optics, initially centred on (oα,oβ), is now shifted by (nTs vα, nTs vβ), i.e. the pointing difference between two successive time samples.

B.3. Argument of the function erfcx and final expression

Another term is needed to know the global response. It comes from the function erfcx, which corresponds to the influence of the bolometer. The argument of the function erfcx is given by Eq. (B.5); what is of interest here is that the same factors are found in the argument of the exponential. To obtain the global response, we bring everything together: injecting the expressions of the arguments (B.4) and (B.5), with similar expressions for α and β, finishes the integration of (13) over time.

Appendix C: Direct model computation algorithm

This part gives some more details on the concrete computation of a model output Hx of Sect. 2.3. First of all, there are four different impulse responses, whatever the number of scans: for scans in the same direction, the response is the same. Thus we can construct four different convolution matrices Hi for i = 1,2,3,4 and apply four different discrete convolutions to the coefficients x.

We can also deduce the structure of the transpose of the model. The matrix Pᵗ is a data summation/zero-padding matrix (addition of the data that share the same pointing, while setting the other coefficients to zero), and the transpose of the convolution corresponds to a convolution with the space-reversed impulse responses.

The product by Pᵗ is very similar to the construction of a naive map, except that the data are added rather than averaged. Also, the operation is done per velocity and not globally. Finally, the products by Hc and its transpose are convolutions computed by FFT.
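A 1-D toy version of this factorization and of its transpose (names and details are ours): the forward model convolves the coefficients and then decimates at the pointed positions, while the transpose zero-pads/sums the data back onto the grid before convolving with the space-reversed response.

```python
import numpy as np

def H_apply(x, h, idx):
    """Forward model (1-D toy): convolution by the effective response
    h, then decimation at the pointed indices idx."""
    return np.convolve(x, h, mode="same")[idx]

def Ht_apply(z, h, idx, n):
    """Transpose: zero-pad the data back onto the grid (summing data
    that share the same pointing, as for P^t), then convolve with the
    space-reversed impulse response."""
    padded = np.zeros(n)
    np.add.at(padded, idx, z)          # summation, not averaging
    return np.convolve(padded, h[::-1], mode="same")
```

With an odd-length response, this pair satisfies the adjoint identity ⟨Hx, z⟩ = ⟨x, Hᵗz⟩ exactly.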

All Figures

Fig. 1

Factorised physical model (PSW detector, velocity of 30′′/s towards the left): map of spatial redundancies P (left) and spatial impulse response Hc (right). The spatial scales are different for better visualisation of the impulse response.

