Revisiting the theory of interferometric widefield synthesis
^{1} IRAM, 300 rue de la Piscine, 38406 Grenoble Cedex, France
email: pety@iram.fr; rodriguez@iram.fr
^{2} LERMA, UMR 8112, CNRS and Observatoire de Paris, 61 avenue de l’Observatoire, 75014 Paris, France
Received: 13 July 2009
Accepted: 16 March 2010
Context. After several generations of interferometers in radioastronomy, widefield imaging at high angular resolution is today a major goal in the effort to match the widefield performance of optical instruments.
Aims. All the current radio-interferometric, widefield imaging methods belong to the mosaicking family. Based on a 30-year-old original idea from Ekers & Rots, we aim at proposing an alternate formalism.
Methods. Starting from their ideal case, we successively evaluate the impact of the standard ingredients of interferometric imaging, i.e. the sampling function, the visibility gridding, the data weighting, and the processing of the short spacings, either from single-dish antennas or from heterogeneous arrays. After a comparison with standard nonlinear mosaicking, we assess the compatibility of the proposed processing with 1) a method of dealing with the effect of celestial projection and 2) the elongation of the primary beam along the scanning direction when using the on-the-fly observing mode.
Results. The dirty image resulting from the proposed scheme can be expressed as a convolution of the sky brightness distribution with a set of widefield dirty beams varying with the sky coordinates. The widefield dirty beams are locally shift-invariant, as they do not depend strongly on the position on the sky: their shapes vary on angular scales typically larger than or equal to the primary beamwidth. A comparison with standard nonlinear mosaicking shows that the two processing schemes are not mathematically equivalent, though they both recover the sky brightness. In particular, the weighting scheme is very different in the two methods. Moreover, the proposed scheme naturally processes the short spacings from both single-dish antennas and heterogeneous arrays. Finally, the sky gridding of the measured visibilities, required by the proposed scheme, may potentially save large amounts of hard-disk space and CPU processing power over mosaicking when handling data sets acquired with the on-the-fly observing mode.
Conclusions. We propose to call this promising family of imaging methods widefield synthesis because it explicitly synthesizes visibilities at a much finer spatial frequency resolution than the one set by the diameter of the interferometer antennas.
Key words: methods: analytical / techniques: interferometric / methods: data analysis / techniques: image processing
© ESO, 2010
1. Introduction
The instantaneous field of view of an interferometer is naturally limited by the primary beam size of the individual antennas. For the ALMA 12 m antennas, this field of view is ~9′′ at 690 GHz and ~27′′ at 230 GHz. The astrophysical sources in the (sub)millimeter domain are often much larger than this, but still structured on much smaller angular scales. Interferometric widefield techniques enable us to fully image these sources at high angular resolution. These techniques first require an observing mode that in one way or another scans the sky on spatial scales larger than the primary beam. The most common observing mode in use today, known as stop-and-go mosaicking, consists in repeatedly observing sky positions typically separated by half the primary beam size. The improved tracking behavior of modern antennas now leads astronomers to consider on-the-fly observations, with the antennas slewing continuously across the sky. The improvements in correlator and receiver technologies are also leading to techniques that could potentially sample the antenna focal planes with multi-beam receivers instead of the single-pixel receivers installed on current interferometers.
The ideal measurement equation of interferometric widefield imaging is

V(u_{p}, α_{s}) = ∫ B(α_{p} − α_{s}) I(α_{p}) e^{−2iπα_{p} u_{p}} dα_{p},   (1)

where V is the visibility function of 1) u_{p} (the spatial frequency with respect to the fixed phase center) and 2) α_{s} (the scanned sky angle), I is the sky brightness, and B the antenna power pattern or primary beam of an antenna of the interferometer (Thompson et al. 1986, Chap. 2). For simplicity, 1) we assume that the primary beam is independent of azimuth and elevation, and 2) we use one-dimensional notation without loss of generality. We do not deal with polarimetry (see e.g. Hamaker et al. 1996; Sault et al. 1996a, 1999) because it adds another level of complexity over our first goal here, i.e. widefield considerations. Several aspects make Eq. (1) peculiar with respect to the ideal measurement equation for single-field observations. First, the visibility is a function not only of the uv spatial frequency (u_{p}) but also of the scanned sky coordinate (α_{s}). Second, Eq. (1) is a mix between a Fourier transform and a convolution equation. It can be regarded, for example, as the Fourier transform along the α_{p} dimension of the function, B(α_{p} − α_{s}) I(α_{p}), of the (α_{p}, α_{s}) variables. But Eq. (1) can also be written as the convolution

V(u_{p}, α_{s}) = (ℬ_{u_p} ⋆ ℐ_{u_p})(α_{s}),   (2)

where

ℬ(α, u_{p}) ≡ B(−α)   (3)

and

ℐ(α, u_{p}) ≡ I(α) e^{−2iπα u_{p}}.   (4)

For each u_{p} kept constant, V(u_{p}, α_{s}) is the convolution of ℬ and ℐ. Indeed, ℐ(α_{p}, u_{p} = 0) = I(α_{p}), so we derive

V(0, α_{s}) = (ℬ_{0} ⋆ I)(α_{s}),   (5)

i.e., the convolution equation for single-dish observations.
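The structure of Eq. (1) is easy to check numerically. The sketch below (the Gaussian primary beam, the two-point sky, and all grid values are illustrative assumptions, not values from the paper) evaluates Eq. (1) by direct summation and verifies that, at u_p = 0, it reduces to the single-dish convolution of Eq. (5):

```python
import numpy as np

# Numerical sketch of the ideal measurement equation, Eq. (1):
#   V(u_p, a_s) = \int B(a_p - a_s) I(a_p) exp(-2i*pi*u_p*a_p) da_p
# Beam, sky, and grid values are illustrative assumptions only.

n = 512
alpha = np.linspace(-1.0, 1.0, n)           # sky angle (arbitrary units)
dalpha = alpha[1] - alpha[0]

theta_fwhm = 0.2                            # assumed primary-beam FWHM

def B_shifted(a_s):
    """Gaussian primary beam pointed at scanned position a_s."""
    return np.exp(-4 * np.log(2) * ((alpha - a_s) / theta_fwhm) ** 2)

I = np.zeros(n)                             # sky: two point sources
I[n // 2] = 1.0
I[n // 2 + 40] = 0.5

def visibility(u_p, a_s):
    """Direct evaluation of Eq. (1) by Riemann sum."""
    return np.sum(B_shifted(a_s) * I * np.exp(-2j * np.pi * u_p * alpha)) * dalpha

# At u_p = 0, Eq. (5) says V(0, a_s) is the single-dish convolution of B and I.
a_s0 = alpha[n // 2]
v0 = visibility(0.0, a_s0)
single_dish = np.sum(B_shifted(a_s0) * I) * dalpha
print(np.isclose(v0.real, single_dish), abs(v0.imag) < 1e-12)
```

The same `visibility` helper can be evaluated on a grid of (u_p, α_s) couples to build the full measurement space sketched in Fig. 2.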
Ekers & Rots (1979) were the first to show that the measurement equation (Eq. (1)) enables us to recover spatial frequencies of the sky brightness at a much finer uv resolution than the uv resolution set by the diameter of the interferometer antennas. Interestingly enough, the goal of Ekers & Rots (1979) was “just” to find a way to produce the missing short spacings of a multiplying interferometer. However, Cornwell (1988) realized that Ekers & Rots’ scheme has a much stronger impact, because it explains why an interferometer is able to do widefield imaging. Cornwell (1988) also demonstrated that on-the-fly scanning is not absolutely necessary for interferometric widefield imaging. Indeed, the large-scale information can be retrieved in mosaics of single-field observations, provided that the sampling of the single fields follows the sky-plane Nyquist sampling theorem.
As a result, all the information about the sky brightness is coded in the visibility function. From a data-processing viewpoint, all the current radio-interferometric widefield imaging methods (see, e.g., Gueth et al. 1995; Sault et al. 1996b; Cornwell et al. 1993; Bhatnagar & Cornwell 2004; Bhatnagar et al. 2008; Cotton & Uson 2008) belong to the mosaicking family^{1} pioneered by Cornwell (1988). In this family, the processing starts by Fourier transforming V(u_{p}, α_{s}) along the u_{p} dimension (i.e. at constant α_{s}) to produce a set of single-field dirty images, before linearly combining them into a widefield dirty image. In this paper, we propose an alternate processing scheme, which starts with a Fourier transform of V(u_{p}, α_{s}) along the α_{s} dimension (i.e. at constant u_{p}). We show how this explicitly synthesizes the spatial frequencies needed for widefield imaging, which are linearly combined to form a “widefield uv plane”, i.e., one uv plane containing all the spatial-frequency information measured during the widefield observation. An inverse Fourier transform then produces a dirty image, which can be deconvolved using standard methods. The existence of two different ways to extract the widefield information from the visibility function raises several questions: are they equivalent? What are their relative merits?
We thus aim at revisiting the mathematical foundations of widefield imaging and deconvolution. Sections 2 to 7 propose the new algorithm, which we call widefield synthesis: Sect. 2 first defines the notations and then lays out the basic concepts used throughout the paper. Section 3 states the steps needed to go beyond the Ekers & Rots scheme and explores the consequences of incomplete sampling of both the uv and sky planes. Section 4 discusses the effects of gridding by convolution and regular resampling. Section 5 describes how to influence the dirty beam shapes and thus the deconvolution. Section 6 states how to introduce short spacings measured either from a single-dish antenna or from heterogeneous interferometers. Section 7 compares the proposed widefield synthesis algorithm with standard nonlinear mosaicking. Some detailed demonstrations are factored out in Appendix A to ease the reading of the main paper, while ensuring that interested readers can follow the derivations. Appendices B and C then explain how the widefield synthesis algorithm can cope with nonideal effects: Appendix B discusses how at least one standard way to cope with sky-projection problems is compatible with the widefield synthesis algorithm. Appendix C explores the consequences of using the on-the-fly observing mode. Finally, we assume good familiarity with single-field imaging in various places. We refer the reader to well-known references, e.g. Chap. 6 of Thompson et al. (1986) and Sramek & Schwab (1989).
2. Notations and basic concepts
2.1. Notations
In this paper, we use Bracewell’s (2000) notation to display the relationship between a function I(α) and its direct Fourier transform Ī(u), i.e.,

I(α) ⊃ Ī(u),   (6)

where (α, u) is the couple of Fourier-conjugate variables. We also use the following sign conventions for the direct and inverse Fourier transforms:

Ī(u) ≡ ∫ I(α) e^{−2iπuα} dα   (7)

and

I(α) ≡ ∫ Ī(u) e^{+2iπuα} du.   (8)

As V is a function of two independent quantities (u_{p} and α_{s}), the Fourier transform may be applied independently to each dimension, while the other dimension stays constant. Several additional conventions are used to express this. First, we introduce a specific notation to state that either the first or the second dimension stays constant:

V_{α_s}(u_{p}) ≡ V(u_{p}, α_{s})   (9)

and

V_{u_p}(α_{s}) ≡ V(u_{p}, α_{s}).   (10)

Second, we use a bottom/top line to derive the notation of the Fourier transform along the first/second dimension from the notation of the original function. Third, on the Fourier transform sign (i.e. ⊃), we explicitly state the dimension along which the Fourier transform is computed. For instance, if D is a function of (α_{p}, α_{s}), then the Fourier transform of D along the first dimension is expressed as

D(α_{p}, α_{s}) ⊃^{α_p} D̲(u_{p}, α_{s}),   (11)

while the Fourier transform of D along the second dimension is expressed as

D(α_{p}, α_{s}) ⊃^{α_s} D̄(α_{p}, u_{s}).   (12)

We also use a more compact notation when doing the Fourier transform on both dimensions simultaneously, i.e.,

D(α_{p}, α_{s}) ⊃ D̲̄(u_{p}, u_{s}).   (13)

Finally, the convolution of two functions G and V is noted ⋆ and defined as

(G ⋆ V)(u) ≡ ∫ G(u − u′) V(u′) du′.   (14)

For reference, Table 1 summarizes the definitions of the symbols used most throughout the paper. With the one-dimensional notation used throughout the paper, the number of planes quoted directly gives the number of associated dimensions of the symbols. Generalization to images would require a doubling of the number of planes/dimensions. Table 2 defines the uv and angular scales that are relevant to widefield interferometric imaging, and Fig. 1 sketches the different angular scales.
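The sign conventions of Eqs. (7) and (8) map directly onto a discrete FFT. The sketch below (grid sizes are arbitrary assumptions) checks that numpy’s forward FFT implements the e^{−2iπuα} convention of Eq. (7), up to the usual centring bookkeeping:

```python
import numpy as np

# Check of the sign convention of Eq. (7): a Dirac impulse at sky position
# alpha0 must transform to the pure phase exp(-2i*pi*u*alpha0). numpy's
# forward FFT uses the same e^{-2i*pi} sign, so it implements Eq. (7)
# once the sample spacing is folded in. Grid sizes are illustrative.

n, dalpha = 256, 0.01
alpha = (np.arange(n) - n // 2) * dalpha
I = np.zeros(n)
k0 = n // 2 + 7                              # impulse at alpha0 = 7 * dalpha
I[k0] = 1.0 / dalpha                         # unit-area discrete Dirac

u = np.fft.fftfreq(n, d=dalpha)              # conjugate frequencies
# Riemann-sum version of Eq. (7); ifftshift handles the centred alpha axis.
Ibar = np.fft.fft(np.fft.ifftshift(I)) * dalpha

expected = np.exp(-2j * np.pi * u * alpha[k0])
print(np.allclose(Ibar, expected))
```

Swapping `fft` for `ifft` (and the phase sign) would instead realize the inverse transform of Eq. (8).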
Each angular scale (θ) is related to a uv scale (d) through θ = 1/d, where θ and d are measured in radians and in units of λ (the wavelength of the observation), respectively. In the rest of the paper, we explicitly distinguish between θ_{prim} ≡ 1/d_{prim}, the angular scale associated with the diameter of the interferometer antennas, and θ_{fwhm}, the full width at half maximum of the primary beam. The relation between θ_{prim} and θ_{fwhm} depends on the illumination of the receiver feed by the antenna optics. In radio astronomy, we typically have θ_{fwhm} ~ 1.2 θ_{prim} (see e.g. Goldsmith 1998, Chap. 6). Finally, the notion of antialiasing scale (θ_{alias}) is introduced and discussed in Sect. 4.2.
Table 1. Definition of the symbols used to expose the widefield synthesis formalism.
Table 2. Definition of the uv and sky scales relevant to widefield interferometric imaging.
Fig. 1 Visualization of the different angular scales relevant to widefield interferometric imaging. The notion of antialiasing scale (θ_{alias}) is introduced and discussed in Sect. 4.2. 

2.2. Basic concepts
Figure 2 illustrates the principles underlying 1) the setup of interferometric widefield observations and 2) our proposal for processing them. For simplicity, we display the minimum possible complexity without loss of generality. The top row displays the sky plane. The middle row represents the 4-dimensional measurement space at different stages of the processing. As it is difficult to display a 4-dimensional space on a sheet of paper, the bottom row shows 2-dimensional cuts of the measurement space at the same processing stages.
Fig. 2 Illustration of the principles of widefield synthesis, which enables us to image widefield interferometric observations. The top row displays the sky plane. The middle row displays the 4-dimensional visibility space and the bottom row shows 2-dimensional cuts of this space at different stages of the processing. In panels b) to d), the scanned dimensions (α_{s} and u_{s}) are displayed in blue, while the phased spatial scale dimensions (u_{p}) are displayed in red and the spatial scale dimensions (u) of the final widefield uv plane are displayed in black. The grey zones of panels b.2) and c.2) show the regions of the visibility space without measurements (missing short spacings). In detail, panel a) shows a possible scanning strategy of the sky to measure the unknown brightness distribution at high angular resolution: for simplicity it is here just a 7-field mosaic. Panels b.1) and b.2) sketch the space of measured visibilities: the uv plane at each of the 7 measured sky positions is displayed as a blue square box in panel b.1) and a blue vertical line in panel b.2). For simplicity, only 6 visibilities are plotted in panel b.1). Panels c.1) and c.2) sketch the space of synthesized visibilities after Fourier transform of the measured visibilities along the scanned coordinate (α_{s}): with each measured spatial frequency u_{p} (displayed on the blue axes) is associated one space of synthesized widefield spatial frequencies, displayed as one of the red squares in panel c.1) and the red vertical lines in panel c.2). The widefield spatial scales are synthesized 1) on a grid whose cell size is related to the total field of view of the observation and 2) only inside circles whose radius is the primary diameter of the interferometer antennas. Panels d.1) and d.2) display the final, widefield uv plane.
This plane is built by application of the shift-and-average operator along the black lines of panel c.2), which display the regions of constant spatial frequency u in the (u_{p}, u_{s}) space. Standard inverse Fourier transform and deconvolution methods then produce a widefield distribution of sky brightness, as shown in panel e).

2.2.1. Observation setup and measurement space
Panel a) displays the sky region for which we aim to estimate the sky brightness, I(α). The field of view of an interferometer observing in a given direction of the sky has a typical size set by the primary beam shape. In our example, this is illustrated by any of the circles whose diameter is θ_{prim}. As we aim at observing a wider field of view, e.g. θ_{field}, the interferometer needs to scan the targeted sky field. We assume that we scan through stop-and-go mosaicking, ending up with a 7-field mosaic.
After calibration, the output of the interferometer is a visibility function, V(u_{p}, α_{s}), whose relation to the sky brightness is given by the measurement equation (Eq. (1)). Panel b.1) shows the measurement space as a mosaic of single-field uv planes: the uv-plane coverage of each single-field observation is displayed as a blue subpanel at the sky position where it has been measured, which is marked by the red axes. We assume 1) that the interferometer has only 3 antennas and 2) that only a single integration is observed per sky position. This implies only 6 visibilities per single-field uv plane. In panel b.2), the uv planes at constant α_{s} are displayed as the blue vertical lines. The measured spatial frequencies belong to the [−d_{max}, −d_{min}] and [+d_{min}, +d_{max}] ranges, where d_{min} and d_{max} are respectively the shortest and longest measured baseline lengths. d_{min} is set by the minimum tolerable distance between two antennas to avoid collision. Here, we chose d_{min} ~ 1.5 d_{prim}. The grey zone between −d_{min} and +d_{min} displays the missing short spacings.
2.2.2. Processing by explicit synthesis of the widefield spatial frequencies
All the information about the sky brightness, I(α), is somehow coded in the visibility function, V(u_{p}, α_{s}). The high spatial frequencies (from d_{min} to d_{max}) are clearly coded along the u_{p} dimension. The uncertainty relation between Fourier conjugate quantities also implies that the typical spatial frequency resolution along the u_{p} dimension is only d_{prim} because the field of view of a single pointing has a typical size of θ_{prim}. However, widefield imaging implies measuring all the spatial frequencies with a finer resolution, d_{field} = 1/θ_{field}. The missing information must then be hidden in the α_{s} dimension.
In Sect. 3, we show that Fourier transforming the measured visibilities along the α_{s} dimension (i.e. at constant u_{p}) can synthesize the missing spatial frequencies, because the α_{s} dimension is sampled from −θ_{field}/2 to +θ_{field}/2, implying a typical spatial-frequency resolution of the u_{s} dimension equal to d_{field}. Conversely, the α_{s} dimension is probed by the primary beams with a typical angular resolution of θ_{prim}, implying that the u_{s} spatial frequencies will only be synthesized inside the [−d_{prim}, +d_{prim}] range. Panels c.1) and c.2) illustrate the effects of the Fourier transform of V(u_{p}, α_{s}) along the α_{s} dimension, in 4 and 2 dimensions, respectively. The red subpanels or vertical lines display the u_{s} spatial frequencies around each constant u_{p} spatial frequency.
In panels c.1) and c.2) (i.e. after the Fourier transform along the α_{s} dimension), V̄(u_{p}, u_{s}) contains all the measured information about the sky brightness in a spatial-frequency space. However, the information is ordered in a strange and redundant way. Indeed, we show that V̄(u_{p}, u_{s}) is linearly related to Ī(u_{p} + u_{s}). To first order, the information about a given spatial frequency u is stored in all the values of V̄(u_{p}, u_{s}) that verify u = u_{p} + u_{s} (black lines on panel c.2).
A shift operation will reorder the spatial scale information, and averaging will compress the redundancy (illustrated by the halving of the number of the space dimensions). The use of a shift-and-average operator thus produces a final uv plane containing, in an intuitive form, all the spatial scale information needed to image a wide field. We thus call this space the widefield uv plane. Panels d.1) and d.2) display this space, where the minimum relevant spatial frequency is related to the total field of view, while the maximum one is related to the interferometer resolution.
Sections 3 and 4 show that applying the shift-and-average operator to V̄ produces the Fourier transform of a dirty image, which is a local convolution of the sky brightness by a slowly varying dirty beam. As a result, an inverse Fourier transform and deconvolution methods will produce a widefield distribution of sky brightness, as shown in panel e) at the top right of Fig. 2.
3. Beyond the Ekers & Rots scheme
In the real world, the visibility function is not only sampled, but this sampling is incomplete, for two main reasons. 1) The instrument has a finite spatial resolution, and the scanning of the sky is limited, implying that the sampling in both planes has a finite support. 2) The uv coverage and the sky-scanning coverage can have holes, caused either by intrinsic limitations (e.g. lack of short spacings or small number of baselines) or by acquisition problems (implying data flagging). The incomplete sampling makes the mathematics of the general case complex. We thus start with the ideal case, where we assume that the visibility function is continuously sampled along the u_{p} and α_{s} dimensions. We then look at the general case.
3.1. Ideal case: infinite, continuous sampling
Starting from the measurement Eq. (1), Ekers & Rots (1979) first demonstrated (see Sect. A.1) that^{2}

V̄(u_{p}, u_{s}) = B̄(−u_{s}) Ī(u_{p} + u_{s}).   (15)

For each constant u_{p} spatial frequency, the Fourier transform thus synthesizes a function, V̄(u_{p}, u_{s}), which is simply related to Ī(u_{p} + u_{s}), the Fourier components of the sky brightness around u_{p}. V̄(u_{p}, u_{s}) is only defined in the [−d_{prim}, +d_{prim}] interval along the u_{s} dimension because B̄(−u_{s}) is itself only defined inside this interval, since B̄ is the autocorrelation of the antenna illumination.
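The Ekers & Rots relation of Eq. (15) can be verified numerically on a periodic grid. In the sketch below (beam width, random sky, and grid sizes are illustrative assumptions), V(u_p, α_s) is evaluated from Eq. (1) for one u_p, Fourier transformed along α_s, and compared with B̄(−u_s) Ī(u_p + u_s):

```python
import numpy as np

# Numerical check of the Ekers & Rots relation, Eq. (15):
#   Vbar(u_p, u_s) = Bbar(-u_s) Ibar(u_p + u_s),
# where Vbar is the FT of V(u_p, alpha_s) along the scanned coordinate
# alpha_s at constant u_p. All grid values here are illustrative.

n, d = 256, 1.0 / 64                        # n sky samples, spacing d
alpha = (np.arange(n) - n // 2) * d
B = np.exp(-4 * np.log(2) * (alpha / 0.3) ** 2)   # Gaussian primary beam
I = np.random.default_rng(1).random(n)             # arbitrary sky

def ft(f):
    """Direct FT of Eq. (7) on a centred grid (Riemann sum via FFT)."""
    return np.fft.fft(np.fft.ifftshift(f)) * d

p = 5
u_p = p / (n * d)                            # one measured spatial frequency
# V(u_p, alpha_s): Eq. (1) evaluated for every scanned position alpha_s,
# with the beam shifted periodically to each pointing.
V = np.array([d * np.sum(np.roll(B, j - n // 2) * I *
                         np.exp(-2j * np.pi * u_p * alpha))
              for j in range(n)])

lhs = ft(V)                                  # FT along alpha_s, constant u_p
rhs = np.conj(ft(B)) * np.roll(ft(I), -p)    # Bbar(-u_s) Ibar(u_p + u_s)
print(np.allclose(lhs, rhs))
```

Because B is real, B̄(−u_s) is obtained as the complex conjugate of B̄(u_s), and the `np.roll` on the Fourier transform of I realizes the u_p + u_s shift on the periodic frequency grid.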
We seek to derive a single estimate of the Fourier components of the sky brightness. Equation (15) indicates that the ratio V̄(u_{p}, u_{s})/B̄(−u_{s}) gives us an estimate of Ī(u) for each couple (u_{p}, u_{s}) that satisfies u = u_{p} + u_{s}. However, the information about Ī is strangely ordered. There are two possible ways to look at this ordering. 1) Starting from the measurement space, the Ekers & Rots scheme synthesizes frequencies around each u_{p} measurement inside the interval [u_{p} − d_{prim}, u_{p} + d_{prim}], at the d_{field} spatial-frequency resolution. 2) Starting from our goal, we want to estimate Ī at a given spatial frequency u with a d_{field} spatial-frequency resolution. We thus search for all the couples (u_{p}, u_{s}) satisfying u = u_{p} + u_{s}, which are displayed in panel c.2) of Fig. 2 as the diagonal black lines. It immediately results that 1) there are several estimates of Ī for each spatial frequency u and 2) the number of estimates varies with u. We can average them to get a better estimate of Ī(u).
This last viewpoint thus suggests averaging in the (u_{p}, u_{s}) space along the line-paths defined by u = u_{p} + u_{s}. Such an operator can mathematically be defined as

⟨F⟩_{W}(u) ≡ ∬ W(u_{p}, u_{s}) F(u_{p}, u_{s}) δ(u − u_{p} − u_{s}) du_{p} du_{s},   (16)

where F is the function to be averaged and W is a normalized weighting function. Using the properties of the Dirac function, we can reduce the double integral to

⟨F⟩_{W}(u) = ∫ W(u − u_{s}, u_{s}) F(u − u_{s}, u_{s}) du_{s}.   (17)

In this equation, we easily recognize a shift-and-average operator. The normalized weighting function plays a critical role in the following formalism, and we propose clever ways to define W in Sect. 5. In the ideal case studied here, W can be defined as

for u_{s} in [ − d_{prim}, + d_{prim}] ,

W(u_{p},u_{s}) ≡ 0
for other values of u_{s}.
In other words, we have just normalized the integral by 2 d_{prim}, the constant length of the averaging line-path.
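A discrete version of the shift-and-average operator of Eqs. (16) and (17) can be sketched as follows; the grid sizes and the per-diagonal normalization of W are illustrative assumptions:

```python
import numpy as np

# Minimal discrete version of the shift-and-average operator, Eq. (17):
#   <F>(u) = \int W(u - u_s, u_s) F(u - u_s, u_s) du_s.
# F is tabulated on a (u_p, u_s) grid; every sample is pushed to
# u = u_p + u_s and averaged with weight W. Grid sizes are illustrative.

def shift_and_average(F, W, du_s):
    """F, W: 2-D arrays indexed [i_up, i_us]; returns <F> on the u axis."""
    n_up, n_us = F.shape
    out = np.zeros(n_up + n_us - 1, dtype=F.dtype)
    for i in range(n_up):
        for j in range(n_us):
            out[i + j] += W[i, j] * F[i, j] * du_s   # u index = i_up + i_us
    return out

# Sanity check: if F(u_p, u_s) = f(u_p + u_s) and W is normalized so that
# the weights sum to one along each diagonal u = u_p + u_s, <F> returns f.
n = 8
f = np.arange(2 * n - 1, dtype=float)
F = f[np.add.outer(np.arange(n), np.arange(n))]
W = np.zeros((n, n))
for k in range(2 * n - 1):
    idx = [(i, k - i) for i in range(n) if 0 <= k - i < n]
    for i, j in idx:
        W[i, j] = 1.0 / len(idx)
print(np.allclose(shift_and_average(F, W, du_s=1.0), f))
```

The varying number of terms per diagonal in this toy grid mirrors the fact, noted above, that the number of estimates of Ī varies with u.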
3.1.1. Widefield dirty image, dirty beam, and image-plane measurement equation
Section 3.2 shows that the incomplete sky and uv sampling forbid us to apply the shift-and-average operator to the V̄ function. To guide us in this general case, we thus explore the consequences of applying this operator to V̄ in the ideal case. It is easy to demonstrate that the result is the Fourier transform of a dirty image, i.e.,

⟨V̄⟩_{W}(u) = Ī_{dirty}(u).   (18)

Indeed, substituting ⟨V̄⟩_{W} with the help of Eqs. (17) and (15) and taking the inverse Fourier transform, we get

I_{dirty}(α) = (D ⋆ I)(α),   (19)

with

D̄(u) ≡ ∫ W(u − u_{s}, u_{s}) B̄(−u_{s}) du_{s}.   (20)

Here, I_{dirty} conforms to the usual idea of a dirty image, i.e., the convolution of a dirty beam with the sky brightness:

I_{dirty}(α) = ∫ D(α − α′) I(α′) dα′.   (21)

In contrast to the usual situation for single-field observations, the mix between a Fourier transform and a convolution in Eq. (1), associated with the specific processing^{3}, changes the image-plane measurement equation from the convolution of a dirty beam with the product B I to the convolution of a dirty beam with I. The dependency on the primary beam is still there; it is just transferred from a product with the sky brightness distribution into the definition of the dirty beam.
3.1.2. Summary and interpretation
In summary, a theoretical implementation of widefield synthesis implies

1. the possibility of Fourier transforming the visibility function along the α_{s} dimension (i.e. at constant u_{p}), which gives us a set of synthesized uv planes;

2. the possibility of shifting-and-averaging these synthesized uv planes to build the final, widefield uv plane containing all the available information.
Using those tools, we are able to write the widefield image-plane measurement equation as the convolution of a widefield dirty beam (D) with the sky brightness (I), i.e.,

I_{dirty}(α) = (D ⋆ I)(α).   (22)

We can write a convolution equation in this ideal case because the widefield response of the instrument is shift-invariant; i.e., D only depends on differences of the sky coordinates.
It is well known that, for a single-field observation, the dirty beam is the inverse Fourier transform of the sampling function. The shape of this sampling function results from the combination of aperture synthesis (the interferometer antennas give a limited number of independent baselines) and Earth-rotation synthesis (the rotation of the Earth changes the projection of the physical baselines onto the plane perpendicular to the instantaneous line of sight). By analyzing, via a Fourier transform, the evolution of the visibility function with the sky position, the Ekers & Rots scheme synthesizes visibilities at the spatial frequencies needed to image a field of view larger than the interferometer primary beam. We thus propose to call this specific processing widefield synthesis.
3.2. General case: incomplete sampling
Reality imposes limitations on the synthesis of spatial frequencies. Indeed, we have already stated that the visibility function is incompletely sampled in both the uv and sky planes. To take the sampling effects into account, we introduce the sampling function S(u_{p}, α_{s}), which is a sum of Dirac functions at the measured positions^{4}. The sampling function cannot be factored into the product of two functions, each acting on only one plane. Indeed, the Earth rotation happening during the source scanning implies a coupling of both dimensions of the sampling function. In other words, the uv coverage varies with the scanned sky coordinate. This leads us to a shift-dependent situation, precluding us from writing the widefield image-plane measurement equation as a true convolution. We nevertheless search for a widefield image-plane measurement equation as close as possible to a convolution, because all the inversion methods devised in the past three decades in radioastronomy are tuned to deconvolve images. The simplest mathematical way to generalize Eq. (22) to a shift-dependent situation is to write it as

I_{dirty}(α′) = ∫ D(α′ − α′′, α′′) I(α′′) dα′′.   (23)

In this section, we show how the linear character of the imaging process allows us to do this. Section 3.2.1 derives the impact of incomplete sampling on the Ekers & Rots equation, and Sect. 3.2.2 derives the widefield measurement equation in the uv plane. Section 3.2.3 interprets these results.
3.2.1. Effect on the Ekers & Rots equation
The sampled visibility function, SV, is defined as the product of S and V; its Fourier transform along α_{s} is noted \overline{SV}, i.e.,

SV(u_{p}, α_{s}) ≡ S(u_{p}, α_{s}) V(u_{p}, α_{s})   (24)

and

SV(u_{p}, α_{s}) ⊃^{α_s} \overline{SV}(u_{p}, u_{s}).   (25)

Because SV_{u_p} is the product of two functions of α_{s}, we can use the convolution theorem to show that \overline{SV}_{u_p} is the convolution of S̄_{u_p} by V̄_{u_p}, i.e.,

\overline{SV}(u_{p}, u_{s}) = ∫ S̄(u_{p}, u_{s} − u) V̄(u_{p}, u) du.   (26)

By replacing V̄ with the help of the Ekers & Rots relation (Eq. (15)), we derive

\overline{SV}(u_{p}, u_{s}) = ∫ S̄(u_{p}, u_{s} − u) B̄(−u) Ī(u_{p} + u) du.   (27)

As B̄ is bounded inside the [−d_{prim}, +d_{prim}] interval, \overline{SV}(u_{p}, u_{s}) is a local average, weighted by S̄_{u_p}, of the Ī Fourier components around the u_{p} spatial frequency.
As expected, we recover Eq. (15) for the ideal case (i.e., infinite, continuous sampling of the visibility function), because then S̄(u_{p}, u_{s}) = δ(u_{s}). A more interesting case arises when the visibility function is continuously sampled over a limited sky field of view, i.e.,

S(u_{p}, α_{s}) ≡ 1 for α_{s} ∈ [−θ_{field}/2, +θ_{field}/2]   (28)

and

S(u_{p}, α_{s}) ≡ 0 for other values of α_{s}.   (29)

After Fourier transform this gives

S̄(u_{p}, u_{s}) = θ_{field} sinc(θ_{field} u_{s}).   (30)

In this case, the local average of the sky brightness Fourier components happens on a typical uv scale equal to d_{field}. However, the sinc function is known to decay only slowly. Some observing strategy (e.g. quickly observing outside the edges of the targeted field of view to provide a band guard) could be considered to apodize the sky-plane dependence of the sampling function, resulting in faster-decaying functions, hence in less mixing of the widefield spatial frequencies.
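The slow sinc decay and the benefit of an apodizing band guard are easy to illustrate numerically. In the sketch below, the Hann taper is one arbitrary choice of apodization (an assumption for the demo, not a prescription from the paper), and all grid values are illustrative:

```python
import numpy as np

# A sharp-edged scan of width theta_field gives Sbar(u_s) ~ sinc, whose
# sidelobes decay only as 1/u_s, whereas a smoothly tapered scanning
# window (here a Hann taper) decays much faster, mixing widefield
# spatial frequencies less. Grid values are illustrative assumptions.

n = 4096
box = np.zeros(n)
box[n // 2 - 256: n // 2 + 256] = 1.0                 # sharp-edged scan
hann = np.zeros(n)
hann[n // 2 - 256: n // 2 + 256] = np.hanning(512)    # apodized scan

def far_sidelobe(w):
    """Worst normalized spectral sidelobe far from the main lobe."""
    spec = np.abs(np.fft.fft(np.fft.ifftshift(w)))
    spec /= spec[0]
    return spec[200: n // 2].max()

print(far_sidelobe(box) > 10 * far_sidelobe(hann))
```

In practice the taper would be realized by the extra scanning beyond the field edges, so that the scientifically useful field of view keeps uniform weight.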
3.2.2. uv-plane widefield measurement equation
Because we aim at estimating Ī, the Fourier components of the sky brightness, we introduce the changes of variables u ≡ u_{p} + u_{s} and, inside the integral of Eq. (27), u′ ≡ u_{p} + u, to derive

\overline{SV}(u_{p}, u_{s}) = ∫ S̄(u_{p}, u_{p} + u_{s} − u′) B̄(u_{p} − u′) Ī(u′) du′.   (31)

We then shift-and-average \overline{SV}(u_{p}, u_{s}) to build the Fourier transform of a widefield dirty image:

Ī_{dirty}(u) ≡ ⟨\overline{SV}⟩_{W}(u).   (32)

Substituting the shift-and-average operator by its definition (Eq. (17)) and using Eq. (31) to replace \overline{SV}, we derive

Ī_{dirty}(u) = ∫ [ ∫ W(u − u_{s}, u_{s}) S̄(u − u_{s}, u − u′) B̄(u − u_{s} − u′) du_{s} ] Ī(u′) du′.   (33)

This uv-plane widefield measurement equation can be written as

Ī_{dirty}(u) = ∫ D̄(u, u′) Ī(u′) du′   (34)

if we enforce the following equality:

D̄(u, u′) ≡ ∫ W(u − u_{s}, u_{s}) S̄(u − u_{s}, u − u′) B̄(u − u_{s} − u′) du_{s}.   (35)

This is one way to define D̄, which is convenient though unusual. It is implicit in this definition that we need to make the change of variable u′′ ≡ u − u′ to derive

D̄(u, u − u′′) = ∫ W(u − u_{s}, u_{s}) S̄(u − u_{s}, u′′) B̄(u′′ − u_{s}) du_{s}.   (36)

In the following, we use either one or the other definition of D̄, depending on convenience.
3.2.3. Interpretation
Appendix A.2 demonstrates that the image- and uv-plane widefield measurement equations (Eqs. (23) and (34)) are equivalent if

D̄(u, u′) = ∬ D(α′, α′′) e^{−2iπ[uα′ + (u − u′)α′′]} dα′ dα′′.   (37)

The image-plane widefield measurement equation (Eq. (23)) can be written as

I_{dirty}(α′) = ∫ D(α′ − α′′, α′′) I(α′′) dα′′.   (38)

Its interpretation is straightforward: the sky brightness distribution is convolved with a dirty beam, D(α′, α′′), which varies with the sky coordinate α′′. This raises the question of the rate of change of the dirty beam with the sky coordinate, which is addressed in Sects. 4.2 and 5.
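The shift-dependent convolution of Eq. (23) can be sketched discretely as follows; the Gaussian beam whose width drifts across the field is a purely illustrative assumption:

```python
import numpy as np

# Minimal sketch of the shift-dependent image-plane measurement equation,
# Eq. (23): the dirty image is a "local" convolution in which the dirty
# beam D(., alpha'') varies with the sky position alpha'' of each source.
# The drifting-width Gaussian beam is an illustrative assumption.

n = 128
x = np.arange(n) - n // 2                   # sky coordinate grid

def dirty_beam(lag, alpha_pp):
    """Beam versus lag, with a width that drifts with sky position."""
    width = 2.0 + 6.0 * (alpha_pp + n // 2) / n
    return np.exp(-0.5 * (lag / width) ** 2)

I = np.zeros(n)
I[30] = I[100] = 1.0                        # two point sources

# I_dirty(alpha') = sum over alpha'' of D(alpha' - alpha'', alpha'') I(alpha'')
I_dirty = np.zeros(n)
for k in np.flatnonzero(I):
    I_dirty += dirty_beam(x - x[k], x[k]) * I[k]

# The response is broader where the local dirty beam is wider.
w30 = int(np.sum(I_dirty[10:50] > 0.5))
w100 = int(np.sum(I_dirty[80:120] > 0.5))
print(w100 > w30)
```

Because the beam changes only slowly with α′′, deconvolution can still proceed with locally shift-invariant approximations, which is the point made in Sects. 4.2 and 5.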
4. Gridding by convolution and regular resampling
We want to Fourier transform the raw visibilities along the sky dimension (α_{s}) at some constant value of the u_{p} dimension. The raw data, however, are sampled on irregular grids in both the uv and sky planes. We need to grid the measured visibilities in both the uv and the sky planes before Fourier transformation, for different reasons. First, the gridding in the uv plane handles the variation of the spatial frequency as the sky is scanned, i.e., the difficulty and perhaps the impossibility of Fourier-transforming at a completely constant u_{p} value. Second, the gridding along the sky dimension allows the use of Fast Fourier Transforms. As usual, we grid through convolution and regular resampling.
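The gridding-by-convolution step can be sketched in one dimension as follows; the Gaussian kernel, the toy signal, and all grid parameters are illustrative assumptions (production imaging codes typically use prolate-spheroidal kernels plus a grid correction):

```python
import numpy as np

# Minimal sketch of gridding by convolution and regular resampling:
# irregularly sampled values are convolved with a compact kernel and
# evaluated on a regular grid, after which FFTs can be used. The kernel,
# the toy 1-D "visibility" signal, and all grid values are assumptions.

rng = np.random.default_rng(2)
x_irr = np.sort(rng.uniform(0.0, 10.0, 800))   # irregular sample positions
V_irr = np.cos(2 * np.pi * 0.4 * x_irr)        # toy measured signal

dx = 0.1
x_reg = np.arange(0.0, 10.0, dx)               # regular resampling grid
sigma = 0.15                                   # gridding-kernel width

num = np.zeros_like(x_reg)
den = np.zeros_like(x_reg)
for xk, vk in zip(x_irr, V_irr):
    w = np.exp(-0.5 * ((x_reg - xk) / sigma) ** 2)
    num += w * vk                              # convolution with the kernel
    den += w                                   # kernel normalization
V_reg = num / np.maximum(den, 1e-12)

# The dominant spatial frequency (0.4 cycles per unit) survives gridding.
spec = np.abs(np.fft.rfft(V_reg))
freqs = np.fft.rfftfreq(len(x_reg), d=dx)
k_peak = 1 + np.argmax(spec[1:])
print(np.isclose(freqs[k_peak], 0.4))
```

The kernel normalization shown here is one common choice; the smoothing it introduces is what Sect. 4.1.2 tracks through the gridded sampling function.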
4.1. Convolution
4.1.1. Definitions
We first define a gridding kernel that depends on both dimensions. This gridding kernel can be chosen as the product of two functions, one per dimension, which simplifies the following demonstrations (Eq. (39)). We then define the sampled visibility function gridded in both the uv and sky planes (Eqs. (40) and (41)). Finally, when assessing the impact of the gridding on the measurement Eq. (34), a new function (Eq. (42)) and its Fourier transforms naturally appear in the equations. Defining the Fourier-transform relationships of Eqs. (43) and (44), we easily derive Eqs. (45) and (46). Using these notations, Eqs. (47) and (48) give the corresponding quantities before gridding.
4.1.2. Conservation of the widefield measurement equation
Appendix A.3 demonstrates that the widefield dirty image is here again the convolution of the sky brightness I by a widefield dirty beam D^{α} or, in the Fourier plane, as written in Eq. (49), with the dirty beam given by Eq. (50), where Eq. (51) defines the gridded version of the generalized sampling function. We thus have equations that resemble those containing the sampling function alone, except for 1) the replacement of the generalized sampling function by its gridded version and 2) the way the variables are linked together, both in the gridding (Eq. (51)) and in the averaging (Eq. (50)).
4.2. Regular resampling
It is well known that too low a resampling rate in one space implies power aliasing in the conjugate space (see e.g. Bracewell 2000; Press et al. 1992). Aliasing must be avoided as much as possible because it folds power from outside the imaged region back into it. Table 3 defines the intervals of definition of the different functions we are dealing with (i.e., visibilities, primary beam, dirty image, and dirty beam), as well as the associated sampling rates needed to enforce Nyquist sampling. The boundary values of the definition intervals (u_{max} and α_{max}) are related to the sampling rates (∂α and ∂u, respectively) through

∂α = 1/(n_{samp} u_{max}) and ∂u = 1/(n_{samp} α_{max}),   (52)

where n_{samp} is an integer characterizing the sampling. Nyquist sampling implies n_{samp} = 2. However, slight oversampling (e.g. n_{samp} = 3) is often recommended because the measurements suffer from errors and the deconvolution is a nonlinear process. In this section, we examine the properties of the different functions to define their associated sampling rates.
Table 3. Definition intervals and associated sampling rates of the functions used.
4.2.1. The α_{s} sampling rate of the visibility function
When Fourier transforming the measurement Eq. (1) along the α_{s} axis, we derive the Ekers & Rots Eq. (15). This equation implies that (u_{p}, u_{s}) is bounded inside the [ − d_{prim}, + d_{prim}] spatial frequency interval along the u_{s} axis. As a result, the visibility function needs to be regularly resampled at a rate of only 0.5/d_{prim} to satisfy the Nyquist theorem, as first pointed out by Cornwell (1988). This sampling rate is equal to θ_{prim}/2 or ~θ_{fwhm}/2.4. The “usual, wrong” habit of sampling at θ_{fwhm}/2 is thus undersampling, with aliasing as a consequence. Mangum et al. (2007) discuss the consequences of undersampling in depth in the framework of singledish imaging.
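The factor of 2.4 can be checked numerically. The sketch below assumes the common approximation θ_{fwhm} ≈ 1.2 λ/d_{prim} (the 1.2 factor depends on the illumination taper); function names are ours.

```python
# Numeric check of the alpha_s sampling argument: the required rate is
# lambda / (2 d_prim) (i.e., theta_prim / 2 ~ theta_fwhm / 2.4), while
# the "usual" habit is theta_fwhm / 2. We assume theta_fwhm ~ 1.2
# lambda / d_prim, a typical value for a tapered illumination.
def required_sampling(lam, d_prim):
    return lam / (2.0 * d_prim)            # radians, along alpha_s

def habitual_sampling(lam, d_prim, k=1.2):
    return k * lam / d_prim / 2.0          # theta_fwhm / 2

lam, d = 1.3e-3, 15.0                      # 230 GHz, 15 m antennas
# habitual_sampling(lam, d) > required_sampling(lam, d): the habit
# undersamples the signal by the factor k (here 1.2).
```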
4.2.2. The U_{p} sampling rate of the visibility function
Now, the Fourier transform of the measurement Eq. (1) along the u_{p} axis gives Eq. (53), where Eq. (54) holds. We use the tilde sign under V to denote the inverse Fourier transform of V along its first dimension. A wellknown Fourier transform property implies that B has infinite support because its Fourier transform has a bounded support. The resampling rate along the u_{p} axis therefore depends on the properties of the product of B(α_{p} − α_{s}) times I(α_{p}) as a function of α_{p}. While no unique answer exists, three facts help us find the right sampling rate: 1) B falls off relatively quickly; 2) the result depends on the spatial distribution of the sky brightness, and in particular on the dynamic range in brightness needed to image it accurately; 3) the measurement at (α_{p}, α_{s}) has a limited accuracy owing to thermal noise, phase noise, and other possible systematics (e.g. pointing errors). For simplicity, we quantify the measurement accuracy by a single number, namely the maximum instrumental fidelity measured in the image plane, as defined in Pety et al. (2001). There are two cases:

1.
the maximum instrumental fidelity limits the dynamic range in brightness. For instance, Pety et al. (2001) showed that the fidelity of interferometric imaging at (sub)millimeter wavelengths will be limited to a few hundred. In this case, (α_{p}, α_{s}) aliasing can be tolerated when the amplitude of B is less than a fraction of the inverse of the maximum instrumental fidelity;

2.
the maximum instrumental fidelity is much greater than the image fidelity, as can be the case at centimeter wavelengths. In this case, (α_{p}, α_{s}) aliasing can only be tolerated when the amplitude of B is less than a fraction of the inverse of the dynamic range of the image.
The criterion derived in each case gives a typical image size (θ_{alias}), which can be converted into the desired u_{p} sampling rate. To be more quantitative, Fig. 3 models the normalized power patterns of an antenna illuminated by a Gaussian beam of 12.5 dB edge taper and with a given blockage factor (ratio of the secondarytoprimary diameters). The top panel presents an ideal case without a secondary mirror, while the middle and bottom panels present simple models of the ALMA and PdBI antennas. The largest angular size at which the power pattern drops below a given threshold is a first-order estimate of θ_{alias}/2 needed to reach a fidelity or dynamic range higher than the inverse of that threshold. Table 4 gives the values of θ_{alias}/θ_{fwhm} as a function of the targeted fidelity or dynamic range. This condition is sufficient but not necessary. Indeed, the aliasing properties also depend on the brightness distribution of the source.
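The construction of Fig. 3 can be sketched in one dimension as follows. This is a rough model of ours (function names, array sizes, and the padding factor are illustrative): a Gaussian-illuminated aperture with 12.5 dB edge taper and optional central blockage is Fourier transformed into a power pattern, and θ_{alias}/2 is estimated as the angle beyond which the pattern stays below the inverse of the targeted fidelity.

```python
import numpy as np

# Rough 1-D sketch of the Fig. 3 construction: Gaussian-illuminated
# aperture (12.5 dB edge taper in power), optional central blockage,
# Fourier transformed to a normalized power pattern.
def power_pattern(n=4096, taper_db=12.5, blockage=0.0, pad=8):
    x = np.linspace(-1.0, 1.0, n)             # radius in aperture units
    a = (taper_db / 20.0) * np.log(10.0)      # voltage taper coefficient
    illum = np.exp(-a * x**2)                 # edge power is -taper_db
    illum[np.abs(x) <= blockage] = 0.0        # secondary blockage
    beam = np.abs(np.fft.fftshift(np.fft.fft(illum, pad * n)))**2
    return beam / beam.max()

def alias_halfwidth(beam, fidelity):
    """Index offset from the peak beyond which the pattern stays below
    1/fidelity: a first-order estimate of theta_alias / 2."""
    half = beam[int(np.argmax(beam)):]
    above = np.nonzero(half >= 1.0 / fidelity)[0]
    return int(above[-1])
```

As in Table 4, a higher targeted fidelity pushes the tolerated aliasing edge further out.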
Fig. 3 Simple models of the antenna power patterns as a function of the sky angle in units of half the primary beam FWHM (θ_{fwhm}). In the 3 cases shown, the illumination is Gaussian with an edge taper of 12.5 dB but 3 different ratios of the secondarytoprimary diameters (i.e. f_{b}, the antenna blockage factors) are considered (see e.g. Goldsmith 1998, Chap. 6). The middle and bottom panels respectively model ALMA and PdBI antennas. The red lines define the minimum angular sizes for which the antenna power pattern is less than a given fraction. 

Table 4. Minimum sizes of the dirty beam images needed to reach an image fidelity or a dynamic range greater than a given value.
4.2.3. The u sampling rate of (u)
We have no guarantee that the sky outside the targeted field of view is devoid of signal, so the only way to ensure a given dynamic range inside the targeted field of view is to choose an image size large enough that the aliasing of potential outside sources is negligible. This means that the dirty image size must be equal to the fieldofview size plus the tolerable aliasing size (Eq. (55)). The conjugate uv distance and the associated uv sampling rate are then given by Eq. (56).
4.2.4. The u′ and u′′ sampling rates of (u′,u′′)
The u′′ axis must thus be sampled at the same rate as the second dimension of the definition space, i.e., as u_{s}. Moreover, u′ behaves in this equation like u ( = u_{p} + u_{s}), and must thus have the same sampling behavior as u. This sampling rate (∂u′ = d_{image}/n_{samp}) is quite high. Some deconvolution methods (see below) allow us to relax it.
4.3. Absence of gridding “correction”
Imaging of singlefield observations goes through the following steps: 1) convolution by a gridding kernel; 2) regular resampling; 3) fast Fourier transform; and 4) gridding “correction”. The socalled gridding “correction” is a division of the dirty beam and dirty image by the Fourier transform of the gridding kernel used in the initial convolution. This step is mandatory when imaging singlefield observations to keep the imageplane measurement equation a simple convolution equation (see e.g. Sramek & Schwab 1989). When imaging widefield observations, as proposed here, the Fourier transform along the α_{s} dimension, followed by the shiftandaverage operation, freezes the convolution kernel into the dirty beam of the widefield measurement equation. This is why the gridding “correction” step is irrelevant here.
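The four singlefield steps can be sketched in one dimension as follows. This is a minimal sketch with a Gaussian gridding kernel; names and normalizations are ours, and a real imager would use a compact-support kernel (e.g. a spheroidal function) rather than a full Gaussian.

```python
import numpy as np

# Minimal 1-D sketch of singlefield imaging: 1) grid the visibilities
# by convolution with a Gaussian kernel; 2-3) resample and FFT to the
# image plane; 4) apply the gridding "correction", i.e., divide by the
# kernel's analytic Fourier transform. In widefield synthesis this last
# division is skipped, the kernel being frozen into the dirty beam.
def image_single_field(u_list, vis_list, n_pix, du, sigma_u):
    centers = (np.arange(n_pix) - n_pix // 2) * du
    grid = np.zeros(n_pix, dtype=complex)
    for ui, vi in zip(u_list, vis_list):        # 1) gridding convolution
        grid += vi * np.exp(-0.5 * ((centers - ui) / sigma_u) ** 2)
    dirty = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(grid)))  # 2-3)
    alpha = np.fft.fftshift(np.fft.fftfreq(n_pix, du))
    taper = np.exp(-2.0 * (np.pi * sigma_u * alpha) ** 2)  # FT of kernel
    return dirty / taper                        # 4) gridding "correction"
```

For a single visibility at u = 0, the corrected image is flat (a point source response before sampling effects), which is the purpose of step 4.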
5. Dirty beams, weighting, and deconvolution
In radioastronomy, the dirty beam is the response of the interferometer to a point source. In the widefield synthesis framework, the response of the interferometer to a point source, D, a priori depends on the source position on the sky. D(α′, α′′) can thus be interpreted as a set of dirty beams, with each dirty beam referred to by its fixed α′′ sky coordinate. These simple facts raise several questions. What are the properties of the convolution kernel? Is it possible to modify these properties? How do we deconvolve the dirty image?
5.1. A set of widefield dirty beams
With the widefield synthesis framework proposed here, Appendix A.6 shows that Eq. (57) holds, with Eqs. (58) and (59). Δ(α_{p}, α_{s}) is the singlefield dirty beam associated with the uv sampling at the sky coordinate α_{s}; Ω(α′, α′′) will be called the image plane weighting function, while W(u′, u′′) is the uv plane weighting function. The set of widefield dirty beams D is then the double convolution of the image plane weighting function with the singlefield dirty beams, apodized by the primary beam at the current sky position α_{s}.
While the shape of the singlefield dirty beam is directly given by the Fourier transform of the sampling function, the shape of the widefield dirty beam depends, directly or through Fourier transforms, on the sampling function (S), the primary beam shape (B), and the weighting function (W). Moreover, the widefield dirty beam shape a priori varies slowly with the sky position, since it is basically constant over the primary beamwidth as stated in Sect. 4.2. It nevertheless varies, implying, for instance, a “slow” variation of the synthesized resolution over the whole field of view.
While the singlefield and widefield dirty beam expressions seem very different, they share the same property of expressing the way the interferometer is used to synthesize a telescope of larger diameter in the image plane. In other words, the sampling function for singlefield imaging and for widefield imaging express the sensitivity of the interferometer to a given spatial frequency. These uv space functions are called the transfer functions of the interferometer (Thompson et al. 1986, Chap. 5). Modifying the transfer function has a direct impact on the measured quantity. Once the interferometer is designed and the observations are done, the only way to change this transfer function is data weighting.
An ideal set of widefield dirty beams, D(α′, α′′), would have the following properties. All the widefield dirty beams should be identical (i.e., independent of the α′′ sky coordinate) and equal to a narrow Gaussian (its FWHM giving the image resolution). The ideal widefield transfer function would then be the product of a wide Gaussian of u′ by a Dirac function of u′′.
5.2. Dirty beam shapes and weighting
When imaging singlefield observations, giving a multiplicative weight to each visibility sample is an easy way to modify the shape of the dirty beam and thus the properties of the dirty and deconvolved images. Natural weighting (which maximizes signaltonoise ratio), robust weighting (which maximizes resolution), and tapering (which enhances brightness sensitivity at the cost of a lower resolution) are the most popular weighting techniques (see e.g. Sramek & Schwab 1989).
In the case of widefield synthesis, a multiplicative weight can also be attributed to each visibility sample before any processing. However, the weighting is also at the heart of the widefield synthesis because it is an essential part of the shiftandaverage operation. No constraint has been set on the weighting function up to this point, which indicates that the weighting function (W) gives us a degree of freedom in the imaging process. We look in turn at both kinds of weighting. In both cases, an obvious issue is the definition of the optimum weighting functions. As in the case of singlefield imaging, there is no single answer to this question. It depends on the conditions of the observation and on the imaging goals.
5.2.1. Weighting the measured visibilities
Natural weighting consists of slightly changing the definition of the sampling function: it is now set to a normalized natural weight where there is a measurement, and to 0 elsewhere. The natural weight is usually defined as the inverse of the thermal noise variance, computed from the radiometric equation, i.e., from the system temperature, the frequency resolution, and the integration time. Applying this weighting scheme before computing the first Fourier transform along the α_{s} sky dimension makes sense because the observing conditions (and thus the noise) vary from visibility to visibility.
We propose to generalize this weighting scheme to other observing conditions than just the system noise. Indeed, critical limitations of interferometric widefield imaging are pointing errors, tracking errors, atmospheric phase noise (in the (sub)millimeter domain), etc. While techniques exist for coping with these problems (e.g., water vapor radiometer, directiondependent gains: Bhatnagar et al. 2008), they are not perfect. The usual way to deal with the remaining problems is to flag the source data based on a priori knowledge of the problems, e.g., pointing measurement, tracking errors, rms phase noise on calibrators, etc. However, flagging involves the definition of thresholds, while reality is never black and white. It can thus be asked whether some weighting scheme could be devised to minimize the effect of pointing errors, tracking errors or phase noise on the resulting image. We propose to modulate natural weighting based on the a priori knowledge of the observing conditions.
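The proposed modulation of natural weighting could look like the following sketch. The Gaussian penalty in the rms pointing error is a hypothetical model of ours, chosen only to illustrate the idea of replacing a hard flagging threshold by a continuous down-weighting; the paper does not prescribe this form.

```python
import math

# Sketch of the proposed generalization of natural weighting: start
# from the radiometric weight 1/sigma^2, then down-weight visibilities
# according to a priori knowledge of the observing conditions. The
# Gaussian pointing-error penalty below is a hypothetical model, not a
# prescription from the text.
def modulated_weight(sigma_noise, pointing_rms, theta_fwhm):
    w_natural = 1.0 / sigma_noise**2          # inverse noise variance
    penalty = math.exp(-(pointing_rms / theta_fwhm) ** 2)
    return w_natural * penalty
```

Unlike flagging, a visibility with a moderate pointing error still contributes, just with reduced influence.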
5.2.2. Weighting the synthesized visibilities
Robust weighting or tapering of the measured visibilities does not make sense in widefield synthesis, because the dirty image is made from the synthesized visibilities obtained after the first Fourier transform along the α_{s} sky dimension. A weighting function W then appears naturally as part of the shiftandaverage operator. Its optimum value depends on the properties of the measured sampling function. Here are a few examples.

Infinite, continuous sampling.
This is the ideal case studied in Sect. 3.1. Knowing that the Ekers & Rots Eq. (15) links the quantity we want to estimate to many noisy^{5} measurements, (u_{p}, u_{s}), via a product by a factor assumed to be perfectly defined, we can invoke a simple leastsquares argument (see e.g. Bevington & Robinson 2003) to demonstrate that the optimum weighting function is given by Eq. (60), with w(u_{p}, u_{s}) the weight computed from the inverse of the noise variance of (u_{p}, u_{s}). Using Eq. (20), it is then easy to demonstrate that I_{dirty}(α) = I(α). The dirty image is a direct estimate of the sky brightness; i.e., deconvolution is superfluous.

Complete sampling.
The signal is Nyquist sampled, but it has a finite support in both the uv and sky planes, implying a finite synthesized resolution and a finite field of view. In contrast to the previous case, this one may have practical applications, e.g., observations done with ALMA in its compact configuration. Indeed, the large number of independent baselines, coupled with the design of the ALMA compact configuration, ensures a complete, almost Gaussian sampling for each snapshot. In this case, the best choice may be to select the weighting function so that all the dirty beams are identical to the same Gaussian function. The deconvolution would then also be superfluous.

Incomplete sampling.
This is the more general case, studied in Sect. 3.2. The signal not only has a finite support, but it is also undersampled (at least in the uv plane). The deconvolution is mandatory, and the choice of the weighting function will thus depend on the imaging goals. If the user needs the best signaltonoise ratio, some kind of natural weighting will be needed. It is tempting to use Eq. (60) as a natural weighting scheme. However, the main condition for the derivation of this weighting function, i.e., the Ekers & Rots Eq. (15), is no longer valid, as the noisy measured quantity is now linked to the quantity we want to estimate by a local average (see Eq. (31)). This is why it was more appropriate to aim for a Gaussian dirty beam shape in the complete sampling case. If the signaltonoise ratio is high enough, the user has two choices: either maximize the angular resolution with some kind of robust weighting, or aim for the most homogeneous dirty beam shape over the whole field of view. This last requirement cannot always be fully met: the Ekers & Rots scheme enables us to recover unmeasured spatial frequencies only in regions near measured ones, because the Fourier transform of the primary beam has a finite support.
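The leastsquares argument invoked for the continuous-sampling case can be checked numerically. The sketch below is our reconstruction of the form of Eq. (60) from the stated argument: with measurements V_i = B̃_i Ĩ + noise_i and weights w_i equal to the inverse noise variances, the inverse-variance estimator recovers Ĩ exactly in the noiseless limit.

```python
import numpy as np

# Least-squares sketch behind Eq. (60), for a single widefield spatial
# frequency u: each measurement is V_i = Btilde_i * Itilde + noise_i,
# with weight w_i the inverse noise variance. The weighting function
# W_i = w_i * conj(Btilde_i) / sum_j w_j |Btilde_j|^2 gives the optimal
# linear estimate (our reconstruction of the elided Eq. (60)).
def optimal_estimate(V, Btilde, w):
    V, Btilde, w = map(np.asarray, (V, Btilde, w))
    W = w * np.conj(Btilde) / np.sum(w * np.abs(Btilde) ** 2)
    return np.sum(W * V)
```

In the noiseless case the estimate equals Ĩ regardless of the weights, which is why the dirty image is then a direct estimate of the sky brightness.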
5.3. Deconvolution
Writing the imageplane measurement equation in a convolutionlike way is very interesting because the deconvolution methods developed over the past 30 years are optimized to solve such convolution equations (see e.g. Högbom 1974; Clark 1980; Schwab 1984; Narayan & Nityananda 1986). For instance, it should be possible to deconvolve Eq. (23) with only slight modifications to the standard CLEAN algorithms. Indeed, Eq. (23) can be interpreted as the convolution of the sky brightness by a set of dirty beams, so the only change, once a CLEAN component is found, is the need to pick the right dirty beam in this set in order to remove the CLEAN component from the residual image.
Following Clark (1980) and Schwab (1984), most algorithms today deconvolve in alternating minor and major cycles. During a minor cycle, a solution of the deconvolution is sought with a simplified (hence approximate) dirty beam. During a major cycle, the current solution is subtracted either from the original dirty image, using the exact dirty beam, or from the measured visibilities, implying a new gridding step. In both cases, the major cycles result in greater accuracy. Iterating minor and major cycles enables one to find an accurate solution with better computing efficiency. In our case, the approximate dirty beams used in the minor cycle could be 1) dirty beams of a much smaller size than the image; 2) a reduced set of dirty beams (i.e., guessing that the typical size scale over which the dirty beams vary with the sky coordinate is much larger than the primary beamwidth); or 3) both simultaneously. The model would be subtracted from the original visibilities before reimaging at each major cycle. The tradeoff is between the memory space needed to store a full set of accurate dirty beams and the time needed to image the data at each major cycle. Some quantitative analysis is needed to know how far the dirty beams can be approximated in the minor cycle.
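The suggested CLEAN adaptation can be sketched as a minimal 1-D Högbom minor loop: once a component is found at pixel k, the dirty beam associated with that sky position is shifted there and subtracted. For illustration the set below holds one beam per pixel; in practice a reduced set (beams varying on scales of the primary beamwidth or larger) would be stored, as discussed above.

```python
import numpy as np

# Sketch of a Hogbom CLEAN with a *set* of widefield dirty beams:
# beams[k] is the dirty beam for sky position k, centered at n // 2.
def widefield_hogbom(dirty, beams, gain=0.2, n_iter=100, threshold=0.0):
    residual = dirty.copy()
    model = np.zeros_like(dirty)
    n = len(dirty)
    for _ in range(n_iter):
        k = int(np.argmax(np.abs(residual)))   # brightest residual pixel
        if abs(residual[k]) <= threshold:
            break
        comp = gain * residual[k]
        model[k] += comp
        # Pick the dirty beam of position k, shift it there, subtract.
        residual -= comp * np.roll(beams[k], k - n // 2)
    return model, residual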
It is worth noting that the accuracy of the deconvolved image will be affected by edge effects. Indeed, the dirty brightness at the edges of the observed field of view is attenuated by the primary beam shape. When deconvolving these edges, the deconvolved brightness will be less precise, because the primary beam has a low amplitude there. This only affects the edges, because inside the field of view, every sky position should be observed a fraction of the time with a primary beam amplitude between 0.5 and 1. This edge effect is nevertheless expected to be much less troublesome than the inhomogeneous noise level resulting from standard mosaicking imaging (see Sect. 7.1).
6. Short spacings
6.1. The missing flux problem
Radio interferometers are bandpass instruments; i.e., they filter out not only the spacings longer than the largest baseline length but also the spacings shorter than the shortest baseline length, which is typically comparable to the diameter of the interferometer antennas. In particular, radio interferometers do not measure the visibility at the center of the uv plane (the socalled “zero spacing”), which is the total flux of the source in the measured field of view.
The lack of short baselines or short spacings has strong effects as soon as the size of the source is more than about 1/3 to 1/2 of the interferometer primary beam. Indeed, when the size of the source is small compared to the primary beam of the interferometer, the deconvolution algorithms use, in one way or another, the information of the flux at the lowest measured spatial frequencies for extrapolating the total flux of the source. The extreme case is a point source at the phase center for which the amplitude of all the visibilities is constant and equal to the total flux of the source: extrapolation is then exact. However, the larger the size of the source, the worse the extrapolation, which then underestimates the total source flux. This is the wellknown problem of the missing flux that observers sometimes note when comparing a source flux measured by a mm interferometer with the flux observed with a singledish antenna.
Widefield synthesis does not recover the full short spacings. Let us assume that the visibility function is continuously sampled from d_{min} to d_{max}, with d_{min} ~ 1.5 d_{prim}. The length of the averaging line path^{6}, L(u), can be interpreted as the number of measures that contribute to the estimation at a given spatial frequency u. Figure 4 shows the variations of the L(u) function when starting from a visibility function continuously defined in the [d_{min},d_{max}] interval along the u_{p} dimension. We can expect to recover information only inside the [d_{min} − d_{prim},d_{max} + d_{prim}] interval. In particular, information on short spacings lower than d_{min} − d_{prim} (e.g. the crucial zero spacing) cannot be recovered when using a homogeneous interferometer, and the short spacings in the interval [d_{min} − d_{prim},d_{min}] are recovered with increasing accuracy from d_{min} − d_{prim} to d_{min}. Both effects imply the need for complementary instruments to accurately measure the missing short spacings.
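The geometry behind Fig. 4 can be sketched as follows. This is our reconstruction of L(u) for a visibility function continuously sampled in [d_min, d_max] and a primary beam transform supported in [−d_prim, +d_prim]: L(u) is the length of the overlap between the measured u_p interval and the window of half-width d_prim centered on u.

```python
# Our reconstruction of the Fig. 4 quantity: L(u) is the length of the
# u_p interval that contributes to the estimate at widefield spatial
# frequency u, i.e., the overlap of [u - d_prim, u + d_prim] with the
# continuously sampled interval [d_min, d_max].
def averaging_path_length(u, d_min, d_max, d_prim):
    lo = max(u - d_prim, d_min)
    hi = min(u + d_prim, d_max)
    return max(hi - lo, 0.0)
```

L(u) vanishes below d_min − d_prim (the zero spacing stays unmeasured), ramps up between d_min − d_prim and d_min, and saturates at 2 d_prim in the well-measured range, matching the behavior described above.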
Fig. 4 Length of the averaging linepaths displayed as black lines in panel c.2) of Fig. 2, as a function of the spatial scale in the final, widefield uv plane. In the case of a continuous sampling of u_{p} between d_{min} and d_{max}, these quantities can be interpreted as the number of measures that contribute to the estimate of (u). 

6.2. Usual hardware and software solutions
To derive the correct result for larger source sizes, it is necessary to complement the interferometer data with additional data that contain the missing shortspacing information. The IRAM30 m singledish telescope is used to complement the Plateau de Bure Interferometer. Shortspacing information can also be partly recovered with a secondary array of smaller antennas and shorter baselines (e.g. the CARMA interferometer). In the ALMA project, the shortspacing information will be derived from a combination of four 12 m singledish antennas and an interferometer of twelve 7 m antennas called ACA (Atacama Compact Array).
From the software pointofview, two main families of algorithms exist in the standard processing of mosaics. Either the shortspacing information is combined with the deconvolved interferometer image (i.e., the interferometer data are imaged and deconvolved separately) through a hybridization in the Fourier plane (see e.g. Pety et al. 2001), or the long and shortspacing information is imaged and/or deconvolved jointly. In this second category, we find the pseudovisibility technique, which produces interferometriclike visibilities from singledish maps (see e.g. Pety et al. 2001; RodríguezFernández et al. 2008, and references therein), and the multiresolution deconvolution algorithms, which work on images containing different spatial frequency ranges.
In the next two sections, we show how widefield synthesis naturally processes the shortspacing information either from singledish or from heterogeneous arrays.
6.3. Processing short spacings from singledish measurements
The singledish measurement equation can be written as Eq. (61), where I_{sd} is the measured singledish intensity, S_{sd} the singledish sampling function, and B_{sd} the singledish antenna power pattern. As already stated in the introduction, the above integral is identical to the ideal measurement equation of interferometric widefield imaging taken at u_{p} = 0. If we define a singledish visibility function as in Eq. (62), we can thus write the measured singledish intensity as Eq. (63). The recognition that the singledish measurement equation is a particular case of the interferometric widefield measurement equation opens the way to treating both the singledish and interferometric data sets through exactly the same processing steps. We just have to define a hybrid sampling function, S_{hyb}, as the Fourier transform of the hybrid primary beam, and a hybrid weighting function, W_{hyb}. All the processing steps described in the previous sections (including a potential gridding step of singledish, onthefly data) can then be directly applied to the hybrid data set. Using the widefield synthesis formalism, we can easily write Eq. (70), with Eqs. (71) and (72). We thus see that the synthesized visibility is a linear combination of the information measured by the singledish antenna and by the interferometer. Here, W_{sd}(u) plays a particular role for two reasons. First, its dependency on the spatial frequency (u) enables us to filter out the highest spatial frequencies, which are measured by the singledish antenna with low accuracy. Second, it is wellknown that the relative weight of the singledish to interferometric data is a critical parameter in the processing of the short spacings from singledish data (see e.g. RodríguezFernández et al. 2008). This relative weight is a free parameter within the restrictions set by the noise level (i.e., we want the singledish data to bring information, and not just noise, to the interferometric data), and a criterion must therefore be defined to adjust it to an optimal value.
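The linear combination of Eqs. (70)-(72) can be sketched as follows. The Gaussian taper used for W_sd and the relative-weight parameter are illustrative choices of ours, not values prescribed by the text; the point is that W_sd(u) both weights and low-pass filters the singledish contribution.

```python
import numpy as np

# Sketch of the hybrid synthesized visibility of Eqs. (70)-(72): at
# each widefield spatial frequency u, mix the singledish and
# interferometric estimates. W_sd(u) low-pass filters the poorly
# measured high spatial frequencies of the singledish data; its shape
# and the relative_weight parameter are illustrative assumptions.
def hybrid_visibility(u, V_sd, V_int, d_sd, relative_weight=1.0):
    W_sd = relative_weight * np.exp(-(np.abs(u) / (0.5 * d_sd)) ** 2)
    W_int = np.where(np.isnan(V_int), 0.0, 1.0)  # unmeasured -> weight 0
    norm = W_sd + W_int
    return (W_sd * V_sd + W_int * np.nan_to_num(V_int)) / norm
```

Where the interferometer has no measurement (encoded here as NaN) the singledish estimate carries the full weight; at high u the singledish weight is negligible.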
We refer the reader to the discussion of Sect. 5, which also applies here.
Definition of the symbols used to describe the processing of the short spacings.
6.4. Processing short spacings from heterogeneous arrays
A heterogeneous array is an interferometer composed of antennas of different diameters; ALMA and CARMA are two such examples. The measurement equation for a heterogeneous array is Eq. (73), where b_{i} and b_{j} are the voltage reception patterns of the antenna pair that forms the ij baseline and the asterisk denotes the complex conjugate (Thompson et al. 1986, Chap. 3). The formalism developed in the previous sections holds as long as we redefine the primary beam as in Eq. (74). A simple application of the correlation theorem then implies Eq. (75). The use of the baseline indices ij must be generalized throughout the equations because the knowledge of the antenna type must be attached to each individual data point (visibility). As a result, the widefield synthesis formalism can easily be adapted to heterogeneous arrays at the price of additional bookkeeping.
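For a rough feel of the Eq. (74) redefinition, Gaussian models of the voltage patterns give the heterogeneous-baseline primary beam in closed form: the product of two Gaussians is a Gaussian whose inverse-square widths add. The Gaussian shapes and the 1.2 illumination factor are assumptions of ours, not results from the text.

```python
import math

# Rough Gaussian model for the heterogeneous-baseline primary beam of
# Eq. (74): with Gaussian voltage patterns b_i and b_j, the product
# b_i * conj(b_j) is again a Gaussian and the inverse-square FWHMs add.
def gaussian_fwhm(lam, d, k=1.2):
    return k * lam / d                    # crude beam width, radians

def cross_beam_fwhm(theta_i, theta_j):
    return 1.0 / math.sqrt(1.0 / theta_i**2 + 1.0 / theta_j**2)
```

For an ALMA-ACA 12 m x 7 m baseline, the effective beam is narrower than either single-antenna beam, which is part of the bookkeeping mentioned above.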
6.5. Two textbook cases: IRAM30 m + PdBI and ALMA + ACA
Figure 5 sketches why widefield synthesis naturally handles the short spacings in two textbook cases. In the ideal case, the Fourier transform along the α_{s} dimension produces visibilities that are related to the widefield spatial frequencies of the source brightness weighted by the transfer function of the interferometer. In this sense, Fig. 5 displays the natural weighting of the synthesized widefield visibilities at the position of each measured visibility. Handling visibilities from antennas of different sizes just implies that the natural weighting function of the synthesized visibilities will have a different shape.
Fig. 5 Sketches of the natural weighting of the synthesized widefield visibilities. Each measured spatial frequency will produce widefield spatial frequencies apodized by the transfer function centered on the measured spatial frequency. The transfer function depends on the telescopes used, which explains why widefield synthesis naturally handles the short spacings either from a singledish antenna or from a heterogeneous array. The synthesized visibilities in the overlapping regions will then be averaged. Two textbook examples are illustrated: 1) the combination of data from the IRAM30 m singledish (red transfer function) and from the Plateau de Bure Interferometer (black transfer functions) at the top; and 2) the combination of data from ALMA 12 m antennas used either in singledish mode (red transfer function) or in interferometric mode (black transfer functions), and of data from the ACA 7 m antennas (blue transfer functions) at the bottom. The minimum uv distances measured by each interferometer were set from the minimum possible distance between antennas (24 m for PdBI, 15 m for ALMA and 9 m for ACA).

The top panel of Fig. 5 displays how the IRAM30 m singledish is used to complement the Plateau de Bure interferometer visibilities. The bottom panel displays how ACA is used to produce the short spacing information for ALMA. The four 12 m antennas will provide the singledish information, while the 12 additional 7 m antennas will form with ALMA a heterogeneous array. In the first design, ACA and ALMA form two independent interferometers; i.e., they are not crosscorrelated. The singledish antennas, ACA and ALMA thus appear as three different instruments. It is thus possible to decompose the hybrid set of widefield dirty beams, obtained by processing the 3 sets of data together, into 3 different sets of dirty beams (Eq. (76)), with Eq. (77). For a multiplying interferometer, Eq. (78) holds. This implies that the singledish term contributes at u_{p} = 0 in the sum over u_{p} in Eq. (77), the ACA term contributes for 9 m < u_{p} ≲ 40 m, and the ALMA term contributes for 15 m < u_{p} < 150 m in the most compact configuration of ALMA.
7. Comparison with standard nonlinear mosaicking
7.1. Mosaicking in a nutshell
Several excellent descriptions of the mosaicking imaging and deconvolution algorithms can be found (see e.g. Cornwell 1988; Cornwell et al. 1993; Sault et al. 1996b). Here, we summarize the approach implemented in the gildas/mapping software used to image and deconvolve the data from the Plateau de Bure Interferometer. This approach is based on original ideas by F. Viallefond in the early 90s (Gueth et al. 1995).
The basic ideas of nonlinear mosaicking are 1) imaging the different fields of the mosaic independently; 2) linearly adding the singlefield dirty images into a dirty mosaic; and 3) jointly deconvolving the dirty mosaic.
7.1.1. Singlefield imaging
For simplicity, we skip the gridding convolution in the following equations because the gridding step does not change the nature of the equations. Imaging the fields individually means that we work at constant α_{s}. We first define the singlefield dirty image of the α_{s} field as Eq. (79), where the Fourier transform of the singlefield dirty image is the product of the sampling function S(u_{p},α_{s}) and the visibility function V(u_{p},α_{s}) (Eq. (80)). From the previous equations, it is easily demonstrated that Eq. (81) holds, where the singlefield dirty beam is defined as Eq. (82). We can rewrite the previous equation as Eq. (83), meaning that the singlefield dirty images can be written as a local convolution of B^{αs}I with Δ^{αs}, the singlefield dirty beam associated with the currently imaged field.
7.1.2. Mosaicking the dirty images
In gildas/mapping^{7}, the singlefield dirty images are formed on the same grid (in particular, the same pixel size and the same image size, covering about twice the mosaic field of view). These singlefield dirty images are then linearly averaged as Eq. (84), where Eq. (85) holds and w(α_{s}) is the sky plane weighting function (Eq. (86)). In the previous equation, the α_{i} are the positions of each skyplane measurement, and σ_{i} is the rms noise associated with it. Cornwell et al. (1993) demonstrate that the noise in the mosaic image naturally varies across the field as Eq. (87). In particular, it rises sharply at the edges of the mosaic.
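The linear mosaic and its noise map (Eqs. (84)-(87)) can be sketched in one dimension as follows. Normalization conventions are ours and vary between packages; the point is the primary-beam and inverse-variance weighted average and the noise rising toward the mosaic edges.

```python
import numpy as np

# 1-D sketch of the linear mosaic of Eqs. (84)-(87): singlefield dirty
# images are combined with primary-beam and inverse-variance weights;
# the resulting noise map rises sharply at the mosaic edges.
def linear_mosaic(alpha, dirty_fields, centers, sigmas, theta_fwhm):
    num = np.zeros_like(alpha)
    den = np.zeros_like(alpha)
    for img, a0, sig in zip(dirty_fields, centers, sigmas):
        B = np.exp(-4.0 * np.log(2.0) * ((alpha - a0) / theta_fwhm) ** 2)
        num += B * img / sig**2       # beam- and noise-weighted data
        den += B**2 / sig**2          # normalization
    mosaic = num / den
    noise = 1.0 / np.sqrt(den)        # rms noise across the mosaic
    return mosaic, noise
```

The noise map is exactly what the joint deconvolution below uses to build the signaltonoise map searched by CLEAN.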
7.1.3. Joint deconvolution
Standard algorithms of singlefield deconvolution must be adapted to the mosaicking case because both the dirty beam and the noise vary across the mosaic field of view. We describe here the adaptations made in gildas/mapping of the simplest CLEAN deconvolution method, described in Högbom (1974). Adaptations of more advanced CLEAN deconvolution methods are implemented following the same basic rules.

1.
We first initialize the residual and signaltonoise maps from the dirty and noise maps (Eqs. (88) and (89)).

2.
The kth CLEAN component is sought on the SNR_{k − 1} map instead of the R_{k − 1} map to ensure that noise peaks at the edges of the mosaic are not confused with the true signal of the same magnitude.

3.
Given that the kth CLEAN component is a point source of intensity I_{k} at position α_{k}, the residual and signaltonoise maps are then updated following Eqs. (90) and (91). Here, γ (~0.2) is the usual loop gain that ensures the convergence of the CLEAN algorithms.

4.
Steps 2 and 3 are iterated as long as the stopping criterion is not met.
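Steps 1-4 above can be sketched as follows. This is a minimal 1-D version with a single, shift-invariant dirty beam and no stopping criterion; its point is the search on the signaltonoise map rather than on the residual map.

```python
import numpy as np

# Sketch of the gildas/mapping joint-deconvolution loop: the CLEAN
# component is searched on the signaltonoise map SNR = R / noise, so
# that amplified noise at the mosaic edges is not mistaken for signal;
# the residual map R is then updated with the (here single,
# shift-invariant) dirty beam centered at n // 2.
def mosaic_hogbom(dirty, noise, beam, gain=0.2, n_iter=100):
    R = dirty.copy()
    model = np.zeros_like(dirty)
    n = len(dirty)
    for _ in range(n_iter):
        k = int(np.argmax(np.abs(R / noise)))  # search on SNR, not on R
        comp = gain * R[k]
        model[k] += comp
        R -= comp * np.roll(beam, k - n // 2)  # subtract shifted beam
    return model, R
```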
7.1.4. Widefield measurement equation
To help the comparison between mosaicking and widefield synthesis, we now go one step further than is usually done in the description of mosaicking; i.e., we write the imageplane measurement equation as a widefield measurement equation of the same kind as Eq. (23). Substituting Eq. (81) into Eq. (84) and reordering the terms after inverting the order of the sums over α_{s} and α_{p}, one obtains Eq. (92), with Eq. (93). Taking the inverse Fourier transform of D_{mos}, we get the mosaicking transfer function of Eq. (94), with Eq. (95).
7.2. Comparison
While both mosaicking and widefield synthesis produce imageplane measurement equations of the same kind (see Eqs. (23) and (92)), the comparison of the dirty beams (Eqs. (57) and (93)) and of the transfer functions (Eqs. (35) and (94)) immediately shows the different dependencies on the primary beams (B), the singlefield dirty beams (Δ), the imageplane weighting functions (Ω), and their respective Fourier transforms. This means that mosaicking is not mathematically equivalent to widefield synthesis, though both methods recover the sky brightness. These differences come directly from the differences in the processing. If we momentarily forget the gridding steps, mosaicking starts with a Fourier transform along the u_{p} dimension of the visibility function, so that most of the processing happens in the sky plane, while widefield synthesis starts with a Fourier transform along the α_{s} dimension, so that most of the processing happens in the uv plane.
Moreover, neither method is reducible to the other. Widefield synthesis gives a more complex dirty beam formulation in the image plane, which could give the impression that it is a generalization of mosaicking. Indeed, the widefield imageplane weighting function can be chosen as the product of a Dirac function of α′ times a function ω of α′′ (Eq. (96)). This implies a widefield uvplane weighting function independent of u′; i.e., W(u′, u′′) = ω̂(u′′). This choice is a clear limitation because it enables us to influence the transfer function only locally (around each measured u_{p} spatial frequency), while weighting is generally intended to influence the transfer function globally (see Sect. 5). Either way, in this case, the widefield dirty beam can easily be simplified to Eq. (97). While this simplified formulation of the widefield dirty beam is closer to the mosaicking formulation, they still differ in a major way: ω(α′′ − α_{s}) is a shiftinvariant function, contrary to Ω_{mos}(α′′, α_{s}). It is this shiftdependent property of Ω_{mos}(α′′, α_{s}) that implies the additional complexity (an integral over u_{s} in addition to the integral over u_{p}) of the mosaicking transfer function (Eq. (94)) compared to the widefield one (Eq. (35)).
One main difference between the two processing methods is that standard mosaicking prescribes a precise weighting function, while we argue that the widefield weighting function should be defined according to the context (see Sect. 5). Another important difference is the treatment of the short spacings, which are naturally processed in the widefield synthesis methods, but which need a very specific treatment in mosaicking (see Sect. 6 and references therein). Finally, while mosaicking implies a gridding of only the u_{p} dimension of the measured visibilities, widefield synthesis naturally requires a gridding of both the u_{p} and α_{s} dimensions. As the Nyquist sampling rate along the α_{s} dimension is only 0.5/d_{prim}, the gridding of the sky plane can result in a large reduction of the data storage space and cpu processing cost when processing onthefly and/or multibeam observations.
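As an illustration of the potential compression, the following back-of-the-envelope estimate compares the raw onthefly dump spacing with the Nyquist cell along α_{s}. Every numerical value below (wavelength, dish size, dump time, slew speed) is an assumption chosen for the example, not a figure from the text.

```python
# Back-of-the-envelope compression factor from gridding the alpha_s
# dimension at its Nyquist rate (0.5 / d_prim) for an onthefly
# observation. All numbers are illustrative assumptions.
wavelength = 1.3e-3          # m (230 GHz, assumed)
d_prim = 12.0 / wavelength   # antenna diameter in wavelengths (12 m dish)
nyquist_cell = 0.5 / d_prim  # rad, Nyquist cell along alpha_s

dump_time = 0.1              # s per correlator dump (assumed)
v_slew = 10.0 / 206265.0     # rad/s (10 arcsec/s scanning, assumed)
dump_spacing = v_slew * dump_time   # angular distance between raw dumps

compression = nyquist_cell / dump_spacing
print(f"raw dump spacing : {dump_spacing * 206265:.3f} arcsec")
print(f"Nyquist cell     : {nyquist_cell * 206265:.3f} arcsec")
print(f"dumps averaged per sky cell: {compression:.1f}")
```

With these assumed numbers, of order ten raw dumps fall into each Nyquist sky cell, which is the origin of the storage and cpu savings mentioned above.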
8. Summary
Interferometric widefield imaging implies scanning the sky in one way or another (e.g. stopandgo mosaicking, onthefly scanning, sampling of the focal plane by multibeams). This produces sampled visibilities SV, which depend on both the uvplane and sky coordinates (e.g., u_{p} and α_{s}).
Based on an original idea by Ekers & Rots (1979), we have proposed a new way to image the interferometric widefield sampled visibilities, SV(u_{p}, α_{s}). After gridding the measured visibilities both in the uv and sky planes, the gridded visibilities are Fouriertransformed along the α_{s} sky dimension, yielding synthesized visibilities sampled on a uv grid whose cell size is related to the total field of view; i.e., it is much finer than the diameter of the interferometer antennas. We thus proposed calling this processing scheme “widefield synthesis”.
The Fourier transform is performed for each constant u_{p} value. This produces as many independent estimates of the uv plane as there are independent measured values of u_{p}. A shiftandaverage operator is then used to build a final, widefield uv plane, which translates into a widefield dirty image after inverse Fourier transform, i.e., Eq. (98), where W is a normalized weighting function. Using these tools, we demonstrated that:
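The two steps just described (a Fourier transform along α_{s} at constant u_{p}, then a shiftandaverage onto a common uv grid) can be sketched as follows in one dimension. This is an illustrative toy, not the pipeline of the paper: the uniform default weights and the nearest-cell accumulation are simplifying assumptions.

```python
import numpy as np

def widefield_synthesis(vis, up_axis, alpha_s_axis, weight=None):
    """Sketch of the shift-and-average step, assuming
    vis[i, j] = SV(u_p[i], alpha_s[j]) on regular grids.

    1. FFT along the alpha_s dimension at constant u_p, giving
       synthesized visibilities at frequencies u_s.
    2. Accumulate each sample onto the widefield frequency
       u = u_p + u_s with a normalized weight.
    """
    n_up, n_as = vis.shape
    d_alpha = alpha_s_axis[1] - alpha_s_axis[0]
    # Fourier transform along the scanned (sky) dimension
    vhat = np.fft.fftshift(np.fft.fft(vis, axis=1), axes=1) * d_alpha
    us_axis = np.fft.fftshift(np.fft.fftfreq(n_as, d_alpha))
    if weight is None:
        weight = np.ones_like(vhat, dtype=float)
    # shift-and-average onto the final widefield uv grid
    du = us_axis[1] - us_axis[0]
    u_min = up_axis.min() + us_axis.min()
    n_u = int(round((up_axis.max() + us_axis.max() - u_min) / du)) + 1
    acc = np.zeros(n_u, dtype=complex)
    wsum = np.zeros(n_u)
    for i, up in enumerate(up_axis):
        for j, us in enumerate(us_axis):
            k = int(round((up + us - u_min) / du))  # nearest widefield cell
            acc[k] += weight[i, j] * vhat[i, j]
            wsum[k] += weight[i, j]
    ok = wsum > 0
    acc[ok] /= wsum[ok]                  # normalized weighted average
    u_axis = u_min + du * np.arange(n_u)
    return u_axis, acc
```

The `weight` array plays the role of the W function of Eq. (98): cells where several (u_{p}, u_{s}) pairs overlap are averaged with normalized weights.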

1. The dirty image is a convolution of the sky brightness distribution (I) with a set of widefield dirty beams varying with the sky coordinate α, i.e., Eq. (99). Compared to singlefield imaging, the dependence on the primary beam is transferred from a product with the sky brightness distribution into the definition of the set of widefield dirty beams.

2. The set of gridded dirty beams can be computed from the ungridded sampling function (S), the transfer function (the inverse Fourier transform of the primary beam), and the gridding convolution kernel (see Eqs. (42), (50) and (51)).

3. The dependence of the widefield dirty beams on the sky position is slowly varying: their shape changes on an angular scale typically larger than or equal to the primary beamwidth. Adaptations of the existing deconvolution algorithms should therefore be straightforward.
A comparison with standard nonlinear mosaicking shows that mosaicking is not mathematically equivalent to the widefield synthesis proposed here, though both methods do recover the sky brightness. The main advantages of widefield synthesis over standard nonlinear mosaicking are:

1. Weighting is at the heart of widefield synthesis because it is an essential part of the shiftandaverage operation. Indeed, not only can a multiplicative weight be attributed to each visibility sample before any processing, but the uvplane weighting function (W, see Eq. (98)) is also a degree of freedom, which should be set according to the conditions of the observation and the imaging goals, e.g. highest signaltonoise ratio, highest resolution, or most uniform resolution over the field of view. The W weighting function thus enables us to modify the widefield response of the instrument. In contrast, mosaicking requires a precise weighting function in the image plane, which freezes the widefield response of the interferometer.

2. Widefield synthesis naturally processes the short spacings from both singledish antennas and heterogeneous arrays along with the long spacings. Both can then be jointly deconvolved.

3. The gridding of the sky plane dimension of the measured visibilities, required by widefield synthesis, may potentially save large amounts of harddisk space and cpu processing power relative to mosaicking when handling data sets acquired with the onthefly observing mode. Widefield synthesis could thus be particularly well suited to processing onthefly observations.
The widefield synthesis algorithm is compatible with the uvwunfaceting technique devised by Sault et al. (1996b) to deal with the celestial projection effect, known as noncoplanar baselines (see Appendix B). Finally, onthefly observations imply an elongation of the primary beam along the scanning direction, which may limit the dynamic range of the image brightness if the primary beam sampling rate is too coarse; this effect can be decreased by increasing the primary beam sampling rate (see Appendix C).
The convolution theorem, which states that the Fourier transform of the convolution of two functions is the product of the Fourier transforms of the individual functions, is a special case of Eq. (15). Indeed, as already mentioned in the introduction, the ideal measurement Eq. (1) can be interpreted as a convolution with an additional phase term. By Fourier transforming along the α_{s} dimension, the convolution translates into a product of Fourier transforms, while the phase term translates into a shift of coordinates.
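This special case is easy to check numerically with the discrete (circular) convolution, for which the same theorem holds exactly:

```python
import numpy as np

# Numerical check of the convolution theorem for the discrete
# (circular) convolution: FFT(f conv g) == FFT(f) . FFT(g).
rng = np.random.default_rng(0)
f = rng.standard_normal(64)
g = rng.standard_normal(64)

# circular convolution computed directly from its definition:
# conv[k] = sum_n f[n] g[(k - n) mod N]
conv = np.array([np.sum(f * np.roll(g[::-1], k + 1)) for k in range(64)])

lhs = np.fft.fft(conv)
rhs = np.fft.fft(f) * np.fft.fft(g)
assert np.allclose(lhs, rhs)
```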
See http://www.iram.fr/IRAMFR/GILDAS for more information about the GILDAS software.
Acknowledgments
This work has mainly been funded by the European FP6 “ALMA enhancement” grant. This work was also funded by grant ANR09BLAN023101 from the French Agence Nationale de la Recherche as part of the SCHISM project. The authors thank F. Gueth for the management of the onthefly working package of the “ALMA enhancement” project. They also thank S. Guilloteau, R. Lucas and J. Uson for useful comments at various stages of the manuscript and D. Downes for editing their English. They finally thank the referee, B. Sault, for his insightful comments, which challenged us to try to write a better paper.
References
Bevington, P. R., & Robinson, D. K. 2003, Data Reduction and Error Analysis for the Physical Sciences, 3rd edn. (McGrawHill)
Bhatnagar, S., & Cornwell, T. J. 2004, A&A, 426, 747
Bhatnagar, S., Cornwell, T. J., Golap, K., & Uson, J. M. 2008, A&A, 487, 419
Bracewell, R. N. 2000, The Fourier Transform and its Applications, 3rd edn. (McGrawHill)
Clark, B. G. 1980, A&A, 89, 377
Cornwell, T. J. 1988, A&A, 202, 316
Cornwell, T. J., Holdaway, M. A., & Uson, J. M. 1993, A&A, 271, 697
Cornwell, T. J., Golap, K., & Bhatnagar, S. 2008, IEEE Journal of Selected Topics in Signal Processing, 2, 647
Cotton, W. D., & Uson, J. M. 2008, A&A, 490, 455
D’Addario, L. R., & Emerson, D. T. 2000, OnTheFly Fringe Tracking, ALMA memo, 331
Ekers, R. D., & Rots, A. H. 1979, in Image Formation from Coherence Functions in Astronomy, ed. C. van Schooneveld (D. Reidel), IAU Proc., 49, 61
Frater, R. H., & Docherty, I. S. 1980, A&A, 84, 75
Goldsmith, P. F. 1998, Gaussian Beam, Quasioptical Propagation and Applications (IEEE Press)
Gueth, F., Guilloteau, S., & Viallefond, F. 1995, in The XXVIIth Young European Radio Astronomers Conference, ed. D. A. Green & W. Steffen, 8
Hamaker, J. P., Bregman, J. D., & Sault, R. J. 1996, A&AS, 117, 137
Högbom, J. A. 1974, A&AS, 15, 417
Holdaway, M. A., & Foster, S. M. 1994, OnTheFly Mosaicing, ALMA memo, 122
Mangum, J. G., Emerson, D. T., & Greisen, E. W. 2007, A&A, 474, 679
Narayan, R., & Nityananda, R. 1986, ARA&A, 24, 127
Pety, J., Gueth, F., & Guilloteau, S. 2001, Impact of ACA on the WideField Imaging Capabilities of ALMA, ALMA memo, 398
Press, W. H., Teukolsky, S. A., Vetterling, W. T., & Flannery, B. P. 1992, Numerical Recipes in C, 2nd edn. (Cambridge University Press)
RodríguezFernández, N. J., Pety, J., & Gueth, F. 2008, Singledish observation and processing to produce the shortspacing information for a millimeter interferometer, IRAM memo, 20082
RodríguezFernández, N. J., Pety, J., & Gueth, F. 2009, Imaging of interferometric OnTheFly observations: Context and discussion of possible methods, IRAM memo, 20092
Sault, R. J., Hamaker, J. P., & Bregman, J. D. 1996a, A&AS, 117, 149
Sault, R. J., StaveleySmith, L., & Brouw, W. N. 1996b, A&AS, 120, 375
Sault, R. J., Bock, D. C.J., & Duncan, A. R. 1999, A&AS, 139, 387
Schwab, F. R. 1984, AJ, 89, 1076
Sramek, R. A., & Schwab, F. R. 1989, Synthesis Imaging in Radio Astronomy, Conf. Ser. (ASP), 117
Thompson, A. R., Moran, J. M., & Swenson, G. W. J. 1986, Interferometry and Synthesis in Radio Astronomy (John Wiley & Sons)
Appendix A: Demonstrations
A.1. Ekers & Rots scheme
Fouriertransforming the visibility function along the α_{s} dimension at constant u_{p} and making simple replacements, we then use the change of variables β ≡ α_{p} − α_{s}, with dβ = −dα_{s}, to obtain the Ekers & Rots relation.
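Since the displayed equations of this derivation were lost in extraction, the following is a hedged reconstruction of the Ekers & Rots calculation under an assumed sign convention (the paper's actual conventions may place the signs differently):

```latex
% Reconstruction under assumed sign conventions; not the paper's
% original display equations.
\begin{align}
  V(u_p,\alpha_s)
    &= \int B(\alpha_p-\alpha_s)\, I(\alpha_p)\,
       e^{-2i\pi\,\alpha_p u_p}\, d\alpha_p, \\
  \widehat{V}(u_p,u_s)
    &\equiv \int V(u_p,\alpha_s)\, e^{-2i\pi\,\alpha_s u_s}\, d\alpha_s \\
    &= \iint B(\beta)\, I(\alpha_p)\,
       e^{-2i\pi\,\alpha_p (u_p+u_s)}\, e^{+2i\pi\,\beta u_s}\,
       d\beta\, d\alpha_p
       \qquad (\beta \equiv \alpha_p-\alpha_s) \\
    &= \widehat{B}(-u_s)\;\widehat{I}(u_p+u_s).
\end{align}
```

The convolution over β factors into the Fourier transform of the primary beam, while the phase term shifts the argument of the sky brightness transform to u_{p} + u_{s}, as stated in the text.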
A.2. Incomplete sampling
We here demonstrate that Eqs. (23) and (34) are equivalent. To do this, we take the direct Fourier transform of I_{dirty}(α) (Eq. (A.6)) and replace I(α′) by its formulation as a function of its Fourier transform (Eq. (A.7)). We thus derive Eq. (A.8). Using the change of variables α′′ ≡ α − α′, α′ = α − α′′ and dα′′ = −dα′, the innermost integral can be rewritten; in the last two steps, we simply recognize two different stages of the Fourier transforms of D. This finally yields Eq. (A.12).
A.3. Gridding
The gridding kernel can be defined as the product of two functions, each one operating in its own dimension. We use this to study separately the effect of gridding in the uv and sky planes. We then use the intermediate results to get the effect of gridding simultaneously in both planes.
A.4. Gridding in the uv plane
We define the sampled visibility function gridded in the uv plane as in Eq. (A.13). Since the gridding is here applied along the u_{p} dimension while the Fourier transform is applied along the α_{s} dimension, it is easy to show that the gridding and Fouriertransform operations commute. Defining the Fourier transform of the uv gridded dirty image and using Eq. (31), we can write this Fourier transform as Eq. (A.18), with Eqs. (A.19) and (A.20). Using Eqs. (A.21) and (A.22), we derive Eq. (A.23), or equivalently Eq. (A.24). Thus, the resulting function is the uv gridded version of the generalized sampling function.
A.4.1. Gridding in the sky plane
We define the sampled visibility function gridded in the sky plane. Applying the convolution theorem to the Fourier transform along the α_{s} dimension, we derive Eq. (A.27). Defining the Fourier transform of the skygridded dirty image and using Eq. (31), we can write this Fourier transform as Eq. (A.30), with Eqs. (A.31) and (A.32), or, using the definition of Eq. (45), as Eq. (A.33). Using Eq. (A.34) and the convolution theorem when taking the inverse Fourier transform, we derive Eq. (A.35). Thus, the resulting function is the sky gridded version of the generalized sampling function.
A.5. Gridding in both planes
Starting from the definition in Eq. (41), we Fouriertransform it along the sky dimension at constant u_{p}. Using the fact that the gridding along the u_{p} dimension can be factored out of the Fourier transform, we derive Eq. (A.36). Using Eq. (A.27), we then substitute into the previous equation to get Eq. (A.37), or Eq. (A.38). From this relation, it is easy to deduce Eq. (A.39). Using the convolution theorem when taking the inverse Fourier transform along the u_{s} dimension and substituting Eq. (A.24), we finally derive Eq. (A.40).
A.6. Widefield vs. singlefield dirty beams
Starting from the notation introduced in Eq. (59) and using it in Eq. (35) gives Eq. (A.41). Taking the inverse Fourier transform along the u′′ axis of Eq. (A.41) and reordering the integral to factor out the term independent of u′′, we can write Eq. (A.42), where the inner term is given by Eq. (A.43). We now introduce the definition of Eq. (A.44). Using the change of variables v ≡ u′′ + u′ − u_{p}, u′′ = v − u′ + u_{p} and dv = du′′ on the innermost integral, substituting the result into Eq. (A.42), and taking the inverse Fourier transform along the u′ axis, we can write Eq. (A.45). Using the change of variables v ≡ u_{p} − u′, u′ = u_{p} − v and dv = du′, we get Eq. (A.46). Substituting this result into Eq. (A.45) and reordering the terms, we can write Eq. (A.47). A simple application of the convolution theorem then gives Eq. (A.48). Substituting this result into Eq. (A.47), we finally derive the desired expression, i.e., Eq. (57).
Appendix B: From the celestial sphere onto a single tangent plane
Equation (1) neglects projection effects, known as noncoplanar baselines. Any method that deals with interferometric widefield imaging must take this problem into account. After a short introduction to the problem, we show how widefield synthesis is compatible with at least one method, namely the uvwunfaceting of Sault et al. (1996b). This method builds a final widefield uv plane from different pieces, just as our widefield synthesis approach does. Another promising method is the wprojection, based on original ideas of Frater & Docherty (1980) and first successfully implemented by Cornwell et al. (2008). We have not yet examined its compatibility with widefield synthesis.
B.1. waxis distortion
When projection effects are taken into account, the measurement equation reads as Eq. (B.1). In this equation, we continue to work in one dimension for the sky cosine direction (α_{p}), but we explicitly introduce the dependence along the direction perpendicular to the sky plane. This dependence appears in two ways, each of which is handled very differently. First, one factor can be absorbed into a generalized sky brightness function (Eq. (B.2)). After imaging and deconvolution, I(α_{p}) can easily be restored from the deconvolved ℐ(α_{p}) image. The second dependence appears as an additional phase, written as Eq. (B.3). Thompson et al. (1986, Chap. 4) show that this additional phase can be neglected only if criterion (B.4) is satisfied. The first form of the criterion indicates that the approximation gets worse at high spatial dynamic range (i.e., θ_{field}/θ_{syn} ≫ 1), while the second form indicates that the approximation gets worse at long wavelengths.
B.2. uvwunfaceting
For stopandgo mosaicking, it is usual to delaytrack at the center of the primary beam for each pointing/field of the mosaic. This phase center is also the natural center of projection of each pointing/field. Stopandgo mosaicking thus naturally paves the celestial sphere with as many tangent planes as there are pointings/fields; i.e., this observing scheme is somehow enforcing a uvwfaceting scheme. In the framework of onthefly observations with ALMA, D’Addario & Emerson (2000) indicate that the phase center will be modified between each onthefly scan while it will stay constant during each onthefly scan. This is a compromise between loss of coherence and technical possibilities of the phaselocked loop. Using this hypothesis, the maximum sky area covered by the onthefly scan must take into account the maximum tolerable waxis distortion.
The easiest way to deal with such data is to image each pointing/field around its phase center and then to reproject this image onto the mosaic tangent plane, as displayed in Fig. 5 of Sault et al. (1996b). These authors point out that this scheme implies a typical waxis distortion ϵ less than the bound of Eq. (B.5), where θ_{center} is the angle from the pointing/field center and θ_{alias} is the antialiasing scale defined in Sect. 4.2. In particular, ϵ is 0 at the phase center of each pointing/field. In other words, this scheme limits the magnitude of the waxis distortion to its magnitude on a size equal to the antialiasing scale (i.e., a few times the primary beamwidth) instead of a size equal to the total mosaic field of view. This scheme thus solves the projection effect as long as the waxis distortion is negligible on sizes smaller than or equal to the antialiasing scale. A natural name for this processing scheme is uvwunfaceting because it combines a faceting observing mode (i.e., regular changes of phase center) with a linear transform of the uv coordinates to derive a single sine projection for the whole field of view.
Sault et al. (1996b) also demonstrate that the reprojection may be done much more easily and quickly in the uvw space before imaging the visibilities because it is then just a simple transformation of the uv coordinates, followed by a multiplication of the visibilities by a phase term. Finally, Sault et al. (1996b) note that it is the linear character of this uv coordinate transform which preserves the measurement Eq. (1). As the change of coordinates happens before any other processing, it also conserves all the equations derived in the previous sections to implement the widefield synthesis.
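The phase part of this uvw-space re-referencing can be sketched as follows. This is the generic phase-center shift of interferometry, not the exact transform of Sault et al. (1996b), and the accompanying linear rotation of the (u, v, w) coordinates is deliberately omitted; the function name and arguments are hypothetical.

```python
import numpy as np

def shift_phase_center(uvw, vis, lmn_offset):
    """Re-reference visibilities to a new phase center (phase part only).

    uvw        : (N, 3) array of baseline coordinates in wavelengths
    vis        : (N,) complex visibilities
    lmn_offset : (l0, m0, n0), direction cosines of the old phase
                 center expressed in the new frame

    The standard phase-center shift multiplies each visibility by
    exp(-2i.pi (u l0 + v m0 + w (n0 - 1))). The linear rotation of the
    uvw coordinates themselves is omitted in this sketch.
    """
    l0, m0, n0 = lmn_offset
    u, v, w = uvw.T
    phase = np.exp(-2j * np.pi * (u * l0 + v * m0 + w * (n0 - 1.0)))
    return vis * phase
```

Because the operation is a pure phase multiplication plus a linear coordinate transform, it preserves the form of the measurement Eq. (1), which is the key point made above.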
Appendix C: Onthefly observing mode and effective primary beam
Usual interferometric observing modes (including stopandgo mosaicking) imply that the interferometer antennas observe a fixed point of the sky during the integration time. Conversely, the onthefly observing mode implies that the antennas slew across the sky during the integration time. The measurement Eq. (1) must then be written as Eq. (C.1) (Holdaway & Foster 1994; RodríguezFernández et al. 2009), where δt is the integration time, and û_{p} and the corresponding mean direction cosine are the time averages defined in Eq. (C.2). In this section, we analyze the consequences of the antenna slewing on the accuracy of the widefield synthesis.
C.1. Time averaging
In all interferometric observing modes, it is usual to adjust the integration time so that u_{p}(t) can be approximated by û_{p}. To do this, it is enough to ensure that u_{p}(t) always varies by less than the uv distance associated with a tolerable aliasing (d_{alias}, see Sect. 4.2) during the integration time δt, i.e., criterion (C.3), where d_{max} is the maximum baseline length, ω_{earth} is the angular velocity of a spatial frequency due to the Earth rotation (7.27 × 10^{−5} rad s^{−1}), and θ_{alias} and θ_{syn} are, respectively, the minimum field of view giving a tolerable aliasing and the synthesized beam width.
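An order-of-magnitude evaluation of this criterion can be obtained with the usual identifications d_{alias} ~ 1/θ_{alias} and d_{max} ~ 1/θ_{syn}; the beam and field values below are illustrative assumptions, not numbers from the text.

```python
# Order-of-magnitude integration time for which u_p(t) ~ const
# (criterion C.3). Numerical values are illustrative assumptions.
OMEGA_EARTH = 7.27e-5          # rad/s, Earth angular velocity
theta_syn = 1.0 / 206265.0     # rad, 1 arcsec synthesized beam (assumed)
theta_alias = 60.0 / 206265.0  # rad, 1 arcmin anti-aliasing field (assumed)

# |du_p/dt| <~ d_max * omega_earth, so requiring the drift during dt
# to stay below d_alias ~ 1/theta_alias gives:
dt_max = (theta_syn / theta_alias) / OMEGA_EARTH
print(f"maximum integration time ~ {dt_max:.0f} s")
```

With these assumed values the constraint is a few minutes, i.e., far longer than typical onthefly dump times, so the criterion is easily met in practice.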
Definition of the symbols used to explore the influence of onthefly scanning on the measurement equation.
C.2. Effective primary beam
Assuming that condition (C.3) is ensured, we can write Eq. (C.1) in the same form as Eq. (1) by introducing an effective primary beam B_{eff}, i.e., Eq. (C.4), where the effective beam is given by Eq. (C.5). Using the change of variables of Eq. (C.6), we derive Eq. (C.7), with Eqs. (C.8) and (C.9). In these equations, v_{slew}(β) is the slew angular velocity of the telescope as a function of the sky position, δα_{s} is the angular distance covered during δt, A is an apodizing function, and Π(β) is the usual rectangle function, which reproduces the finite character of the time integration.
C.3. Interpretation
The form of the measurement equation is conserved when averaging the visibility function over a finite integration time, as long as the true primary beam is replaced by an effective primary beam, which is the convolution of the true primary beam by an apodizing function. To go further, it is important to return to the two dimensional case. Indeed, the convolution must be done along the slewing direction, resulting in an effective primary beam elongated in a particular direction.
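A one-dimensional cut of this convolution along the scanning direction can be sketched as follows, assuming (purely for illustration) a Gaussian true primary beam; the function name and parameters are hypothetical.

```python
import numpy as np

def effective_beam(theta_fwhm, dalpha_s, n=1024, span=4.0):
    """Effective primary beam for linear, constant-velocity scanning:
    the true primary beam (Gaussian here, by assumption) convolved
    with a rectangle of width dalpha_s along the scan direction."""
    x = np.linspace(-span * theta_fwhm, span * theta_fwhm, n)
    dx = x[1] - x[0]
    sigma = theta_fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    beam = np.exp(-0.5 * (x / sigma) ** 2)
    box = (np.abs(x) <= dalpha_s / 2.0).astype(float)
    box /= box.sum() * dx                 # normalized apodizing function
    eff = np.convolve(beam, box, mode="same") * dx
    return x, beam, eff
```

The effective beam is slightly lower and broader than the true beam along the scan direction, which is exactly the elongation discussed in the text.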
Fig. C.1 Assessment of the relative error implied by the use of the true primary beam instead of the effective primary beam when analyzing interferometric onthefly data sets. Left: inverse Fourier transform of the interferometer primary beam (i.e., the autocorrelation of the antenna illumination). Right: relative error as a function of the sampling rate of the primary beam. The curves of different colors show the results at different normalized uv distances (u/d_{prim}) from the center.
In principle, the equations derived in Sect. 3 can be accommodated just by replacing the true primary beam by its effective counterpart. In practice, however, the effective primary beam is unlikely to be taken into account because its shape varies with time. Indeed, it is often assumed that the sky is slewed along a straight line at constant angular velocity. Even in this simplest case, it is advisable to slew along at least two perpendicular directions to average out systematic errors, implying two different effective primary beams. Moreover, practical considerations may well lead to complex scanning patterns: 1) the limitation of the acceleration when trying to image a square region leads to spiral or Lissajous scanning patterns; 2) the probable absence of derotators in future multibeam receivers (B. Lazareff, private communication) implies the need to take the Earth rotation into account in the scanning patterns of the offaxis pixels.
C.4. Approximation accuracy
In the following, we thus ask what accuracy is lost when using the true primary beam instead of the effective primary beam. The first point to mention is that using different scanning patterns somewhat helps, because the averaging process then makes the bias less systematic. Following Holdaway & Foster (1994), we quantify the accuracy loss in the Fourier plane. Indeed, the Ekers & Rots scheme tries to estimate missing sky brightness Fourier components from their measurements apodized by the Fourier transform of the primary beam. In Fourier space, the above convolution translates into a product. The Fourier transform of the apodizing function thus degrades the sensitivity of the measured visibility, V(u_{p}, α_{s}), to spatial frequencies at the edges of the interval [u_{p} − d_{prim}, u_{p} + d_{prim}]. To guide our quantification of the accuracy loss, we now explore the simplest case of linear scanning at constant velocity, where v_{slew}(β) is constant and δα_{s} = v_{slew} δt. The Fourier transform of the apodizing function is then a sinc function (Eq. (C.10)), and the relative error implied by the use of the true primary beam instead of the effective primary beam is given by Eq. (C.11). Figure C.1 shows this relative error as a function of the number of samples per primary beam FWHM in the image plane (i.e., θ_{fwhm}/δα_{s}) for different uv distances (in units of d_{prim}). We derive a 1% accuracy at all u when we sample the image plane at a rate of 5 dumps per primary beam. However, reaching a 0.1% accuracy requires quite high sampling rates (about 15). This must be compared with the accuracy to which the primary beam is known.
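The scaling with the sampling rate can be illustrated with the sinc apodization alone. The relation θ_{fwhm} ≈ 1/d_{prim} used below is an assumption, and the exact relative-error definition of Eq. (C.11), which involves the primary beam Fourier transform, is not reproduced here, so the printed percentages differ from those of Fig. C.1.

```python
import numpy as np

# Toy evaluation of the scan-direction sinc apodization for linear
# scanning: the FT of the rectangle of width dalpha_s de-weights
# spatial frequencies near the edge of [u_p - d_prim, u_p + d_prim].
theta_fwhm = 1.0                       # beam FWHM, arbitrary units
for n_dump in (3, 5, 10, 15):          # dumps per primary beam FWHM
    dalpha = theta_fwhm / n_dump
    u = 1.0 / theta_fwhm               # frequency ~ d_prim (assumed)
    apod = np.sinc(u * dalpha)         # numpy sinc(x) = sin(pi x)/(pi x)
    print(f"{n_dump:2d} dumps/FWHM -> apodization {apod:.4f} "
          f"(deficit {1 - apod:.2%})")
```

The deficit falls roughly quadratically with the sampling rate, which is why going from percent-level to permil-level accuracy roughly triples the required number of dumps per beam.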
We note that, if a better accuracy is needed than the one achievable at the highest sampling rate, it is in theory possible to replace the rectangle apodizing function in the correlator software by another function that falls off more smoothly. Avoiding the loss of sensitivity inherent in such an apodizing function (which throws away data at the edges of the integration interval) would then require, for instance, halfoverlapping the integration intervals. This would imply more bookkeeping in the correlator software and some noise correlation between the measured visibilities.
All Tables
Definition of the uv and sky scales relevant to widefield interferometric imaging.
Interval ranges of definition and associated sampling rates for the functions used.
Minimum sizes of the dirty beam images to get an image fidelity or a dynamic range greater than a given value.
All Figures
Fig. 1 Visualization of the different angular scales relevant to widefield interferometric imaging. The notion of antialiasing scale (θ_{alias}) is introduced and discussed in Sect. 4.2. 

Fig. 2 Illustration of the principles of widefield synthesis, which enables us to image widefield interferometric observations. The top row displays the sky plane. The middle row displays the 4dimensional visibility space and the bottom row shows 2dimensional cuts of this space at different stages of the processing. In panels b) to d), the scanned dimensions (α_{s} and u_{s}) are displayed in blue while the phased spatial scale dimensions (u_{p}) are displayed in red and the spatial scale dimensions (u) of the final widefield uv plane are displayed in black. The grey zones of panels b.2) and c.2) show the regions of the visibility space without measurements (missing shortspacings). In detail, panel a) shows a possible scanning strategy of the sky to measure the unknown brightness distribution at high angular resolution: for simplicity it is here just a 7field mosaic. Panels b.1) and b.2) sketch the space of measured visibilities: the uv plane at each of the 7 measured sky positions is displayed as a blue square box in panel b.1) and a blue vertical line in panel b.2). For simplicity, only 6 visibilities are plotted in panel b.1). Panels c.1) and c.2) sketch the space of synthesized visibilities after Fourier transform of the measured visibilities along the scanned coordinate (α_{s}): at each measured spatial frequency u_{p} (displayed on the blue axes) is associated one space of synthesized widefield spatial frequencies displayed as one of the red squares in panel c.1) and the red vertical lines in panel c.2). The widefield spatial scales are synthesized 1) on a grid whose cell size is related to the total field of view of the observation and 2) only inside circles whose radius is the primary diameter of the interferometer antennas. Panels d.1) and d.2) display the final, widefield uv plane. 
This plane is built by application of the shiftandaverage operator along the black lines on panel c.2), lines that display the region of constant u spatial frequency in the (u_{p}, u_{s}) space. Standard inverse Fourier transform and deconvolution methods then produce a widefield distribution of sky brightnesses as shown in panel e). 

Fig. 3 Simple models of the antenna power patterns as a function of the sky angle in units of half the primary beam FWHM (θ_{fwhm}). In the 3 cases shown, the illumination is Gaussian with an edge taper of 12.5 dB but 3 different ratios of the secondarytoprimary diameters (i.e. f_{b}, the antenna blockage factors) are considered (see e.g. Goldsmith 1998, Chap. 6). The middle and bottom panels respectively model ALMA and PdBI antennas. The red lines define the minimum angular sizes for which the antenna power pattern is less than a given fraction. 

Fig. 4 Length of the averaging linepaths displayed as black lines in panel c.2) of Fig. 2, as a function of the spatial scale in the final, widefield uv plane. In the case of a continuous sampling of u_{p} between d_{min} and d_{max}, these quantities can be interpreted as the number of measures that contribute to the estimate of the widefield visibility at u. 

Fig. 5 Sketches of the natural weighting of the synthesized widefield visibilities. Each measured spatial frequency produces widefield spatial frequencies apodized by the transfer function centered on the measured spatial frequency. The transfer function used depends on the telescopes involved, which explains why widefield synthesis naturally handles the short spacings either from a singledish antenna or from a heterogeneous array. The synthesized visibilities in the overlapping regions are then averaged. Two textbook examples are illustrated: 1) the combination of data from the IRAM30 m singledish (red transfer function) and from the Plateau de Bure Interferometer (black transfer functions) at the top; and 2) the combination of data from ALMA 12 m antennas used either in singledish mode (red transfer function) or in interferometric mode (black transfer functions) and of data from the ACA 7 m antennas (blue transfer functions) at the bottom. The minimum uv distances measured by each interferometer were set from the minimum possible distance between antennas (24 m for PdBI, 15 m for ALMA and 9 m for ACA). 
