A&A, Volume 501, Number 2, July II 2009, pp. 801-812. Catalogs and data. http://dx.doi.org/10.1051/0004-6361/200811267. Published online 05 May 2009.

## I. Data release

E. Angelakis1 - A. Kraus1 - A. C. S. Readhead2 - J. A. Zensus1 - R. Bustos34 - T. P. Krichbaum1 - A. Witzel1 - T. J. Pearson2

1 - Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, 53121 Bonn, Germany
2 - California Institute of Technology, 1200 East California Boulevard, Pasadena, CA 91125, USA
3 - Universidad de Concepción, Casilla 160-C, Concepción, Chile
4 - University of Miami, Department of Physics, 1320 Campo Sano Drive, FL 33146, USA

Received 31 October 2008 / Accepted 14 April 2009

Abstract
Context. We present the results of the flux density measurements at 4.85 GHz and 10.45 GHz of a sample of 5998 NVSS radio sources with the Effelsberg 100 m telescope.
Aims. The initial motivation was the need to identify the NVSS radio sources that could potentially contribute significant contaminating flux in the frequency range at which the Cosmic Background Imager experiment operated.
Methods. An efficient way to achieve this challenging goal has been to compute the high frequency flux density of those sources by extrapolating their radio spectrum. This is determined by the three-point spectral index measured on the basis of the NVSS entry at 1.4 GHz and the measurements at 4.85 GHz and 10.45 GHz carried out with the 100 m Effelsberg telescope.
Results. These measurements are important since the targeted sample probes the weak part of the flux density distribution, hence the decision to make the data available.
Conclusions. We present the table with flux density measurements of the 3434 sources that showed no confusion, allowing reliable measurements, together with their detection rates, their spectral index distribution and an interpretation that satisfactorily explains the observed uncertainties.

Key words: radio continuum: general - catalogs - galaxies: active - cosmic microwave background

## 1 Introduction

Targeted multi-frequency surveys can efficiently serve several fields of astrophysical research, such as revealing new Gigahertz Peaked Spectrum (GPS) and High Frequency Peaking (HFP) sources, or estimating higher frequency source counts from extrapolated radio spectra and hence computing confusion limits. Consequently, they can be of essential importance in the study of the cosmic microwave background radiation (CMB) through the characterization of its foregrounds. Here, we present the results of the study of a sample of 5998 NRAO VLA Sky Survey (NVSS, Condon et al. 1998) sources at three frequencies: 1.4 GHz is provided by the NVSS catalog, while 4.85 GHz and 10.45 GHz were observed with the Effelsberg 100 m radio telescope. The measurements were initially motivated by the need to estimate the emission that these sources could contribute at 31 GHz, the band in which the Cosmic Background Imager (CBI, Padin et al. 2001) operates, as explained below. In a future publication we plan to use the extrapolated flux densities to compute the source counts and the confusion limits at higher frequencies and to compare the results with those from other surveys.

### 1.1 The CMB contaminants

Having traveled the path between the surface of last scattering and the observer, the CMB is subject to the influence of numerous sources of secondary brightness temperature fluctuations, cumulatively referred to as foregrounds. The reliability of the information extracted from the study of the primordial fluctuation power spectrum is tightly bound to how carefully such factors have been accounted for.

The potential contaminants can crudely be classified into those of galactic and those of extragalactic origin (for a review, see Refregier 1999; Tegmark et al. 2000). Moreover, depending on their character, they influence the power spectrum at different angular scales. Galactic foregrounds include the diffuse synchrotron emission (for a review, see Smoot 1999) attributed to galactic relativistic electrons, the free-free emission originating in H II regions and the emission of dust grains in the interstellar medium that radiate in a black-body manner. Extragalactic foregrounds include the thermal and the kinematic Sunyaev-Zel'dovich effects in galaxy clusters (Sunyaev & Zeldovich 1970), manifested through the distortion of the black-body CMB spectrum induced by either the hot ionized gas in the cluster (in the former case) or matter fluctuations (in the latter case).

In the latter class, radio galaxies and quasars, cumulatively referred to as point radio sources and populating the entire radio sky, comprise by definition the most severe contaminant affecting small angular scales. The sample studied in the current work consists precisely of the NVSS point radio sources that lie within the fields targeted by the CBI experiment.

### 1.2 The cosmic background imager

The CBI is a 13-element planar synthesis array operating in ten 1-GHz channels between 26 and 36 GHz (Padin et al. 2001). It is located at an altitude of roughly 5080 m near Cerro Chajnantor in the Atacama desert (northern Chilean Andes). Its task was to study the primordial anisotropies at angular scales from 5′ to 0.5° (Padin et al. 2002). The observations of the primordial anisotropies are made in four distinct parts of the sky, separated from one another by 6 h in RA (Mason et al. 2003; Pearson et al. 2003, Table 1).

From the NVSS catalog, it is known that within the CBI fields there are in total 5998 discrete radio sources with S_1.4 ≥ 3.4 mJy. Inevitably, they comprise the potential contaminants that may impose secondary fluctuations on the observed background temperature field (Readhead et al. 2004).

### 1.3 The solution

Instead of removing all potentially contaminated pixels from the CMB maps (which would unavoidably cause a significant data loss), it suffices to identify the sources that contribute negligible flux density at higher frequencies (below the few-mJy threshold) and ignore them during the CMB data analysis. This rests on two assumptions:

1. the radio spectrum is described by a simple power law of the form S ∝ ν^α (with α hereafter being the spectral index);
2. the spectrum is not time variable.

Under these assumptions, the identification can in principle be done by extrapolating the radio spectrum obtained at lower frequencies. The radio spectrum consists of the flux density at 1.4 GHz as extracted from the NVSS and those at 4.85 and 10.45 GHz as measured with the Effelsberg telescope.
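As an illustration of this extrapolation step, the sketch below computes a two-point spectral index and carries it to 31 GHz. The function names and the example flux densities are hypothetical; the actual analysis uses the three-point fit described later in the paper.

```python
import math

def spectral_index(s1, nu1, s2, nu2):
    """Two-point spectral index alpha, assuming S is proportional to nu**alpha."""
    return math.log(s2 / s1) / math.log(nu2 / nu1)

def extrapolate(s_ref, nu_ref, nu_target, alpha):
    """Carry a flux density along the power law to another frequency."""
    return s_ref * (nu_target / nu_ref) ** alpha

# Hypothetical steep-spectrum source: 20 mJy at 1.4 GHz, 10 mJy at 4.85 GHz.
alpha = spectral_index(20.0, 1.4, 10.0, 4.85)
s_31 = extrapolate(10.0, 4.85, 31.0, alpha)
```

A source this faint extrapolates to only a few mJy at 31 GHz, i.e. near the threshold below which it can safely be ignored in the CMB analysis.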

### 1.4 The sample

Table 1:   The coordinates of the points defining the targeted fields.

The list of targeted sources includes all 5998 NVSS sources present in the CBI target fields with S_1.4 ≥ 3.4 mJy, the limit at which the NVSS is characterized by 99% completeness. The sources are distributed in four sky regions, rectangular in RA-Dec space, with their coordinates shown in Table 1. The criterion for the selection of these regions has been their foreground emission: it has been required that they have IRAS 100 μm emission less than 1 MJy sr⁻¹ (Pearson et al. 2003), low galactic synchrotron emission and no point sources brighter than a few hundred mJy at 1.4 GHz. It is clear therefore that the sample of 5998 sources represents the weak part of the flux density distribution; this is its most prominent characteristic. In fact, as shown in Fig. 1, roughly 80% of those sources are weaker than 20 mJy at 1.4 GHz. The large galactic latitudes of the targeted fields indicate that the sample is likely to consist entirely of extragalactic discrete radio sources, that is, quasars and radio galaxies.

Figure 1: The NVSS 1.4 GHz flux density distribution of our sample. Roughly 80% of the sources are below 20 mJy. This plot clearly demonstrates the choice of "radio quiet" sky regions.

In the current work we present the results extracted from a sample of 3434 sources which, as discussed in Sects. 3.3 and 4.2, show no confusion from field sources and hence allow reliable measurements.

## 2 Observations

### 2.1 Observing system

The flux density measurements were conducted with the Effelsberg telescope between July 2003 and July 2006. The multi-beam heterodyne receivers at 4.85 GHz and 10.45 GHz were used. Multi-feed systems allow a software "beam-switch" that removes mostly linear tropospheric effects. Each receiver was used in two-beam mode (although the 10.45 GHz one is equipped with four feeds). In both cases, the beams are separated in azimuth and each delivers left-handed and right-handed circular polarisation channels (LCP and RCP, respectively). Both systems are mounted in the secondary focus cabin of the 100 m telescope. Table 2 gives their characteristics.

### 2.2 Observing technique

In order to achieve time efficiency, the observations have been made with the "on-on" method. Its essence relies on having the source in either of the two beams at each observing phase, while the other feed observes the atmosphere off-source (the angular separation of the feeds used is given in Table 2). The subtraction of the two signals removes linear atmospheric effects. For clarity, one complete measurement cycle will hereafter be termed a scan. Each scan, in our case, consists of four stages, or sub-scans.

In order to illustrate the exact observing technique used, we label the feeds of either receiver as reference and main. Let A be the configuration with the reference beam on-source while the main beam is off-source, and B the reciprocal case. The telescope is then slewed in such a way as to complete a sequence of four sub-scans in an A-B-B-A pattern. Assuming that the system temperature is the same in both feeds for any given sub-scan, the differencing of the two signals should remove any contribution other than that attributed to the source power. The efficiency of the method is demonstrated in Fig. 2.
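The A-B-B-A differencing can be sketched as follows, assuming idealised, noiseless sub-scan temperatures; the function name and the numbers are illustrative only.

```python
def abba_signal(main, ref):
    """Source signal from one A-B-B-A scan.

    main, ref: antenna temperatures [K] of the two feeds for the four
    sub-scans. In 'A' (sub-scans 1 and 4) the reference beam is on-source,
    in 'B' (sub-scans 2 and 3) the main beam is on-source; differencing
    removes any contribution common to both beams.
    """
    diffs = []
    for i, (m, r) in enumerate(zip(main, ref)):
        on_minus_off = (r - m) if i in (0, 3) else (m - r)
        diffs.append(on_minus_off)
    return sum(diffs) / len(diffs)

# A 22 mK source on top of a 50 K system/atmosphere contribution:
sig = abba_signal([50.0, 50.022, 50.022, 50.0],
                  [50.022, 50.0, 50.0, 50.022])
```

The common 50 K term cancels identically, leaving only the source signal, which is the point of the technique.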

Figure 2: Demonstration of the efficiency of the observing technique (upper panel) and a prototype detection profile (lower panel) in the case of the 10.45 GHz receiver. Each receiver has two feeds, each of which delivers two channels (LCP and RCP), giving a total of four channels. Those are shown in the four lower panels. The green colour represents the reference horn signal and the blue the main horn signal. The left-hand side panels are the LCP and the right-hand side panels are the RCP. The plot at the top of each panel shows the final profile after subtracting the signals from each of the two feeds and averaging over LCP and RCP. If MR is the RCP of the main horn and ML the LCP of the same horn, while RR and RL are those of the reference horn, the final signal is given by [(MR − RR) + (ML − RL)]/2. It is noteworthy that despite the complete absence of even the hint of a source in the individual channels (upper panel), after the subtraction a clear 22-mK signal (roughly 17 mJy) can be seen. (This figure is available in color in the electronic form.)

Despite its performance, as demonstrated in Fig. 2, this technique suffers from two major disadvantages: (i) it is subject to pointing errors that may result in power loss; this has been controlled with frequent pointing checks on strong nearby sources, and as shown in Sect. 2.3 these errors are negligible; (ii) it is subject to confusion, i.e. cases of sources contributing power at an off-source position and causing a false subtraction result. The solution to the latter is either to observe the target at a different parallactic angle (at which there is no confusing source in the off position), or to correct for it if the power of the confusing source is known. This approach is discussed in Sect. 2.3.

### 2.3 Logistics

#### Thermal noise:

For both frequencies, a goal thermal noise level (see also Sect. 2.4) of around 0.2 mJy (1σ) has been set. Had this been the dominant noise factor, setting a 5σ detection threshold would allow the detection of sources as weak as 1 mJy. The total integration time for achieving this thermal noise level is 1 and 4 min at 4.85 GHz and 10.45 GHz, respectively. This time is the cumulative integration time for all four sub-scans making up one observing cycle, that is one scan (see also Sect. 2.2). However, as shown in Sect. 3.2, the dominant noise factor is the troposphere rather than thermal noise, and the practical limit is of the order of 1.2 mJy, which is judged to be adequate.
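The thermal noise budget follows the radiometer equation, ΔT = T_sys/√(Δν·t). A sketch with illustrative numbers (the system temperature, bandwidth and sensitivity below are placeholders, not the actual Table 2 values):

```python
import math

def radiometer_rms_mjy(t_sys_k, bandwidth_hz, t_int_s, gamma_k_per_jy):
    """1-sigma thermal noise of a total-power measurement, in mJy.

    The radiometer equation gives the temperature rms; the sensitivity
    factor gamma [K/Jy] converts it to a flux density.
    """
    d_t = t_sys_k / math.sqrt(bandwidth_hz * t_int_s)
    return 1e3 * d_t / gamma_k_per_jy

# Illustrative: 30 K system, 500 MHz bandwidth, 60 s integration, 1.5 K/Jy.
rms = radiometer_rms_mjy(30.0, 500e6, 60.0, 1.5)
```

With numbers of this order, a minute of effective integration already approaches the 0.2 mJy goal.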

#### Field coverage:

A rigid constraint is the minimisation of the telescope driving time. This was achieved by driving the telescope through each field in a "zig-zag" pattern (a travelling-salesman problem). Each field was organised in stripes parallel to the right ascension axis and roughly 0.5 degrees across in declination. The sources within such a belt have, in turn, been organised in groups of roughly a dozen, in order of monotonically changing right ascension. During an observing session a field would be targeted within the hour angle range from -3 to 3 h. This is a compromise between staying in one field for as long as possible and observing through acceptable airmasses (i.e. avoiding very low elevations that would result in large opacities).

#### Pointing offset minimisation and calibration:

For calibration purposes, one of the standard calibrators shown in Table 3 was observed at the beginning and the end of the observation of a field, i.e. roughly every six hours. Before the beginning of the field, the focus of the telescope was also optimised. Changes in the focal plane within those six hours were accounted for by interpolation of the sensitivity factor between the values measured at the beginning and the end of the run. To maintain low pointing offsets, cross-scans were frequently performed on bright nearby point sources. On average a pointing check was done every 30 min to 1.5 h. This sustained average pointing offsets as low as 3-4″ for the 10.45 GHz and 7-8″ for the 4.85 GHz measurements. These correspond to 4.5% and 5% of the FWHM at 10.45 GHz and 4.85 GHz, respectively, and result in a negligible power loss of the order of 1%. As an example, Fig. 3 shows the distribution of pointing offsets for the high frequency observations.
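The quoted power loss follows from a Gaussian beam: an offset θ on a beam of width FWHM attenuates the response by exp(−4 ln 2 (θ/FWHM)²). A quick check, using the beam widths quoted elsewhere in the text and representative offsets:

```python
import math

def gaussian_power_loss(offset_arcsec, fwhm_arcsec):
    """Fractional power loss when pointing off-centre on a Gaussian beam."""
    return 1.0 - math.exp(-4.0 * math.log(2.0) * (offset_arcsec / fwhm_arcsec) ** 2)

loss_high = gaussian_power_loss(3.5, 67.0)   # 10.45 GHz, ~3-4 arcsec offset
loss_low = gaussian_power_loss(7.5, 146.0)   # 4.85 GHz, ~7-8 arcsec offset
```

Both losses come out below one percent, consistent with the negligible loss quoted above.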

Figure 3: The distribution of the pointing offsets for the case of the 10.45 GHz receiver. The dashed line represents the offsets in the elevation direction while the solid one gives those in azimuth. The mean offset is around 3″, corresponding to roughly 1% power loss.

### 2.4 Data reduction

Before any further discussion it must be clarified that, despite the fact that the receivers deliver two circular polarization channels (namely LCP and RCP, see Sect. 2.1), any circular polarization has been neglected and the LCP and RCP channels have been averaged (see Fig. 2 and Appendix B). This is a reasonable assumption given that the average degree of circular polarization of these sources is expected to be low (e.g. Komesaroff et al. 1984; Weiler & de Pater 1983).

Figure 2 illustrates the "detection pattern". From that picture it is clear that a measurement is the difference between the average antenna temperatures of the first and the second sub-scans (ΔT₁₂) as well as that between the third and the fourth (ΔT₃₄). These two differences essentially provide two independent measurements of the target source. Ideally, the results should be identical. Differences should be attributed to atmospheric fluctuations, given that the overlap of the "off" and the "on" beam is not precisely 100%, as well as to confusion (field sources contributing power at the off-beam position). The latter effect comprises the most severe uncertainty in the measurement. A detailed discussion is given in Appendix A.

Throughout the data reduction process two types of errors are computed. The first, denoted here by σ_mean, is the result of formal error propagation (assuming Gaussian statistics) of the data scatter around the average (error in the mean); it is chiefly a property of the detector and is in practice given by the radiometer formula. The second, σ_diff, is the rms between the antenna temperature measured from the first subtraction (sub-scans 1 and 2) and that from the second (sub-scans 3 and 4), i.e. σ_diff = |ΔT₁₂ − ΔT₃₄|/2. Subsequently, max(σ_mean, σ_diff) is taken as a first estimate of the error in the measurement. In Sect. 3.2, we describe how the final errors reported in Table 8 have been calculated.
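In code form, this first error estimate could look as follows; the symbol names follow the notation used above, and the numbers are invented for illustration:

```python
def scan_error(sigma_mean, d_t12, d_t34):
    """First error estimate of one scan.

    sigma_mean: radiometer-formula error in the mean.
    d_t12, d_t34: the two independent on-off differences of the scan;
    their spread traces atmospheric fluctuations and confusion.
    """
    sigma_diff = abs(d_t12 - d_t34) / 2.0
    return max(sigma_mean, sigma_diff)

# Here the spread of the two differences dominates over the thermal term:
err = scan_error(0.3, 10.0, 9.0)
```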

#### 2.4.1 Corrections

Each measurement conducted as described earlier is subsequently subjected to a number of corrections:

#### Opacity correction:

This process corrects for the attenuation of the source signal by the terrestrial atmosphere. The opacity is computed from the observed system temperatures.
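One standard way to do this is sketched below under the low-opacity approximation; the atmospheric temperature and all numbers are illustrative, and the exact procedure used by the authors is not detailed in the text.

```python
import math

def fit_zenith_opacity(airmasses, t_sys, t_atm=260.0):
    """Zenith opacity tau from the slope of T_sys versus airmass.

    Assumes the low-opacity approximation
    T_sys ~ T_rx + T_atm * tau * airmass, so tau = slope / T_atm.
    """
    n = len(airmasses)
    ax = sum(airmasses) / n
    ay = sum(t_sys) / n
    slope = sum((a - ax) * (t - ay) for a, t in zip(airmasses, t_sys)) / \
            sum((a - ax) ** 2 for a in airmasses)
    return slope / t_atm

def opacity_correct(flux, tau, airmass):
    """Undo the atmospheric attenuation exp(-tau * airmass)."""
    return flux * math.exp(tau * airmass)

# Synthetic system temperatures generated with tau = 0.02:
tau = fit_zenith_opacity([1.0, 1.5, 2.0], [35.2, 37.8, 40.4])
corrected = opacity_correct(10.0, tau, 1.5)
```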

#### Elevation dependent gain correction:

Small-scale divergences of the primary reflector's geometry from the ideal paraboloid lower the sensitivity of the antenna. These gravitational deformations are a function of elevation, with the consequence of an elevation-dependent antenna gain. The "elevation-gain" curve is a second order polynomial of the elevation and is constructed experimentally by observing bright point-like sources over the entire elevation range.
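Applying such a curve is straightforward; a sketch with hypothetical coefficients, chosen so that the gain peaks at 1.0 around 40° elevation (the real coefficients come from the calibration observations described above):

```python
def gain_curve(elevation_deg, a0, a1, a2):
    """Second-order polynomial elevation-gain curve."""
    return a0 + a1 * elevation_deg + a2 * elevation_deg ** 2

def gain_correct(flux, elevation_deg, coeffs):
    """Divide out the elevation-dependent gain."""
    return flux / gain_curve(elevation_deg, *coeffs)

# Hypothetical curve: g(e) = 1 - 1e-4 * (e - 40)^2, expanded into a0, a1, a2.
coeffs = (0.84, 0.008, -1e-4)
corrected = gain_correct(10.0, 20.0, coeffs)  # low elevation, gain below 1
```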

#### Sensitivity correction:

This process is essentially the translation of the antenna temperature into Jy. That is done by observing standard calibrators (Table 3). Given a calibrator of known flux density S [Jy] and measured antenna temperature T_A [K], the sensitivity factor is Γ = T_A/S [K/Jy]. However, the sensitivity factors obtained this way depend on the quality of the axial focusing. This is optimised at the initialisation of a field observation. Nevertheless, it can change over the span between two such consecutive optimisations (of the order of six hours), particularly when large temperature gradients are present throughout the telescope structure. To account for that, the sensitivity factors have been measured both after the first focus correction (beginning of the observation) and before the next focus correction (end of the field observation). For an observing instant in between, the result of linear interpolation between those two values has been used. The flux densities of the calibrators are taken from Ott et al. (1994), Baars et al. (1977) and Kraus (priv. comm.). It must be noted that, apart from NGC 7027, the sources used as calibrators are point-like for the beamwidth of the Effelsberg telescope. NGC 7027, on the other hand, is extended at 10.45 GHz relative to the 67″ beamwidth at that frequency. Nevertheless, the power loss due to this effect is still less than 1% and therefore no beam correction is necessary.
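The interpolation itself, with the symbol Γ as defined above (times and values invented for illustration):

```python
def sensitivity_at(t, t_start, gamma_start, t_end, gamma_end):
    """Linearly interpolate the sensitivity factor Gamma [K/Jy] between the
    calibrator measurements at the start and the end of a field observation."""
    frac = (t - t_start) / (t_end - t_start)
    return gamma_start + frac * (gamma_end - gamma_start)

def to_jansky(t_antenna_k, gamma_k_per_jy):
    """Convert a measured antenna temperature to a flux density."""
    return t_antenna_k / gamma_k_per_jy

# Gamma drifted from 1.5 to 1.4 K/Jy over a six-hour run; observe at mid-run.
gamma = sensitivity_at(3.0, 0.0, 1.5, 6.0, 1.4)
flux_jy = to_jansky(0.022, gamma)  # a 22 mK antenna temperature
```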

Table 3:   The flux densities and spectral indices of the standard calibrators.

#### Confusion:

A potential limitation for any observation is confusion, which has been well studied since the early days of radio surveys (Scheuer 1957). Put simply, it refers to blends of unresolved sources that build up significant flux densities. Traditionally, confusion has been treated statistically in terms of the expected flux density per unit area at a given frequency, and for modern observing facilities it often imposes more severe limits than the thermal noise itself. In the case of the work discussed here, the problem becomes even more severe because of the beam-switch technique used: any combination of field NVSS sources can lie in the vicinity of the targeted source, within the "on" or any of the "off" positions. That can severely affect the differencing algorithm by contaminating the subtracted signal. Some typical confusion cases are shown in Fig. 4. The confusion status has been monitored for every observed scan on the basis of the NVSS positions and has been corrected afterwards whenever possible (see the description in Appendix B).

Figure 4: Confusion examples. The left-hand column shows the detection profiles whereas the right-hand one shows the NVSS environment of each target source. There, assuming a Gaussian beam pattern, the solid line marks the 50% beam gain level while the dashed one denotes the 10% level. The red circles correspond to the "on" positions and the blue ones to the "off" positions. The target sources are shown in red and the environment NVSS ones in black. The left-hand side plots show the result of the differencing with respect to the strength of the "confusers" and their position relative to the centre of the beam. (This figure is available in color in the electronic form.)

## 3 Errors

In general, the requirement of time efficiency can conflict with measurement accuracy by limiting, for instance, the time invested in calibration. A careful and realistic quantification of the involved uncertainties is therefore essential. The following discussion deals with the system repeatability study, which in fact sets the pragmatic limit to the reliably detectable flux density.

### 3.1 System repeatability

Given the goal of reaching the telescope's theoretically expected least detectable flux density, it is crucial to estimate the repeatability of a measurement. Let the term "observing system" collectively describe everything but the target source; hence, it refers to the combination of the telescope, the thermal noise, the atmosphere, the confusion etc. An ideal observing system would output exactly the same flux density for a source no matter how often the measurement is repeated, as long as the source itself is not variable. If we therefore assume that the source is non-variable, the variance of its measured flux density over several repetitions can be perceived as system variability caused by any combination of the factors referred to previously. The estimation of the mean variance of the system as a whole sets the lower limit on the detectable flux density. Considerable observing time has been spent in monitoring exactly this property of the system.

A number of sources, hereafter called the "repeaters", have been selected to be observed during every observing run. They have been chosen to satisfy two conditions:

1. to be intrinsically non-variable: since sources of steep spectrum are unlikely to be intrinsically variable, a number of sources with spectral index steeper than around -0.5 were chosen;
2. to uniformly cover the whole attempted flux density range: this is essential as we expect the system repeatability, as defined above (the rms in the repeatedly measured flux density of an intrinsically non-variable source), to be a function of the flux density of the target.

Roughly 10 sources per field were selected and repeatedly observed at the beginning of each observing run of the respective field. These sources are included in Table 4 along with their average flux densities at 1.4 GHz, 4.85 GHz and 10.45 GHz as well as their low- and high-frequency spectral indices. As shown there, their flux densities cover the range up to a few hundred mJy. In order to extend the flux density range, the pointing sources and the main calibrators were also used in the analysis (see Tables 3 and 4).

Table 4:   The sources used for pointing correction and the "repeaters".

Figure 5: The repeatability plots. The upper plot corresponds to 4.85 GHz and the lower one to 10.45 GHz. The parameters of the fitted curves are given in Table 5. In the lower panel (10.45 GHz) the red points correspond to sources that are known to exhibit variability characteristics and have been excluded during the fitting procedure (namely, 025515+0037 and 024104-0815 in order of flux density). (This figure is available in color in the electronic form.)

### 3.2 Repeatability plots and error budget

In Sect. 2.4 it was explained that, as a first estimate of the error in a measurement, the maximum of the error in the mean after formal error propagation, σ_mean, and of the part influenced by the atmospheric fluctuations and confusion, σ_diff, has been taken (i.e. max(σ_mean, σ_diff)). The former is a parameter of the detector and is not expected to vary significantly. The latter, on the other hand, can vary even for the same target source, as a function of the atmospheric conditions and of the geometry of the dual-beam system with respect to the target source and its NVSS environment (confusion).

A way to statistically quantify the uncertainty in a measurement, collectively including all possible factors of uncertainty, is to investigate how well the measurement of a target source, assumed intrinsically non-variable, repeats over several observations (see Sect. 3.1). For a given frequency, the measure of the system repeatability is the rms of the measured flux density of every repeater, σ_S, as a function of its average flux density, ⟨S⟩. The associated plots for 4.85 GHz and 10.45 GHz are shown in Fig. 5.

The rms flux density σ_S can be written as a function of the mean flux density S as the Pythagorean sum of (i) a flux density independent term, k₁, and (ii) a flux density dependent term, k₂S. In particular, it is described by:

σ_S(S) = √(k₁² + (k₂S)²)     (1)

Fitting this function to the 4.85 GHz and 10.45 GHz measurements has resulted in the parameters in Table 5. From those fits one can readily estimate the minimum detectable flux density at each frequency: setting the detection threshold at 5σ_S, the smallest detectable flux density is roughly 6 mJy. In Appendix A we compare the fitted value of k₁ with the error of an individual measurement and give a quasi-empirical interpretation of the measured parameters. From the discussion above and in Sect. 2.4, the most reasonable (and rather conservative) estimate of the error is:

σ = max(σ_mean, σ_diff, σ_S(S))     (2)

This definition is used to derive the final errors reported in Table 8.
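Since Eq. (1) becomes linear in S² after squaring, the fit can be sketched with ordinary least squares. The data below are synthetic, generated from assumed parameters; the real analysis fitted the repeater measurements of Fig. 5.

```python
import math

def fit_repeatability(fluxes, rms):
    """Fit sigma(S) = sqrt(k1**2 + (k2*S)**2).

    Squaring makes sigma**2 linear in S**2, so a plain least-squares
    line fit yields k2**2 (slope) and k1**2 (intercept).
    """
    xs = [s ** 2 for s in fluxes]
    ys = [r ** 2 for r in rms]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    k2sq = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)
    k1sq = my - k2sq * mx
    return math.sqrt(k1sq), math.sqrt(k2sq)

# Synthetic repeaters generated with k1 = 1.2 mJy and k2 = 0.02:
fluxes = [5.0, 20.0, 100.0, 300.0]
rms = [math.sqrt(1.2 ** 2 + (0.02 * s) ** 2) for s in fluxes]
k1, k2 = fit_repeatability(fluxes, rms)

# Smallest detectable flux: solve S = 5 * sigma_S(S) for S.
s_min = 5.0 * k1 / math.sqrt(1.0 - 25.0 * k2 ** 2)
```

With a flux-independent term of 1.2 mJy, the 5σ limit comes out close to the 6 mJy quoted above.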

Table 5:   The fitted parameters for the repeatability curves.

### 3.3 Confusion flavors

Depending on the configuration of the dual-beam system in the sky relative to the target source and on the instantaneous spatial distribution of NVSS field sources, a given scan can show different confusion "flavors". On this basis, there are three scan classes, discriminated by their confusion status at a given observing frequency:

1. Clean scans. Those are the measurements during which there were no contaminating sources within any of the beam positions (see top panel in Fig. 6). For these cases, no further action need be taken.
2. Clustered scans. These are scans on sources that are accompanied by neighboring sources within a radius smaller than the associated beam-width (hence the term cluster). These sources cannot be discriminated (see middle panel in Fig. 6) and reliable measurement is impossible; at a given frequency only an instrument of larger aperture (e.g. an interferometer) could resolve them. For this reason, these scans are absent from the discussions in the current paper.
3. Confused scans. This refers to the case of having any combination of field sources within any beam (see lower panel in Fig. 6). For these cases one must either repeat the measurement at a different parallactic angle, such that the confusing source no longer coincides with an "off" position, or reconstruct the target flux from exact knowledge of the flux of the "confusers" (see Appendix B).

It is important to underline that this effect refers to confusion from field NVSS sources alone and not to blends of unresolved background radio sources, which may also contribute significant flux.

Figure 6: Confusion flavors. From top to bottom: a clean, a clustered and a confused case. The notation is identical to that in Fig. 4. (This figure is available in color in the electronic form.)

In Table 6 we show the detected confusion flavors for each field and frequency. From this table it is readily noticeable that confusion becomes less important with increasing frequency. For instance, the fraction of sources that suffer neither from clustering nor from confusion effects increases from 59% at 4.85 GHz to 92% at 10.45 GHz. This is easily interpreted in terms of the smaller beam-width (67″ as opposed to 146″ at 4.85 GHz). In fact, considering that the majority of sources show steep radio spectra (see Sect. 4.3), significantly fewer sources are expected to suffer from confusion in practice, simply because their field sources are already too weak at 4.85 GHz and 10.45 GHz. It is important to state that in the following studies we consider only a sub-sample of 3434 sources, which are either clean or have been de-confused.

Table 6:   The frequencies of confusion flavors of the observed scans (measurements) for each field and observing frequency.

## 4 Results

### 4.1 Detection rates

The essence of our task is identifying the detection rates at each observing frequency. Assuming that the detectability of a target source is determined solely by its spectral behavior, the detection rates can reveal the subset of sources that exhibit flat or inverted spectra (i.e. with α ≳ -0.5), which can adversely affect CBI data. The current sub-section deals with this problem, that is, essentially counting the sources that have been detected at each frequency. For both frequencies the detection threshold has been set to 5σ, with σ being the error in the individual measurement as defined by Eq. (2).

A convenient way to describe the detection rates is the 2-bit binary detection descriptor of Table 7: a two-bit binary number in which the left-hand bit describes the detection at the low frequency and the right-hand one that at the high frequency, with "0" denoting a non-detection and "1" a detection. From all the sources in the sample we have selected only those that are either clean at 4.85 GHz or have been de-confused as described in Appendix B. Those sources must then also be clean at 10.45 GHz, where the beam-width is significantly smaller.
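The descriptor reduces to a simple threshold test per frequency; a sketch, with a hypothetical function name and invented numbers, using the 5σ threshold defined above:

```python
def detection_descriptor(s_low, sigma_low, s_high, sigma_high, threshold=5.0):
    """2-bit detection descriptor: left bit for the low frequency
    (4.85 GHz), right bit for the high frequency (10.45 GHz)."""
    low = "1" if s_low >= threshold * sigma_low else "0"
    high = "1" if s_high >= threshold * sigma_high else "0"
    return low + high

# A steep-spectrum source: solid at 4.85 GHz, faded below 5 sigma at 10.45 GHz.
desc = detection_descriptor(20.0, 1.2, 3.0, 1.3)
```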

Table 7:   The detection rates.

Table 8:   The Effelsberg measured flux densities along with the NVSS ones and the computed spectral indices (it is assumed that S ∝ ν^α).

### 4.2 The measured flux densities

In Table 8, available at the CDS, we summarise the acquired Effelsberg measurements along with the computed spectral indices for each source. For the construction of this table, only clean or de-confused cases have been considered. In that table, Col. 1 lists the name of the source, Cols. 2 and 3 give the NVSS flux density and its error, Cols. 4 and 5 give the flux density at 4.85 GHz and its error, and Cols. 6 and 7 list the flux density at 10.45 GHz and its error; the 4.85 GHz and 10.45 GHz flux densities have been measured with the 100 m radio telescope in Effelsberg. Cols. 8 and 9 give the low frequency spectral index between 1.4 GHz and 4.85 GHz and its error. Similarly, Cols. 10 and 11 give the high frequency spectral index between 4.85 GHz and 10.45 GHz and its error. Finally, Cols. 12 and 13 give the least-squares fit spectral index from the 1.4 GHz (NVSS), 4.85 GHz and 10.45 GHz data points, and its error.

As mentioned earlier, a measurement is regarded as a detection only if it exceeds the 5σ threshold. Whenever this is not the case, an upper limit is quoted instead, with σ being the error computed as described in Sect. 2.4. In such cases, the associated spectral indices are not quoted in the table.

According to our discussion in Sect. 3.2, the errors quoted in Table 8 cannot be smaller than 1.2 mJy at 4.85 GHz or 1.3 mJy at 10.45 GHz. Yet there exist entries whose quoted error is below these limiting values. The reason is that, in the rare cases in which more than one measurement of the same source is available, a weighted average is adopted as the flux density of the source; the associated error may then appear smaller than 1.2 or 1.3 mJy.

The table includes all the measurements that are characterized as clean or that have been de-confused; clustered sources and cases suffering from confusion are not included. It must be noted that for CMB experiments these cases may still provide useful information: such experiments are typically characterized by lower angular resolution, and hence clustered sources can be treated as individual objects. All in all, the sources included amount to 3434 entries, about 57% of the whole sample.

### 4.3 Spectral indices

The motivation for the current program has been, as discussed earlier, the estimation of the flux density expected at higher frequency bands, extrapolated on the basis of the three-point spectral index. Here we summarize the findings of the spectral index study. Hereafter, it is assumed that S ∝ ν^α.

To begin with, Fig. 7 shows the distributions of the spectral indices in Table 8: the low frequency index, the high frequency index and the least-squares fit three-point index. All three distributions are constructed only with data of signal-to-noise ratio of at least 5. For computing the three-point spectral index, an implementation of the nonlinear least-squares Marquardt-Levenberg algorithm (Marquardt 1963) was used, imposing natural weighting (i.e. weighting each data point by the inverse square of its measurement error).
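The three-point fit can be sketched as a weighted power-law fit; `scipy.optimize.curve_fit` with `method="lm"` uses the same Marquardt-Levenberg algorithm. The model form S(ν) = A ν^α and the 1/σ² weighting follow the conventions stated above, but the code itself is illustrative, not the authors' implementation:

```python
import numpy as np
from scipy.optimize import curve_fit

def three_point_alpha(freqs_ghz, fluxes, errors):
    """Fit S(nu) = A * nu**alpha to the 1.4, 4.85 and 10.45 GHz points
    with the Levenberg-Marquardt algorithm, weighting each point by its
    measurement error; returns the spectral index and its error."""
    model = lambda nu, A, alpha: A * nu ** alpha
    popt, pcov = curve_fit(model, np.asarray(freqs_ghz),
                           np.asarray(fluxes), p0=(fluxes[0], -0.7),
                           sigma=errors, absolute_sigma=True, method="lm")
    return popt[1], np.sqrt(pcov[1, 1])

# A synthetic source with S = 100 * nu**-0.7 (mJy, nu in GHz) should
# recover alpha close to -0.7.
a, ae = three_point_alpha([1.4, 4.85, 10.45],
                          [100 * nu ** -0.7 for nu in (1.4, 4.85, 10.45)],
                          [1.0, 1.0, 1.0])
```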

The median three-point spectral index is around -0.71 whereas the average value is roughly -0.59, indicating the skewness of the distribution. The two two-point indices show medians of roughly -0.69 and -0.75. Of the 402 sources detected at both frequencies, 136 (34%) appear with a flat spectral index (α > -0.5), which implies that the majority of the sources have a steep spectrum (α ≤ -0.5). Moreover, this nicely explains the large percentage of sources with a 2-bit binary detection descriptor of 00 or 10. What is important about this population is that these are the sources that need not be "vetoed out" during the CMB data analysis, since they are not bright enough to contribute detectable flux at the frequencies near 30 GHz at which experiments like CBI operate.

 Figure 7: The normalized spectral index distributions. The grey histogram shows the distribution of the three-point least-squares fit spectral index for sources that have been detected at both frequencies with a signal-to-noise ratio of at least 5 (402 sources, Table 7). The same sub-sample is used for the blue line, which denotes the distribution of the "high" frequency spectral index. Finally, the red line shows the distribution of the "low" frequency index for the 1104 sources detected at 4.85 GHz (see Table 7). The mean values of the three-point, high and low frequency indices are -0.59, -0.54 and -0.69, respectively. (This figure is available in color in the electronic form.)

All the measurements have been conducted quasi-simultaneously, with a coherence time varying between hours and days. It is therefore important to consider how the lack of strict simultaneity influences the results. Provided that most of the sources follow a steep spectrum trend, and steep spectrum sources are not expected to vary significantly, it is reasonable to assume that the effect is statistically insignificant.

## 5 Conclusions

1.
The applied observing technique has been chosen to be efficient in terms of time. Its combination with beam-switching allows a remarkably efficient removal of linear atmospheric effects. However, it suffers, as expected, from "analytic" confusion (caused by sources with positions known from other surveys). Nevertheless, the confusion effect decreases rapidly with frequency between 4.85 GHz and 10.45 GHz thanks to the increase in the telescope's angular resolution. Accounting for the shape of the radio spectrum would imply a further decrease in the number of sources that can actually cause confusion.
2.
We show that for both the 4.85 GHz and the 10.45 GHz observations the dominant factor in the smallest reliably detectable flux density has been the tropospheric turbulence. In Appendix A we show that the tropospheric term is of the order of 0.9 mJy and 1.3 mJy for the 4.85 GHz and the 10.45 GHz observations, respectively. On the other hand, while the second most important factor at the low frequency is the confusion caused by blends of unresolved sources (see Sect. 3.2 and Table 5), at the higher frequency thermal "receiver" noise dominates. The confusion in the latter case drops dramatically, by an order of magnitude to 0.08 mJy, owing to the smaller beamwidth and the presumed spectral behaviour of radio sources. From this discussion it is clear that the major limiting factor has been the troposphere itself, setting a physical limit of the order of 0.9 mJy and 1.3 mJy on the least detectable flux density at 4.85 GHz and 10.45 GHz, respectively.
3.
The agreement between the interpretation of the errors described in Appendix A and those observed from the study of the "repeaters" is noteworthy.
4.
In Appendix B an algorithm for achieving "de-confusion" is presented, that is, for reconstructing a source antenna temperature on the basis of some elementary assumptions. The algorithm has been successfully used in 6 of the cases in the current study and can easily be generalised to projects demanding automation.
5.
In Appendix C we present the algorithm responsible for the "quality check" of every observation (a "scan"). Incorporating a number of tests, it can be used for automatically detecting bad quality data and can be generalised for use in a "blind" mode.

Acknowledgements
The authors would like to thank the anonymous referee for the comments that significantly improved the manuscript. Furthermore, we thank the internal referee, Dr D. Graham, for his comments and suggestions. We acknowledge Dr I. Agudo, Mrs S. Bernhart, Dr V. M. C. Impellizzeri, Dr R. Reeves and all the operators at the 100 m telescope for their help with the observations. The author was mostly financed by EC funding under the contract HPRN-CT-2002-00321 (ENIGMA) and completed this work as a member of the International Max Planck Research School (IMPRS) for Radio and Infrared Astronomy. All the results presented here are based on observations with the 100 m telescope of the MPIfR (Max-Planck-Institut für Radioastronomie).

## Appendix A: Expected versus observed uncertainties

Here we investigate how the fitted values of the constant term and the flux-proportional factor m in Eq. (1) compare with the semi-empirically expected ones. To begin with, the constant term in Eq. (1) can be decomposed into three constituents as follows:

 σ_0 = ( σ_thermal^2 + σ_confusion^2 + σ_atmosphere^2 )^(1/2)    (A.1)

σ_thermal: the thermal noise, computable from the radiometer formula; σ_confusion: the confusion error, known semi-empirically (Condon et al. 1989); σ_atmosphere: the variable atmospheric emission error, computable from the atmospheric opacity change.

Separately for each frequency these quantities give:

| Frequency (GHz) | σ_thermal (mJy) | σ_confusion (mJy) | σ_atmosphere (mJy) | σ_0 (mJy) | Fitted (mJy) |
|---|---|---|---|---|---|
| 4.85 | 0.16 | 0.80 | 0.92 | 1.23 | 1.2 |
| 10.45 | 0.22 | 0.08 | 1.30 | 1.32 | 1.3 |

being satisfactorily close to the expected values. Similarly, the flux-proportional part of Eq. (1) can be understood as a multi-factor effect. Specifically, it can be written that:

 m' = ( m_pointing^2 + m_diode^2 + m_atmosphere^2 )^(1/2)    (A.2)

m_pointing: the pointing offset error, easily calculable on the basis of a Gaussian beam pattern and the measured average pointing offsets; m_diode: the instability of the noise diode, estimated from intra-day variability experiments (Kraus, priv. comm.); m_atmosphere: the variable atmospheric absorption error, estimated from Water Vapor Radiometer data (Roy 2006).

The expected factor m', as compared to the fitted one m, is then:

| Frequency (GHz) | m_pointing (%) | m_diode (%) | m_atmosphere (%) | m' (%) | Fitted m (%) |
|---|---|---|---|---|---|
| 4.85 | 0.21 | 1.3 | 0.004 | 1.32 | 1.3 |
| 10.45 | 1.01 | 1.3 | 0.005 | 1.64 | 1.6 |

The term determining the detection threshold is clearly the atmospheric term in Eq. (A.1). From the above discussion it is clear that for the low frequency observations both the atmospheric and the confusion terms are significant, whereas for the higher frequency the confusion term decreases (owing to the smaller beam size) and the dominant remaining factor is the atmosphere itself.
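The quadrature combinations of Eqs. (A.1) and (A.2) can be checked directly against the tabulated values; a minimal sketch (the descriptive variable names are ours):

```python
import math

def quad(*terms):
    """Combine independent error terms in quadrature."""
    return math.sqrt(sum(t * t for t in terms))

# Constant terms of Eq. (A.1), in mJy.
sigma0_485  = quad(0.16, 0.80, 0.92)   # ~1.23, fitted 1.2
sigma0_1045 = quad(0.22, 0.08, 1.30)   # ~1.32, fitted 1.3

# Flux-proportional terms of Eq. (A.2), in percent.
m_485  = quad(0.21, 1.3, 0.004)        # ~1.32, fitted 1.3
m_1045 = quad(1.01, 1.3, 0.005)        # ~1.64, fitted 1.6
```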

## Appendix B: Resolving the confusion

Here we present a method to partially resolve the confusion caused by known field sources. The goal is to reconstruct the flux density of a target source, whenever possible, from the known parameters of the sources causing the confusion. These parameters are known from the observations described here, since the confusing sources belong to the same sample and hence have themselves been observed. Note that in the majority of the cases (Table 6) the de-confusion is not possible, either because the confusing sources are not detected or because they are confused themselves. The following analysis is done entirely in the (RA, Dec) space.

Assume a certain orientation, for instance of the 4.85 GHz dual beam system, with respect to the target source, and a distribution of confusing sources as shown in Fig. B.1, where the target is represented by the yellow star symbol. In that illustration there are three distinguishable populations of sources. The "on" population consists of the sources S_i that lie within a radius of 1 beam-width (FWHM) about the target source, each contributing antenna temperature T_i. Cumulatively, this group will contribute a brightness temperature:

 T_on = Σ_i T_i    (B.1)

The "SUB-1,4" population consists of the sources S_i' located within a circle of 1 FWHM of the position occupied by the main horn during the 1st and 4th sub-scans. This population will contribute a brightness temperature:

 T_1-4 = Σ_i T_i'    (B.2)

The "SUB-2,3" population consists of the sources occupying the position of the reference horn during sub-scans 2 and 3. Their contribution will then be:

 T_2-3 = Σ_i T_i''    (B.3)

In Eqs. (B.1)-(B.3), T_i, T_i' or T_i'' is the brightness temperature contribution of a source at the frequency of interest after accounting for its distance from the center of the beam. Hence, a source of intrinsic brightness temperature T_0 that lies x_0 arcsec from the center of the beam of the 4.85 GHz system will contribute:

 T = T_0 exp[ -4 ln 2 (x_0 / FWHM)^2 ]    (B.4)

From the above it is clear that T_on, T_1-4 and T_2-3 will be added to the system temperature, altering the result of the differentiation method.
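The Gaussian attenuation of Eq. (B.4) and the population sums of Eqs. (B.1)-(B.3) can be sketched as follows, assuming the circular Gaussian beam adopted in the text; the function names and arcsec units are illustrative:

```python
import math

def beam_attenuation(offset_arcsec, fwhm_arcsec):
    """Gain of a circular Gaussian beam at a given angular offset
    from its centre (the assumption behind Eq. B.4)."""
    return math.exp(-4.0 * math.log(2.0)
                    * (offset_arcsec / fwhm_arcsec) ** 2)

def population_temperature(temps, offsets, fwhm_arcsec):
    """Cumulative contribution of a population of confusing sources:
    each intrinsic brightness temperature is attenuated by the beam
    gain at the source's offset, then summed (Eqs. B.1-B.3)."""
    return sum(T * beam_attenuation(x, fwhm_arcsec)
               for T, x in zip(temps, offsets))
```

At an offset of half the FWHM the gain drops to exactly one half, as expected from the definition of the FWHM.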

 Figure B.1: The horn arrangement as a function of time during the execution of an observation. In blue is the horn pointing off-source at each moment and in red the one on-source. Within each horn there might be a population of confusing sources, the flux of which is represented by T_i, T_i' or T_i'', with i = 1, 2, 3, 4... The yellow star S is the target source. The two blue circles on the left are misplaced because the sky rotates within a scan. (This figure is available in color in the electronic form.)

Preserving the notation used in Sect. 2.4, it can be shown that the "real" source brightness temperature, T_s, can be recovered from the observed temperature T_obs and the population contributions by:

 T_obs = T_s + T_on - (T_1-4 + T_2-3)/2    (B.5)
 T_s = T_obs - T_on + (T_1-4 + T_2-3)/2    (B.6)

Because the terms T_s and T_on in practice cannot be resolved (cluster cases, angular resolution limitation), it is meaningless to refer to them separately. This is why we refer to the cluster cases separately throughout the text and why we do not include them in Table 8. For the sake of the following discussion we refer to the two terms cumulatively.

This simple method has some weaknesses:

1.
"Clustered confusers": In the above discussion it is presumed that the flux densities of the members of a population (e.g. S_i') are known from the measurements in which they were themselves the target (all the sources we discuss come from the same sample, after all, and hence have been targeted). This is true only if the separation of any pair of sources of the same population is larger than FWHM/2. Hence, the above method has been applied only in those cases.
2.
"Missing confusers": The confusing sources are searched for among the NVSS ones. Hence, sources that are not detected by the NVSS survey but may become detectable at higher frequencies are neglected.
3.
Upper limits: As seen in Table 8, the upper limits in the flux density are often significant. However, they are not accounted for by the de-confusion algorithm.
4.
No corrections applied: The corrections discussed in Sect. 2.4.1 are not applied to the confusing sources during the resolving algorithm.
5.
Inaccurate beam positions and non-Gaussian beams: In all the above it has been assumed that the positions of the beams are precisely known and that there are no pointing offsets. Furthermore, the beam pattern is assumed to be described by a simple circular Gaussian.

## Appendix C: The data reduction "pipeline"

The data volume acquired during the course of the current project has been reduced in a pipeline manner. Effort has been put into developing software, beyond the standard data reduction packages used in Effelsberg, that could assist the observer in reducing the data as automatically as possible at all stages. Here we give a rough and very brief description of some of the steps followed; throughout the pipeline, every system parameter is monitored and recorded.

#### The front-end:

The front-end of the pipeline is the point at which "counts" (power) from the telescope are piped into the data reduction code. The input consists of four power data channels, two (LCP and RCP) for each horn, along with the signal from a noise diode of known temperature for each one of them, i.e. eight channels in total.

#### RFI mitigation:

Before any operation is applied to the signal, Radio Frequency Interference (RFI) mitigation takes place. An example is shown in Fig. C.1, where black represents the signal before and red the signal after RFI mitigation. In the top panel, all four channels of the sky signal are shown in terms of "counts". A short-lived spike of extremely intense radiation, characteristic of RFI, is clearly seen. A routine iteratively measures the rms in that sub-scan and removes the points above a pre-set threshold; the resulting signal is shown in red. The same procedure is followed for the noise diode signal. Finally, the bottom panel shows the final detection pattern free of RFI.
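The iterative excision step can be sketched as a sigma-clipping loop; the clipping threshold and iteration cap below are illustrative assumptions, since the pre-set values used by the actual pipeline are not stated:

```python
import numpy as np

def clip_rfi(signal, threshold=5.0, max_iter=10):
    """Iteratively measure the rms of a sub-scan, drop points deviating
    from the mean by more than `threshold` times the rms, and repeat
    until nothing more is removed. Returns a boolean keep-mask."""
    s = np.asarray(signal, dtype=float)
    keep = np.ones(s.size, dtype=bool)
    for _ in range(max_iter):
        mean, rms = s[keep].mean(), s[keep].std()
        new_keep = keep & (np.abs(s - mean) <= threshold * rms)
        if new_keep.sum() == keep.sum():
            break
        keep = new_keep
    return keep

# A single strong spike on a flat noisy baseline gets flagged.
rng = np.random.default_rng(0)
sig = rng.normal(0.0, 1.0, 200)
sig[100] = 50.0
mask = clip_rfi(sig)
```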

 Figure C.1: Demonstration of the efficiency of the RFI mitigation algorithm. Top panel: the sky signal before (black) and after (red) RFI mitigation. Lower panel: the final detection profile free of RFI. (This figure is available in color in the electronic form.)

#### The signal pre-calibration:

After the signals have been "cleaned" comes the stage of comparing each data point with the noise diode signal, both measured in counts. The demand for flux densities as low as theoretically predicted for the 100 m telescope requires a noise diode signal that ideally should be constant, with an rms of no more than a fraction of a percent. However, frequently occurring cross-talk between different channels, or other effects, may result in intra-scan instabilities (as shown in Fig. C.2) that can distort the detection pattern. The way around this problem has been to normalize ("calibrate") the data to the diode signal averaged over the whole scan; the default point-by-point calibration might, on the other hand, significantly distort the detection pattern.
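The scan-average normalization can be sketched as follows; the function signature and the diode temperature parameter `t_cal_kelvin` are illustrative assumptions, not the pipeline's actual interface:

```python
import numpy as np

def precalibrate(sky_counts, diode_counts, t_cal_kelvin):
    """Normalize raw counts to the scan-averaged noise diode signal,
    converting to antenna temperature. Using the scan average rather
    than a point-by-point division avoids imprinting intra-scan diode
    instabilities onto the detection pattern."""
    cal = np.mean(diode_counts)  # average diode deflection (counts)
    return np.asarray(sky_counts, dtype=float) / cal * t_cal_kelvin

# With a 1.5 K diode producing 1 count of deflection, 2 counts of sky
# signal correspond to 3 K of antenna temperature.
temps = precalibrate([2.0, 4.0], [1.0, 1.0], 1.5)
```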

 Figure C.2: Characteristic cases of intra-scan instabilities of the noise diode signal. Each column corresponds to a different scan and each row to a different channel. The signal is in terms of counts.

#### System temperature measurement:

Having the data pre-calibrated (i.e. expressed in antenna temperature) allows system temperature measurements. That, in turn, allows measuring the atmospheric opacity for each particular observing session from the system temperature of each scan. Later in the pipeline, this information is used for correcting for the opacity.

#### The corrections:

Following the previous stage is the subtraction of the signals of the two feeds and the calculation of the antenna temperature. Afterwards, the opacity, gain curve and sensitivity corrections are applied as described in Sect. 2.4.1.

#### The quality check:

The conceptual end of the pipeline is the quality check, to which every single scan has been subjected. The term "quality check" wraps up a number of tests imposed on each scan. Some of them are:
1.
The system temperature of each channel is compared to the empirically expected one. Flags are raised at excesses of 10, 20 and 30%. This test serves as an excellent tracer of weather effects, system defects etc.
2.
A second test examines the rms and the peak-to-peak variation of the data in each sub-scan, for each channel separately as well as for the final profile. An increase can be caused by extreme non-linear atmospheric effects as well as by linear slopes present in the data. The latter is most often the result of increasing atmospheric opacity as the source is tracked at low elevations.
3.
In order to trace cases that show a clear linear drift as a result of increasing opacity, each scan is sliced into four segments and a straight line is fitted to each segment consecutively. A flag is raised when the slope of a segment is above some preset value.
4.
It is examined whether the final measurement profile is inverted and, if so, whether the absolute source flux density satisfies the detection threshold set. In cases of confusion, a source in the off position may result in an inverted profile.
5.
It is checked whether the sensitivity factor (K/Jy) applied to a scan agrees with the empirically expected value within a tolerance of 5%.
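The segment-wise drift test (item 3 above) can be sketched as follows; the slope threshold is an illustrative assumption, since the preset value is not stated:

```python
import numpy as np

def drift_flags(scan, n_segments=4, max_slope=0.1):
    """Slice a scan into four segments, fit a straight line to each,
    and flag segments whose slope exceeds a preset value (illustrative
    threshold in counts per sample)."""
    flags = []
    for seg in np.array_split(np.asarray(scan, dtype=float), n_segments):
        slope = np.polyfit(np.arange(seg.size), seg, 1)[0]
        flags.append(abs(slope) > max_slope)
    return flags

# A scan with a steep linear ramp raises a flag in every segment,
# while a flat scan raises none.
ramp_flags = drift_flags(0.5 * np.arange(40))
flat_flags = drift_flags(np.zeros(40))
```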

## Footnotes

... release
Full Table 8 is only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsweb.u-strasbg.fr/cgi-bin/qcat?J/A+A/501/801

## All Tables

Table 1:   The coordinates of the points defining the targeted fields.

Table 3:   The flux densities and spectral indices of the standard calibrators.

Table 4:   The sources used for pointing correction and the "repeaters".

Table 5:   The fitted parameters for the repeatability curves.

Table 6:   The frequencies of confusion flavors of the observed scans (measurements) for each field and observing frequency.

Table 7:   The detection rates.

Table 8:   The Effelsberg measured flux densities along with the NVSS ones and the computed spectral indices (it is assumed that S ∝ ν^α).

## All Figures

 Figure 1: The NVSS 1.4 GHz flux density distribution of our sample. Roughly 80% of the sources are below 20 mJy. This plot clearly demonstrates the choice of "radio quiet" sky regions.

 Figure 2: Demonstration of the efficiency of the observing technique (upper panel) and a prototype detection profile (lower panel) in the case of the 10.45 GHz receiver. Each receiver has two feeds, each of which delivers two channels (LCP and RCP), giving a total of four channels, shown in the four lower panels. The green colour represents the reference horn signal and the blue the main horn signal. The left-hand side panels are the LCP and the right-hand side panels the RCP. The plot at the top of each panel shows the final profile after subtracting the signals of the two feeds and averaging over LCP and RCP. If MR is the RCP of the main horn and ML the LCP of the same horn, while RR and RL are those of the reference horn, the final signal is given by [(MR - RR) + (ML - RL)]/2. It is noteworthy that despite the complete absence of even a hint of a source in the individual channels (upper panel), after the subtraction a clear 22-mK signal (roughly 17 mJy) can be seen. (This figure is available in color in the electronic form.)

 Figure 3: The distribution of the pointing offsets for the case of the 10.45 GHz receiver. The dashed line represents the offsets in the elevation direction while the solid one gives those in azimuth. The mean offset is around 3 arcsec, corresponding to roughly 1% power loss.

 Figure 4: Confusion examples. The left-hand column shows the detection profiles whereas the right-hand one shows the NVSS environment of each target source. There, assuming a Gaussian beam pattern, the solid line marks the 50% beam gain level while the dashed one denotes the 10% level. The red circles correspond to the "on" positions and the blue ones to the "off" positions. The target sources are shown in red and the NVSS field sources in black. The left-hand side plots show the result of the differentiation with respect to the strength of the "confusers" and their position relative to the centre of the beam. (This figure is available in color in the electronic form.)

 Figure 5: The repeatability plots. The upper plot corresponds to 4.85 GHz and the lower one to 10.45 GHz. The parameters of the fitted curves are given in Table 5. In the lower panel (10.45 GHz) the red points correspond to sources that are known to exhibit variability and have been excluded from the fitting procedure (namely 025515+0037 and 024104-0815, in order of flux density). (This figure is available in color in the electronic form.)

 Figure 6: Confusion flavors. From top to bottom: a clean, a cluster and a confusion case. The notation is identical to that in Fig. 4. (This figure is available in color in the electronic form.)

 Figure 7: The normalized spectral index distributions. The grey histogram shows the distribution of the three-point least-squares fit spectral index for sources that have been detected at both frequencies with a signal-to-noise ratio of at least 5 (402 sources, Table 7). The same sub-sample is used for the blue line, which denotes the distribution of the "high" frequency spectral index. Finally, the red line shows the distribution of the "low" frequency index for the 1104 sources detected at 4.85 GHz (see Table 7). The mean values of the three-point, high and low frequency indices are -0.59, -0.54 and -0.69, respectively. (This figure is available in color in the electronic form.)

 Figure B.1: The horn arrangement as a function of time during the execution of an observation. In blue is the horn pointing off-source at each moment and in red the one on-source. Within each horn there might be a population of confusing sources, the flux of which is represented by T_i, T_i' or T_i'', with i = 1, 2, 3, 4... The yellow star S is the target source. The two blue circles on the left are misplaced because the sky rotates within a scan. (This figure is available in color in the electronic form.)

 Figure C.1: Demonstration of the efficiency of the RFI mitigation algorithm. Top panel: the sky signal before (black) and after (red) RFI mitigation. Lower panel: the final detection profile free of RFI. (This figure is available in color in the electronic form.)

 Figure C.2: Characteristic cases of intra-scan instabilities of the noise diode signal. Each column corresponds to a different scan and each row to a different channel. The signal is in terms of counts.