Up: New algorithms for reducing spectra


   
4 Spatial and spectral decomposition of an order


  \begin{figure}
\par\includegraphics[width=6.3cm,clip]{h3260f7.eps}
\end{figure} Figure 7: Flat fielding of stellar spectra. The top panel shows a normalized flat field image. Vertical and diagonal bands of reduced sensitivity are likely caused by CCD internal structure. The middle panel shows a normalized stellar spectrum (HD 217522), for illustration purposes only, since this is not part of the normal reduction procedure. The bottom panel shows a flat fielded stellar spectrum image.

Perhaps the most important recent improvement in echelle reduction procedures is the introduction of spectral order decomposition. Recall that we define the spatial profile to be the relative illumination profile of an order perpendicular to echelle dispersion. For an ideal optical system that is perfectly aligned, the spatial profile is a monochromatic image of the illuminated entrance slit, aligned along detector columns (after re-orientation, if necessary). In this case a spectral order S can be represented as the product of a spectrum f and a spatial profile g:

 \begin{displaymath}
S(x,y) = f(x) \times g(y - y_0(x))
\end{displaymath} (3)

where the function $y_0(x)$ describes the order location. In practice, the ideal image is also convolved with the PSF and sampled with detector pixels of significant size (Fig. 3). Although detector pixels set a natural spatial scale, g must be sampled on a finer grid to reproduce the sampling "waves" visible in Fig. 3.

The discrete version of Eq. (3) includes integration of the spatial profile over each detector pixel:

 \begin{displaymath}
S_{x,y} \approx f_x\cdot \sum_j \omega_{x,y}^j g_j
\end{displaymath} (4)

where $\omega_{x,y}^j$ are the weights, which differ from zero only for indices j that map into pixel x,y. The structure of the $\omega$ for each column deserves a closer look. Define an integer oversampling factor, M, to be the detector pixel size divided by the grid size adopted for g. Then $\omega_{x,y}^j$ for a given x,y is:
 \begin{displaymath}
\omega_{x,y}^j = \left\{
\begin{array}{ll}
0 & j < j_0 \\
\textbf{FRACT}\left(y_0(x)\,M\right) & j = j_0 \\
1 & j_0 < j < j_0 + M \\
1 - \textbf{FRACT}\left(y_0(x)\,M\right) & j = j_0 + M \\
0 & j > j_0 + M
\end{array}\right.
\end{displaymath} (5)

where $j_0$ is the index of the first subpixel that falls (perhaps only partially) in detector pixel x,y, and FRACT is the fraction of that first subpixel that is contained in the detector pixel. This structure is exactly the same for all y at a given x! The relationship between the $j_0$ and $j_0+M$ elements of $\omega$ reflects the periodic nature of $\omega$, due to the use of an integer oversampling factor.
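As an illustration of the pattern in Eq. (5), the weights for one detector column can be sketched in a few lines of Python. This is not REDUCE code; the function name, the sign convention for FRACT, and the treatment of edge pixels are my own assumptions:

```python
import numpy as np

def column_weights(y0x, M, ny):
    """Weights w[y, j] mapping an oversampled spatial profile g
    (ny * M subpixels) onto the ny detector pixels of one column,
    following the piecewise pattern of Eq. (5).
    y0x is the sub-pixel order offset, assumed in [0, 1)."""
    n = ny * M
    w = np.zeros((ny, n))
    j0 = int(np.floor(y0x * M))
    # FRACT: fraction of the first subpixel inside the detector pixel
    # (the sign convention here is an assumption)
    frac = 1.0 - (y0x * M) % 1.0
    for yy in range(ny):
        lo = j0 + yy * M                 # same pattern for every y, shifted by M
        w[yy, lo] = frac                 # leading, partially covered subpixel
        for j in range(lo + 1, min(lo + M, n)):
            w[yy, j] = 1.0               # fully covered interior subpixels
        if lo + M < n:
            w[yy, lo + M] = 1.0 - frac   # trailing partial subpixel
    return w

w = column_weights(y0x=0.3, M=4, ny=6)
```

For interior detector pixels the weights of each row sum to M, and each row is the previous one shifted by M subpixels, which is the periodicity noted above.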

During spectroscopic reduction, our goal is to decompose each observed order $S_{x,y}$ into the spectrum $f_x$ and a spatial profile $g_j$. We can accomplish this decomposition by solving the inverse problem:

 \begin{displaymath}
{\cal F}\equiv\sum_{x,y} \left[f_x \sum_j \omega_{x,y}^j g_j -
S_{x,y}\right]^2 = \mbox{minimum}.
\end{displaymath} (6)

The main advantage of this approach is that the number of data points exceeds the number of unknowns for all practical problems. To avoid an obvious scaling ambiguity for f and g, we require g to be area normalized.

Selection of the oversampling factor M needs special attention. One extreme case occurs when orders are perfectly aligned with detector rows, in which case no oversampling (M=1) is needed. As the derivative of $y_0(x)$ deviates from zero, M should increase. In order to decouple the value of M from the varying order inclination (thereby keeping the problem tractable), we add a regularization term to Eq. (6):

 \begin{displaymath}
{\cal F} + \Lambda \sum_j (g_{j+1} - g_j)^2 = \mbox{minimum}.
\end{displaymath} (7)

The second term smooths the spatial profile, restricting point-to-point variations, even if the order geometry does not constrain every point in g. To solve the restricted problem (7), we set the derivatives of expression (7) with respect to g and f equal to zero. This produces two systems of linear equations:
 
$\displaystyle \sum_j\left({\bf A}_{jk} + \Lambda\cdot{\bf B}_{jk}\right)g_j
= {\bf R}_k$     (8)


$\displaystyle f_x = {\bf C}_x / {\bf D}_x$     (9)

with matrices given by:
 
$\displaystyle {\bf A}_{jk} = \sum_{x,y} f_x^2 \omega_{x,y}^j\omega_{x,y}^k$     (10)


$\displaystyle {\bf R}_k = \sum_{x,y} S_{x,y} f_x\omega_{x,y}^k$     (11)


$\displaystyle {\bf C}_x = \sum_y S_{x,y} \sum_j g_j \omega_{x,y}^j$     (12)


$\displaystyle {\bf D}_x = \sum_y \left[ \sum_j g_j \omega_{x,y}^j\right]^2$     (13)

where ${\bf B}_{jk}$ is a tri-diagonal matrix with -1 on both sub-diagonals and 2 on the main diagonal, except for the upper-left and the bottom-right corners, which contain 1. In the matrix equations above, the profile index j runs over $N=N_y\cdot M$ subpixels, where $N_y$ is the number of detector pixels spanned by the spatial profile.
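The penalty $\sum_j (g_{j+1}-g_j)^2$ equals $g^{\rm T}{\bf D}^{\rm T}{\bf D}\,g$ for the first-difference operator ${\bf D}$, which is why ${\bf B}={\bf D}^{\rm T}{\bf D}$ has exactly this tri-diagonal shape. A quick numpy sketch (illustrative only, not REDUCE code) confirms the structure:

```python
import numpy as np

n = 6
# First-difference operator: (D g)_i = g_{i+1} - g_i
D = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)
B = D.T @ D   # tri-diagonal matrix B of Eq. (8)
```

The resulting B is symmetric, with 2 on the diagonal (1 in the two corners) and -1 on both sub-diagonals.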

The very simple structure of $\omega_{x,y}^j$ makes it easy to construct the matrix $\sum_y\omega_{x,y}^j\omega_{x,y}^k$ for every x. This block-diagonal matrix is the key to efficient computations. Each square block has M+1 elements on a side and is proportional to a product of two weights, e.g. $\omega_{x,0}^j\cdot\omega_{x,0}^k$. Therefore $\omega$ does not have to be computed for every value of x: only the first M+1 elements are needed! The regular shape of $\omega_{x,y}^j\cdot\omega_{x,y}^k$ greatly simplifies the evaluation of the matrix ${\bf A}_{jk}$ in Eq. (8), making it possible to use fast and efficient numerical methods to solve for the spatial profile.

Iteration is used to solve Eqs. (8)-(13). Beginning with an initial guess for the spectrum (e.g. $f_x=\sum_y
S_{x,y}$), we solve Eq. (8) to obtain the spatial profile, normalize the total area, and then solve Eq. (9) to obtain an improved estimate of the spectrum. Iteration ceases when the maximum fractional change in the spectrum becomes small.
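For the simplest geometry (an order perfectly aligned with detector rows, so M=1 and the weights reduce to the identity) this iteration can be sketched in a few lines of Python. Everything below is an illustrative assumption, not REDUCE code: the array sizes, noise level, regularization strength, and the synthetic f and g are my own choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic order aligned with detector rows (M = 1), so the weights
# reduce to the identity and S_{x,y} = f_x * g_y + noise.
nx, ny = 200, 12
f_true = 1.0 + 0.5 * np.sin(np.linspace(0.0, 6.0, nx))      # "spectrum"
y = np.arange(ny)
g_true = np.exp(-0.5 * ((y - 5.5) / 2.0) ** 2)
g_true /= g_true.sum()                                       # area-normalized
S = np.outer(f_true, g_true) + rng.normal(0.0, 1e-4, (nx, ny))

# Tri-diagonal regularization matrix B of Eq. (8)
B = 2.0 * np.eye(ny) - np.eye(ny, k=1) - np.eye(ny, k=-1)
B[0, 0] = B[-1, -1] = 1.0
lam = 1e-3                               # Lambda; value chosen arbitrarily

f = S.sum(axis=1)                        # initial guess f_x = sum_y S_{x,y}
for _ in range(20):
    # Eq. (8): for M = 1, A_{jk} = (sum_x f_x^2) * delta_{jk}
    A = np.sum(f**2) * np.eye(ny)
    R = S.T @ f                          # Eq. (11)
    g = np.linalg.solve(A + lam * B, R)
    g /= g.sum()                         # enforce area normalization
    f_new = (S @ g) / np.dot(g, g)       # Eq. (9): f_x = C_x / D_x
    if np.max(np.abs(f_new - f) / np.abs(f_new)) < 1e-8:
        f = f_new
        break
    f = f_new
```

With the area normalization fixing the scale, the recovered f and g converge to the input spectrum and profile to within the noise level after a few iterations.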

The decomposition procedure produces the best possible spectrum and slit function, provided that the order location $y_0(x)$ is accurately determined and large-scale structure in the image is dominated by the spectrum. Local defects like cosmic rays, bad pixels, and noise have minimal impact on the results! This can be illustrated by comparing an observed order with a model order constructed from the derived f and g, convolved with detector pixels according to Eq. (4) (Figs. 4 and 5). The only artifacts that adversely affect decomposition are rows of bad pixels with length and orientation similar to spectral orders. Fortunately, such defects can usually be identified a priori and ignored during decomposition, leaving no trace of the bad row (Fig. 4).

Generalization of the decomposition procedure to handle tilted or curved slit images is straightforward, if the image geometry is known. The only difference is that the mapping between f and detector pixels is a function of both x and y, making the pattern of $\omega_{x,y}^j$ slightly more complicated. Nevertheless, the structure of ${\bf A}_{jk}$ remains block-diagonal. If the PSF has a 2D structure, for example when an image slicer is used, the order can still be decomposed using a 2D representation of the PSF in place of the 1D spatial profile. For UVES, deviations from a 1D representation of g cause errors smaller than the typical readout noise.

REDUCE uses the decomposition routine to (a) normalize flat field images and derive order shape functions, (b) locate the inter-order gaps and estimate scattered light, and (c) extract spectral orders.



Copyright ESO 2002