A&A 381, 848-861 (2002)
DOI: 10.1051/0004-6361:20011590
S. Calchi Novati1 - G. Iovane1 - A. A. Marino2 - M. Aurière3 - P. Baillon4 - A. Bouquet5 - V. Bozza1 - M. Capaccioli2 - S. Capozziello1 - V. Cardone1 - G. Covone6 - F. De Paolis7 - R. de Ritis6 - Y. Giraud-Héraud5 - A. Gould5 - G. Ingrosso7 - Ph. Jetzer8,9 - J. Kaplan5 - G. Lambiase1 - Y. Le Du5 - L. Mancini1,8 - E. Piedipalumbo6 - V. Re1 - M. Roncadelli10 - C. Rubano6 - G. Scarpetta1 - P. Scudellaro6 - M. Sereno6 - F. Strafella7
1 - Dipartimento di Fisica "E.R. Caianiello", Università degli Studi di Salerno, and INFN Sez. di Napoli - Gruppo Collegato di Salerno, Italy
2 - Osservatorio Astronomico di Capodimonte, Napoli, and INFN Sez. di Napoli, Italy
3 - Observatoire Midi-Pyrénées, France
4 - CERN, 1211 Genève 23, Switzerland
5 - Physique Corpusculaire et Cosmologie, Collège de France, Paris, France
6 - Dipartimento di Scienze Fisiche, Università degli Studi di Napoli "Federico II" and INFN Sez. di Napoli, Italy
7 - Dipartimento di Fisica, Università di Lecce, Italy
8 - Institute of Theoretical Physics, University of Zürich, Switzerland
9 - Institute of Theoretical Physics, ETH, Zürich, Switzerland
10 - INFN Sez. di Pavia, Pavia, Italy
Received 18 June 2001 / Accepted 25 October 2001
Abstract
We present the first results of the analysis of data collected during the 1998-99 observational campaign at the 1.3 meter McGraw-Hill Telescope, towards the Andromeda galaxy (M 31), aimed at detecting gravitational microlensing effects as a probe of the presence of dark matter in our Galaxy and in the M 31 halo. The analysis is performed using the pixel lensing technique, which consists of the study of flux variations of unresolved sources, and which was proposed and implemented by the AGAPE collaboration. We carry out a shape analysis by demanding that the detected flux variations be achromatic and compatible with a Paczynski light curve, and we apply the Durbin-Watson hypothesis test to the residuals. Furthermore, we consider the background of variable sources. Five candidate microlensing events finally emerge from our selection. Comparing with the predictions of a Monte Carlo simulation, assuming a standard spherical model for the M 31 and Galactic haloes and typical values for the MACHO mass, we find that our events are only marginally consistent with the distribution of observable parameters predicted by the simulation.
Key words: methods: observational - methods: data analysis - cosmology: observations - dark matter - gravitational lensing - galaxies: M 31
1. Introduction

In the last decade much attention has been focused on the possibility that a sizable fraction of galactic dark matter consists of MACHOs (Massive Astrophysical Compact Halo Objects). Since 1992, the MACHO (Alcock et al. 1993) and EROS (Aubourg et al. 1993) collaborations have looked towards the Large and Small Magellanic Clouds (LMC and SMC) in order to detect MACHOs using gravitational microlensing. This technique, originally proposed by Paczynski (1986), analyses the luminosity variation of resolved source stars, due to the passage of MACHOs close to the line of sight between the source and the observer.
The MACHO collaboration (Alcock et al. 2000) discovered 13-17 microlensing events towards the LMC. Assuming that all events are due to MACHOs in the halo, about 20% of the halo dark matter resides in the form of compact objects with a mass in the range 0.15-0.9 M☉. The EROS collaboration (Lasserre et al. 2000) observed 6 microlensing events, 5 in the direction of the LMC and 1 in the direction of the SMC. These observations place an upper limit on the halo dark matter fraction in the form of MACHOs. In particular, they exclude, at the 95% confidence level, that more than 40% of a standard halo is composed of objects in the range 10^-7-1 M☉. Note that the results of the two collaborations are consistent with a halo dark matter fraction in compact objects of about 20%.
The OGLE collaboration (Udalski et al. 1993) originally searched for microlensing events only towards the Galactic bulge, but has now also extended its search to the LMC and SMC.
A natural extension of the microlensing observational technique consists of observing dense stellar fields even if single stars cannot be resolved, as in the case of the M 31 galaxy. For this purpose, the pixel lensing technique has been proposed (Baillon et al. 1993) and then implemented by the AGAPE collaboration (Ansari et al. 1997). Another technique, based on image subtraction, has been developed by the VATT-Columbia collaboration (Crotts 1992; Tomaney & Crotts 1994), and is used also in the WeCAPP project (Riffeser et al. 2001). The monitoring of M 31 has the advantage that the Galactic halo can be probed along a line of sight different from those towards the LMC and SMC. Furthermore, the observation of an external galaxy allows one to study its halo globally, which, in the case of M 31, has a particular signature due to the tilted disk. Accordingly, the expected optical depth for microlensing varies from the near to the far side of the M 31 disk (Crotts 1992; Jetzer 1994).
The efficiency of the pixel lensing method to detect luminosity variations has been tested by the AGAPE collaboration on data taken at the 2 meter Bernard Lyot Telescope in two bandpasses (B and R), covering 6 fields around the center of M 31, during 3 years of observations (1994-1996). A possible microlensing candidate has been observed and further characterized using information from an archival Hubble Space Telescope WFPC2 image (Ansari et al. 1999). An important conclusion of this analysis is that it is crucial to collect data in two bandpasses over a long duration with regular sampling.
Very recently, the POINT-AGAPE collaboration (Aurière et al.
2001) announced the discovery of a short timescale candidate
event towards M 31. Additional microlensing candidates towards the
same target have been reported by the VATT-Columbia collaboration
(Crotts et al. 2000).
In this paper, we present results for the 1998-1999 campaign of observations at the 1.3 meter McGraw-Hill Telescope, MDM Observatory, Kitt Peak, towards the Andromeda Galaxy. In Sect. 2, we briefly outline the pixel lensing technique. Section 3 is devoted to the description of the observational campaign and the experimental setup. In Sect. 4 we discuss the data reduction procedure (Calchi Novati 2000) in some detail, in particular the approach used to eliminate instabilities caused by the seeing and to evaluate the errors. In Sect. 5 we present our selection pipeline (Calchi Novati 2000): bump detection (Sect. 5.1), shape analysis (Sect. 5.2) and color and timescale selection (Sect. 5.3). We select a sample of 5 light curves that we retain as microlensing candidate events and whose characteristics are given in Sect. 5.4. In Sect. 5.5 we show the light curve of a nova located inside our field of observation: the discovery of variable sources is a natural byproduct of the microlensing search. In Sect. 6 we conclude with a comparison of the outcome of our selection with the prediction of a Monte Carlo simulation.
2. The pixel lensing technique

Pixel lensing is an efficient tool for searching for microlensing events when the sources cannot be resolved. In this case, the light collected by each pixel is emitted by a huge number of stars. Although, in principle, all stars in the pixel field are possible sources, one can only detect lensing events due either to bright enough stars or to high amplification events. Typical sources are bright red giant stars. We estimate that there are about 100 sources per square arc second that fit these requirements. The main drawback of the method is that usually we have no direct knowledge of the flux of the unamplified source.
For images obtained by the observation of dense stellar fields, the flux collected by a pixel is the sum of the fluxes emitted by single stars, which all contribute to the background. If one of these stars is lensed, its flux varies accordingly. Whenever this variation is large enough, it will be distinguishable from the background produced by the other stars. Denoting by $\phi(t)$ the amplified flux detected by the pixel and by $\phi_{\rm bkg}$ the background flux, the flux variation is given by

$$\Delta\phi(t) = \phi(t) - \phi_{\rm bkg} = f_*\,[A(t) - 1], \qquad (1)$$

where $f_*$ is the unlensed flux of the source star and $A$ is the standard point-lens amplification

$$A(u) = \frac{u^2+2}{u\sqrt{u^2+4}}, \qquad (2)$$

with $u(t)$ the distance of the source from the lens, projected on the lens plane and expressed in units of the Einstein radius,

$$u(t) = \sqrt{u_0^2 + \left(\frac{t-t_0}{t_{\rm E}}\right)^2}, \qquad (4)$$

$$t_{\rm E} = \frac{R_{\rm E}}{v_\perp}, \qquad R_{\rm E} = \sqrt{\frac{4GM}{c^2}\,\frac{D_{\rm OL}\,D_{\rm LS}}{D_{\rm OS}}}. \qquad (5)$$
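For reference, a minimal sketch in Python of the standard Paczynski quantities entering Eqs. (1)-(5); the symbol names follow common microlensing usage, and the numbers in the example are arbitrary, not values from the paper.

```python
import numpy as np

def amplification(u):
    """Point-lens amplification A(u) = (u^2 + 2) / (u * sqrt(u^2 + 4)), Eq. (2)."""
    return (u**2 + 2.0) / (u * np.sqrt(u**2 + 4.0))

def pixel_flux_variation(t, t0, tE, u0, f_star):
    """Flux variation Delta phi(t) = f_star * [A(u(t)) - 1] of Eq. (1),
    on top of the (unknown) background collected by the pixel."""
    u = np.sqrt(u0**2 + ((t - t0) / tE)**2)          # Eq. (4)
    return f_star * (amplification(u) - 1.0)

# Example: an event with impact parameter u0 = 0.1 peaking at t0 = 50 d.
t = np.linspace(0.0, 100.0, 201)
dphi = pixel_flux_variation(t, t0=50.0, tE=20.0, u0=0.1, f_star=1000.0)
print(dphi.max())   # ~ f_star * (A(u0) - 1)
```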
Two characteristic features of a microlensing event are achromaticity and the uniqueness of its luminosity bump, although differential amplification of extended sources can give rise to a chromatic, but still symmetric, lensing light curve (Han et al. 2000).
3. Observations and data

The data analysed in this paper have been collected on the 1.3 meter McGraw-Hill Telescope, at the MDM observatory, Kitt Peak (USA). Two fields have been observed, which lie on the two sides of the galactic bulge (see Fig. 1) and have been chosen in order to study the expected gradient in the optical depth.
Figure 1: M 31 with the MDM and AGAPE observation fields (courtesy of A. Crotts). White stars and dots give, respectively, the positions of the five microlensing candidate events (labelled as in Table 2; candidates 1 and 2 appear superimposed) and of the nova.
Figure 1 shows the location of the fields and, for comparison, the smaller AGAPE field. The observations were taken with a CCD camera whose array format and pixel angular size set the total field size.
In order to test for achromaticity, images have been taken in two bands, a wide R and a near-standard I. The exposure time is 6 min for R, 5 min for I. The observations began in October 1998 and are still underway.
Here we analyse the data taken in the period from the beginning of
October 1998 to the end of December 1999. In Fig. 2
we give the time sampling of the measurements (number of nights
and images).
Figure 2: Time sampling for the observations in the "Target" field.
Most of the observations are concentrated in the first three months, so that, unfortunately, the time distribution of the data is not optimized for the study of microlensing effects: the given time distribution allows us to select events that take place almost exclusively during the first three months of observation. Furthermore, the time coverage of about 14 months is still not long enough to test conclusively the bump uniqueness requirement for a microlensing event. Mainly for this reason, we speak in this paper only of candidate microlensing events.
Taking into account the transmission efficiency of the filters and the catalogued R and I magnitudes (Cousins colour system) for a sample of 23 reference secondaries identified in the Target field (Magnier et al. 1993), we derive photometric calibrations for both bands.
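The calibration relations themselves did not survive extraction; purely as an illustration of this kind of fit (a zero point plus a colour term derived from reference secondaries), here is a hedged sketch on synthetic placeholder values. All numbers and coefficient names are assumptions, not the paper's calibration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-ins for the 23 reference secondaries (placeholder values).
R_cat = rng.uniform(14.0, 18.0, 23)                      # catalogued Cousins R
colour = rng.uniform(0.3, 1.5, 23)                       # catalogued R - I
m_inst = R_cat - 24.0 - 0.05 * colour + rng.normal(0.0, 0.02, 23)

# Least-squares fit of R_cat = m_inst + ZP + c * (R - I):
A = np.column_stack([np.ones_like(m_inst), colour])
zp, c = np.linalg.lstsq(A, R_cat - m_inst, rcond=None)[0]
print(f"zero point = {zp:.3f}, colour term = {c:.3f}")
```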
4. Data reduction

During each night of observation about 20 images are taken in R and 12 in I. In principle, this allows two possible strategies for the analysis: we can study flux variations on light curves built either from a point obtained from each image, or from a point obtained by averaging over many images. In the first case, we are potentially sensitive to very short time variations; however, this sensitivity is undermined by the low signal-to-noise ratio (S/N). In the second case, the S/N is increased by the square root of the number of images we combine.
Results of the analysis of light curves built with one point per image will be discussed in a future paper. Here we concentrate on the analysis of light curves obtained after combining all the images taken in the same night, using a simple averaging procedure performed on geometrically aligned images (see below).
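As an illustration of the S/N gain from co-adding the geometrically aligned images of one night, a sketch on synthetic frames; all values are placeholders, not the pipeline's actual numbers.

```python
import numpy as np

def combine_night(images, sigmas):
    """Average the geometrically aligned images of one night; the photon
    noise of the mean decreases as 1/sqrt(N)."""
    images = np.asarray(images, dtype=float)
    sigmas = np.asarray(sigmas, dtype=float)
    mean = images.mean(axis=0)
    noise = np.sqrt((sigmas**2).sum(axis=0)) / len(images)
    return mean, noise

# Example with N = 12 synthetic 64x64 frames of unit photon noise:
rng = np.random.default_rng(1)
frames = 100.0 + rng.normal(0.0, 1.0, (12, 64, 64))
mean, noise = combine_night(frames, np.ones((12, 64, 64)))
print(noise[0, 0])   # 1/sqrt(12) ~ 0.29
```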
Data reduction is carried out as follows. After the usual corrections for instrumental effects, debiasing and flatfielding, we normalize all the images to a common reference, to cope with variations induced by observational conditions that differ from image to image (so that we obtain global stability on each image with respect to a given one). We can distinguish three separate effects: the geometric offset of each image with respect to the others, differences in the photometric conditions of the sky, and seeing effects.
Geometrical alignment ensures that each pixel, on all the images, is directed towards the same portion of M 31. We take advantage of the fact that the mean seeing disk is much larger than the pixel size. Following Ansari et al. (1997), we reach sub-pixel precision.
Following the methods developed by the AGAPE collaboration (Ansari et al. 1997), we then bring all the images to the same photometric conditions, in such a way that the images are globally normalized to a common reference. The procedure is based on the hypothesis that a linear relation exists between the "true" and the measured flux, from which the normalization coefficients follow.
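A minimal sketch of a global linear photometric normalization of this kind; in practice the fit would be robust and restricted to stable regions, so this simplification and the function name are assumptions.

```python
import numpy as np

def photometric_align(current, reference):
    """Fit the global linear relation reference ~ a * current + b over all
    pixels and return the normalized current image."""
    a, b = np.polyfit(current.ravel(), reference.ravel(), 1)
    return a * current + b
```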
The seeing effect gives rise to a spread of the received signal; in our data the seeing varies substantially from image to image. Consequently we observe fake fluctuations on light curves obtained after photometric alignment. In order to cope with this effect, and thus to get reasonably stable light curves, we follow a two-step procedure (for further details see Ansari et al. 1997 and Le Du 2000). We begin by substituting the flux of each pixel with the flux of the corresponding superpixel, defined as the flux received on a square of m x m pixels around the central one. The value of m should be large enough to cover the typical seeing disk, but not so large as to excessively dilute the signal. Given the mean seeing value and the angular size of the pixel, we choose m = 5, a size comparable to the average seeing disk in both R and I images. In this way we get a substantial gain in stability, since elementary pixels are strongly affected by seeing fluctuations.
Denoting by $\phi_{\rm sp}$ the flux in a superpixel, by $\phi_*$ the flux of a source and by $f_{\rm see}$ the fraction of its seeing disk falling in the superpixel, we have

$$\phi_{\rm sp} = f_{\rm see}\,\phi_* + \phi_{\rm bkg}. \qquad (9)$$
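A hedged sketch of the superpixel substitution (a moving m x m sum over the image) using scipy; function and parameter names are ours.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def superpixel(image, m=5):
    """Total flux in the m x m superpixel centred on each pixel (m = 5 in
    the paper): a moving-window average rescaled to a sum."""
    return uniform_filter(np.asarray(image, dtype=float), size=m,
                          mode="nearest") * (m * m)
```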
As a second step, instead of trying to evaluate the point spread function of the image, we apply an empirical stabilization of the difference between the flux measured on the image and that of the median image, obtained by removing small-scale variations with a median filter on a very large window of pixels (in this way we get an image whose signal is independent of the seeing value). The stabilization is then based on the observed linear correlation, for each superpixel, between these differences measured on the current image (after photometric alignment) and on the reference image. Denoting by $\phi_{\rm ref}(i,j)$, $\phi_{\rm cur}(i,j)$ and $\bar\phi(i,j)$ the values of the flux in the superpixel (i,j) for the reference, the current (photometrically aligned) and the median images respectively, we have the empirical relation

$$\phi_{\rm cur}(i,j) - \bar\phi_{\rm cur}(i,j) = \alpha\left[\phi_{\rm ref}(i,j) - \bar\phi_{\rm ref}(i,j)\right]. \qquad (10)$$
Figure 3: The value of the correction factor $\alpha$.
In the example shown in Fig. 4 the seeing of the current image is greater than that of the reference image.

Figure 4: Plot showing the linear correlation between the quantities $\phi - \bar\phi$ measured on the current and on the reference image.
While stressing its empirical character, we note that this approach is rapid and efficient. In Fig. 5 we show the effect of the correction on a given light curve.
Figure 5: The same light curve before (top) and after (bottom) the seeing correction. In both cases the error bar shows just the photon noise.
To construct a corrected current pixel flux as close as possible to the reference flux $\phi_{\rm ref}$, we replace $\phi_{\rm ref}$ in (10) by the corrected current flux $\phi_{\rm corr}$ and solve for the latter:

$$\phi_{\rm corr}(i,j) = \bar\phi_{\rm ref}(i,j) + \frac{\phi_{\rm cur}(i,j) - \bar\phi_{\rm cur}(i,j)}{\alpha}. \qquad (11)$$
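A hedged sketch of the two ingredients just described: the median image and the linear correction of Eqs. (10)-(11). The window size and the frame-wide (rather than superpixel-by-superpixel) fit are simplifying assumptions of this sketch, not the collaboration's choices.

```python
import numpy as np
from scipy.ndimage import median_filter

def seeing_stabilize(cur, ref, window=41):
    """Empirical seeing stabilization: correlate the deviations from the
    median image on the current and reference frames (Eq. 10), then solve
    for the corrected flux as in Eq. (11)."""
    cur_med = median_filter(cur, size=window)   # seeing-independent image
    ref_med = median_filter(ref, size=window)
    # One global slope alpha per frame; the paper fits the correlation
    # superpixel by superpixel.
    alpha, _ = np.polyfit((ref - ref_med).ravel(), (cur - cur_med).ravel(), 1)
    return ref_med + (cur - cur_med) / alpha
```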
In order to minimize the deviations from the median, we choose as reference image (one for each filter) an image characterized by a seeing value equal to the average over the period of observations. The seeing fraction of a source in a superpixel is then 0.87.
Another crucial point is the evaluation of the error to be associated with the received flux. In order to give a more appropriate, though empirical, error estimate, we renormalize the photon noise $\sigma_{\rm phot}$ by introducing a correction factor $c_{\rm see}$, depending on the image, to include all the systematic effects over which we have less control. For a discussion of the relation between the photon noise and other systematic effects, such as surface brightness fluctuations, see Gould (1996).
The evaluation of $c_{\rm see}$ is based on the study of the dispersion of the distribution of the normalized difference, superpixel by superpixel, between the current and the reference image,

$$\Delta(i,j) = \frac{\phi_{\rm cur}(i,j) - \phi_{\rm ref}(i,j)}{\sigma_{\rm phot}(i,j)}. \qquad (12)$$

This distribution is expected to have zero mean (which follows from the geometrical and photometrical alignment) and dispersion one (which would indicate that the photon noise alone gives the right evaluation of the error). We do find a null mean, but the dispersion is greater than one and depends on the seeing value. We note, however, that this effect is greatly reduced by the seeing correction (see Fig. 6).
Figure 6: The dispersion of the distribution (12), calculated for each composed image as a function of the seeing, before (top) and after (bottom) the seeing correction.
The correction factor $c_{\rm see}$ is then equal to the dispersion of the distribution (12), calculated for a sample of points properly selected according to the criterion that they belong to "stable" light curves, in order to exclude light curves which show real stellar flux variations:

$$c_{\rm see} = \sigma\left[\Delta(i,j)\right]. \qquad (13)$$

If the subset is small enough and homogeneous, the correction factor appears to be, as expected, almost independent of the seeing value (Fig. 6, bottom). The estimated error on the flux of the superpixel (i,j) is then obtained from the relation

$$\sigma(i,j) = c_{\rm see}\,\sigma_{\rm phot}(i,j). \qquad (14)$$
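A minimal sketch of the error renormalization of Eqs. (12)-(14); the selection of "stable" light curves is left to the caller, and all names are ours.

```python
import numpy as np

def seeing_correction_factor(phi_cur, phi_ref, sigma_phot):
    """Dispersion of the normalized current-reference difference, Eq. (12),
    evaluated on superpixels belonging to 'stable' light curves only."""
    delta = (phi_cur - phi_ref) / sigma_phot
    return float(np.std(delta))

# The rescaled flux error then follows Eq. (14):
# sigma = c_see * sigma_phot
```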
5. Selection of microlensing events

A microlensing event is characterized by specific features that distinguish it from other, much more common, types of luminosity variability, the main background to our search. In particular, for a microlensing event the bump is unique along the light curve, achromatic, and has the symmetric Paczynski shape. In the following we devise selection techniques that make use of these characteristics, while taking into account the specific features of our data set.
5.1 Bump detection

As a first step we select light curves showing a single flux variation. We begin by evaluating a baseline, i.e. the background flux $\phi_{\rm bkg}$, along each light curve, as defined in Sect. 4.
Once the baseline level has been fixed, we look for a significant bump on the light curve. This is identified whenever at least 3 consecutive points exceed the baseline by more than 3 sigma. The variation is considered to be over when 2 consecutive points fall back below that level. Under the hypothesis that the points follow a Gaussian distribution around the baseline, we use the likelihood estimator L to measure the statistical significance of a bump. We want to give more weight to points that are unlikely to be found, so we define L as

$$L = -\sum_{i\,\in\,{\rm bump}} \ln P\left(\phi \ge \phi_i \,\big|\, \phi_{\rm bkg}, \sigma_i\right) = -\sum_{i\,\in\,{\rm bump}} \ln\left[\frac{1}{2}\,{\rm erfc}\!\left(\frac{\phi_i - \phi_{\rm bkg}}{\sqrt{2}\,\sigma_i}\right)\right]. \qquad (17)$$
For each light curve we denote by L1 and L2 the likelihood values of the two most significant variations. We fix a threshold, requiring L1 > 100, to distinguish real variations from noise. Moreover, we fix an upper limit on the ratio L2/L1 to exclude light curves with more than one significant variation. The shape analysis is then carried out on the superpixels that have the highest value of L in their immediate neighborhood, since we find a cluster of pixels associated with each physical variation. This method suffers from a possible bias introduced by an underestimation of the baseline level (which we further analyse in the next section).
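A minimal sketch, under the conventions reconstructed above (3 sigma threshold, Gaussian tail probabilities), of the bump detection and the likelihood estimator of Eq. (17); names and constants are assumptions, not the pipeline's code.

```python
import numpy as np
from scipy.special import erfc

def first_bump(dev, thresh=3.0):
    """Indices of the first bump: at least 3 consecutive points above
    `thresh` (in units of sigma); the bump is over after 2 consecutive
    points fall back below the threshold."""
    n = len(dev)
    for i in range(n - 2):
        if (dev[i:i + 3] > thresh).all():
            j, low = i + 3, 0
            while j < n and low < 2:
                low = low + 1 if dev[j] <= thresh else 0
                j += 1
            return np.arange(i, j - low)
    return None

def bump_likelihood(dev, idx):
    """L = -sum_i ln P(phi >= phi_i) over the bump, with Gaussian tail
    probabilities P_i = erfc(dev_i / sqrt(2)) / 2, as in Eq. (17)."""
    p = 0.5 * erfc(dev[idx] / np.sqrt(2.0))
    return float(-np.sum(np.log(np.clip(p, 1e-300, None))))

# dev = (flux - baseline) / sigma for one superpixel light curve;
# idx = first_bump(dev); then L1 = bump_likelihood(dev, idx).
```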
We have carried out a complete analysis, selecting the pixels with the criteria summarized in Table 1. By using these peak detection criteria, the number of superpixels is reduced to 5269.
5.2 Shape analysis

As a second step we determine whether the selected flux variation is compatible with a microlensing event.

The light curve of a microlensing event with amplification A(t), due to a source star with unlensed flux $f_*$ (now to be evaluated in a superpixel), is

$$\phi_{\rm sp}(t) = \phi_{\rm bkg} + f_{\rm see}\,f_*\,[A(t) - 1]. \qquad (18)$$

Actually, one cannot directly and easily measure $f_*$, the unamplified flux of the unresolved source star. Only a combination of the 5 parameters that characterize the light curve ($u_0$, $t_0$, $t_{\rm E}$, $f_*$ and $\phi_{\rm bkg}$) can be measured in a straightforward manner: the time of maximum $t_0$, the flux variation at maximum and the full width at half maximum $t_{1/2}$ of the bump.
We now refine the selection based on the likelihood estimator, in order to remove unwanted light curves with low S/N, for which the available data do not allow us to characterize the bump well. To this end we perform a 5-parameter Paczynski fit and study an estimator Q that measures the significance of the bump with respect to the noise. The ratio Q is actually correlated with the likelihood estimator L used in the previous step; in parallel with the cut L1 > 100, we then keep only light curves with Q > 100.

Two further points are considered. The first concerns the significance of the bump in the two colors: we do not ask for the I bump to be significant. The second point concerns the necessity of characterizing the bump shape well, in order to recognize it as a microlensing event given the highly irregular time sampling of the data (see Fig. 2). For this purpose we require at least 4 points on both sides of the maximum, and at least 2 points inside an interval of width t1/2 around the maximum.

After this selection we are left with 1356 flux variations.
From now on we work with the data in both colors (R and I), and we carry out a shape analysis of the light curve based on a two-step procedure: first a simultaneous Paczynski fit in the two colors, then a Durbin-Watson test on the fit residuals.
Table 1: Selection criteria and number of surviving superpixels after each step.

| criterion | superpixels left |
| exclusion of resolved stars | - |
| mono-bump likelihood analysis (L1 > 100, cut on L2/L1) | 5269 |
| signal-to-noise ratio (Q > 100) | 1650 |
| sampling of the data on the bump | 1356 |
| chi-squared of the simultaneous Paczynski fit | 27 |
| 1.54 < dwR(I) < 2.46 | 11 |
| t1/2 < 40 d or R-I < 1.0 | 5 |
Table 2: Characteristics of the 5 microlensing candidate events.

| id | position | t1/2 (d) | t0 (J-2449624.5) | R at max | R-I | reduced chi2 | dwR | dwI |
| 1 | 00h ... | - | - | - | - | 1.25 | 1.78 | 1.65 |
| 2 | 00h ... | - | - | - | - | 1.37 | 1.57 | 1.65 |
| 3 | 00h ... | - | - | - | - | 1.48 | 1.97 | 1.67 |
| 4 | 00h ... | - | - | - | - | 1.42 | 1.68 | 1.95 |
| 5 | 00h ... | - | - | - | - | 1.16 | 1.82 | 1.99 |
The first point is taken into account by performing the non-degenerate Paczynski fit in both colors simultaneously, so that we also check for achromaticity of the selected luminosity variations. In particular, we require that the three geometrical parameters that characterize the amplification ($u_0$, $t_{\rm E}$ and $t_0$) be the same in both colors. We therefore perform a 7-parameter non-linear least-squares fit, minimizing

$$\chi^2 = \sum_{i\,\in\,R}\frac{\left[\phi_i - \phi_{\rm sp}^R(t_i)\right]^2}{\sigma_i^2} + \sum_{i\,\in\,I}\frac{\left[\phi_i - \phi_{\rm sp}^I(t_i)\right]^2}{\sigma_i^2}. \qquad (20)$$
The application of the $\chi^2$ criterion reduces the sample of light curves from 1356 to 27.
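A hedged sketch of the 7-parameter simultaneous two-color fit of Eq. (20) using scipy; the seeing fraction is absorbed into the effective flux parameters, and all names and starting values are ours, not the paper's.

```python
import numpy as np
from scipy.optimize import least_squares

def amplification(u):
    return (u**2 + 2.0) / (u * np.sqrt(u**2 + 4.0))

def superpixel_model(t, t0, tE, u0, f_eff, phi_bkg):
    # f_eff absorbs the seeing fraction f_see of Eq. (18).
    u = np.sqrt(u0**2 + ((t - t0) / tE)**2)
    return phi_bkg + f_eff * (amplification(u) - 1.0)

def residuals(p, tR, phiR, sR, tI, phiI, sI):
    # Shared geometry (t0, tE, u0); per-color flux and baseline -> 7 parameters.
    t0, tE, u0, fR, bR, fI, bI = p
    rR = (phiR - superpixel_model(tR, t0, tE, u0, fR, bR)) / sR
    rI = (phiI - superpixel_model(tI, t0, tE, u0, fI, bI)) / sI
    return np.concatenate([rR, rI])

# Usage (with hypothetical data arrays tR, phiR, sR, tI, phiI, sI):
# p0 = [t0_guess, 20.0, 0.3, 500.0, np.median(phiR), 300.0, np.median(phiI)]
# fit = least_squares(residuals, p0, args=(tR, phiR, sR, tI, phiI, sI))
# chi2_red = 2.0 * fit.cost / (len(tR) + len(tI) - 7)
```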
As a further step we apply the Durbin-Watson test (Durbin & Watson 1951) to the residuals of the 7-parameter non-degenerate Paczynski fit. With the DW test we check the null hypothesis that the residuals are not serially correlated, by studying possible correlations between each residual and the next one, controlling the type I error (i.e. the error of rejecting the null hypothesis although it is correct; see e.g. Babu & Feigelson 1996). We require a significance level of 10%. As time plays a fundamental role in the DW test, we perform this test on each color separately.

We call dwR and dwI the coefficients of the DW test on the full data set. In order to retain a light curve we require 1.54 < dwR(I) < 2.46, appropriate for 40-42 points along the light curve.

This statistical analysis reduces our sample of light curves from 27 to 11.

It is worth noting that some light curves showing a real microlensing event superimposed on a signal due to some nearby variable source, while passing the previous selection criteria, could be excluded by the DW test, which is sensitive to serially correlated residuals.
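The DW statistic itself is simple to compute; a minimal sketch (ours, not the collaboration's code):

```python
import numpy as np

def durbin_watson(residuals):
    """DW statistic: sum of squared successive differences over the sum of
    squares; close to 2 for serially uncorrelated residuals (the paper
    accepts 1.54 < dw < 2.46 for 40-42 points)."""
    r = np.asarray(residuals, dtype=float)
    return float(np.sum(np.diff(r)**2) / np.sum(r**2))

# Example: white-noise residuals give dw ~ 2.
print(durbin_watson(np.random.default_rng(2).normal(size=41)))
```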
In order to test our efficiency with respect to the introduction of the DW test, we have studied "flat" light curves, performing a "constant flux" fit and selecting light curves with an acceptable reduced chi-squared. By applying the DW test we then reject about 50% of these light curves, i.e. more than the 10% we would expect from purely random fluctuations. In the discussion of the Monte Carlo simulation we duly take this effect into account.
5.3 Color and timescale selection

By far the most efficient way to get rid of multiple flux variations due to variable stars is to acquire data distributed regularly over a sufficiently long period of time. Unfortunately, at present, the data cover with regularity only the first three months of observation, and the total baseline is less than 2 years long.

For this reason, in addition to the analytical treatment that looks for compatibility with an achromatic Paczynski light curve (efficient, for instance, at rejecting nova-like events), we introduce another criterion based on the physical characteristics of the selected flux variations.
In particular, we note that long period red variable stars (such as Miras) are not completely excluded by the selection procedure applied so far. By contrast, short period variable stars are eliminated thanks to the cut on the second bump of the likelihood function. A preliminary analysis of the period, the color and the light curves of long period red variable stars, taken from de Laverny et al. (1997), led us, with a rather conservative approach, to exclude those light curves that present at the same time a duration t1/2 > 40 days and a color R-I > 1.0. A more detailed analysis, aimed at a better estimation of this background, is currently underway.
We are aware that this last selection criterion could eliminate some real microlensing events. The probability that this happens is however low, because the microlensing timescales are expected to be uncorrelated with the source color: a combination of a large t1/2 and a color R-I > 1.0 is extremely unlikely for microlensing events, but quite common for red variables.
This last criterion further reduces the number of candidates from 11 down to 5.
We now summarize (Table 1) the different steps in the selection, give the number of surviving pixels after the application of the indicated criteria and the details of our set of microlensing candidate events.
5.4 The microlensing candidate events

We take these 5 light curves, whose characteristics we now discuss, as our final selection of microlensing candidate events. In Table 2 we give their position, the estimated t1/2 in days, the time of maximum amplification t0 (J-2449624.5), the R magnitude at maximum and the color R-I. We then give the values of the fit: the reduced chi-squared and the values of the Durbin-Watson coefficients dwR(I).
The corresponding light curves are shown in Fig. 7.
Figure 7: Light curves of the 5 candidate microlensing events.
5.5 A nova event

Our data contain many more varying light curves, due not to microlensing events but to other variable sources. The study of these variable stars is an interesting task in itself, and pixel lensing is clearly well suited for this research. We give here (see Fig. 8) only the light curve of an event characterized by a very strong and chromatic flux variation, which we identify as a nova.

Figure 8: The R and I light curves of the nova event.
6. Comparison with a Monte Carlo simulation

In order to gain insight into the results obtained, we compare our 5 candidate microlensing events with the predictions of a Monte Carlo simulation that takes into account the experimental setup and the time sampling of the observations. We assume a standard model (an isothermal sphere with a core radius of 5 kpc) for the haloes of both M 31 and the Milky Way. The total mass of M 31 is assumed to be twice that of our Galaxy. MACHOs can be located in either halo. Moreover, we also consider self-lensing due to stars in the M 31 bulge or disk. We fix the lens masses at different values for MACHOs in the halo and stars in the bulge or disk. The model of the bulge is taken from Kent (1989), the luminosity function from Han et al. (1998). The luminosity function of the disk is determined considering two models: the one developed in Devriendt et al. (1999), and the model obtained from the data of the solar neighborhood of Allen (1973), corrected for high luminosities (Hodge et al. 1988). The results we obtain are almost insensitive to the particular choice between these two models.
We fix the mass of the lenses in the bulge, and take the mass of the MACHOs in the haloes equal either to 0.5 M☉ or to 0.01 M☉. In both cases a sizable fraction of the expected events is due to lenses located in M 31. Taking into account our selection criteria, the expected number of events for a halo fully composed of MACHOs, including a contribution of about 1 event, independent of the halo parameters, due to lensing by stars of the bulge and disk of M 31, is about 4 and about 9, respectively. The Monte Carlo simulation does not yet include the effect of secondary bumps due to artifacts of the image processing (alignment, seeing stabilization) and to underlying variable objects. From the data, we estimate that these effects reduce the number of observed events by at most 30%, and this for the shortest events.
We expect, and this is confirmed by simulations, that the
sources of most detectable events are red giants and
have very large radii. Therefore, finite size and
limb darkening effects are important, in particular
for low mass lenses. These effects are included in
the simulations.
However, in both the real and simulated analysis, we do not
include finite size and limb darkening effects in the
amplification light curve fitted to the events. This results
in a loss of detection efficiency smaller than 5%, which
is taken into account in the simulations.
It is more meaningful to locate our candidates in the parameter space predicted by the simulation. We consider the expected values of t1/2 and of the R magnitude at maximum. In Figs. 9 and 10 we plot their functional relation and their projected distributions.
Figure 9: Results of the Monte Carlo simulation.

Figure 10: Results of the Monte Carlo simulation.
Looking at the distributions, we notice that for the 0.5 M☉ and 0.01 M☉ cases most of the light curves are expected to have a time width at half maximum t1/2 < 24 and t1/2 < 10 days, respectively (of course, shorter events are expected when the MACHO mass is smaller). In both cases, the bulk of the events is predicted to be relatively bright at maximum. Our candidates have longer durations and fainter magnitudes at maximum, somewhat at the limit of the expected distributions, and therefore most of them are probably not microlensing events. Still, we expect about 1 self-lensing event, and it is possible that one or two of them are true microlensing events. In any case, from the t1/2 distribution we are led to exclude that the microlensing candidate events are due to MACHOs of very small mass (only a small fraction of the events expected for 0.01 M☉ lenses have t1/2 > 15 days). This is indeed in agreement with the results found by the MACHO and EROS collaborations, which find lens masses in the halo within the range 0.2-0.6 M☉ (Alcock et al. 2000; Lasserre et al. 2000).
We are not yet in a position to tell what kind of varying objects generate our events if they are not due to microlensing. They may be irregular or long period variable giants, but only a longer time baseline, and/or observations of the object at minimum light, will allow us to conclude.
The MDM analysis is not yet complete. The results from the analysis of the data acquired in the other field (located on the opposite side of the major axis of M 31 with respect to the Target field analysed here), and from new observations scheduled for October and November 2001, will give us the opportunity to gain further insight into the still open question of the composition of dark haloes. At the present time, with the caution suggested by the problems just mentioned, the analysis discussed in this paper tends to confirm that only a minor fraction of the dark matter in galactic haloes is in the form of MACHOs within the mass range 0.01-0.5 M☉, under the assumption of a standard halo model and the given source luminosity functions.
Acknowledgements
We thank M. Crézé, S. Droz, L. Grenacher, G. Marmo, G. Papini and N. Straumann for useful discussions and suggestions. Work by AG was supported by NSF grant AST
97-27520 and by a grant from Le Centre Français pour l'Accueil et les Échanges Internationaux.