A&A 399, 23-32 (2003)
DOI: 10.1051/0004-6361:20021762
E. Zackrisson and N. Bergvall
Astronomiska observatoriet, Box 515, 75120 Uppsala, Sweden
Received 13 March 2002 / Accepted 21 November 2002
Abstract
By comparing the results from numerical microlensing simulations to the observed long-term variability of quasars, strong upper limits on the cosmological density of compact objects in the planetary to stellar mass range may in principle be imposed. Here, this method is generalized from the Einstein-de Sitter universe to the currently favored Ω_{M} = 0.3, Ω_{Λ} = 0.7 cosmology and applied to the latest observational samples. We show that the use of high-redshift quasars from variability-selected samples has the potential to substantially improve current constraints on compact objects in this mass range. We also investigate to what extent the upper limits on such hypothetical dark matter populations are affected by assumptions concerning the size of the optical continuum-emitting region of quasars and the velocity dispersion of compact objects. We find that, mainly due to uncertainties in the typical value of the source size, cosmologically significant populations of compact objects cannot safely be ruled out with this method at the present time.
Key words: cosmology: dark matter - gravitational lensing - quasars: general - cosmology: miscellaneous
Despite much effort to rule out cosmologically significant populations of compact bodies in the stellar to planetary mass range, such objects still remain viable candidates for the dark matter of the universe. Although many of the proposed compact dark matter candidates are baryonic (e.g. red, white and brown dwarfs, neutron stars, stellar black holes, gas clouds), and therefore constrained by standard Big Bang nucleosynthesis and the determination of the primeval deuterium abundance (e.g. Burles et al. 2001) to contribute only a small fraction of the cosmological density, several candidates do circumvent these constraints, either by being non-baryonic or by making their baryonic content unavailable by the time of nucleosynthesis: e.g. primordial black holes (Hawking 1971), quark nuggets (Alam et al. 1999), mirror matter MACHOs (Mohapatra 1999) and aggregates of bosons or fermions (Membrado 1998). A large number of methods do however exist to constrain the cosmological densities of such populations (see Dalcanton et al. 1994 and Carr & Sakellariadou 1999 for reviews).
In this paper, we will be concerned with compact objects in the potentially very interesting planetary to stellar mass range, where indirect detections of cosmologically significant populations have been suggested (e.g. Hawkins 1996). In this region, the currently most powerful constraints on the cosmological density of such objects (regardless of type), Ω_{compact}, come from Dalcanton et al. (1994) and Schneider (1993, hereafter S93), and are both based on theoretically predicted effects of quasar microlensing.
By comparing the equivalent width distribution of quasar emission lines predicted by microlensing scenarios to that of observed samples, Dalcanton et al. conclude that compact objects in this mass range cannot contribute significantly to the density of a critical universe. Taken at face value, the constraints from S93 are however even stronger at the lowest masses.
The method of S93 is based on the argument that large populations of compact objects should statistically induce variations in quasar light curves larger than those actually observed. The limits derived this way do however rely on the premise that the many parameters going into the microlensing simulations can be sufficiently well constrained by observations or reasonable assumptions, effectively making Ω_{compact} the only free parameter. The aim of this paper is to investigate whether this may actually be accomplished at the present time. Of particular importance is the ill-determined typical size of the UV-optical continuum-emitting region of quasars, R_{source}, which S93 fixes at a single assumed value when deriving the constraints quoted above.
Here, the method of S93 will be generalized from the Einstein-de Sitter (EdS) universe to the currently favored Ω_{M} = 0.3, Ω_{Λ} = 0.7 cosmology and applied to the latest observational samples (Hawkins 2000, hereafter H2000). The sensitivity of the constraints to uncertainties in R_{source} and the velocity dispersion of compact objects will be evaluated using recently developed methods to better approximate the magnification in the case of large-source microlensing.
The method used to derive the statistical properties of light curves of quasars microlensed by a cosmological distribution of compact objects is based on the machinery outlined in S93, extended to arbitrary Friedmann-Lemaître cosmologies using the angular size distances of Kayser et al. (1997) and to the case of large-source microlensing using the magnification formula of Surpi et al. (2002).
In this technique, the multiplicative magnification approximation (Ostriker & Vietri 1983) is assumed to adequately reproduce the statistical probability of variability. In this case, the total magnification, μ, due to N microlenses is equal to the product of the individual magnifications: μ = μ_{1} μ_{2} ... μ_{N}.
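A minimal sketch of this approximation, using the standard point-lens magnification formula for each microlens (function names are illustrative, not taken from the paper's code):

```python
import numpy as np

def point_lens_magnification(y):
    """Magnification of a point source by a single point lens, with y the
    impact parameter in units of the lens's Einstein radius."""
    return (y**2 + 2.0) / (y * np.sqrt(y**2 + 4.0))

def total_magnification(impact_parameters):
    """Multiplicative magnification approximation (Ostriker & Vietri 1983):
    the total magnification is the product of the individual ones."""
    y = np.asarray(impact_parameters, dtype=float)
    return float(np.prod(point_lens_magnification(y)))
```

A single lens at y = 1 magnifies by 3/sqrt(5) ≈ 1.34, while a distant lens (y much larger than 1) contributes a factor close to unity, so only lenses passing near the line of sight matter in practice.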
We follow conventions commonly found in the gravitational lensing literature and use D_{ol}, D_{os} and D_{ls} to denote the angular size distances from observer to lens, from observer to source and from lens to source, respectively. If l is the separation of the lens from the line that joins the source and observer, the dimensionless impact parameter y (in units of the Einstein radius of the lens) then becomes y = l / R_{E}, where R_{E} = sqrt((4GM/c^2) D_{ol} D_{ls} / D_{os}) is the Einstein radius in the lens plane for a lens of mass M.
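As a concrete illustration (the constants and example distances below are generic values, not the paper's), the Einstein radius and dimensionless impact parameter can be computed as:

```python
import numpy as np

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m s^-1
M_SUN = 1.989e30   # solar mass, kg
GPC = 3.086e25     # gigaparsec, m

def einstein_radius(mass, d_ol, d_os, d_ls):
    """Einstein radius in the lens plane (m) for a point lens of the given
    mass (kg), from the angular size distances observer-lens, observer-source
    and lens-source (all in m)."""
    return np.sqrt(4.0 * G * mass / C**2 * d_ol * d_ls / d_os)

def impact_parameter(l, mass, d_ol, d_os, d_ls):
    """Dimensionless impact parameter y: the lens offset l (m) from the
    line joining source and observer, in units of the Einstein radius."""
    return l / einstein_radius(mass, d_ol, d_os, d_ls)
```

For a solar-mass lens roughly halfway to a source at ~1 Gpc, the Einstein radius comes out of order 10^{14} m, comparable to the largest source sizes considered in this paper.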
In the case of an extended distribution of lens masses, l_{y} should be set equal to whichever of the two expressions in (8) produces the higher value when evaluated for the maximum lens mass considered.
The extent of the lens plane along the x-axis is derived by adding l_{y} to a term which ensures that no lens originally outside the lens plane will move sufficiently close to the source during the time span of the simulation to give a non-negligible contribution to the magnification (see S93).
For the lens population, a constant comoving density is assumed. In the case where all compact objects share the same mass, the number of lenses populating a lens plane then follows directly from this density and the area of the plane.
In this paper, a Λ-dominated cosmology will be assumed, in which Ω_{M} = 0.3, Ω_{Λ} = 0.7 and H_{0} = 65 km s^{-1} Mpc^{-1}. The mean density inside the simulated volume is assumed to be equal to the mean density of the universe, which implies a homogeneity parameter η = 1 (Kayser et al. 1997). All compact objects are furthermore considered to be randomly distributed.
When demonstrating how the constraints on Ω_{compact} depend on different parameters, we will assume all compact lenses to have the same mass M. Unless stated otherwise, the velocities of compact objects perpendicular to the line of sight are assumed to be normally distributed, with a common 2D velocity dispersion.
Since the quasars used in this investigation are all located in the same direction on the sky, a universal observer velocity will be used. Adopting the velocity of the Sun relative to the cosmic microwave background derived by Lineweaver et al. (1996), the velocity of the observer perpendicular to the line of sight becomes 307.9 km s^{-1}.
The implementation of the algorithm has been subjected to several tests. It passes the test concerning the mean number of lenses within a given impact parameter, as outlined in S93. When the magnification formulas ((4) and (5)) are modified to accord with the definition of magnification peaks used in Alexander (1995), the characteristic time scale of microlensing events as a function of lens redshift also agrees well with the analytical expressions in that paper. Finally, the standard deviation in magnitudes, δm, around the mean magnification of the simulated light curves agrees well with the analytical predictions of Surpi et al. (2002), as shown in Fig. 1.
Figure 1: Analytical δm (Surpi et al. 2002) compared to δm derived from simulated light curves. At each quasar redshift z, the δm derived from simulations has been based on 50 generated light curves, each spanning 2000 years and sampled at intervals of 1 year. Lines represent the analytical predictions for a Λ-dominated universe (Ω_{M} = 0.3, Ω_{Λ} = 0.7) for two parameter combinations (solid and dashed); in both cases the same source size has been assumed. Circles and triangles indicate the corresponding results from simulations.
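The statistic δm used in this test can be sketched as follows: magnifications are converted to magnitudes and the standard deviation is taken along the light curve (a plain sample standard deviation, which may differ in detail from the paper's exact estimator):

```python
import numpy as np

def delta_m(magnifications):
    """Standard deviation in magnitudes around the mean of a simulated
    light curve, given the magnification at each sampling epoch."""
    mags = -2.5 * np.log10(np.asarray(magnifications, dtype=float))
    return float(np.std(mags))
```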
For the current investigation, 30 lens planes have been used. The adequacy of this number has been verified by increasing the number of lens planes by a factor of two without any significant impact on the results.
Magnification, as used here, is defined relative to the smooth Friedmann-Lemaître universe, which is not a consistent treatment in terms of flux conservation. Even though this inconsistency has no effect on the variability of simulated light curves, the properly normalized scale of magnification becomes important when simulating the effects of amplification bias. To force the average magnification along all lines of sight to a source to equal unity, we follow a recipe developed by Canizares (1982), in which the corrected magnification is calculated from the uncorrected one by rescaling with the mean magnification over all lines of sight.
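In the spirit of this recipe, a simple rescaling by the mean over all simulated lines of sight enforces an average magnification of unity (a sketch; the paper's exact correction formula is not reproduced in this excerpt):

```python
import numpy as np

def enforce_flux_conservation(mu_raw):
    """Rescale raw magnifications so that their average over all simulated
    lines of sight equals one, as required by flux conservation."""
    mu = np.asarray(mu_raw, dtype=float)
    return mu / mu.mean()
```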
Even though the observed optical variability of quasars on time scales of a few years (e.g. Hook et al. 1994; Véron & Hawkins 1995; Cristiani et al. 1996) could be a combined effect of intrinsic (e.g. accretion disk instabilities, supernova explosions) and extrinsic (e.g. microlensing) variations, combining the two proposed mechanisms can only increase the probability of flux variations. By assuming that all variability is due to microlensing and comparing the predictions from microlensing scenarios to the observed variation probabilities of quasar light curves, upper limits on the cosmological density of compact objects may therefore be imposed. This technique was first implemented in S93 to constrain compact dark matter populations in this mass range for an EdS universe, using the observational sample of Hawkins & Véron (1993, hereafter HV93).
The method for constraining Ω_{compact} outlined in S93 is based on the amplitudes of quasar light curves. Here, the amplitude is defined as the difference between the minimum and maximum yearly magnitudes observed in a quasar within the duration of the monitoring programme, where each yearly magnitude is the average of roughly four intra-year measurements. For the case of variations induced purely by microlensing, this reduces to 2.5 log_{10}(μ_{max}/μ_{min}), the magnitude difference between the highest and lowest yearly magnifications.
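The amplitude defined above can be computed from a simulated light curve as follows (a sketch, assuming the light curve is supplied as per-year lists of magnitudes):

```python
import numpy as np

def light_curve_amplitude(intra_year_mags):
    """Amplitude of a quasar light curve: the difference between the maximum
    and minimum yearly magnitudes, where each yearly magnitude is the mean
    of that year's individual measurements (roughly four per year)."""
    yearly = np.array([np.mean(year) for year in intra_year_mags])
    return float(yearly.max() - yearly.min())
```

Averaging within each year suppresses variations on time scales shorter than a year, which is why the same intra-year averaging must also be applied to the synthetic light curves.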
When comparing observed and synthetic quasar samples, a simple statistical analysis is used. The cumulative probability, predicted by the microlensing simulations, of finding amplitudes higher than a given value is multiplied by the size of the observed sample to yield the expected number of objects in that sample with amplitudes higher than this value.
In principle, the statistical properties of synthetic samples depend on a large number of parameters (for the background cosmology Ω_{M}, Ω_{Λ}, H_{0} and Ω_{compact}; for the lens population the velocity dispersion and parameters describing the mass spectrum of compact objects; for the quasar population R_{source} and the redshift distribution, in addition to parameters describing the luminosity function and the size-luminosity relation; for the observer the velocity perpendicular to the line of sight). If most of these can be constrained by measurements or reasonable assumptions, upper limits on Ω_{compact} may be imposed. In order to facilitate the visualization of the constraints imposed by this method, and still enable a direct comparison to the results of S93, a threshold value of 10% has been adopted to delimit allowed and rejected regions of the microlensing parameter space. When the probability falls below this value, the microlensing scenario responsible is considered to be inconsistent with the observations and can be ruled out. This limit corresponds to the case where the microlensing scenario predicts 2.3 quasars with amplitudes higher than some arbitrary value in a sample, yet none is observed, since the Poisson probability of detecting zero such objects is then e^{-2.3} ≈ 10%.
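The rejection criterion can be stated compactly: a scenario predicting n_exp high-amplitude quasars while none is observed is ruled out when the Poisson probability of seeing zero drops below the adopted 10% threshold. A sketch:

```python
import math

def expected_high_amplitude(n_sample, p_exceed):
    """Expected number of quasars in a sample of size n_sample with
    amplitudes above a given value, from the simulated cumulative
    probability p_exceed of exceeding that amplitude."""
    return n_sample * p_exceed

def ruled_out(n_expected, threshold=0.10):
    """True if predicting n_expected high-amplitude objects while observing
    none is inconsistent with the data at the adopted 10% threshold."""
    return math.exp(-n_expected) < threshold
```

An expectation of 2.3 objects with none observed sits right at the boundary, since exp(-2.3) ≈ 0.100.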
What impact does the transition from an EdS (Ω_{M} = 1, Ω_{Λ} = 0) to a Λ-dominated universe (Ω_{M} = 0.3, Ω_{Λ} = 0.7) have on the constraints derived? Since the Λ-dominated cosmology implies a larger angular size distance out to a particular redshift, more lenses will contribute to the light curve for a particular value of Ω_{compact}. In Zackrisson et al. (2002) we showed that the constraints on Ω_{compact} are significantly strengthened by this effect.
Since the approximation (4) used in S93 underestimates the magnification in the limit of large sources, Surpi et al. (2002) predicted that the implementation of (5) should improve constraints of the S93 type. In Zackrisson et al. (2002), it was however shown that this correction only has a modest impact on the upper limits on Ω_{compact}, and only in a part of the parameter space for which no interesting constraints could be inferred.
As becomes evident when comparing the constraints from Zackrisson et al. (2002) to those of S93, there exists a systematic difference between the cumulative probabilities derived when analyzing the samples of HV93, even when the simulations are matched as closely as possible given the published information. The model of S93 appears to systematically predict somewhat higher probabilities of large amplitudes. For this reason, the constraints derived for the Λ-dominated cosmology are actually only slightly stronger than those originally presented in S93. The origin of this discrepancy is not known, but it could be due to the redshift resolution or the value of H_{0} used.
The microlensing magnification of a distant source is highly sensitive to the source size assumed. Even though the maximum magnification from a single lens decreases with increasing source size, more lenses may at the same time contribute to the microlensing effect. As shown in Surpi et al. (2002), the variability amplitude does however drop as R_{source} is increased for a fixed lens mass within the parameter space explored.
The typical value of R_{source} is far from well-determined, but certain constraints do exist. If we assume the UV-optical continuum of quasars to originate in the accretion disc surrounding the central black hole, we may expect R_{source} to be at least a few times larger than the gravitational radius of the central mass, which for a 10^{8} M_{☉} black hole amounts to roughly 10^{11} m.
Tight upper limits on R_{source} are also available for a few well-studied quasars for which measurements of the duration or magnification of microlensing events have been made. For the gravitational lens system QSO 0957+561, Refsdal et al. (2000) have been able to place an upper limit on the source size at a significance level of 10%, and for the same system Pelt et al. (1998) have argued for a similar value (within a factor of 3). Shalyapin (2001) has likewise inferred an upper limit, and a most likely value, of R_{source} for QSO 2237+0305. It is however not clear how typical these values are of the quasar population as a whole.
The constraints quoted from S93 are based on a single fixed value of R_{source}. In the following, we will however investigate to what extent the constraints on Ω_{compact} are affected when R_{source} is allowed to take on values inside the range 10^{12}-10^{14} m, an interval commonly considered in studies of quasar microlensing (e.g. Tadros et al. 1998).
In addition to the parameters that determine the variability of an individual quasar, distributions of quasar redshifts, luminosities and velocities also need to be considered when building the synthetic quasar samples used for comparison with the observed amplitude distributions.
Even though the clustering properties of quasars as a function of redshift are poorly determined, their tendency to cluster does not appear strong enough to invalidate a comparison with the kinematics of galaxies. We will therefore use galaxy data to estimate the velocity dispersion of quasars perpendicular to the line of sight.
In an investigation of synthetic galaxy catalogues from the Virgo consortium (Coil et al. 2001), one finds that the typical line-of-sight velocity dispersion of galaxies within a 1 Mpc box at z = 1 increases to roughly 300 km s^{-1} when large-scale motions are included. Empirical results from the Las Campanas survey (Baker et al. 2000), corresponding to low z, give slightly lower values. Based on these numbers, and taking into account the fact that quasars are predominantly located at high z, we will adopt a universal quasar velocity dispersion of 300 km s^{-1}.
A quasar that lacks sufficient intrinsic brightness to be included in a flux-limited sample may still be magnified through gravitational microlensing to reach above the threshold for detection. A flux-limited sample will therefore contain an enhanced fraction of highly magnified objects. The correlation between minimum magnification and amplitude pointed out by S93 implies that a flux-limited sample should also display an enhanced probability for large variations. When deriving amplitude statistics for a synthetic sample, the effects of amplification bias should therefore be taken into account.
In order to simulate the impact of amplification bias on the flux-limited samples of HV93 and H2000, we have assumed the optical quasar luminosity function (LF) derived by Boyle et al. (2000) for an Ω_{M} = 0.3, Ω_{Λ} = 0.7 cosmology.
Using this LF, the probability for a quasar to have a certain absolute magnitude as a function of z may be derived.
When building a synthetic sample subject to amplification bias, a large number of light curves is first generated with a uniform distribution within the redshift span of the observational sample. For each of these, an absolute magnitude following the derived probability is then randomly generated. These are converted into intrinsic apparent magnitudes assuming the same power-law continuum slope for all quasars when calculating the k-correction. The effect of emission lines and of deviations from a simple power-law continuum on the k-correction has been neglected, since the resulting k-correction closely resembles the average k-correction derived from observations (Wisotzki 2000).
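Under a pure power-law continuum f_{ν} ∝ ν^{α}, the k-correction takes a simple closed form. The sketch below uses a common convention; the slope value is illustrative, since the paper's adopted value is not legible in this excerpt:

```python
import math

def k_correction(z, alpha):
    """K-correction for a power-law continuum f_nu ~ nu^alpha, in the
    convention m_observed = M_abs + distance modulus + K(z)."""
    return -2.5 * (1.0 + alpha) * math.log10(1.0 + z)

def apparent_magnitude(abs_mag, dist_modulus, z, alpha=-0.5):
    """Intrinsic apparent magnitude before any microlensing magnification
    (alpha = -0.5 is a commonly assumed quasar continuum slope)."""
    return abs_mag + dist_modulus + k_correction(z, alpha)
```

Note that for α = -1 the k-correction vanishes identically, since the flux per unit frequency is then constant.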
Since the observed quasar magnitudes are defined differently in the HV93 and H2000 samples, the apparent magnitude associated with each synthetic light curve is corrected for the effects of microlensing using different formulas, depending on the observational sample used for comparison.
This procedure of generating synthetic samples with simulated amplification bias differs from the one adopted by S93 in the assumed LF, the enforced flux conservation and the more realistic redshift and magnitude distributions of the final samples. A comparison between results derived from synthetic samples with and without enforced observational flux distributions shows that the amplification bias produced by this method turns out to be quite modest, and most closely resembles the s = 0 (weakest bias considered) scenario explored in S93.
In simulating the amplification bias, we have assumed the observed LF to be identical to the intrinsic one, thereby neglecting the effects that lensing could have on the observed LF itself. Since the effect of lensing by compact objects, as well as by isothermal galaxies and clusters, is to decrease the proportion of low-luminosity quasars compared to the intrinsic LF (e.g. Pei 1995), we are in fact underestimating the amplification bias by adopting the observed LF. This ensures that the upper limits on Ω_{compact} derived are conservative.
In relating the intrinsic luminosity of a quasar to the size of the UVoptical continuumemitting region, two different models have been considered.
First, we have considered a model in which R_{source} is assumed to be unrelated to the luminosity and treated as a free parameter in the range 10^{12}-10^{14} m.
We have also experimented with the thin accretion disk relation of Czerny et al. (1994) which, assuming a power-law continuum with a fixed index, may be written in SI units as relation (22), connecting the source size to the luminosity.
When generating the grid of synthetic samples used in the comparison with the observations, all combinations of the discrete parameter values listed in Table 1 have been used, giving a total of 150 simulation configurations. To ensure that the magnification probabilities derived from each sample are statistically significant, a large number of light curves must be generated for each such parameter combination. In this study, 35 000 light curves with a uniform redshift distribution in the range z = 0-3.6 have been considered adequate. Simple tests show that this number results in amplitude distributions for the final samples which are sufficiently stable for the present purpose, i.e. to establish the approximate border between rejected and allowed regions of the microlensing parameter space.
Table 1: Discrete parameter values used when generating the grid of synthetic samples.
Ω_{compact}: 0.05, 0.1, 0.15, 0.2, 0.25, 0.30
log (M / M_{☉}): -4, -3, -2, -1, 0
log (R_{source} / m): 12, 12.5, 13, 13.5, 14
In HV93, two observational quasar samples are described: a FOCAP sample and a brighter sample with a shallower limiting magnitude. All objects in these samples have been monitored for a period of 10 years. Following S93, these two samples have been combined into one, consisting of 117 objects in a redshift range extending to z ≈ 3.2. In the context of deriving upper limits on Ω_{compact} from light curve amplitudes, this HV93 sample is however not ideal, since the variability is not explicitly expressed as an amplitude. Instead, a related parameter s is used, which may only approximately be transformed into an amplitude. Because of this complication, the constraints on Ω_{compact} derived from the HV93 sample should be regarded as less certain than those inferred from the samples of H2000, and are only included here to facilitate a comparison of this study to that of S93.
When calculating the synthetic amplitude distribution, the time sampling must also be carefully considered. Even though the measurements of HV93 span 10 years, the s parameter is only derived from the 7 years when more than one plate was taken. This implies that the amplitudes of the synthetic light curves should be derived from 7 data points with the same separation as in HV93 (Hawkins 2002, private communication), not from the 11 evenly spaced data points assumed in S93. The effect of using too many yearly data points is to increase the probability of observing large variations, thereby making the inferred constraints on Ω_{compact} too strong. When calculating the amplitudes, S93 furthermore neglects the averaging over intra-year magnitude variations described in Sect. 3.1, thereby inferring too strong constraints on scenarios in which the typical time scale of variations is smaller than one year.
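The sampling scheme matters enough that a sketch may help. Given a densely simulated light curve, one samples it only at the epochs actually observed, averaging a few intra-year points at each epoch (the epoch values and intra-year spread below are illustrative, not the paper's):

```python
import numpy as np

def sample_light_curve(times, mags, epochs, n_intra=4, spread=0.5):
    """Sample a densely simulated light curve (times in years, magnitudes)
    at the given observing epochs, averaging n_intra points spread over
    `spread` years around each epoch to mimic intra-year averaging."""
    yearly = []
    for t0 in epochs:
        ts = np.linspace(t0 - spread / 2.0, t0 + spread / 2.0, n_intra)
        yearly.append(float(np.mean(np.interp(ts, times, mags))))
    return np.array(yearly)
```

Sampling at only the 7 epochs with data, rather than 11 evenly spaced ones, lowers the chance of catching an extreme excursion and thus weakens the inferred constraints, as described above.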
In Fig. 2 we display the limits on Ω_{compact} inferred from the HV93 sample when the more realistic time sampling described above is used and R_{source} is treated as a free parameter. As seen, the efficiency of this method in imposing upper limits on Ω_{compact} is very sensitive to the value of R_{source} assumed. For the smaller source sizes, strong constraints of Ω_{compact} ≤ 0.1 may be imposed over the mass interval explored. For larger source sizes, however, the upper limits are significantly weaker, and at R_{source} = 10^{14} m no meaningful constraints may be imposed.
In Fig. 6 we also indicate the constraints on Ω_{compact} when the size-luminosity relation (22) is used. The constraints are identical to those produced for a single fixed source size, which is explained by the strong peak located at this value in the predicted R_{source} distribution. Figure 7 illustrates the R_{source} distribution of the synthetic HV93 sample for one representative combination of lens mass and Ω_{compact}. The distributions for other parameter combinations differ only slightly, due to the effect of amplification bias.
Figure 2: Upper limits on Ω_{compact} for five different lens masses inferred from a comparison with the combined samples of HV93 when R_{source} is assumed unrelated to the luminosity. The different lines represent R_{source} = 10^{14} m (thick solid), 10^{13.5} m (thick dashed), 10^{13} m (solid), 10^{12.5} m (dashed) and 10^{12} m (dash-dotted). Slight offsets from the parameter values of Table 1, for which the formal upper limits are derived, have been introduced to prevent overlapping lines. The maximum upper limit of Ω_{compact} = 0.3 is here set by the background cosmology (Ω_{M} = 0.3, Ω_{Λ} = 0.7).
We now shift our attention to the larger samples of H2000, which contain a total of 386 objects in the redshift range z = 0.136-3.58, monitored for a period of 20 years. H2000 contains three subsamples: UVX, VAR and AMP.
The UVX sample, which contains 184 objects in the redshift range z = 0.242-2.21, is formed from a criterion of ultraviolet excess, U-B < -0.2, together with a limiting magnitude.
The first variability-selected sample, VAR, containing 298 objects in the redshift range z = 0.136-3.58, is defined by a limiting magnitude and a modest amplitude threshold. The second variability-selected sample, AMP, containing 66 objects in the redshift range z = 0.14-2.07, is defined by a limiting magnitude and a considerably higher amplitude threshold.
The constraints on Ω_{compact} that may be derived from a comparison of microlensing simulations to the statistical properties of these samples depend strongly on their selection criteria and redshift distributions. In order to find the most powerful upper limits, several combinations of the different subsamples have been tested.
In the following, the light curves of the synthetic samples have been sampled at 21 evenly spaced yearly data points, each constituting an average of four intra-year points, in fair agreement with the observational procedure (Hawkins 2002, private communication).
In Fig. 3, we present the constraints on Ω_{compact} inferred from the UVX data set when assuming R_{source} to be unrelated to the luminosity. The constraints are weaker than those produced from a comparison to HV93 (Fig. 2). The most important reason for this is the different observational selection criterion used, which prevents high-redshift objects from entering the UVX sample. At z > 2.2 the Lyman forest enters the U band, rendering the quasars red and out of reach of the U-B < -0.2 criterion.
Figure 3: Same as Fig. 2 for the UVX sample of H2000.  
It is reasonable to expect that a comparison to the variability-selected samples should result in more powerful upper limits on Ω_{compact} than those inferred from UVX, since these samples may extend to much higher redshifts, and the probabilities of high amplitudes increase with redshift in the microlensing paradigm, but not, as shown in H2000, in the observations. The main obstacle of this approach lies in the amplitude thresholds, which overestimate the true variation probability by rejecting low-amplitude objects. If the amplitude threshold is too high, no meaningful constraints on Ω_{compact} can be imposed.
One may also combine UVX and variability-selected samples to form a data set which both extends to high redshifts and is less likely to substantially overestimate the variation probability of quasars. In Fig. 4 we indicate the constraints on Ω_{compact} inferred from the combined UVX+VAR sample when R_{source} is assumed unrelated to the luminosity. Due to the extension to higher redshifts, the upper limits are significantly stronger than those inferred from UVX. The upper limits derived from the VAR sample alone are somewhat weaker than those presented in Fig. 4.
The AMP sample is not useful for these purposes, both because of its very high amplitude threshold and its limited redshift span.
Figure 4: Same as Fig. 2 for the UVX+VAR samples of H2000.  
Since the probabilities of high amplitudes increase with redshift in the microlensing scheme, a possible way to improve the efficiency of this method still further is to compare a high-redshift subset of the VAR sample to the microlensing simulations.
H2000 shows that the higher-redshift objects in the VAR sample display less variability than their low-redshift counterparts. This is attributed to the fact that the higher-redshift quasars also have higher luminosities, combined with the hypothesis that R_{source} increases with luminosity. Under the assumption that these two parameters are unrelated, however, the fact that the high-redshift quasars of VAR show smaller amplitudes than their low-redshift counterparts should not hamper the exclusive use of high-redshift objects in the comparison. Figure 5 shows the substantially improved constraints when all objects with z < 1.5 have been removed from the VAR sample. This leaves 144 objects, which still provides better statistics than the samples of HV93. The constraints inferred this way are very strong for all but the largest source sizes considered; even at R_{source} = 10^{14} m competitive constraints may be derived for the lowest lens masses.
Even though it may be possible to construct a size-luminosity relation which mimics the inverse correlation between luminosity and amplitude seen in the observed samples, while allowing essentially no amplitude dependence on redshift for z > 0.5, the relation explored here does not have this property. In Fig. 6 we indicate the constraints on Ω_{compact} derived from a comparison to the z > 1.5 part of VAR when the size-luminosity relation (22) is used. In Fig. 7, we show the corresponding R_{source} distribution of the generated sample for the same combination of lens mass and Ω_{compact} as before. Despite the higher redshift, the typical R_{source} value actually lies somewhat lower than that predicted for the HV93 sample. The low-R_{source} tail not seen in the HV93 distribution is a result of the stronger amplification bias induced at high redshift, where the average intrinsic quasar luminosity lies further below the limiting luminosity of the sample.
As indicated in Fig. 5, the method of using the long-term variability of quasars provides excellent possibilities to constrain Ω_{compact}, especially with the use of high-redshift quasar samples. For certain combinations of lens mass and source size, the predicted probability falls far below the threshold value of 10% even at the lowest values of Ω_{compact} considered, indicating that the method could possibly be refined to constrain cosmological densities even below Ω_{compact} = 0.05. Similarly, we note that it is not impossible that constraints on even lower lens masses may also be imposed. Due to the very CPU-demanding simulations necessary at such low masses (many lenses contributing to the light curve), we have however not investigated this possibility here.
Despite the improvement of the upper limits imposed here compared to those derived in S93, several uncertainties still prevent us from drawing definite conclusions about Ω_{compact} in the mass range explored.
As already shown, the constraints are highly sensitive to the typical value of R_{source}. Even though the few available measurements of R_{source} indicate values inside the region for which powerful constraints may be derived, extrapolating these results to the whole quasar population could be hazardous.
Other parameter uncertainties also deserve consideration. The upper limits on Ω_{compact} presented so far rely on an assumed value of the velocity dispersion of the compact objects. This parameter essentially describes their preferred whereabouts. If the compact objects cluster only weakly, i.e. are mainly located in the field, a low value should be more appropriate. If they cluster more strongly, i.e. are mainly distributed inside rich galaxy clusters, much higher values could in principle be considered. An estimate of the upper limit to this value may be obtained from model predictions by Sheth et al. (2001), under the assumption that the lenses dynamically behave like cold dark matter particles. Based on the values they derive for different spatial scales, we choose an upper limit of 600 km s^{-1}.
Figure 5: Same as Fig. 2 for the z>1.5 part of the VAR sample of H2000.  
Figure 6: Upper limits on Ω_{compact} inferred for different lens masses from the HV93 sample (thick solid) and the z > 1.5 part of VAR (thick dashed) when the Czerny et al. (1994) size-luminosity relation is applied.
In the context of microlensing, the effect of varying the velocity dispersion is to alter the characteristic time scale of source crossing and the width of the corresponding magnification peak. Higher velocities imply shorter time scales and, in principle, an increased probability of detecting high amplitudes, since more compact objects may pass the source during the time span of the observational programme. The impact of this effect is however weakened by the finite sampling rate of the light curves.
By expanding the grid of synthetic samples defined by Table 1 to other velocity dispersions at a fixed source size, we have investigated in what way the upper limits on Ω_{compact} inferred from the z > 1.5 subset of the VAR sample are affected by velocity-dispersion variations up to 600 km s^{-1}. The effect turns out to be very undramatic and only relaxes our upper limits from Ω_{compact} = 0.1 to 0.15 in a single case, for the highest velocity dispersion of 600 km s^{-1}. All other constraints at this source size are unaffected.
Additional uncertainties stem from approximations inherent in the microlensing model used. The model assumes no shear and a circular source with uniform brightness. The inclusion of shear terms is expected to lower the variability (Refsdal & Stabell 1997), thereby making the constraints on Ω_{compact} weaker. Due to the lack of analytical approximations for the case of both non-zero shear and more realistic (e.g. Gaussian) source brightness profiles, the quantitative impact of such features can presently only be tested with more time-consuming algorithms like backwards ray-tracing. The model also assumes all compact objects to be randomly distributed, i.e. that correlations between lenses are unimportant. However, for the most extreme combination of lens parameters considered in this study, the simulations include compact objects initially as far as two light years from each other in the direction perpendicular to the line of sight. This means that compact objects which are bound together on scales smaller than this (e.g. in clusters or binary systems) will have correlated positions in the lens plane. Simple tests show that correlations of this kind generally increase the probability of high amplitudes, which indicates that the upper limits derived from random lens distributions should be conservative.
Figure 7: Examples of the R_{source} distributions of the synthetic samples generated to match the HV93 sample (solid) and the z > 1.5 subset of VAR (dashed), when the Czerny et al. (1994) size-luminosity relation is applied. In both cases we assume the same lens mass and Ω_{compact}, and use identical bin widths when plotting the distributions.
In this paper, the analysis of the upper limits on the cosmological density of dark matter in the form of compact objects inferred from the long-term variability of quasars has been improved compared to that of S93 in several ways:
- the method has been generalized from the EdS universe to the currently favored Λ-dominated cosmology;
- the magnification formula of Surpi et al. (2002) has been used to better treat the case of large-source microlensing;
- the time sampling of the synthetic light curves has been matched to the observational procedure;
- amplification bias has been simulated using an observationally determined luminosity function with enforced flux conservation;
- the latest observational samples (H2000), including variability-selected high-redshift quasars, have been used.
When the strong dependence of the upper limits on R_{source} is compounded with the additional uncertainties discussed in Sect. 5, we are forced to conclude that the method of using the long-term variability of quasars to place upper limits on Ω_{compact} cannot, at the present time, be used to reliably rule out any cosmologically significant populations of compact objects in the stellar to planetary mass range. This situation may of course change as more and better measurements of R_{source} become available.
Acknowledgements
We gratefully thank Patrik Gullerström for his contributions to the programming of the microlensing code.