Mon. Not. R. Astron. Soc. 000, 000-000 (0000)    Printed 12 January 2011    (MN LATEX style file v2.2)
Stellar variability on time-scales of minutes: results from the first 5 years of the Rapid Temporal Survey (RATS)
Thomas Barclay1,2, Gavin Ramsay1, Pasi Hakala3, Ralf Napiwotzki4, Gijs Nelemans5, Stephen Potter6, Ian Todd7

1 Armagh Observatory, College Hill, Armagh, BT61 9DG, Northern Ireland, UK
2 Mullard Space Science Laboratory, University College London, Holmbury St. Mary, Dorking, Surrey, RH5 6NT, England, UK
3 Finnish Centre for Astronomy with ESO, University of Turku, Väisäläntie 20, FI-21500 Piikkiö, Finland
4 Centre for Astrophysics Research, University of Hertfordshire, College Lane, Hatfield AL10 9AB, UK
5 Department of Astrophysics/IMAPP, Radboud University Nijmegen, PO Box 9010, NL-6500 GL Nijmegen, The Netherlands
6 South African Astronomical Observatory, PO Box 9, Observatory 7935, Cape Town, South Africa
7 Astrophysics Research Centre, School of Mathematics & Physics, Queen's University, University Road, Belfast BT7 1NN

12 January 2011

ABSTRACT

The Rapid Temporal Survey (RATS) explores the faint, variable sky. Our observations search a parameter space which, until now, has never been exploited from the ground. Our strategy involves observing the sky close to the Galactic plane with wide-field CCD cameras. An exposure is obtained approximately every minute, with the total observation of each field lasting around 2 hours. In this paper we present the first 6 epochs of observations, which were taken over 5 years from 2003-2008 and cover over 31 square degrees, of which 16.2 square degrees are within 10° of the Galactic plane. The number of stars contained in these data is over 3.0 × 10^6. We have developed a method of combining the output of two variability tests in order to detect variability on time-scales ranging from a few minutes to a few hours. Using this technique we find 1.2 × 10^5 variables - equal to 4.1 per cent of the stars in our data. Follow-up spectroscopic observations have allowed us to identify the nature of a fraction of these sources. These include a pulsating white dwarf which appears to have a hot companion, and a number of stars with A-type spectra that vary on a period in the range 20-35 min. Our primary goal is the discovery of new AM CVn systems: we find 66 sources which appear to show periodic modulation on time-scales less than 40 min and a colour consistent with the known AM CVn systems. Of those sources for which we have spectra, none appears to be an AM CVn system, although we have 12 candidate AM CVn systems with periods less than 25 min for which spectra are still required. Although our numbers are not strongly constraining, they are consistent with the predictions of Nelemans et al. Key words: surveys - stars: variables: other - Galaxy: stellar content - methods: data analysis - techniques: photometric

1 INTRODUCTION In recent years much progress has been made in increasing our knowledge of the variable sky. The advent of wide-field CCDs has brought with it a new parameter space that is only now beginning to be exploited. Variability over the course of days to weeks is well



Based on observations made with the Isaac Newton and William Herschel Telescopes operated on the island of La Palma by the Isaac Newton Group in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofísica de Canarias, and also observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere, Chile, proposal 075.D-0111. E-mail: tsb@arm.ac.uk © 0000 RAS

served by surveys such as Pan-STARRS (Kaiser et al. 2002). On shorter time-scales a few experiments, such as SuperWASP (Pollacco et al. 2006), are able to detect variability on time-scales as short as a few minutes. However, these wide-angle experiments are unable to reach stars fainter than g ≈ 15. The Rapid Temporal Survey (RATS) addresses this by using wide-field cameras on 2-m class telescopes to detect short-period variability in stars as faint as g = 23. One class of object that is known to vary on time-scales shorter than a few tens of minutes is the AM CVn binaries. These systems have orbital periods shorter than 70 min and spectra practically devoid of hydrogen (see Nelemans 2005; Solheim 2010, for reviews). They are composed of white dwarfs accreting from the hydrogen-exhausted cores of their degenerate companions. They are predicted



to be the strongest known sources of gravitational wave (GW) radiation in the sky (e.g. Stroeer & Vecchio 2006; Roelofs et al. 2007, 2010). Further, they are amongst a small number of objects which future gravitational wave observatories will be able to study in detail and for which extensive complementary electromagnetic observations exist. Modest progress has been made in discovering more AM CVn systems in recent years - there are currently 25 known systems. Those which have been discovered recently have been identified using spectroscopic data from the SDSS archive (e.g. Roelofs et al. 2005; Anderson et al. 2005, 2008; Rau et al. 2010). However, all the SDSS sources have orbital periods in the range 25-70 min. Systems with periods shorter than this are predicted to have higher mass transfer rates and be stronger gravitational wave sources. AM CVn systems with orbital periods less than approximately 40 min show peak-to-peak intensity variations of between 0.01 (V803 Cen, Kepler 1987) and 0.30 mag (HM Cnc, Ramsay et al. 2002) on time-scales close to their orbital period. One route to the discovery of such systems is through deep, wide-field, high-cadence photometric surveys. Our survey, RATS, is currently the only ground-based survey which samples this parameter space. The Kepler satellite is able to observe a similar parameter space (Koch et al. 2010). However, the targets for which data are downloaded are predefined (currently high-cadence data are obtained for around 500 sources), and as such it is likely to miss the majority of sources which vary on short time-scales. We outlined our strategy and our initial results from our first epoch of observations, which were obtained using the Isaac Newton Telescope (INT) on La Palma in Nov 2003, in Ramsay & Hakala (2005). Since then we have obtained data from an additional four epochs using the INT and one epoch using the MPG/ESO 2.2m telescope at La Silla Observatory, Chile.
Additionally, we have made significant revisions to our data reduction procedure. In this paper we provide an overview of our initial results from data covering 6 separate epochs and outline our new reduction procedure. The emphasis of this work is on sources which have been found to show an intensity modulation on periods shorter than 40 min, and shorter than 25 min in particular. We make a preliminary estimate of the space density of AM CVn systems based on the results of the observations. Future papers will focus on sources with longer periods, such as contact and eclipsing binaries, as well as non-periodic variability such as flare stars.

Figure 1. The position of the field centres of all the fields observed during the first five years of the RATS project. The fields are plotted in Galactic coordinates using an Aitoff projection. Many of the fields are spatially close and so appear as only a single point in this figure.

Table 1. Summary of the six epochs in which our observations were taken. The number of stars column refers to sources with at least 60 photometric data points.

Epoch ID   Dates         # of fields   Galactic latitudes   Total stars   Filters
INT1       20031128-30   12            > |16°|              45572         B, V, i
INT2       20050528-31   14            a                    234029        B, V, i
ESO1       20050603-07   20            b                    750109        B, V, I
INT3       20070612-20   26            < |15°|              1223803       U, g, r
INT4       20071013-20   29            < |10°|              678025        U, g, r
INT5       20081103-09   9             < |10°|              112788        U, g, r, +He II

a 3 fields < |10°| and 11 fields > |22°|
b 4 fields < |10°| and 16 fields > |16°|

2 OBSERVATIONS AND IMAGE REDUCTION Our data were taken at six separate epochs: five using the Wide Field Camera (WFC) on the INT and one using the Wide Field Imager (WFI) on the MPG/ESO 2.2m telescope (see Table 1 for details). In our first epoch of observations, the fields were located at Galactic latitudes with 20° < |b| < 30°. Since then our fields have been biased towards |b| < 15° (see Figs 1 and 2). All fields are selected in such a way that no stars brighter than g ≈ 12 are present. In addition, fields are typically chosen to be close to the zenith to reduce differential atmospheric refraction. As during our first epoch of observations, we take a series of 30-second exposures of the same field for 2 hours. With a dead-time of 30 s for the INT observations and 110 s for the ESO/MPG 2.2-m fields, this results in approximately one observation every minute and every 2 minutes for the INT and ESO/MPG 2.2-m fields, respectively. To ensure the highest possible signal-to-noise we do not use a filter.

Before the white light sequence commences we observe the field in a number of filters. At earlier epochs we used Bessell B and V and either Bessell I or SDSS i filters, whilst at later epochs we used RGO U and SDSS g and r filters, with the addition of a He II λ4686 Å narrow-band filter at the latest epoch (cf. Table 1). For the INT2 and ESO1 observations an autoguider was used; for the other epochs no autoguider was used. The WFC on the INT consists of four 4096 × 2048 pixel CCDs and has a total field of view of 0.28 square degrees. The four CCDs have a quantum efficiency which peaks at 4600 Å and an efficiency above 50 per cent from 3500-8000 Å. The WFI on the MPG/ESO 2.2m has eight 4098 × 4046 CCDs which are optimised for sensitivity in the blue and has a total field of view of 0.29 square degrees. We have observed a total of 110 fields which cover 31.3 square degrees, of which 16.2 square degrees are at low Galactic latitudes (|b| < 10°). Images were bias subtracted and flat-fielded using twilight sky images in the usual manner. In the case of our white light, I and i band images, fringing was present (in the absence of thin cloud). We removed this effect by dividing by a fringe-map made using blank fields observed during the course of the night.



Figure 2. The Galactic latitudes of all the stars observed during the first five years of the RATS project that have more than 60 photometric data points. Our fields are biased towards the Galactic equator: 78 per cent of the stars in our sample lie within |b| < 10°.

Figure 3. The rms noise of each light curve plotted against g magnitude. The grey-scale refers to the number of sources in each bin. The red line is the best-fitting exponential function to the expected rms, that is, the mean error of each light curve.

3 PHOTOMETRY 3.1 Extracting light curves In our first set of observations (Ramsay & Hakala 2005) we used the STARLINK^1 aperture photometry package, AUTOPHOTOM. This technique provides accurate photometry at a reasonable computational speed for fields with low stellar density, but proved to be unsuitable for fields containing more than a few thousand stars due to the computational processing time involved and - for very crowded fields - its inability to separate blended stars. For this reason we now use a modified version of DANDIA (Bond et al. 2001; Bramich 2008; Todd et al. 2005) - an implementation of difference image analysis (Alard & Lupton 1998) - which is more suited to crowded fields and takes into account changes in seeing conditions over the course of the observation. The source detection threshold is set to 3σ above the background. We split each CCD into 8 sub-frames and calculate the point-spread function (PSF) for each sub-frame using stars that are a minimum of 14σ above the background and have no bad pixels nearby. A maximum of 22 stars are used in calculating the PSF. We use the four images with the best seeing in each field to create a reference frame. For each individual frame we degrade the reference frame to the PSF of that image and subtract the degraded reference frame. After subtraction we perform aperture photometry on the residuals. We do this for every frame and create a light curve for every star made up of positive and negative residuals. As expected, our data suffer from systematic trends caused by effects such as changes in airmass and variations in seeing and transparency (for a discussion of systematic effects in wide-field surveys see Collier Cameron et al. 2006), which can cause spurious detections of variable stars at specific periods. These periods are typically half the observation length, although they can occur at other periods and are field dependent.
In order to minimise the effects of these trends we apply the SYSREM algorithm (Tamuz et al. 2005). The SYSREM algorithm assumes that systematic trends are correlated in a way analogous to colour-dependent atmospheric

extinction, which is a function of airmass and the colour of each source. The colour is unique to each light curve and the airmass to each individual image, though they do not necessarily refer to the true colour and airmass but to any linear systematic trend. These terms are minimised globally - and the trend removed - by modifying the measured brightness of each data point. We de-trend each CCD individually and use the method described in Tamuz et al. (2006) for running a variable number of cycles of the algorithm depending on the number of sources of systematic noise in the data, though we run a maximum of six cycles as we find that more than this starts to noticeably degrade signals in high-amplitude variables. To determine the quality of the resulting light curves we calculated the root mean square (rms) deviation from the mean for each light curve. When calculating the rms we sigma-clip each light curve at the 5σ level in order to remove the effects of, say, single spurious data points. In Fig. 3 the measured rms is shown as a function of the g mag for all stars in our sample. The mean rms of all the data is 0.046 mag, with sources brighter and fainter than g = 21.0 having a mean rms of 0.024 and 0.051 mag, respectively. If we look at the mean rms of each field individually, we find two fields in our whole data-set with mean rms outside of 3 standard deviations, which we attribute to very large variations in atmospheric transparency during these observations. We show the best-fitting exponential function to the expected rms (equivalent to the average error on each light curve) in Fig. 3 and find it to be consistent with the measured rms except for the very faintest stars (g > 22).
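The core of one SYSREM cycle can be sketched as alternating linear least-squares fits for the per-star ('colour-like') and per-image ('airmass-like') terms. This is a minimal illustration of the algorithm of Tamuz et al. (2005), not the implementation used in the survey pipeline; the function name and interface are invented for illustration:

```python
import numpy as np

def sysrem_cycle(resid, err, n_iter=10):
    """One SYSREM cycle (after Tamuz et al. 2005): fit resid[i, j] ~ c[i] * a[j],
    where i indexes stars and j indexes images, then remove the trend.
    resid are magnitudes minus each star's mean; err the per-point errors."""
    w = 1.0 / err**2                      # inverse-variance weights
    a = np.ones(resid.shape[1])           # per-image 'airmass-like' term
    for _ in range(n_iter):               # alternate the two linear fits
        c = (w * resid * a).sum(axis=1) / (w * a**2).sum(axis=1)
        a = (w * resid * c[:, None]).sum(axis=0) / (w * c[:, None]**2).sum(axis=0)
    return resid - np.outer(c, a)         # de-trended residuals
```

In practice the cycle would be repeated (up to the six cycles mentioned above), each time removing the next-strongest linear systematic.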

3.2 Determining colours When conditions appeared photometric we obtained images in different filters of a number of Landolt standard fields (Landolt 1992). We made use of data kindly supplied by www.astro-wise.org, who give the magnitudes of Landolt stars in a range of different filters. We assumed the mean atmospheric extinction co-efficients for the appropriate observing site. The resulting zero-points were very similar to those expected^2. For our target fields we initially used SEXTRACTOR (Bertin

1 STARLINK software and documentation can be obtained from http://starlink.jach.hawaii.edu/

2 e.g. www.ast.cam.ac.uk/wfcsur/technical/photom/zeros/




& Arnouts 1996) to obtain the magnitude of each star in each filter. However, in comparison to DAOPHOT (Stetson 1987), SEXTRACTOR gave systematically fainter magnitudes for faint sources. Since the photometric zero-point for DAOPHOT is derived from the PSF (and is therefore different from field to field), we calculated an offset between the magnitudes of brighter stars determined using SEXTRACTOR and DAOPHOT. We then applied this offset to the magnitudes derived using DAOPHOT. To convert our B V i data (Table 1) to g r magnitudes we used the transformation equations of Jester et al. (2005). Although our light curves were obtained in white light, we note the depth of our observations as implied in the g filter. For stars with (g − r) ≈ 1.0, the typical depth for fields observed in photometric conditions and with reasonable seeing (better than 1.2 arcsec) is g ≈ 22.8-23.0, while for redder stars (g − r ≈ 2.0) the depth is g ≈ 23.6-24.0. To test the accuracy of our resulting photometry, we obtained a small number of images of SDSS fields (York et al. 2000). For stars with g < 20 we found σ = 0.12 mag for g_RATS − g_SDSS and σ = 0.22 for (g_RATS − r_RATS) − (g_SDSS − r_SDSS). For stars with 20 < g < 22 we find σ = 0.29 mag for g_RATS − g_SDSS and σ = 0.27 for (g_RATS − r_RATS) − (g_SDSS − r_SDSS). Given that our project is not optimised to achieve especially accurate photometry, these tests show that our photometric accuracy is sufficient for our purposes, namely determining an object's brightness and approximate colour.
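For reference, the B, V to g, r step uses relations of the form given by Jester et al. (2005). The coefficients below are the commonly quoted values for stars and should be checked against the original paper before use; the helper itself is invented for illustration:

```python
def bv_to_gr(B, V):
    """Approximate Johnson B, V to SDSS g, r conversion in the form of
    Jester et al. (2005); coefficients are the commonly quoted stellar
    relations and are indicative rather than exact."""
    g = V + 0.60 * (B - V) - 0.12
    r = V - 0.42 * (B - V) + 0.11
    return g, r
```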

Figure 4. The distribution of the LS-FAP statistic in magnitude and number, where the LS-FAP is for the highest peak in the frequency range. The contours refer to the number of stars in each bin; the data have been binned with bin sizes of 0.2 and 0.4 in magnitude and log(LS-FAP), respectively.

4.1 Periodic variability detection We use two algorithms in order to identify variable sources: analysis of variance (AoV) and the Lomb-Scargle periodogram (LS). From these algorithms we determine the analysis of variance formal false-alarm probability, AoV-FAP, and the Lomb-Scargle formal false-alarm probability, LS-FAP. We use the VARTOOLS suite of software to calculate these parameters (Hartman et al. 2008). The Lomb-Scargle periodogram (Lomb 1976; Scargle 1982; Press & Rybicki 1989; Press et al. 1992) is an algorithm designed to pick out periodic variables in unevenly sampled data. As a test of variability we use the LS-FAP. Its distribution as a function of magnitude is shown in Fig. 4. This parameter is a measure of the probability that the highest peak in the periodogram is due to random noise. If the noise in our data were frequency independent, the LS-FAP would refer to the probability of the detected period being due to random noise. However, our data are subject to sources of systematic error which we attribute to red noise; these include the number of data points in the light curve and the range in airmass at which a star is observed. Hence, we use it as a relative measure of variability. We use a modified implementation of the analysis of variance periodogram (Schwarzenberg-Czerny 1989; Devor 2005). The AoV algorithm folds the light curve and selects the period which minimises the variance about a second-order polynomial fit in eight phase bins. A periodic variable will have a small scatter around its intrinsic period and high scatter at all other periods. The statistic AoV is a measure of the goodness of the fit at the best-fitting period, with larger values indicating a better fit. In order to be consistent with the LS-FAP we calculate the formal false-alarm probability of the detected period being due to random noise (AoV-FAP) - with the same caveats as with the LS-FAP - using the method described by Horne & Baliunas (1986).
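A minimal, self-contained sketch of the two ingredients of the LS-FAP - the normalised Lomb-Scargle power and the Horne & Baliunas (1986) formal false-alarm probability - might look as follows. This is illustrative only, not the VARTOOLS implementation used in the paper; the number-of-independent-frequencies formula is the standard Horne & Baliunas approximation:

```python
import numpy as np

def lomb_scargle_power(t, y, freqs):
    """Normalised Lomb-Scargle periodogram (Lomb 1976; Scargle 1982)."""
    y = y - y.mean()
    var = y.var()
    power = np.empty(len(freqs))
    for k, f in enumerate(freqs):
        w = 2.0 * np.pi * f
        # Scargle's time offset, which makes the statistic phase-invariant
        tau = np.arctan2(np.sum(np.sin(2 * w * t)),
                         np.sum(np.cos(2 * w * t))) / (2 * w)
        c, s = np.cos(w * (t - tau)), np.sin(w * (t - tau))
        power[k] = 0.5 / var * ((y @ c) ** 2 / (c @ c) + (y @ s) ** 2 / (s @ s))
    return power

def formal_fap(z_max, n_points):
    """Formal false-alarm probability of the highest peak, using the
    independent-frequency estimate of Horne & Baliunas (1986)."""
    n_indep = -6.362 + 1.193 * n_points + 0.00098 * n_points ** 2
    return 1.0 - (1.0 - np.exp(-z_max)) ** n_indep
```

A strong sinusoid in a 2-hour light curve yields a large peak power and hence a vanishingly small formal FAP, while pure noise gives a FAP near unity.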
We show the distribution of the AoV-FAP statistic in number and as a function of magnitude in Fig. 5. The AoV algorithm, while similar to the LS method, should allow better detection of variables as it fits a constant term to the data as opposed to subtracting the mean as is done in the LS routine (Hartman et al. 2008). However, we find that AoV has a number of negative features. It suffers from severe aliasing at periods

3.3 Astrometry As part of our pipeline we embedded sky co-ordinates into our images using software made available by Astrometry.net (Lang et al. 2010). This uses a cleaned version of the USNO-B catalogue (Barron et al. 2008) as a template for matching sources in the given field. The only input we provide is the scale of the detectors and the approximate position of the field, which is taken from the header information in the images. The Astrometry.net software works well in both sparse and relatively dense fields. By comparing the resulting sky co-ordinates of stars with matching sources in the 2MASS catalogue (Jarrett et al. 2000), we found the typical error to be 0.3-0.5 arcsec.

4 VARIABILITY Due to the large data-set, it is necessary for us to automate the detection of variable sources. We find that no single algorithm is appropriate for the detection of all types of variable sources present in our data - and in most cases not even for the detection of a single class of variable source, since the false-positive rate is unacceptably high if we use just one algorithm. We therefore typically use at least two independent algorithms to detect each class of variable object. Before passing the light curve data to the variability detection algorithms we remove light curves which contain fewer than 60 data points. Of the initial 3.7 million stars, this leaves 3.0 million. We remove light curves with relatively few data points because our variability detection algorithms can produce spurious results when a significant amount of data is missing. This process prevents the discovery of transient phenomena, an aspect which we will investigate in more detail in the future. In future work we will discuss sources such as contact and eclipsing binaries and flare stars, whose variability is not periodic over a two-hour time-frame. However, in this paper we will concentrate on the detection of periodic variables.



we use 200 MADs; for INT1, INT4 and INT5 we use 800 MADs. These numbers of MADs above the median are used as they provide an appropriate balance between low-amplitude detections and false positives - which we discuss in §4.2 and §4.3 - and we attribute the need for different numbers of MADs above the median to the use of an autoguider on INT2 and ESO1 and not at the other epochs. Because different epochs have different distributions of the variability parameters, we calculate the median independently for each epoch. Both the AoV and LS algorithms produce a periodogram; from the highest peak in the periodogram we calculate the most likely period of a given light curve. We take all the candidate variables and test whether the period detected by AoV matches that detected by LS. We class a period as a match if P_AoV ± ΔP_AoV = P_LS ± ΔP_LS (2)
Figure 5. A similar plot to Fig. 4, but this time showing the distributions for AoV-FAP. The data are binned with bin sizes of 0.2 and 0.4 in magnitude and log(AoV-FAP), respectively.

where P_AoV and P_LS are the periods detected by the two algorithms AoV and LS, respectively, and ΔP is the error in the measured period. We determine ΔP using an approximation of equation (25) in Schwarzenberg-Czerny (1991), whereby we assume that ΔP / P^2 ≈ k. (3)

of 2-3 min, and for this reason we only search for periods longer than 4 min. In addition, there is a tendency to detect a multiple of the true period when the true period is less than 40 min. In tests with simulated light curves we found approximately 10 per cent of sources with a period of 20 min were detected by AoV as having 40 min periods. The main weakness of the LS algorithm is that if a source has a periodic modulation in brightness for only a small fraction of the total light curve then - according to the LS-FAP - it is detected as significantly variable. This leads to a large number of false-positive detections which are probably due to random noise. To combat this we have developed a technique to combine the AoV and LS algorithms. Our technique, which combines the LS-FAP and AoV-FAP statistics, is a multi-stage process. The first step is to determine if the source is detected as significantly variable by both the LS and AoV algorithms; we then test whether the period each algorithm detects is the same. Shown in Fig. 6 are the AoV-FAP and LS-FAP statistics plotted against the period that is measured by the respective methods. We can see here that both algorithms suffer from deficiencies: the distributions of AoV-FAP and LS-FAP are not constant with period, but tend to higher significance at longer periods. In order to account for this bias we use an approach whereby we bin the data in period, with each bin 2 min wide. A source passes the first two stages of the algorithm if it is above a specific significance in both AoV-FAP and LS-FAP relative to the other sources in its period bin. In order to determine this significance we use the median absolute deviation from the median (MAD, Hampel 1974), which is defined for a batch of parameters {x_1, . . . , x_n} as MAD_n = b med_i |x_i − med_j x_j| (1)
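The MAD-based selection of Eq. 1 can be sketched with a hypothetical helper, using b = 1.4826 as in the text; in practice it would be applied to the variability statistics within each 2-min period bin:

```python
import numpy as np

def mad_threshold(values, n_mads, b=1.4826):
    """Flag values lying more than n_mads median-absolute-deviations
    above the median (Hampel 1974); b = 1.4826 makes the MAD consistent
    with the standard deviation for Gaussian data."""
    med = np.median(values)
    mad = b * np.median(np.abs(values - med))
    return values > med + n_mads * mad
```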

This assumption holds for all but the lowest signal-to-noise detections of variability. In order to determine an appropriate value for the constant, k, we inject sinusoidal signals of various periods into non-variable light curves and measure the standard deviation of |P_AoV − P_LS|. We set the constant k in Eq. 3 so as to give a ΔP at a given period equal to twice the standard deviation of |P_AoV − P_LS|. We find k = 0.002 to be appropriate. The AoV algorithm has an annoying habit of detecting a multiple of the true period, so for this reason we modify Eq. 2 to P_AoV / n ± ΔP_AoV = P_LS ± ΔP_LS (4)

where n = {1, 2, 3, 4}. Sources that have matching periods and have been classified as candidate variable sources by both the AoV-FAP and LS-FAP tests are then regarded as 'significantly' variable sources. We detect 124334 stars which show variability on a time-scale of 4-115 min; the distribution of the measured periods is shown in Fig. 7. We caution that this technique can detect variables that are not truly periodic - many flare stars have detected periods near the observation length - or may have periods longer than that detected by our method - contact binaries typically have a true period twice the measured one. If a period of less than half the observation length is measured then this is likely to be a true period. However, longer periods detected by the LS and AoV algorithms indicate only that the source varies significantly on time-scales less than 2 hours.
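The period-matching test of Eqs. 2-4 can be sketched as a hypothetical helper. It assumes ΔP = kP^2 for each measured period, with the two errors summed when comparing the measurements; the function name and interface are invented for illustration:

```python
def periods_match(p_aov, p_ls, k=0.002, harmonics=(1, 2, 3, 4)):
    """Test whether the AoV and LS periods agree within errors, allowing
    AoV to land on a harmonic n*P of the true period (Eq. 4). The period
    error is dP = k * P**2, following the approximation to equation (25)
    of Schwarzenberg-Czerny (1991) with k = 0.002 from the injection tests."""
    for n in harmonics:
        p = p_aov / n
        dp = k * (p ** 2 + p_ls ** 2)   # combined error on both measurements
        if abs(p - p_ls) <= dp:
            return True
    return False
```

For example, an AoV period of 40 min matches an LS period of 20 min through the n = 2 harmonic, while 20 min against 33 min fails for every n.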

4.2 False positives In order to determine the false-positive detection rate - that is, the chance of a source whose variability is due to noise being identified as a real variable - we pick a light curve at random from the whole data set and construct a new light curve using a bootstrapping approach. The light curve consists of three columns of data: time, flux and the error on the flux. We keep the time column as it is and, for a light curve with N individual photometric observations, randomly select N fluxes and errors from the N points in the original light curve. We do not limit the number of times a flux-error combination is selected. The reason for reconstructing a light curve in this fashion is that any periodic variability which is present in the original

where b is a constant which makes the parameter consistent with the standard deviation. For a Gaussian distribution b = 1.4826 (Rousseeuw & Croux 1993), which we use for simplicity. We use the median, as using the mean is not appropriate when the first moment of the distribution tail is large (Press et al. 1992); the large tails in the distributions of AoV-FAP and LS-FAP are shown in the right-hand plots of Figs. 4 and 5. The median and MAD are more robust statistics. We vary the number of MADs a source must be above the median to be detected depending on epoch, as the distributions of the LS-FAP and AoV-FAP parameters are different. For INT2 and ESO1



Figure 6. LS-FAP and AoV-FAP statistics in the left and right plots respectively, plotted against the period measured by those statistics. Spurious detections of variability are obvious at very short periods in AoV-FAP, while LS-FAP is less sensitive to longer period variability.

4.3 Sensitivity tests as a function of amplitude and period

Figure 7. The amplitude and period of sources classed as variable using the technique for combining LS-FAP and AoV-FAP. The grey-scale uses bin sizes of 0.01 mag in amplitude and 5 min in period, and the colour refers to the number of sources in that bin.

To determine the space densities of different classes of sources which vary on time-scales of less than 2 h, it is essential that we determine our sensitivity to different brightnesses, periods and amplitudes. To do this we inject sinusoids of known period and amplitude into non-variable light curves and then attempt to detect them using our LS-FAP + AoV-FAP test. We split the sources into bright and faint groups - brighter or fainter than g = 21.0 - and for each brightness range we inject a periodic signal into a non-variable light curve. The non-variable light curve is drawn randomly from a pool of light curves that have AoV-FAP and LS-FAP statistics within 0.5 median absolute deviations of the median AoV-FAP and LS-FAP of all light curves with g greater than and less than 21.0 for the faint and bright groups, respectively. The periodic signal injected is drawn from a grid of period-amplitude combinations where the periods range from 4-120 min and the amplitudes from 0.02-0.2 mag. We define amplitude as peak-to-peak. The advantage of using real (non-variable) light curves over simulated data is that it preserves the noise properties, which may be non-Gaussian. We run the entire grid 100 times for each brightness range, which allows us to build up the probability of a variable with a given period and amplitude being detected. The results are plotted in Fig. 8 and show that in the brighter sample, sources with a period less than 90 min have a 70 per cent chance of being detected if they have amplitudes larger than 0.05 mag, rising to above a 90 per cent chance for amplitudes greater than 0.10 mag. Stars fainter than g = 21.0 with an injected period less than 90 min have a 50 per cent detection chance if they have an amplitude greater than 0.08 mag, and only sources with a period less than 40 min and an amplitude greater than 0.15 mag have a 90 per cent chance of detection.
From these results we find that our LS-FAP + AoV-FAP method is relatively good at identifying variables with periods less than 90 min in bright sources, but is weaker at identifying periodic variability in the fainter sources. The advantage of this method is the small number of false positives expected to be detected.
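The injection-recovery grid can be sketched schematically as follows. The `detect` callable stands in for the paper's combined LS-FAP + AoV-FAP test, and all names and the interface are invented for illustration:

```python
import numpy as np

def recovery_fraction(t, base_flux, base_err, detect, periods, amplitudes,
                      n_trials=100, rng=None):
    """Injection-recovery grid: add a sinusoid of each (period, peak-to-peak
    amplitude) combination to a non-variable light curve and record how often
    the supplied detect(t, y, err, period) callable recovers it."""
    rng = np.random.default_rng(rng)
    frac = np.zeros((len(periods), len(amplitudes)))
    for i, p in enumerate(periods):
        for j, a in enumerate(amplitudes):
            hits = 0
            for _ in range(n_trials):
                phase = rng.uniform(0, 2 * np.pi)
                # peak-to-peak amplitude a corresponds to a sinusoid of
                # semi-amplitude a / 2
                y = base_flux + 0.5 * a * np.sin(2 * np.pi * t / p + phase)
                hits += bool(detect(t, y, base_err, p))
            frac[i, j] = hits / n_trials
    return frac
```

Using real non-variable light curves as `base_flux` preserves any non-Gaussian noise, as noted above.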

light curve is removed, allowing us to measure the chance that any variability present is due to noise. We reconstruct 10^5 bright and 10^5 faint randomly selected light curves and attempt to detect variability using our method for combining the AoV and LS algorithms. Bright and faint refer to sources brighter than and fainter than g = 21.0, respectively, where 21.0 is approximately the median g magnitude. For the bright sample we class 10 stars as variable, and for the faint sample this increases to 17. This equates to a false-positive rate of 0.01 and 0.02 per cent for bright and faint sources, respectively. To find the improvement in the false-positive detection rate we run the same routine but using only one of the statistics, i.e. LS-FAP or AoV-FAP. The method of detection is the same as the first stage of the two-algorithm method: is a source found to have a variability statistic above the detection threshold for its period? When using only one of AoV-FAP and LS-FAP the false-positive rate is around 0.5 per cent. When using the two-algorithm method the number of false positives is re
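The bootstrap reconstruction described in this section can be sketched as an illustrative helper (names invented): the time stamps are kept while (flux, error) pairs are drawn with replacement, which destroys any real periodic signal but preserves the noise distribution.

```python
import numpy as np

def bootstrap_light_curve(t, flux, err, rng=None):
    """Scramble a light curve for false-positive tests: keep the N time
    stamps but draw N (flux, error) pairs with replacement from the
    original N points (pairs are never split)."""
    rng = np.random.default_rng(rng)
    idx = rng.integers(0, len(t), size=len(t))
    return t, flux[idx], err[idx]
```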