
WBC 2003
o Observing Narrabri 9-16 Oct 2003 [RDE updated 21 Oct 2003]
o Status of observing and data recording
> Trellis pattern at 15 deg/min started with 11 deg scans from declination
-59.5 to -70.5 at UT 05:00 on Oct 9; interleaved with 2.3' separation
> 3 baselines (2-3 = 30m, 3-4 = 30m, 2-4 = 60m) recorded with UTC and Tsys
at 17650 and 20450 MHz
> sampled at 54 msec, with 4.5 msec of data blanked each scan
> at 15 deg/min we have 48" per sample and 3.2 samples per HPBW (2.6')
> we observe calibrators by tracking them through transit (+/- 5 min)
o Status of calibration
> The bandpass is known (from the spectrum analyser) to have a 15 dB slope
across the band, with 6-7 GHz of usable bandwidth.
> Warrick transforms the 16 "lag" channels into 8 complex frequency channels
[sketch below]
> the resulting "bandpass" is uncalibratable and unphysical
. while tracking a calibrator through transit the amplitude of any
channel has a fast oscillation (2 sec) which is 20-30% of the
correlated flux, and a slow (30 sec) variation which is 100% of the
correlated flux!
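
A minimal sketch of the lag-to-frequency step above (not Warrick's actual
code): it assumes the 16 real lag values go through a real FFT and that 8 of
the resulting complex channels are kept; exactly which channels are dropped
(DC, Nyquist) is an assumption.

    import numpy as np

    def lags_to_spectrum(lags):
        # One integration: 16 real correlator lag values in, complex
        # frequency channels out. rfft of 16 real points gives 9 complex
        # values (DC .. Nyquist); keeping channels 1..8 is an assumption.
        lags = np.asarray(lags, dtype=float)
        return np.fft.rfft(lags)[1:9]

    # e.g. a unit spike at zero lag gives a flat spectrum
    print(np.abs(lags_to_spectrum([1.0] + [0.0] * 15)))
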
> time transform bandpass
. we can also measure the bandpass through each lag channel by taking
the FT of the time sequence while tracking a source through transit;
in this case we have a physical delay changing as the Earth rotates
[sketch below]
. the frequency spectrum has to be transformed back to lag space so
each multiplier channel can be analysed separately
. These bandpasses look sensible and show only small changes between
multiplier channels (about 5%)
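
A sketch of the time transform bandpass above, assuming the tracked-calibrator
data sit in a complex (time x lag channel) array; the FT along the time axis,
where the physical delay sweeps as the Earth rotates, gives a bandpass-like
profile for each multiplier channel. Shapes and names are illustrative.

    import numpy as np

    def time_transform_bandpass(vis_lag):
        # vis_lag: complex array, shape (n_time, 16), one column per
        # correlator lag/multiplier channel while tracking through transit.
        # The FT of each channel's time sequence turns the sweeping delay
        # into a bandpass-like profile for that multiplier channel.
        fringe_spec = np.fft.fftshift(np.fft.fft(vis_lag, axis=0), axes=0)
        return np.abs(fringe_spec)   # one profile per multiplier channel
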
> calibrating the correlator
. since the transform of the correlator lag channels (at any point in
time) is not physical, the "lag" channels are not pure lags. They
could have incorrect delay, or complex gain errors.
. both need to be corrected before calculating the spectrum
. note that a delay error in a channel will cause a phase slope with
frequency going through 0 phase at 0 frequency, whereas a phase error in
a channel will be a constant offset with frequency [sketch below]
. Lister finds that the phase gradients for each channel fit the
expected delays
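
A sketch of separating the two error terms: phase(nu) = 2*pi*nu*tau + phi0, so
a straight-line fit of phase against frequency gives the delay error (slope)
and the constant phase error (intercept) for each channel. The frequencies and
error values below are made up for illustration.

    import numpy as np

    def fit_delay_and_phase(freq_hz, phase_rad):
        # Fit phase(nu) = 2*pi*nu*tau + phi0 for one multiplier channel:
        # the slope gives the delay error, the intercept the phase error.
        slope, intercept = np.polyfit(freq_hz, np.unwrap(phase_rad), 1)
        return slope / (2.0 * np.pi), intercept

    # illustration: 1 ns delay error plus a 0.3 rad phase offset
    freq = np.linspace(17.65e9, 20.45e9, 8)
    tau, phi0 = fit_delay_and_phase(freq, 2 * np.pi * freq * 1e-9 + 0.3)
    print(tau, phi0)   # ~1e-9 s, ~0.3 rad
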
o corrupt data points
> there were 8 single 50 msec points with random (phase-closing) numbers in
all baselines in a 10-hour scan. These will have to be filtered out.
They have the wrong time response (less than a beam) and the wrong
spectral response.
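
One possible filter for such points (a sketch, not the pipeline's actual
code): anything confined to a single 50 msec sample is much narrower than a
beam crossing, so a running-median comparison flags it. The window size and
threshold are illustrative.

    import numpy as np

    def flag_single_sample_spikes(amp, window=5, nsigma=6.0):
        # Flag isolated single-sample spikes in a scan of amplitudes.
        # Real sources are smeared over several samples, so a point that
        # deviates strongly from the running median of its neighbours is
        # treated as a correlator glitch.
        amp = np.asarray(amp, dtype=float)
        pad = window // 2
        padded = np.pad(amp, pad, mode="edge")
        med = np.array([np.median(padded[i:i + window])
                        for i in range(amp.size)])
        resid = amp - med
        sigma = 1.4826 * np.median(np.abs(resid - np.median(resid)))
        return np.abs(resid) > nsigma * sigma   # boolean flag mask
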
o triple correlations
> the triple product between baselines 2-3, 3-4 and 2-4 measures a triple
amplitude, which is the same as the amplitude for a point source, and the
phase closure, which is 0 for a point source
> a test on the calibrator with no correlator calibration gave a phase
closure of approx -180 deg for spectral channel .... at transit (complex
variations with time either side of transit)
> if the phase closures can be corrected and stay at 0 we can use this as
a source detection algorithm. The triple amplitude is an optimum estimate
of the flux for a point source. We can scalar average the triple amplitudes
across spectral channels and between scans even with no phase
calibration! [sketch below]
. this result seems odd - e.g. there would be no need to calibrate the
bandpass, and there would be no bandwidth smearing of a triple
product???
. maybe this only works if the S/N per triple product is significant
> run the triple correlation detection algorithm on the two good spectral
channels
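
A sketch of the triple-product quantities above, assuming complex visibilities
on baselines 2-3, 3-4 and 2-4: the closure phase is zero for a point source,
and the triple amplitude is phase-independent, so it can be scalar averaged
over spectral channels and scans with no phase calibration. The cube root used
to turn the averaged triple amplitude back into a flux-like number is an
assumption.

    import numpy as np

    def triple_product(v23, v34, v24):
        # bispectrum of the 2-3, 3-4 and 2-4 baselines
        bispec = v23 * v34 * np.conj(v24)
        return np.abs(bispec), np.angle(bispec)   # triple amp, closure phase

    def averaged_triple_flux(v23, v34, v24):
        # scalar average of the triple amplitude over all channels/scans,
        # then a cube root (assumption) to get back to a flux-like scale
        amp, _ = triple_product(v23, v34, v24)
        return np.mean(amp) ** (1.0 / 3.0)
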
o Mike on bandwidth determination/calibration
> track a calibrator across transit
. 200 time stamps and 16 delay channels
. taking the 16 lags of real data and cross-correlating with the calibrator
gives the best estimate for source detection as a function of delay
. the rms for one baseline and one time stamp is 60 mJy
. Roberto claims 19 mJy on the triple averaged over the primary beam
> recover the bandwidth by convolving the lag outputs with the transform of
the inverse of the bandpass
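
A sketch of the bandwidth-recovery step above, assuming the correction is a
circular convolution of the lag outputs with the lag-space transform of the
inverse bandpass; the bandpass input and the regularisation of small values
are assumptions.

    import numpy as np

    def correct_lags_for_bandpass(lags, bandpass, eps=1e-3):
        # lags:     16 real correlator lag values (one integration)
        # bandpass: complex frequency response for the same channels
        #           (e.g. from the time transform bandpass above)
        inv_bp = 1.0 / np.where(np.abs(bandpass) > eps, bandpass, eps)
        kernel = np.fft.irfft(inv_bp, n=len(lags))   # inverse bandpass, lag space
        # circular convolution of the lag sequence with the correction kernel
        return np.real(np.fft.ifft(np.fft.fft(lags) * np.fft.fft(kernel)))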

o Data directories
> Gridded FITS images
- /DATA/MEDIA_1/lstavele/mjk

> C1049 data files
- /DATA/CORVUS_1/ric301/2003-11-02/

> RDE area
- /DATA/AMBROSIA_1/rekers/

o Book keeping
> Lister: directory /DATA/MULTI_8/C1049/miriad/inter_? has a "files.txt"
which lists all the file names relevant to that interleave and any
relevant notes such as weather.
> various files in WBC/book_keeping_Nov03
. copy of each interleave note from Lister
. copy of Roberto's scan rms

o Followup statistics (memo sent out 12 Nov 2003)
> Statistics on our success rate with the follow-up. This is still crude
but indicates some very obvious trends.
> I am comparing the Pilot, the first day of follow-up, Mike's + Mark's
gridded template coefficients and Roberto's triple correlation along
scans. I am also testing triples projected to phase closure zero -
which is good for point sources. The follow-up was done on Mike's list.
> Above 0.5 Jy:
. Everything agrees except for a few confused galactic regions and
about 20 spikes that got left in Mike's list.
> 0.1 to 0.5 Jy:
. Follow-up has a very high success rate and we add about 100 new
sources which were missed in the pilot survey. All detection lists
agree pretty well except for the triples projected to closure zero -
I suspect this method is losing many extended sources so we
wouldn't want to use it in this range.
> 0.1 to 0.06 Jy:
. Disappointing - Mike's list drops rapidly to a 25% success rate and we
only get 30 new real sources. In this range Roberto's list looks a
lot better, but since we didn't follow up on his positions we don't know
what would have happened - it may have had 100-200 sources by the
look of the counts. In retrospect a source count analysis on Mike's
list would have shown the problem since it has a deep minimum in
this flux range.
> < 0.06 Jy:
. Mike's list is now essentially random positions, with follow-up
success dropping to a few %. The large increase in his counts in
this range is all noise. Roberto's 4 sigma cutoff on single scans
is at about 70 mJy so he adds nothing either. It looks like we
didn't gain as much as we hoped from the scan combinations. This
will need more work.
. Below 50 mJy the follow-up is acting as an independent high
sensitivity survey of a small area of sky (selected randomly) down
to 2 mJy. It has 40 real sources which will be an interesting sample
in its own right. With the full follow-up analysed the limit should
drop to 1 mJy and this number may double.
> Conclusions:
. We needed more time between the survey and the follow-up!
. If we could coherently combine scans we would gain x2
. If we had the full bandwidth we would gain x2
. This suggests that the existing data should be able to reach 25 mJy
and add 1000 more real sources. [We now have 341 sources.]
. Now that we have a better understanding of the follow-up strategy we
could do it in 1-2 days. The hybrid with 5 antennas worked very
well even with one cut.