Original document: http://www.naic.edu/alfa/ealfa/meeting2/note_spekkens.txt

ALFA meeting, AO, 8-9 May 2004 (KS, MH)

**The notes reflect as closely as possible what actually happened, but
because they have been produced from written notes of the proceedings, they
are intended to provide a summary, not a transcript.**

Abbreviations of names used:
SS - Steve Schneider
ST - Steve Torchinsky
BB - Bob Brown
TH - Trish Henning
RM - Robert Minchin
SL - Suzanne Linder
JD - Jon Davies
Chris - Chris Salter
MP - Mary Putman
RA - Robbie Auld
EdB - Erwin de Blok
MM - Martin Meyer
WF - Wolfram Freudling
KO - Karen O'Neil
ErikM - Erik Muller
AS - Amelie Saintonge
MH - Martha Haynes
RG - Riccardo Giovanelli
BC - Barbara Catinella
CS - Chris Springob
Sabrina - Sabrina Stierwalt
BK - Brian Kent
KS - Kristine Spekkens
LH - Lyle Hoffman
JR - Jessica Rosenberg
Desh - A. Deshpande

********************
Saturday, 8 May 2004
********************

Welcome and Overview 8:30-9:30am (SS)
-------------------

Welcome: BB
- I am a proponent of conducting ALFA in consortia
- The role of the national center is to strengthen university groups
- While there is a partnership between NAIC and the universities, the
basic work should go on at the universities
- There also has to be a partnership between universities... these meetings pave
the way for these partnerships
- In addition, it costs a lot to undertake EALFA surveys, both in human
resources and in telescope time
- In recent years, NSF has shifted from funding agencies like NAIC to funding
universities; we therefore need to lean on people at universities to tap the
financial resources that universities can access.
- I'd like to talk to people from various institutions about how funds
may be raised, particularly those outside the US
- It would be nice if, tomorrow, proposals weren't only written to NAIC but
also to NSF.
- A multi-institutional proposal would likely be better received than one from
a single institution.


Meeting Overview: SS
- Everybody here has a lot of expertise and similar goals... we
need to work effectively together
- We can do everything that we want to do, and do it in a variety of ways. Our
job is to figure out which way is best.
- Throughout this meeting, I'd like to approach each topic in the agenda and
make sure there is a consensus. If we achieve that, we can move on to the next
item. If simple clarifications aren't enough, we will defer to tomorrow to
keep things moving.
- I hope this meeting will be productive, so that by tomorrow we can draft
potential plans for EALFA observing as well as commensal observing with other
groups
- There is a lot to do!
- We need a set of organisational guidelines for the group, which is what Karen
will discuss next.


Consortium Rules and Guidelines: KO
- Updated consortium rules from white paper are in the handout.
(http://alfa.naic.edu/extragal/meeting2/handouts/consort_rules.shtml)
- They are simple and incomplete, but hopefully good for laying
groundwork... we can decide on other details later.
- A late addition - since AO is an NSF-funded institution, we should have
one person at a US institution on the steering committee.
DISCUSSION:
RG: There is lots of good content in the proposal, but each EALFA survey
will do things in drastically different ways; so, different groups in
the consortium will have different goals and data products
- There should be some sort of cross-disciplinary ALFA panel that bridges
across the GALFA, PALFA, EALFA surveys.
- Each of the surveys then runs itself, instead of having one set of rules;
in this case there is no EALFA steering committee.
KO: How do you propose to change the guidelines that we have?
RG: For example: there will be 6 proposals in EALFA, and each has different
legacy products, distribution, and outreach. Why not have a steering
committee oversee all of them?
JD: I worry that the whole consortium will fragment into lots of different groups.
What we need is a pro-active steering committee that coordinates the
surveys
RG: Right now we have two different proposals - we just need to make sure
that they don't overlap. Coordination is even more important among
different consortia than amongst the EALFA surveys.
MP: Perhaps there needs to be a super, cross PALFA/GALFA/EALFA steering
committee.
RG: Maybe that committee should be made of one representative from each
survey.
MH: For each project there has to be a PI, that PI will be responsible for its
success, make progress reports, etc. What do we do with them?
KO: So there are a few different possibilities:
A - steering committee, as before...
B - have a committee that is inter-disciplinary among PALFA and GALFA;
make recommendation to NAIC that this needs to be done
C - propose an expanded steering committee to actively drive all science
JD: I propose C - need cross-survey interaction within EALFA in addition
to among the different consortia
MH: The concern is a disconnect between PI of surveys and steering committee.
RG: Is there a difference between options B & C?
KO: Yes. B is focussed on surveys, whereas there is still a steering
committee in C
MH: We could make a "D", that is combination of B+C: a steering committee
that includes but is not limited to the PIs.
RG: Call it a coordinating committee, since the committee isn't actually
telling the survey PIs what to do.
JD: I agree with the "coordinating" wording.
RG: I propose that the rules be recommended to each survey team, but that each
survey team should feel free to add and subtract depending on specific
survey needs.
MH: The language of the rules just needs to be updated to be more inclusive
RM: The consortium rules don't specify who is part of the actual
surveys that will be undertaken. We could adopt rules specific to EALFA,
but then surveys have their own guidelines.
MP: This type of issue came up before with HIPASS - in the end, people
were just dropped off lists, and eventually we iterated to a set of
rules like these - it's good to have them from the start
KO: Could MH draft a plan that we can circulate tomorrow?
MH: Sure.
SS: That sounds like a good plan.


ALFA Project Overview 9:30-10:20am (ST/Desh)
----------------------------------

Receiver performance: ST
- March: acceptance testing in Australia
- We used an absorber for the warm load and cold sky for the cold
load - you can see the feeds here but there is an RFI skirt now.
- The plot of Tsys includes sky + load Tsys - the sky is assumed to be
about 5 degrees; the receiver temperature alone is a few degrees.
- ALFA arrived at AO on April 1st: exactly when they said it would!
- ALFA first went into the transmitter lab, and the first thing we did was
to take it apart, since none of the AO observers had seen it before.
- Hopefully we won't have to open it up again.


Receiver commissioning: ST
- WAPP is initial backend for everybody
- GALFA contracted to Berkeley (delivery time: Jan 2005)
- PALFA is being done at AO (delivery time: Jan 2005)
- EALFA has the longest to wait; construction will start at AO after the
PALFA backend is done.
- The EALFA machine will be identical to PALFA except for different software
(delivery time: Jan 2006)
- PALFA schedule is slipping, but that shouldn't affect EALFA backend since
the technology is similar
MH: What is the cost of the backend?
ST: I don't know off hand, but there is money to build it.
KO: Will the EALFA backend be a real 200 MHz band?
Desh: Should be real, with 25 kHz channels
KO: If we can get better than 25kHz easily, that would be good from an RFI
excision point of view
RG: How long after delivery of the GALFA backend will commensal observing
begin?
Desh: We expect that GALFA will be commissioned by June 2005; it will be
delivered in Jan but will take till June to get running
MH: What about EALFA? If it is delivered in Jan 2006, when can we
begin observing? 6 months makes a difference because that's a
season of the extragalactic sky
Desh: Commissioning shouldn't be a problem, since the software is not that
different from the PALFA software, but I am not sure when observing
will begin.
ST: We are starting to do some tests. We are not doing them linearly, because
some will be easier to do later. The shared-risk proposals in July will
also help us to do some of them. Remember that these are shared-risk -
you may not get too much science done but it will be useful to us at AO.


Backends description - Desh
- Starting to do a number of tests on the backends. There are four aspects to
worry about: channel similarity, cross-talk, stability and dynamic
range.
- If we put the same signal into all 14 channels, then we should get the same
thing out - you can see that all but one channel shows this. A correlation of
0.65 between the channels is consistent with our expectations.
- extra noise that the digital correlator produces is completely
uncorrelated
- when ALFA is actually operational, each pixel is looking at a
different part of the sky. In that case, the average correlation
between the channels is expected to be 0; statistics are as expected.
- We find that the dynamic range is about 28 dB.
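The 0.65 correlation quoted above is what one expects when a common test signal rides on independent per-channel receiver noise: the cross-channel correlation equals the common-signal fraction of the total power. A small simulation (illustrative numbers only, not the actual test setup) makes this concrete:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samp = 200_000

# Common test signal injected into all 14 channels, plus independent
# receiver noise in each channel. The power split is chosen so that the
# expected cross-channel correlation is sig_var / (sig_var + noise_var) = 0.65.
sig_var, noise_var = 0.65, 0.35
signal = rng.normal(0.0, np.sqrt(sig_var), n_samp)
channels = signal + rng.normal(0.0, np.sqrt(noise_var), (14, n_samp))

corr = np.corrcoef(channels)                 # 14x14 correlation matrix
off_diag = corr[~np.eye(14, dtype=bool)]     # drop the trivial diagonal
mean_corr = off_diag.mean()                  # ~0.65 by construction
```

Conversely, with no common signal injected (each pixel on a different patch of sky), the same statistic averages to zero, which is the operational expectation stated above.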
RG: Why are the sidelobes on either side of the main beam different for the
central pixel?
Chris: We still need to adjust ALFA's position on the turret floor. This
should even them out.
RG: The sidelobes look very low, which is good.
Some characteristics:
- gain vs. ZA: gain is over 10 K/Jy for low ZA, slightly smaller for high ZA
- Tsys is ~30K over most of the ZA range
- SEFD is 2.85 Jy for the central pixel: almost exactly as predicted
- HPBW: 3.3 arcmin in AZ, 3.8 arcmin in ZA; within 0.1 arcmin of what German
Cortes predicted!
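These figures hang together through SEFD = Tsys / Gain. A minimal consistency check, assuming a gain of 10.5 K/Jy (an assumed value; the notes only say "over 10"):

```python
# SEFD (Jy) = Tsys (K) / Gain (K/Jy); values taken from the notes above,
# except the gain, which is an assumed 10.5 K/Jy ("over 10 for low ZA").
t_sys = 30.0          # K, quoted system temperature
gain = 10.5           # K/Jy, assumed
sefd = t_sys / gain   # ~2.86 Jy, matching the quoted 2.85 for the central pixel
```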


Discussion with ALFA Project Managers 10:40-11:30am
----------------------------------------------------

SS: We have an open phone line in case anyone wants to call in.
- I asked a question during the break. It is in azimuth that the beams are
asymmetric, so that should be fixable.
KO: What is the difference in gain for outer beams vs. inner beam? Even
if you don't have the data can you tell us the predicted values?
Desh: We have the data. What you see at the bottom are the relative gains.
The beams are within 10 - 20 % of the main beam.
RG: Can you comment on the impact of the tertiary skirt?
Desh: We're not certain, but Tsys should move comfortably below 30K
(will likely lower by 4K over present values)
JD: Can you compare the ALFA pixels with the present L-wide receiver?
Desh: Right now, the central pixel is almost as good as current L-wide. The
tertiary skirt will make it at least as good as L-wide, and the other
beams are within 20%
Desh: The "dark side" is the stability (we are working on this). We need to
test this for very deep integrations.
- Cross-pol performance is expected to be better than 20dB isolation.
Chris: That's a good starting point for continuum work.
SS: What was the zenith angle range used in the tests?
Desh: The telescope was moving almost rise to set.
RG: Do we have information about the change in sidelobe structure with ZA?
Desh: No. We do not yet have enough data to do that since we are not
scanning fast enough
RG: So coma lobes shown may depend strongly on sky position?
Desh: Yes.
SS: One issue is the change of sidelobes with position for mapping, but I
expect that will come with more testing.



Surveys - Update and Discussion 11:30am-12:30pm (JD)
----------------------------------------------

ALFALFA: RG
- MH circulated a document that has details of all the science we intend to
do, so I'll highlight only a bit here.
(see http://alfa.naic.edu/extragal/meeting2/handouts/high_gal_lat_prop.pdf)
- Redshift surveys show excellent agreement with large-scale structure
predicted in simulations of structure formation in a LCDM Universe, but
on small scales things don't match as well
- For instance, there is a sub-halo problem, where there seem to be fewer
low-mass halos than predicted.
- HI Mass Function (HIMF): Derived from AO data in the late 90s, and the
slopes of the faint end differ by an order of magnitude among different
surveys. The HIPASS result comes somewhere in between these extremes.
- Summary of science: some goals have to do with the HIMF but others are quite
different: e.g. HI absorbers, OH megamasers at high z. This kind of survey
will sample lines of sight to thousands of sources.
- In considering the impact of the surveys, one must keep in mind that most
of the sources detected by HIPASS were not resolved: it is therefore
meaningless to compare column densities. When scaled to the same velocity
resolution, ALFA has more than 4 times the sensitivity of HIPASS.
- When we compare the number of low-mass galaxies detected in surveys with
12, 60 and 300 sec/beam for the same total integration time, it is clear that
the most low-mass sources are obtained for the shallowest survey covering
more area: longer integration time means fewer low-mass galaxies.
- I strongly believe that an all-sky survey has important legacy value.
- in Zwaan et al. 1997, there are only a dozen objects below 10^8 solar masses:
we don't know where they are!
- For ALFALFA, we only detect the lowest mass galaxies out to 5 Mpc
- We need to compare the HI sizes with the beam sizes of Parkes and AO: most of
the galaxies were unresolved by Parkes, but will be resolved with ALFA.
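The wide-vs-deep argument above can be made quantitative. For a fixed total survey time, the per-beam flux limit scales as t^(-1/2), so the distance out to which a fixed HI mass is detectable scales as t^(1/4), while the number of pointings scales as 1/t; the total volume probed for that mass therefore goes as t^(-1/4), favoring the shallowest survey. A sketch using the 12/60/300 sec/beam options from the talk:

```python
# Relative volume in which a fixed HI mass is detectable, for a fixed
# total survey time split into per-beam integrations t (seconds):
#   flux limit ~ t**-0.5  ->  d_max ~ t**0.25  ->  V per beam ~ t**0.75
#   number of beams ~ 1/t ->  V total ~ t**-0.25
per_beam_times = [12.0, 60.0, 300.0]
rel_volume = {t: t ** -0.25 for t in per_beam_times}

# Normalized to the deepest option: the 12 s/beam survey probes
# (300/12)**0.25 ~ 2.24x the volume of the 300 s/beam survey.
advantage = rel_volume[12.0] / rel_volume[300.0]
```

This is the same scaling invoked later in the discussion ("the depth of the survey goes up like t_int to the 1/4 power").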
DISCUSSION:
Desh: Will you really get to 10^6 solar masses?
RG: Yes, if they're there.
KO: What telescope parameters did you use to get the numbers you quoted?
RG: SEFD of 2.85 in centre, and 3.1 in outside
MM: HIPASS has only the 1000 biggest galaxies so far; the noise should go down
to 10 mJy for HIPASS with a re-analysis of the data.
RG: Note also that we have been conservative with the detection limit: if we
push the detection limit below a S/N of 5, we could detect many more sources
JD: What will ALFALFA do that's different? I am not convinced that ALFALFA
will do anything better than what has already been done
MH: Remember that for the nearest galaxies, we can get primary distances if
they have resolved stellar populations. For example, Karachentsev has a
program on the ACS to get more.
JD: At face value, area is the best choice for getting at low-mass end, but
with large peculiar velocities I am not so sure.
JD: Big issues that need to be discussed:
1 - how is ALFALFA better than HIPASS?
2 - which is better: depth vs. areal coverage
3 - scans - number
4 - RFI rejection
5 - early science


Ultra-deep survey: WF
- Want to go really deep: we have the biggest dish, so we should.
- We'd like to get at the SFR history of the Universe: the different relations
come from differences in the extinction correction
- What we will look at with AO is the size of a tickmark on this plot!
- However, if you plot SF history vs. time and not z, then you get
substantial fraction of the Universe's age in the redshift range that ALFA
will probe.
- Our knowledge of the evolution of HI comes mainly from damped Lyman alpha
absorbers (DLAs): at the low-z end, we don't know how the DLA counts
evolve with redshift
- DLA models don't match the data at the low-z end; they predict that the gas
density should evolve, but observations indicate the opposite.
- To differentiate between an evolving and non-evolving HI density, we need
to be able to discern a 50% change in the density at z~0.15
- Need 40 galaxies with M~10^9.5 Msun to get a signature at the 99% confidence
level.
- HIPASS is complete to 4000 km/s; ALFA will be complete to 15,000 km/s; we can
also use the higher spatial resolution to identify counterparts to the HI
detections in other bands
- Survey parameters:
0.2 mJy at z~0.16; rms = 0.05 mJy/beam; area = 0.36 deg^2, at 70 hrs/beam;
volume probed = 8000 Mpc^3. We will need 1000 hrs.
- The main limitation will be RFI. If it works at all, we will need 9-level
sampling.
- We plan on choosing a field at a dec of 17.5 or 19.5 deg, |b| > 30 deg. It
could be near an existing deep field.
- Will it work? It will depend on RFI! Does sigma decrease as sqrt(t),
and what will the efficiency be?
- We submitted a precursor proposal, in which we asked for half the desired
integration time per beam. It was placed in category C, so it will not be
scheduled. We need to revise and resubmit.
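The quoted rms can be checked against the ideal radiometer equation, sigma = SEFD / sqrt(n_pol * B * t). With the 2.85 Jy central-pixel SEFD, 70 hrs/beam, and an assumed 25 kHz channel (the EALFA backend figure quoted earlier), the ideal noise comes out roughly a factor of 2 below the quoted 0.05 mJy/beam, the difference plausibly absorbed by observing efficiency, bandpass calibration, and smoothing. A sketch under those assumptions:

```python
import math

# Ideal (loss-free) radiometer noise for the ultra-deep survey parameters.
# Assumptions: 2 polarizations, 25 kHz channels, 100% observing efficiency.
sefd_jy = 2.85              # central-pixel SEFD from the commissioning tests
n_pol = 2
chan_hz = 25e3              # assumed channel width
t_sec = 70 * 3600.0         # 70 hrs/beam

sigma_jy = sefd_jy / math.sqrt(n_pol * chan_hz * t_sec)
sigma_mjy = sigma_jy * 1e3  # ~0.025 mJy/beam, vs the quoted 0.05 mJy/beam
```

Whether the noise actually integrates down like sqrt(t) over such long integrations is exactly the open question flagged in the talk.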
DISCUSSION:
RG: What fraction of the HI masses that you find will be identified?
WF: We don't know.
JR: It could be high since in our survey there were lots of optical
counterparts; they are just not spirals.
KO: The observations will also pin down the redshift of optical counterparts
RG: How does the volume you plan to sample compare with that which has
already been done over time, with pointed observations along many lines
of sight?
JD: This would give you a manifestation of the "no-evolution" model?
RG: Right. It might be interesting to look into.
MH: We find that you can actually go to 1188 MHz (lowest allowed by
waveguides), and that there are some clean windows below 1225 MHz.
Will the 2nd generation spectrometer go below the canonical ALFA range?
KO: We have been doing work on RFI excision at low frequencies at the GBT. We
find that the ground-based radar can be removed, but sometimes you just
get blasted out.
MH: Will the new spectrometer do it? Can we sample fast enough to remove
this RFI?
KO: As a compromise, you would want to dump at 1ms intervals. Faster is
better, but need to create reasonable amounts of data.
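The data-volume concern behind the 1 ms compromise is easy to quantify. With 7 beams, 2 polarizations, and (as assumptions for illustration) 8192 channels of 2-byte samples, millisecond dumps already produce hundreds of MB per second:

```python
# Spectrometer data rate for fast dumps (illustrative numbers; the channel
# count and sample size are assumptions, not spectrometer specifications).
n_beams, n_pol = 7, 2
n_chan = 8192            # assumed: 200 MHz / 25 kHz channels
bytes_per_sample = 2     # assumed 16-bit spectral values
dump_sec = 1e-3          # the 1 ms compromise mentioned above

bytes_per_dump = n_beams * n_pol * n_chan * bytes_per_sample
rate_mb_s = bytes_per_dump / dump_sec / 1e6    # ~229 MB/s
tb_per_hour = rate_mb_s * 3600 / 1e6           # ~0.8 TB per hour
```

Halving the dump time doubles these rates, which is why "faster is better" has to be traded against "reasonable amounts of data".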
MH/BC: But there are parts of the spectrum below 1200 MHz that are accessible.
Should we push below 1225 MHz with ALFA?
BC: SDSS has made a new release. We could use it to figure out where to do the
survey.
JD: Big issues that need to be discussed:
1 - RFI
2 - does sigma go down as sqrt(t) all the way?
3 - baselining
4 - observing efficiency
5 - Chris - tracking
6 - SDSS areas to target

Medium-depth survey: JR
- HIMF less affected by distance uncertainties if the survey is a little
deeper
- We can compare the relative characteristics of different surveys of
different depths in this table. The number of independent volumes
that each samples is an important indicator. We can go out to greater
distances with longer integration times.
RG: Why are the numbers different with the 60s and VAVA columns?
SS, JR: it's because a different area was used in each column.
RG: For a given total amount of survey time, for a given mass the total
volume sampled should be smaller for a deeper survey, but here it is
bigger. Why?
(There is some discussion regarding the contents of the table here. It was
determined that they required further consideration).
- But where do you survey? Virgo region might be complementary to ALFALFA.
There is also Liese van Zee's SMUDGES survey area, and regions around
nearby galaxies.
- What survey strategy do we adopt? Do we do a patch of sky or a strip?
- There are also several data-taking strategies: the driftscan technique is
well-tested, but we could also step and stare. This might work best with
piggy-backing.
- Does all of this fit into a single survey? It will need a lot of time to
be done right. It's unclear whether this should be a commensal project, or
an EALFA priority.
DISCUSSION:
JD: Big issues that need to be discussed:
1 - What can we do in a given time?
2 - what region should we target?
3 - what area covered for each position?
4 - make sure that the table is correct, and explain it
MP: what is an "independent volume"?
JR: We define it as 10 Mpc^3.
RG: But then the number of independent volumes changes if you go to
smaller volumes.
JR: but those volumes might not be independent regions.
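One way to make the "independent volume" bookkeeping concrete (illustrative only; RG's objection is precisely that the answer depends on the shape of the surveyed region, not just its volume) is to count how many 10 Mpc^3 cells fit inside a survey cone of a given area and depth:

```python
import math

def independent_volumes(area_deg2, d_max_mpc, cell_mpc3=10.0):
    """Number of cell_mpc3 cells in a cone of the given sky area and depth.

    Treats the surveyed region as a simple cone: V = (Omega/3) * d_max**3,
    with Omega the solid angle in steradians.
    """
    omega_sr = area_deg2 * (math.pi / 180.0) ** 2
    cone_volume = omega_sr / 3.0 * d_max_mpc ** 3
    return cone_volume / cell_mpc3

# e.g. a hypothetical 100 deg^2 patch surveyed out to 100 Mpc:
n_cells = independent_volumes(100.0, 100.0)   # ~1000 cells
```

A long skinny strip and a compact patch of equal volume give the same count here, which is why the table's numbers needed further discussion.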



ZOA surveys: TH
- Obscuration due to dust and high stellar density in the Galaxy blocks 20% of
the optical extragalactic Universe.
- Need all-sky map of surrounding mass inhomogeneity to understand the Local
Group's motion and dynamical evolution.
- An HI survey can map the large-scale structure (LSS) in the regions of
worst obscuration.
- ZOA Parkes survey: has an RMS of 6 mJy, instead of HIPASS's 14 mJy.
- ZOA fills in with galaxies from the Parkes ZOA survey!
- ZOA in the AO sky cuts through some important known superclusters. Some have
already been surveyed by the northern extension of Parkes.
- Due to likely pressure on popular, low-b portions of AO sky, the best bet
for a ZOA survey is commensal observing with GALFA or PALFA.
- 1. GALFA survey parameters: single or double drift;
uniform sky sensitivity;
Nyquist sampling with double drift - 460 hours?
- What do you get over Parkes? Better positional accuracy, different area,
not much deeper.
- 2. PALFA survey parameters: galactic plane |b| < 5; enormously deep! But the
observing mode introduces complications from the varying feed geometry
- Really would want the 200 MHz EALFA backend for this survey.
DISCUSSION:
RG: We could put pressure on PALFA to change position rapidly, to get better
baselines. The more passes the better from a ZOA point of view.
Desh: They are definitely thinking of 2 passes.
RG: 3 would be better
JD: Big issues that need to be discussed:
1 - commensal or not?
2 - better than Parkes ZOA?
3 - 200 MHz bandpass?


Coordinated high-b drift survey: MH
- Other consortia are more interested in low latitudes than we are, so we
should dictate the higher latitudes.
- This only requires 100 MHz of bandwidth because the surveys are shallow.
- This is a very broad-based group: not many are here since many work in very
different areas
- This survey should also lay the groundwork for early science
- It will do M33, which is south of M31, which should confirm the existence
of the HVCs detected by Thilker et al. 2004. It will also cover Leo I, a
swath perpendicular to the supergalactic plane. It might also find HI
absorption. A few hundred strong continuum sources will be mapped in the first
two years of the survey.
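The 100 MHz figure follows directly from the redshift coverage a shallow survey needs: tuning the 1420.406 MHz HI line down to 1320 MHz already reaches cz ~ 23,000 km/s, while the full 200 MHz band (down to ~1220 MHz) reaches z ~ 0.16. A quick check (band edges here are round illustrative numbers):

```python
F_HI = 1420.406   # MHz, rest frequency of the 21 cm HI line
C_KMS = 299792.458

def z_max(f_low_mhz):
    """Maximum HI redshift observable down to frequency f_low_mhz."""
    return F_HI / f_low_mhz - 1.0

z_100mhz = z_max(1320.0)       # ~0.076 with a 100 MHz band
z_200mhz = z_max(1220.0)       # ~0.164 with the full 200 MHz band
cz_100mhz = C_KMS * z_100mhz   # ~23,000 km/s, ample for a shallow survey
```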
DISCUSSION:
MH: Big issues that need to be discussed:
1 - what will the survey do and what won't it do?
2 - how can we actually see that young people are involved?
3 - how can we coordinate with other alfa projects?
4 - are detection estimates correct?
Desh: how is this different from ALFALFA?
MH: It's part of both ALFALFA and VAVA. We also stated that we wanted to do a
300 sec box somewhere else, but that we don't know where to put it yet.
SS: There seems to be a lot of overlap between medium-deep and this survey.
MH: Yes, and we're not committed to any specific path either. It just shows
that we have a plan.


Surveys - Update and Discussion, Cont. 13:30-14:30 (JD)
----------------------------------------------

KO: Desh asked me to say that we have eight 100 MHz bandpasses that can be
used as 4 x 200 MHz.
MP: Can we really do all of these surveys?
MH: We should let PIs take these surveys and run with them, and see what
happens
JD: Agreed. Should we assume that the PIs of the surveys are the heads of the
groups? We assume that there are 4 surveys that are our baselines: ALFALFA,
Medium-deep, ZOA, Ultra-deep.
- Let's start with ALFALFA: is this really better than HIPASS?
RM: I circulated an alternative plan, where there are two passes of ALFALFA.
The mass you detect goes down. There are two other advantages: a) 2
passes yield a better rejection of interference, b) you get smooth
coverage of sky, not corrugated.
- Is there an optimum area of sky to get the most galaxies?
MH: Strategy is very important in the AO range, because of the strong large
scale structure. We want to do a Virgo - anti-Virgo survey (VAVA). There
will be many more galaxies in the springtime than in the fall.
RG: I like RM's suggestion: it would get rid of the scalloping. The depth of
the survey goes up like t_int to the 1/4 power, so we don't gain much in
depth by integrating longer, which is the main thrust of this alternative.
Limitation: we will get into problems about the selection of the area.
It will likely be commensal, so if we are selective about what we want that
will make things difficult. Best time to decide this is after the
precursor proposal.
JD: So should we say that we will use the precursor observations to define
the survey later on?
JD: There is a strong case for doing the equatorial strip; that region of the
sky will be extremely well-studied.
MH: The Virgo region will be done by SDSS, so it will be well-studied too. In
the equatorial region Tsys goes up and the sidelobes go up because of the
large ZA. So equatorial region might not be good.
SS: Something from Lister: there is a part of AO sky that was not covered by
HIPASS: a band above 25deg. You can make a science case for the northern
part of the sky. The serendipity is likely to be greater in that part of
the sky.
LH: Good points have been raised on both sides of the issue.
KO: I prefer the deeper survey.
SS: There is another advantage to ALFALFA: it sets the stage for other deeper
observations that might want to be done. It will be a useful precursor to
these.
MP: Single scan is worrisome: we had this problem with HIPASS. With 2 passes
you get rid of RFI and other problems.
MH: Maybe we will need to modify this in later years, since it is not clear
whether this will also be the case with ALFA given the better spatial and
velocity resolution.
RG: Determining the optimum S/N above which to re-detect is an interesting
problem because AO slews so slowly.
RM: We can change to two passes later on, after 2 years
JD: Can we trust the PI to make a case that we will get much more out of this
than HIPASS?
SS: But the precursor proposal won't do much science...
MH: And it is a shared-risk project, too.
SS: We should get more out of a single, long strip.
MH: But you don't get the low-mass galaxies.

JD: Let's move onto the medium-deep survey.
RG: I have a question about volumes: what governs whether you list an
independent volume in the table or not?
JR: You sample pieces of volumes - it's the number of volumes that you cut
through that I listed in the table.
RG: Then the number of volumes that you list is skewed, because when it's
skinny in one direction you get a different number than when it's skinny in
the other direction.
JR: Yes.
MH: Then you can't compare that to the large-area surveys
RG: It doesn't make sense to talk about independent volumes of 10 Mpc; makes
more sense to talk about 1 Mpc scales.
EdB: Coming from CDM bias, we need to push down the HIMF, and it doesn't
matter how you do this, whether you use a shallow or deep survey.
MM: I agree. I want to push down the HIMF too.
EdB: I am thinking of the Sculptor group in HIPASS, where they pushed down to
10^7 and didn't find anything.
MP: There are low-mass galaxies in the vicinity of bigger ones, like M33 and
M31.
MH: But you can't ignore galaxies with recessional velocities that are less
than 350 km/s.
RG: People have gone deep before, and they have found 10^6 solar masses
objects. The problem is that we don't know how many of them there are or
where they are.
JD: Action item: We want someone in charge of doing both patches and a strip,
and they can report back.

JR: A question about the ultra-deep survey: In setting this up to try and
understand the evolution of galaxies, comparing the number density of
galaxies you detect with number density of DLAs might be flawed: these may
be different populations, so you're not necessarily comparing the same
things. Is there a way to check this internally?
WF: This is a well-defined sample in and of itself; we can look at evolution
internally. Whether there are DLAs in the sample too remains to be seen.
RG: Have you looked at how many continuum sources there are likely to be in
your survey area, to look for absorption?
MP: Will you point at a quasar?
WF: We haven't really thought about it.
JD: It is something new and different, but that you have to keep track of. It's
unclear whether the noise will decrease as the square root of t_int or not.
You need to talk with AO tech staff, because this is a technical problem.
We also need to choose the area properly. Should we use SDSS?
BC: There are 400 000 SDSS galaxies in the latest data release.
SS: There are the Spitzer areas that are interesting too. We could target
those.
RG: With such a small survey area, you have to watch out for shot noise in
SDSS and in SWIRE.
WF: How are SWIRE fields selected?
RG: For their low-IR background.
WF: Then it should be random for extragalactic purposes.
BB: Do you get just one paper at the end out of this, or others in between?
WF: Who knows? You might just get one.
JD: Action item: find an interesting part of sky to point at.


JD: Now to ZOA survey. Which survey is it commensal with?
TH: Is it commensal to GALFA or PALFA? The depth of PALFA is desirable, but
PALFA has a complicated tile pattern for its data-taking strategy.
Chris: There is also the GALFA recombination line survey with possible
commensality with PALFA, though the data-taking methods are different.
RG: I don't see any reason why we can't piggy-back on both GALFA and PALFA.
One part could be in drift mode, and then you go to 300sec with
PALFA too and go deeper. The key here is to make sure that the data
is not compromised.
MH: There is a big data reduction effort in the ZOA, since we have all this
continuum emission. We need someone else to step forward and deal with this.
But it's free in some sense if it's a commensal survey.
JD: Action item for the PI of a ZOA survey: coordinate with GALFA.
MH: It will be difficult to tell them where to point
JD: Yes. We will have to get what we can out of their choice.
SS: With PALFA, I am worried about the beam-spacing. They haven't decided
whether they will Nyquist sample or not. We could still influence this.


Precursor Proposals: Objectives and Organization 14:30-15:30 (KO)
----------------------------------------------

Drift Scan: RG
- I don't have a powerpoint presentation.
- The proposal was submitted in February. It was reviewed and it will be
scheduled.
DISCUSSION:
JD: How can we get involved in helping out with this proposal?
RG: We need to know when the observations are taking place. They will likely
be spread out over 30-35 days and we'll need help just getting the data.
- We should ask BB about the possibility of travel support.
- Where we go from there will depend on who will be willing to help; there are
some tasks that need to be done that are outlined in the proposal.
JD: Where in the sky will you look?
MH: One block is near M33, where we have some single pixel AO mapping data.
JD: Could some of the data be over the same region that HIPASS covered?
MH: Not all of us have access to the HIPASS data.
JD: But we do, so we could find a good area to survey.
MH: It would also be good to get a source with VLA data.
JD: Can we get RM working on this?
RM: Agreed.
RG: We also have been writing some data reduction software, and will need help
on this.
SS: Is there a mode where we can get at data without coming here? So we can
run our routines on it too.
RG: The data will be public right away, so that shouldn't be a problem.


Ultra-deep precursor proposal: WF
- We asked for half of 70 hours/beam, which is the target for the actual
survey.
- The proposal was reviewed and assigned to category C: it was judged too
technical and not scientific enough.
- We should ask BB about what to do about this, since the main goal of the
proposal is to test the technique.

KO: What is needed in terms of other proposals?
- Some things are covered in the precursor surveys, some not. One thing we
won't get to test is the 200 MHz bandwidth that we want.
MH: To do that, you don't need ALFA: you could just use a single pixel.
KO: But could use ALFA and test other things at the same time.
SS: There is an 8th channel that can always be put on the central pixel?
Desh: Yes.
KO: We need to make a switching box to make this happen; we should present an
argument to NAIC about this.
KO, SS: Can we "stack bandpasses" to go farther than 200 MHz?
Desh: yes.
SS: Then you could see what the RFI is like for a 200MHz band and beyond.
Desh: You can also get 200 MHz over four beams with ALFA with the current
100MHz WAPPs.
KO: This would be good for the ultra-deep survey precursor proposal.
- We could share data among the groups: we could get 4 beams going for
ultra-deep and explore the RFI for the 200MHz band.
MH: What about point and stare modes? Should we rotate the feed or the arm?
How do the sidelobes look in each case?
SS: Was this being done with Ultra-deep?
WF: No. The proposal was for "drift and chase" mode.
MH: We need another precursor proposal to answer these questions: for instance,
use the PALFA observing modes and analyse spectral line data.
Desh: What resolution do you need?
MH: We need the spectral resolution of the surveys, otherwise we won't be
able to tell which mode is better.
SS: Should there be a proposal to look at very short dump times, for RFI?
RG: There is already a proposal to do that by Jim Cordes, at both AO and the
GBT. Jim would be happy to share his data.
KO: I've been involved with the project, and I'm not sure that the resolution
of the AO data is adequate for us. The test shouldn't take that long, about
an hour: we might not have to write a proposal for this.
Chris: What about radar blanking?
Desh: The WAPPs can be radar blanked.
BC: Will it be possible to go down to 1100 MHz with ALFA?
MH: Nominally, ALFA goes to 1225 MHz; the waveguides are limited to about
1188 MHz. But what happens when you go down to 1188 MHz? How do the
sensitivity and stability drop off?
KO: The gains are actually coming out of ALFA.
Desh: Are people thinking of using a blinking cal to get a cal?
RG: Not in drift mode. We want a low duty-cycle, fast-blinking cal.
Desh: We could make a low duty-cycle cal. You could fire a strong cal for a
very short period of time and it would contribute negligibly to Tsys.
KO: But doesn't it take a while for the cal to ramp up?
Desh, Chris: At higher frequencies yes, but there is no problem likely for
ALFA.
RG: It depends on what fidelity you want and whether you move or not. If
you drift, then you can fire cal sporadically.
Desh: Even if you move the telescope, you can tie together the cals and still
fire sporadically.
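A quick back-of-the-envelope check of Desh's point, with purely illustrative numbers (none of these values were quoted at the meeting): the time-averaged temperature a blinking cal adds is roughly its strength times its duty cycle, so a strong cal fired briefly costs almost nothing.

```python
# Back-of-the-envelope estimate of what a low duty-cycle blinking cal
# adds to the system temperature. All numbers are illustrative.

def mean_cal_contribution(t_cal_k, duty_cycle):
    """Time-averaged temperature (K) added by a cal of strength t_cal_k
    fired with the given duty cycle (fraction of time switched on)."""
    return t_cal_k * duty_cycle

t_sys = 30.0    # assumed L-band system temperature, K
t_cal = 10.0    # assumed strong cal, K
duty = 0.01     # cal on for 1% of the time

added = mean_cal_contribution(t_cal, duty)
print(f"cal adds {added:.2f} K on average ({100 * added / t_sys:.1f}% of Tsys)")
```

With these assumed values the cal adds only 0.1 K, well under one percent of Tsys, which is the sense in which its contribution is negligible.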


Software Requirements 16:00-17:30 (MH)
-----------------------------------

NAIC responsibilities and plans: ST
- Does NAIC intend to deliver all the level 1 data products, as defined in
the EALFA white paper?
- Data acquisition, user interface, backend, telescope control, data recording,
bandpass calibration: yes
- Radar blanking - some
- We plan to archive the data and all the tools necessary to get to level
1.
- There is some ambiguity with bandpass calibration since the observing method
will govern which algorithm to use.
BK: What will the data format be?
ST: BDFITS: "big dish FITS"
- Position registration: yes
MH: Will the encoder information be included?
ST: Yes. Also RA, dec in header.
- Desh showed the beam pattern; we will provide a beam pattern as much
as possible.
- Archiving: Arun wrote a memo. For GALFA and EALFA archiving will be done at
AO. If you want more details, the memo is on the web. We haven't worked out
exactly what the access will be; MySQL maybe.
(see http://internal.naic.edu/alfa/archiving.html)
- We will provide documentation for whatever we produce.
- Online monitoring: a prototype ALFA data display window is being written by
Mikael Lerner.
- Online instrument monitor: running now, in an engineering window. Other
stuff will be done for an observer window later.
- Monitor display: yes
- Statistics tools: we should discuss this
- RFI removal, NVSS cross-reference, quick-look sky maps, bandpass and
continuum subtraction, flux calibration, "level 1" sidelobe cleaning,
"level 1" maps and cubes: NO
- Astrometry: yes
Desh: We will do all of the data products at the level of data monitoring
MH: The whitepaper stated that NAIC will produce level 1 software, NOT
data monitoring quality software.
RG: If tools are made by other people, can NAIC host them to
allow other observers to actually use them?
ST: The NAIC priority will be the data monitoring software.
RG: But we can work together to make level 1 data products happen.
ST: Some of these tasks we want to pass along to you.
Desh: The only difference between stored data products and archived raw data
pushed through software tools on demand is that the product itself is not
stored. As things change, it is better to have the tools than the stored
products. We will have tools that do the level 1 data reduction
on the fly.
MH: So you're not creating data products? They are not stored?
Desh: No, we are doing it on-the-fly.
RG: That might be the best way to do this. If someone clicks on a web form,
you need to produce a data product, whether it's on-the-fly or not.
It's important that it gets done.
JD: In my experience, the best people to write the software are the ones who
are intimately connected with the instruments themselves.
MM: I spoke to David Barnes about whether it is possible to adapt the HIPASS
data reduction routines for this data; the short answer is yes.
RM: The adaptation of the Parkes data reduction software to the Jodrell
Bank data was easy. Could we do this here?
MH: I received an email from Lister Staveley-Smith: he indicates that his
director would like to charge us to use their software.
MP: That's just because they are not eligible for NSF money.
WF: We have to weigh writing software from scratch against the cost of paying
for existing software.
MH: While some of HIPASS uses AIPS++, it's not clear that that is the way to
go...
RG: At Cornell we had trouble maintaining AIPS++. For us, IDL is much easier
to maintain and many routines already exist to process AO data in IDL.
ST: The CIMA display is working now for ALFA.
- We could think about having a web interface too, but we'll see how long that
takes.
- The data will be in a standard format, since we don't want to lock people
into a data reduction package.
RM: Will you rotate the feeds and do beam switching between pixels, as was
done with the HIPASS observations?
ST: We can do this.
Chris: But you might not get good baselines with this technique, and at AO the
central beam has a higher gain than the others.


Platforms and Options: KO
- The US has two large single-dish radio telescopes (AO and the GBT): they
should have similar data formats, but currently they don't.
- They shouldn't be identical in every detail, but the same keywords should be
used for similar parameters, for the sake of astronomers who want to reduce
their data.
- Goal: create a data format which is easily accessible, that can be read with
the majority of data reduction packages. (Mathematica, IDL, AIPS++, Python)
- GB currently writes FITS files, but a lot of them, while AO only writes one
FITS file per session
- For more information, go to wiki.gt.nrao.edu, and click on "data".
- BDFITS = "big dish FITS". It's a binary FITS format, nothing fancy, except
that we are trying to match the keywords.
- If there is a program out there that can be used to manipulate the data in a
desired way, then we should use it. The observer doesn't even need to know
that this is happening.
- There are 3 types of data reduction strategies:
- prepackaged black box
- "roll your own": do everything yourself
- something in between
- Most people want the 3rd option, so we are trying to write routines that
take the telescope specifics out.
- We want to write routines that can be called from a variety of platforms.
- The other key is to document everything that is out there.
- We also need to take advantage of what is already done - if HIPASS has done
something, let's not repeat it.
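The "take the telescope specifics out" idea KO describes can be sketched as a thin translation layer: reduction code asks for a canonical parameter name, and a per-telescope table supplies the actual FITS keyword. The keyword names below are invented for illustration; the real matched keyword list is the one being worked out on the wiki page mentioned above.

```python
# Sketch of a telescope-agnostic keyword lookup: reduction code asks for
# a canonical parameter, and a per-telescope table translates it to the
# actual FITS keyword. Keyword names here are invented.

KEYWORD_MAP = {
    "AO":  {"rest_freq": "RESTFREQ", "int_time": "EXPOSURE"},
    "GBT": {"rest_freq": "RESTFRQ",  "int_time": "DURATION"},
}

def get_param(header, telescope, canonical_name):
    """Look up a canonical parameter in a FITS-like header mapping."""
    keyword = KEYWORD_MAP[telescope][canonical_name]
    return header[keyword]

# The same reduction code then works on data from either telescope:
ao_header = {"RESTFREQ": 1420.405751e6, "EXPOSURE": 1.0}
gbt_header = {"RESTFRQ": 1420.405751e6, "DURATION": 1.0}
for scope, hdr in [("AO", ao_header), ("GBT", gbt_header)]:
    print(scope, get_param(hdr, scope, "rest_freq"))
```

This is the "something in between" option: the observer calls `get_param` without needing to know which keyword convention their telescope uses.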


Software plans of the Ultra-Deep Group: WF
- We have not thought about ultra-deep specifically, but we need an
infrastructure from which to work.
- We have looked at what HIPASS can do for us (from Martin Zwaan).
- Here is a list of things that are and are not done:
- Deconvolution, RFI excision, level 1 maps and cubes, cleaning,
detailed documentation: NO.
- Some effort will be needed to adapt the software to ALFA: in particular, we
will need a reader to convert between sources, plus the things mentioned
above.
- The ultra-deep group will need to concentrate on cleaning and RFI excision.
- The data rate is significantly higher for ALFA than for Parkes; we should
consider this too.
MP: Also, at the end of the data reduction process it reads everything back
into fits format.
SS: For RFI excision, one really needs to dump data at a high rate (ms); is it
possible to store all this?
BB: AO will have 100 terabytes of storage soon, so this shouldn't be a problem.

A Possible IVO/NVO Portal: BK
- We will want some sort of platform-independent way to access the data, as
well as simple analysis tools, filters, etc.
- It will also be important to cross-correlate with other databases:
SDSS, SIRTF, etc.
- The VO is NOT a massive repository for data. Anyone can set up a link as
long as they follow the rules.
- Handout: list of websites that you can go to to learn about protocols.
(see http://alfa.naic.edu/extragal/meeting2/handouts/IVO_kent_handout.pdf)
- Why do something like this?
-Synergy with VO standards: many are dictated by optical considerations,
and they want to expand into the radio
-The future of astronomy will be data mining: we have to understand the
tools and products to use the VO.
- There are pros and cons for both Microsoft and Linux software development:
- If we want to be able to put data on the web, lots of code exists in
Microsoft already.
- Linux pro: open source and free! But there is little/no support.
ST: Java for Linux is not the only option.
BK: Can PHP get you there? I'm not sure if PHP can be interactive.
ST: Even so, for the application side of things there is not just Java.
SS: What do you mean by "large" dataset?
BK: The limitation for Microsoft is 1 TB; it is smaller than that for MySQL.
- Concept demonstration: put simple information into a MySQL database, so that
you can click on a map and get values of position and total power.
JD: Does this setup access the raw data?
BK: No, it accesses the reduced data.
JD: Where does the reduced data sit?
BK: That's the nice thing about it, it can be anywhere. As long as you have
a webserver, you can link it to the VO.
ST: You can also take raw data and have PHP commands to reduce data - it
doesn't need to be reduced in order to run this type of application.
RM: Have you talked to the Australian VO about what they have done?
MM: All of the HIPASS data are IVO compliant.
RG: We need to get funding to look into this further. We have proposed to the
NSF for money to send graduate students to the NVO summer school this
September.
KO: One of the main reasons for changing the AO/GBT data format is to make it
compatible with IVO standards.
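BK's concept demonstration (click on a map, get back position and total power from a database) can be mocked up in a few lines. Here Python's built-in sqlite3 stands in for MySQL, and the table layout and values are invented for illustration.

```python
import sqlite3

# Stand-in for the MySQL concept demo: store (RA, Dec, total power)
# samples and answer "what is at this clicked position?" queries.
# The schema and values are invented for illustration.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE drift (ra REAL, dec REAL, total_power REAL)")
con.executemany("INSERT INTO drift VALUES (?, ?, ?)",
                [(140.1, 21.5, 3.2), (140.2, 21.5, 3.4), (140.3, 21.5, 9.8)])

def power_at(ra, dec, tol=0.05):
    """Return the total power of a sample within tol of the clicked
    position, or None if nothing lies that close."""
    row = con.execute(
        "SELECT total_power FROM drift "
        "WHERE ABS(ra - ?) < ? AND ABS(dec - ?) < ?",
        (ra, tol, dec, tol)).fetchone()
    return row[0] if row else None

print(power_at(140.3, 21.5))
```

As ST notes, the same kind of query layer could sit in front of raw data and reduce it on demand; nothing in the pattern requires the stored values to be a finished product.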

Software plans of the Driftscan Group: RG
- I will discuss some work that we have been doing to reduce the data
- A lot of software has already been developed by Phil Perrilat at AO, in IDL
- We have started processing data in the IDL environment
- This may not be the way that we should all do the data processing: for each
specific survey there will be a different approach.
- One of the good things about the FITS format is that you can read it out
and take it somewhere else.
- The bandpass calibration needs to take the bandpass shape into account.
- Ways to do this: "on - off" is the simplest, and you get something that is
relatively good.
- But we will be taking data in drift mode, so there is no proper "off":
instead, we accumulate a running off.
- Question: is it better to do bandpass calibration in 1 dimension (spectral)
or in 2 dimensions (spectral+time)?
- We will need to do 2D at some level, since there are continuum sources.
- Also the bandpass will slowly change as time goes on, from variations due
to standing waves and system drift, etc.
- We have made IDL procedures, which anyone can download, that do bandpass
calibration and RFI excision.
- Even with no bandpass subtraction, the baseline is pretty flat in a 12sec
integration.
- However, with 100 sec integrations you start to see standing waves, etc.
- We have also started to develop a cross-correlation signal extractor.
- It is more sensitive than a peak-finding algorithm. There is no need to
smooth the data in a cross-correlation.
- We will eventually develop an algorithm to do 3D searches (spectral + spatial).
- Over the summer, AS will construct an extraction algorithm that will go to 3D
- A word about the tile size we adopted: it is designed to be processable by
current commercial computers. So, you don't need to spend more than
$2000-$3000 for a computer that will reduce the data.
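The "running off" bandpass scheme RG describes can be sketched with numpy: since a drift scan has no dedicated off-source spectrum, each record's bandpass is estimated from a running median of neighbouring time records and divided out. This is a minimal 1D-per-record sketch, not the actual IDL procedures; the window length and data shapes are illustrative.

```python
import numpy as np

def bandpass_correct(data, window=25):
    """Remove a running-median 'off' from drift-scan total-power data.

    data   : 2D array of shape (time, channel), raw spectra.
    window : number of time records in the running median.
    Returns (data - off) / off, the fractional deviation from the
    running bandpass estimate.
    """
    n_t = data.shape[0]
    corrected = np.empty_like(data, dtype=float)
    for t in range(n_t):
        lo = max(0, t - window // 2)
        hi = min(n_t, t + window // 2 + 1)
        off = np.median(data[lo:hi], axis=0)   # running "off" spectrum
        corrected[t] = (data[t] - off) / off
    return corrected

# A flat bandpass with a slow gain drift and some noise calibrates down
# to small residuals:
rng = np.random.default_rng(0)
raw = np.outer(1.0 + 0.001 * np.arange(100), np.ones(256))
raw += 0.01 * rng.standard_normal(raw.shape)
print(np.abs(bandpass_correct(raw)).max() < 0.1)
```

A 2D (spectral + time) scheme, as discussed above, would additionally fit the slow time variation per channel rather than relying on the median window alone.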
RM: Can the NVSS be used for continuum subtraction?
RG: We thought of using NVSS as a lookup table; but unless the source is a Jy
or more, the bandpass calibration can deal with it. When the data are
taken, we could set flags in the vicinity of strong sources
Chris: The people who are going to stare for 70 hours will have a lot more
trouble with continuum sources.
SS: JR applied SExtractor to our data: it was an interesting exercise.
Desh: Do you modify your profile shape as you go along?
AS: Gaussians seem to work the best and they are easy to implement, so we use
these
MM: We found that just using a boxcar was best in HIPASS.
SS: I recall that 5 or 6 years ago there was some work by Frank Briggs on
extracting standing waves via FFT.
MH: That paper is linked on the driftscan mapping site.
RG: Standing waves shouldn't be a problem for ALFALFA, but for the medium-deep
they might be.
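The cross-correlation extractor described above (correlate the spectrum with a template profile rather than peak-find, so no separate smoothing of the data is needed) can be sketched as a 1D matched filter with a Gaussian template. The template width, threshold, and toy spectrum are all illustrative assumptions, not parameters of the actual extractor.

```python
import numpy as np

def gaussian_template(width_chan, half_size=25):
    """Unit-L2-norm Gaussian matched-filter template."""
    x = np.arange(-half_size, half_size + 1, dtype=float)
    t = np.exp(-0.5 * (x / width_chan) ** 2)
    return t / np.sqrt(np.sum(t ** 2))

def matched_filter_detect(spectrum, width_chan=4.0, threshold=5.0):
    """Cross-correlate a 1D spectrum with a Gaussian template and return
    the channels where the filtered signal-to-noise exceeds threshold."""
    template = gaussian_template(width_chan)
    filtered = np.convolve(spectrum, template, mode="same")
    # robust noise estimate of the filtered spectrum (MAD -> sigma)
    noise = 1.4826 * np.median(np.abs(filtered - np.median(filtered)))
    return np.flatnonzero(filtered / noise > threshold)

# Toy spectrum: unit-variance noise plus a broad line at channel 300
rng = np.random.default_rng(1)
chans = np.arange(1024)
spec = rng.standard_normal(1024)
spec += 4.0 * np.exp(-0.5 * ((chans - 300) / 4.0) ** 2)
hits = matched_filter_detect(spec)
print(hits)   # a cluster of channels around the injected line
```

Because the template does the smoothing, the same machinery extends naturally to the 3D (spectral + spatial) search mentioned above by using a 3D template.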


********************
Sunday, 9 May 2004
********************

NAIC Review Process: 8:30 - 9:30 (BB)
-----------------------------------------

- I've worried about consequences of ALFA on the AO observing program for
a long time... the surveys will need a lot of telescope time, and this
has big consequences on the program overall.
- The people affected most are other users (e.g. single-pixel users, other
frequency bands); their time will be restricted.
- But, we are used to competing for telescope time. We hope that people will
be tolerant. ALFA will benefit astronomy even if some AO users aren't
directly benefitting.
- That being said, keeping the proposal process transparent and competitive is
important.
- I have talked to all committees about this, and have set up an executive
committee called the ALFA System Advisory and Planning (ASAP).
- The proposal writing process is extraordinarily important... it is a sobering
experience to think that although the science is first-rate, you will need to
compete for time.
- There is no ALFA entitlement
- Let's do a little calculation to convince ourselves that special hardware
(like a new spectrometer) is needed: AO has an annual budget of $12M. The
total observing time per year is 6000 hours. Therefore, observing costs
about $2000/hour. A new backend costs about $100,000. That's just two days
of observing! If we can give you what you need to speed up the observing
process, then it's worth it.
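BB's cost arithmetic can be verified directly from the figures he quotes:

```python
# BB's back-of-the-envelope cost of Arecibo observing time, using the
# figures quoted above.
annual_budget = 12_000_000     # dollars per year
observing_hours = 6_000        # hours of observing per year
cost_per_hour = annual_budget / observing_hours   # $2000/hour

backend_cost = 100_000         # dollars for a new backend
hours_equiv = backend_cost / cost_per_hour        # 50 hours
print(cost_per_hour, hours_equiv / 24)            # about two days of observing
```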
- Who do you need to convince in the precursor proposal? The referee! NAIC will
help, that's why they are shared risk.
- There are multiple EALFA, PALFA (2) and GALFA (~5) surveys, and the
preparation required for each is different
- Current WAPPs can support some of the observations (i.e. ALFALFA), but not all
- Commensal observing will be important! But we don't yet have 2 or more
backends.
- Capability for commensal observing needs some development: how do we do
this? Do we need different software, or can the same program
"split the bits"?
- NAIC needs to demonstrate to the NSF that early survey science is being done,
and that we're not just taking data.
- CONSEQUENCES: 1. NAIC will build backends as quickly as possible.
2. NAIC will allocate the needed telescope time to precursor observations.
3. NAIC will get surveys underway as early as possible. This is
scientifically and politically very important.
(At this point, BB puts up the draft proposal; download it at:
http://alfa.naic.edu/extragal/meeting2/handouts/NAIC_guidelines.pdf)
- We do, however, have a bit of a leg up since we have not overpromised about
when ALFA will start to do surveys: we are even a few months ahead because of
the superior performance of the front-end.
- Action: reserve time for competitive allocation to the ALFA precursor
proposals that were received: 100 hours in July, 150 in August, 150 in
September.
- This will allow us to schedule all proposals rated B or better by the end of
September.
- This is just an hours calculation - it might not be feasible from a "part of
sky" point of view.
- We plan to accept additional ALFA precursor proposals at the 1 June 2004
deadline for Oct-Jan 2005 observing. These will be reviewed by the current
NAIC process, and will remain shared risk.
- Accept revisions or extensions to any existing precursor proposals at Oct 1
deadline for observations starting in February, if we receive an explanation
of why they want more time. The proposals will not be re-refereed externally.
The time will be assigned on a competitive basis.
- Proposals must feature development of techniques in addition to science.
- Additional reserved telescope time: 150 hours in October, 150 in November,
175 in December, 175 in January.
- This should be enough time to get the new proposals from Oct 1 deadline
underway.
SS: What fraction of the total telescope time does this translate to?
BB: Atmospheric science gets 1200 hours/year, and this proposal has ALFA
taking 1/2 of the radio/radar astronomy allocation (which is 400
hours/month).
- At October deadline: we will accept proposals for real surveys that can be
done with the WAPPs. These will not be refereed by just the regular NAIC TAC,
though the process is not firm yet. There will be an extra panel of
"skeptical reviewers".
- Not all of these points are meant to be your responsibility - software, for
instance, may be partnered with NAIC. We're prepared to shoulder our share of
the burden.
- Annual progress reports, to be refereed, are a mandatory requirement for the
survey to continue.
JD: These will be long-term proposals, i.e. multiple years?
BB: Yes, but they will be re-assessed every year.
- Up to 200 hours of telescope time in Feb. 2005 for ALFA: other programs will
be squeezed by factor of 2.
- Feb 1 2005 deadline: single investigators and teams (those not part of
ALFA consortium groups) may propose too. This gives you in the consortia
a leg-up.
RG: For the June 1 and October 1 deadlines, will there be a cap of 100 hours
on the total requested time?
BB: For precursor proposals, yes, but not for those doing actual surveys.
- What makes a good precursor proposal? We need to help you define how to make
the surveys happen.
- For instance, comparisons with HIPASS would be a really great precursor
proposal. Do a strip that HIPASS has done, and compare results. This allows
for the development of tools, but you also get science.
SS: What will be the transition as other backends come online?
BB: If a next generation backend is available, then users should use it.
We will accept proposals to do science with the available backends.
SS: So should we write proposals for GALFA, PALFA backends for Feb 1 deadline?
BB: I think that June 1 2005 is a good assumption for when these backends will
be ready.
SS: So we should get together with other consortia to propose for this
deadline.
ST: Even when the dedicated backends are online, the WAPPs will be there too.
We will run multiple backends at same time. Proposals for the WAPPs should
also be ready to work in shared-risk mode for other backends.
(There was some discussion about adding hardware to allow for simultaneous
use of the WAPPs and the new EALFA spectrometer)
SS: So you can use 200MHz at the same time and extend to lower frequencies
with the WAPPs?
ST: Yes
BB: But there is only one LO, so we will have to work accordingly.


Consortium Rules and Guidelines 9:30 - 10:30 (SS)
-----------------------------------------
SS: We need to decide on the guidelines. Following the discussion yesterday,
MH has put together a document for us to review; if we approve of them,
we will allow the entire consortium to vote on them in the next week.
(At this point, Chris hands out copies to those assembled.)
(see http://alfa.naic.edu/extragal/meeting2/handouts/Guidelines_v2.txt)
MH: This draft starts off with definition of consortium and project team.
- When does a project become a project? It could be an accepted proposal, or
just a project in the works.
- Coordinating committee: drives the consortium (as we said yesterday) but
also worries about coordinating with other ALFA consortia.
- An active member is someone who lives up to their commitment to a particular
survey for a term of one year: each PI has to keep track of their members.
- Members can be voted out, but this is rare
- If someone leaves a survey team in good standing, there should be no
prejudice against them coming back.
SS: Is there anybody who objects to these rules?
BB: Example: what if you need a gridder, and someone wants to write one. Does
that person join a survey team or not?
RG: It's up to the team leaders of that survey to decide.
LH: There was a new note added to the guidelines yesterday about one member
of the coordinating committee being at a US institution. Should this be
added to these guidelines?
BB: Yes.
SS: With these changes, does everyone agree to put forward these consortium
guidelines to the members of EALFA for a vote?
(SS requests a show of hands in favor of the guidelines and against.
Agreed, unanimously).


Discussion and Action Assignments 10:30 - 12:30 (SS)
-----------------------------------------
KO: We have 5 surveys that need to be discussed.
JD: My understanding is that the October 1 deadline is for the proposals listed
above, so there should be 4. Is the high-b survey a survey in itself, or a
precursor proposal?
MH: It is not a precursor proposal, but the first stage of ALFALFA/VAVA.
(A discussion ensues about the high-b survey in light of ALFALFA/VAVA and the
medium-deep surveys.)
MP: We would like to propose another precursor proposal, to map a region
around NGC2903. It is at a distance of 8.3 Mpc, and would cover about
0.5 degrees squared on the sky.
RG: What would this proposal lead into?
MP: It probably fits into the medium-deep survey, but if it doesn't fit
then we would propose separately.
KO: We should make a list of commissioning needs. We should request that AO get
a 2nd LO for tests.
- There are basically three new precursor proposals that will cover all of the
commissioning science: medium-deep, ultra-deep and a ZOA proposal
RG: What about testing the 300MHz possibility?
KO: We can do 200MHz first, and then see whether it's worth going to
300MHz later.

RG: We need to start talking about funding: we need students, computers,
experts. We are trying to do these surveys on the cheap: we need to show
NSF that we won't run out of money. We submitted two proposals to NSF,
and one was funded for 3 years but this was granted shortly after the
1st EALFA meeting. It would be a great use of time to talk about this now.
SS: The deadlines to the NSF are in November, whereas the NAIC deadlines will
be in June: the funding efforts should come out of the proposals
RG: We can also tap BB for ideas about working creatively with NAIC
BB: The funding winds have shifted dramatically: while NAIC can be used as
a partner, it would be a disaster to put it as a collaborator on the cover
page.
RG: What will be done with NSF proposals that include colleagues from abroad?
BB: That will also add to the strength of those proposals.
SS: Having a letter from NAIC that endorses the EALFA science would be useful
in securing funding.
JD: The Cardiff people are very interested in participating in EALFA.
I applied for a grant for travel funds. I am tied to PPARC funding because
I am in a rolling PPARC grant. My intention at this point is that when the
grant expires in 18 months, a large part of the next proposal will be for
EALFA funding.
BB: This would be very well received by NSF.
MH: We should also come out of this meeting with an interim committee that
is responsible for the surveys. We could start with the 4 PIs.
SS: I would like to see at least one floating member
MH: Agreed. I would like to see one person chosen to oversee outreach,
commensal observing, etc.
RG: However, you don't want to make an unbalanced committee; we want to make
sure that there is no conflict of interest in choosing that 5th person.
MH: Let the 4 PIs put forth a name for the 5th person, and we can go from
there.
SS: Good idea.
SS: I propose that at this point we break up into groups: have a check-in
shortly after coffee at 4:00pm and see where we're at. WF will be in
charge of the ultra-deep group, and MP will be in charge of the
medium-deep group.
(The groups meet separately. The medium-deep contingent split into one group
working on a proposal to map a nearby galaxy and one considering more
general 300 sec surveys.)


SS: Move towards wrapping things up.
- We had discussions of a few different proposals: it would be good to mention
what was discussed.
(see also http://alfa.naic.edu/extragal/meeting2/handouts/break_out_summary.ps)

Medium deep survey: (SS)
- JD will take the lead and will direct the proposal writing process
- There are a few observing modes to consider:
- Pulsar modes (3): TH in charge of contacting PALFA and investigating
whether those modes will be useful to us
- Leapfrog: fix the AZ, ZA of the telescope for observing, then jump from
piece to piece of the sky that you want to cover. SS will consider this.
- Multi-drift: Very deep drift scans that will
coordinate with the ALFALFA precursor proposal. JD will consider this.
- Science that we would like to address:
- searching for ultra-compact and low surface density HVCs (MP)
- Cross-cuts through nearby galaxies; won't be full mapping but might
reveal some interesting structure (KS)
- Various targeted environments (JD)
- the HIMF past Virgo (SS & JR)
- Possibilities for continuum absorption (Erik)

Ultra-deep survey: (WF)
- We discussed the re-submission of the proposal; we will make the science
case more explicit this time.
- Emmanuel Momjian will look into absorption specifically.
- Pointing strategy:
EdB: the pointing strategy is to use 4 beams with a 200 MHz bandwidth. We want
to test the point-and-stare mode.
WF: the main task is to find a target field that has good science; BC will
look into this.
- we also want to do some simulations of the results we expect, but there
is very little time left before the deadline.
- WF is the interim PI

Galaxy Mapping: (MP)
- General strategy is to do a drift map over 2 square degrees. It will be done
in "limited AZ" mode.
- LH determined that we can ask for 40 hours, 18 hours of which are to map out
the beam via a bright quasar.
- For reference, the RA and DEC of NGC 2403 are 5h30, 21.
Chris: What bandwidth would you use?
MP: We would use full bandwidth to suit other people's needs, but if there
was no overlap with other surveys then we might narrow in on the galaxy.

SS: Lots of work to do. We need to coordinate with each other, and get
what we can out of the ALFALFA proposal.
RG: It might be worth putting in a smaller precursor proposal to check the
follow-up strategy. We will have simulations that predict what the return
rate is, but we would like to check this for sure.
SS: This isn't already in the precursor?
RG: No, not even in the first year of ALFALFA. Since we now have 3 months
before we need to submit, we could consider doing this now.
SS: Any missing pieces? KO has volunteered to take a look at the RFI situation,
which will be lots of work.
KO: It should be helpful for both the ultra-deep and medium-deep surveys. The
question is whether to stick it into the ultra-deep proposal or to ask
directly for some test time.
SS: At this point we can break into other groups and discuss other things.
MH: Both the NSF and the EPO component are important, and should be discussed.
RG: These proposals are not due until the fall, but we should talk about them
now.
MH: For anybody in the northeastern part of the states, Becky Koopmann will be
organizing an EALFA conference for undergraduates at Union College next
year.
SS: What about international collaborations?
MH: I looked into this, but they seem hard to get for US-EU collaborations, for
instance. We probably need some telescope time before we can expect to get
those grants.

(At this point, the workshop ended.)