Document retrieved from a search-engine cache. Original document URL: http://www.naic.edu/alfa/ealfa/meeting1/minutes/monmorn.html
Last modified: Mon May 8 23:01:36 2006
Indexed: Sun Dec 23 02:11:34 2007


Minutes of the Monday Morning Discussion

Karen O. Although it might not seem that way, we did make a huge amount of progress yesterday. Riccardo has asked to say something.
Riccardo I'd like to throw out a summary of where we are and how we might proceed.
Editorial note: A copy of Riccardo's presentation is available in: HTML, PDF or PPT format.

  • We all agree there are a number of very exciting EALFA surveys:
    • ALFALFA
    • Deep surveys
      1. Virgo
      2. Canes Venatici (CVn)
      3. Around big galaxies
      4. Really deep field (1000s sec/beam)
      5. Deep declination strip
      6. Beyond ZOA
    • ZOA
    We agree on various ones, but they have different needs and may not be of equal interest to everyone.
  • We ask NAIC to provide 200 MHz bandwidth as soon as possible.
  • Our governing principles should be:
    • Do the best science!
    • Train the next generation.
    • Produce a set of legacy surveys.
    • Have the broadest astronomical impact.

  • There are certain realities we have to deal with:
    • Not all hardware (e.g., backends) may be available immediately.
    • The surveys will typically take several years to complete and the competition among them (within E-ALFA as well as across the disciplines) will be stiff.
    • An enormous amount of work is needed before the surveys actually start.

  • While we have been talking very freely without any consideration of constraints, we may face limitations in terms of:
    • Realistic telescope allocations
    • Effort involved in survey design and execution
    • Effort involved in software development
    • Effort involved in data product delivery
    • Competition for NAIC resources
    • Support for individual participants

  • There are many NAIC/ALFA challenges that we need to address:
    • Technical challenges:
      • Survey strategies: drift vs. drive vs. stare; multipass
      • ALFA scan simulator
      • sidelobe/coma simulator
      • Tsys/G ~3; lots of detections! (human detection not good enough)
      • Beam shape/Tsys/gain variations
      • Zenith angle variations
      • Sidelobe/coma contamination
      • RFI environment
      • Extended source vs. point source

    • Political challenges:
      • Highly competitive time allocation
      • US national center mission to serve a broader astronomical community
      • N! effect (involvement of N people)
      • Who is going to do the work?
      • Because of its broader responsibilities, NAIC must protect itself from delays arising within the observing teams.

  • Data taking/early processing
    • RFI mitigation
    • Bandpass calibration/subtraction
    • Zenith angle variations
    • Beam shape/Tsys/gain variations
    • Continuum subtraction/CLEAN
    • RFI excision techniques
    • Astrometric calibration/pointing
    • Gridding
    • Raw data archiving

  • Detection algorithms
    • Point source vs. extended source
    • Sidelobe "cleaning"/removal
    • Baseline fluctuations/removal
    • Template fitting (Davies et al. 2001)
    • Noise characteristics and their variation
    • Injection of "fake" sources (Rosenberg & Schneider)
    • Source catalog generation and cross correlation with other catalogs

  • Data Products and Access Tools
    1. Low level
      • What, when and how?
      • Bandpass calibrated/subtracted scans
      • RFI excised spectra
      • Firstpass maps
      • Continuum source catalog
    2. High level
      • What, when and how?
      • CLEANed data cubes
      • Calibrated spectra at any position
      • Catalogs of detected objects
    3. Access tools
    4. Need to provide NAIC with input
    5. Commonality of standards must be decided so that the different surveys produce similar products

  • Product specifications
    • Product definition: who, what, when, how?
    • Format specification
    • Quality assurance/verification process
    • User interface development
    • Documentation
    • Portal interface: NVO, IVO
    • Software contribution to NAIC

  • US University-based participants
    • Catch-22: NSF PI grant playing field
    • Graduate student support (tuition, location, duration)
    • Commitment/enforcement of deliverables
      • AASC recommendations re: NSF funds
      • FIRST, SIRTF legacy, HST Treasury
    • GBT-like grad student program, tied to survey D&D not just observing
    • GBT-like instrumentation program, extended to include software

  • Modus Operandi Hypothesis:       Unlike P-ALFA, E-ALFA is characterized by:
    • Diverse set of science aims/interests
    • In general, less hands-on AO experience
    • Less processing-intensive low level data products
    • More varied multi-wavelength/correlative studies
    • More diverse data products

  • A Modus Operandi Proposal:
    1. E-ALFA subcommittees will articulate a set of recommendations to NAIC on behalf of broader community, e.g. a white paper.
    2. A Steering Committee will coordinate the effort and recommend ground rules for participation/membership.
    3. Participation in committee work will require commitment to a significant level of engagement. Those who choose not to commit to committee work at this point will be kept informed of proceedings in a timely manner.
    4. Between now and July 1, the E-ALFA subcommittees will work on a preliminary version of a white paper which will outline the general survey specifications, design studies, manpower and other resources needed, software and data needs, and standards (we all want the surveys to function according to common standards if possible, and the access tools to be compatible, especially since different surveys may be carried out by different teams).
    5. NAIC will provide E-ALFA with a schedule for delivery of the final version of this white paper and will provide detailed comments on the July 1 draft.
    6. By the end-of-year, there will be a call for proposals by NAIC. At that time, the proposing teams will form (based on a better open and public understanding of what is involved), and proposals will be submitted.
      Note: there may be several survey teams.


  • To accomplish this, I propose that we establish subcommittees to focus on the required tasks.

  • Subcommittee Proposal
    1.   Survey Science, Strategies, Simulations
    2.   Phase 1 Data: RFI mitigation & excision, data calibration, and gain variation issues
    3.  Detection algorithms, sidelobe contamination, cleaning, source extraction
    4.   Data Products, definition, access, standards, and archiving
    5.   Follow-up science, telescope(s) requirements
            (If we need another N hours of AO telescope time to do follow-up, we need to specify that early. We should also coordinate with other telescopes.)
    6.   Synergies, piggybacking options

  • Karen O. We have a lot of survey ideas. An alternative proposal is to take all the different survey proposals and send them back to individual people to write up the science cases for each.
    Brent A subcommitee on survey science might be a higher level committee with subcommittees
    Liese What we need is a subcommittee to provide the specifications/requirements for E-ALFA to NAIC.
    Bob The science discussion was very interesting, but lacking in it was specificity as to the technical requirements. We need to be very open minded; this is just what I need. Ask: what do the science requirements mean in terms of technical requirements? Not just hardware, but also software. Do you have specific models for how calibration gets done? What about for baseline removal? Maybe not everyone needs to worry equally about RFI mitigation (too technical), but maybe some surveys cannot function without it. Perhaps ALFALFA doesn't need to survey below 1350 MHz. But NAIC needs to understand what the specifications are so that we can assess their impact on our long-term hardware and software development plans.
    Karen O. One thing that goes along with what Bob is saying is that each survey group needs to come back with specific requirements. For example, we should ask for what we each want in terms of bandwidth, resolution, sample (e.g. 9 level), etc.
    Let's review the surveys we identified yesterday:
      1. Virgo
      2. Canes Venatici (CVn)
      3. Around big galaxies
      4. Really deep field
      5. Deep declination strip
      6. ZOA
    Noah I have been advocating Virgo, but let me put on my other hat. I am on the IAU working group on surveys. We need to look at the ALFA surveys in combination with other surveys going on in astronomy. Also, while you might split people up now into separate groups, there will be overlap in the interests. So how do we decide which one to participate in? Also, I am not sure that all of us are convinced that you can get good baselines in drive mode rather than drift, so we need to look carefully at the technical issues.
    Karen O. Certainly, the shallow survey is very interesting. Maybe details of observing modes can come later, but right now, we do need to break up into subgroups so we can set some numbers on the survey modes.
    Martha We probably can address the science because it is fun and relatively easy. In fact, we did a lot yesterday. But other tasks are more complicated and a whole lot less fun, and they also need to be done to make progress with ALFA. And they need to be done soon. We need some organization to figure out how to develop and produce this white paper. The P-ALFA group did that before their workshop. Perhaps it is easier for them, but we also need to do it. They have an advantage because the vast majority of the P-ALFA folks use Arecibo a lot, and we have perhaps broader survey needs and goals. But we need to keep in mind that we have to compete with P-ALFA for telescope time, so we have to be sure we are in a position to do that. Whatever we want, proposals won't get allocated time in competition if all of the details are not thoroughly worked out. Personally, I'd like to talk about science all day, but I'm afraid we also need to spend some time on the other gory details.
    Steve I propose that we spend some time having groups look at the details, make science somewhat focused, and define the required specifications within that framework.
    Liese Some of the surveys could be lumped together; how about HI and environment?
    Karen O. We seem to have 5 different survey categories. Let's assign someone to each category to lead the discussion.


    Individuals were proposed to lead the discussion of survey needs in 5 categories:
    Survey                          Leader
    HI in different environments    Liese
    Ultra deep survey               Lister
    Deep strip survey               Steve
    ALFALFA                         Riccardo
    ZOA                             Trish

    I propose that we break into subgroups and come back in an hour to see where we are.
    Bob Please keep in mind that we need to know what you folks require in terms of instrumentation and software, and what will be produced in terms of data products.

    The five groups met separately for the next hour. Individuals were allowed to attend any of the meetings they wished. Then the entire group reconvened for further discussion and to hear reports from the individual survey subgroups.


    Reports of survey subgroups

    Report of the subgroup on "HI in different environments"

    Liese There are a variety of science goals, from determining the low end of the HIMF and how it might vary in different environments, to searching for high velocity clouds, to understanding the triggering mechanisms for star formation in gas disks and the gas sweeping processes in clusters like Virgo. Benchmark goals are to detect HVCs around nearby systems with velocity widths of 15 km/s and to determine the HIMF to a limit of 5 x 10^6 solar masses.
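The benchmark mass limit above translates into a flux requirement via the standard optically thin HI mass relation, M_HI = 2.36e5 D^2 ∫S dv. A minimal sketch of that conversion follows; the 10 Mpc distance is an illustrative assumption (not a number from the discussion), and the 15 km/s width is taken from the HVC benchmark above.

```python
# Flux needed to detect the benchmark HI mass of 5 x 10^6 Msun.
# Standard optically thin relation: M_HI [Msun] = 2.36e5 * D^2 [Mpc] * S_int [Jy km/s].
# The 10 Mpc distance is an illustrative assumption, not a survey spec.

def hi_integrated_flux(m_hi_msun, d_mpc):
    """Integrated HI line flux (Jy km/s) needed to reach mass m_hi_msun at d_mpc."""
    return m_hi_msun / (2.36e5 * d_mpc**2)

m_hi = 5e6      # benchmark HIMF limit, solar masses
d = 10.0        # assumed distance, Mpc
width = 15.0    # line width, km/s (the HVC benchmark width above)

s_int = hi_integrated_flux(m_hi, d)   # Jy km/s
s_peak = s_int / width * 1e3          # mean flux across the line, mJy

print(f"Integrated flux needed: {s_int:.3f} Jy km/s")
print(f"Mean line flux:         {s_peak:.1f} mJy over {width:.0f} km/s")
```

A narrow 15 km/s line at this mass and distance thus corresponds to a line flux of order ten mJy, which sets the scale for the per-beam integration times discussed later.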

    The basic requirements that we ask NAIC to provide are:
    Hardware: We request the extra WAPPs. For one thing, it would be a big bonus to the Virgo project to have access to 200 MHz to look behind Virgo (Coma and beyond). Additionally, we really want 9-level sampling, and some projects could benefit from higher spectral resolution.
    Beam characterization: We request that NAIC provide the necessary information on the characterization of the beam pattern.
    Software:
    • good baseline calibration (few chans wide)
    • mapping algorithms which take into account the characterization of the beam pattern
      • deconvolution algorithm; will also need to consider this in survey design.
      • It might be that follow-up single-pixel mapping with lower sidelobes would be useful. So let's be sure there is a good clean beam somewhere (e.g., a separate feed). Perhaps follow-up will also be done with the VLA.
      • need to get best possible positions, algorithms and strategies
    Steve Why can't you get the single pixel from the central beam?
    Liese Maybe. We didn't think it was as clean.
    Don Essentially true. Also there will be a separate feed.
    Karen O. The new L-wide system has cleaner sidelobes.

    Report of the subgroup on the "Ultra-deep Survey"

    A copy of Lister's science presentation is available in HTML, PDF or PPT format.

    Lister Actually, this turns out to be a lot harder than we might have expected but it would be unique.

    The most compelling science is tied to the evolution of the gas density in the universe (e.g., Pei & Fall 1999). The co-moving gas density goes as (1 + z)^3. ALFA can probe only to z ~ 0.1, but there seems to be very steep evolution with z, so even over that relatively low range we should see the effect. Models based on multi-wavelength observations predict that the density goes as (1 + z)^3.2.

    Observations of damped LyA absorption lines suggest that the number of absorbers per unit redshift interval goes as
            dN(DLA)/dz = 0.05 (1 + z)^1.1,
    implying essentially no evolution for DLA absorbers (Storrie-Lombardi & Wolfe 2000).

    The goal of the Ultra Deep Survey would be to differentiate between evolving and non-evolving HI density. To discriminate between the two models, we would need to be able to discern a 50% change in density at z = 0.15. The survey would be designed to detect 40 galaxies with masses of 10^9.5 solar masses at the 99% confidence level. That requires about 0.05 mJy/beam, about 260 times deeper than HIPASS. This would be unique science that only Arecibo can do. We could cover 0.36 square deg in about 1000 hours, say ~70 hours per beam; this would correspond to a volume of 8000 Mpc^3 and would give 160 galaxies in the range 10^9 - 10^10 solar masses. A by-product would be a survey of redshifted OH megamasers. This survey would probably be done in drift scan mode, amounting to 1000 hours at 3 hours/day, so it is certainly a long term project.
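Two quick sanity checks of the numbers above can be sketched as follows. The assumed SEFD of ~3 Jy (roughly Tsys ~ 30 K over a gain of ~10 K/Jy) and the 25 kHz channel width are illustrative assumptions, not survey specifications; the quoted 0.05 mJy/beam presumably also folds in smoothing and efficiency factors.

```python
import math

# 1. Size of the predicted density change at z = 0.15 if rho_gas ~ (1 + z)^3.2.
z = 0.15
ratio = (1 + z)**3.2
print(f"(1+z)^3.2 at z={z}: {ratio:.2f} ({(ratio - 1) * 100:.0f}% above z=0)")

# 2. Radiometer-equation estimate of the rms after ~70 h per beam,
#    two polarizations: sigma = SEFD / sqrt(2 * dnu * t).
sefd = 3.0        # Jy, assumed system equivalent flux density
dnu = 25e3        # Hz, assumed channel width
t = 70 * 3600.0   # s, integration per beam quoted above

sigma_mjy = sefd / math.sqrt(2 * dnu * t) * 1e3
print(f"rms per channel: {sigma_mjy:.3f} mJy/beam")
```

The evolving model gives roughly a 55-60% density excess at z = 0.15, consistent with the goal of discerning a 50% change; the raw radiometer noise comes out within a factor of ~2 of the quoted 0.05 mJy/beam under these assumptions.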

    • Required technical parameters for an Ultra Deep Survey
      • 200 MHz bandwidth
      • Multilevel sampling (9-level minimum, preferably more, e.g., 14-bit)
      • 8192 channels
      • Fast dumping to excise RFI
      • Radar blanking, or very fast dumping (~1 ms)
      • Nighttime observations (though willing to test daytime)
      • Need calibration and beam parameters for all zenith angles
      • Best would be declination near 17.5 deg or 19.5 deg and |b|>30 deg.
      • Could be near some existing deep field
    • Software
      • Must be capable of dealing with drift scans
    • Data Products
      • Data cubes
      • Source catalog
    • Deep optical followup
    • Overlap with other surveys
      • Need HIMF from shallow surveys, for comparison with low z
      • Good test of ultimate performance of ALFA.
    Ed In our discussion of a survey of Virgo, we were talking about observing a few regions for long integration times also. Could you do these two piggybacking?
    Liese Having a foreground galaxy might affect the ability to do optical follow-up.
    Riccardo If off the edge of galaxy, then?
    Ed I would be very interested in those observations then.
    Jon Could you include a quasar so you could also look at absorption?
    Liese Can you make it an optical quasar, not a radio one? You don't want continuum.
    Karen O. Yes, we don't want baseline problems from any radio sources.


    Report of the subgroup on the "Deep Strip Survey"


    Steve First, I have to say that I would like to have more time to understand better the coverage strategies and the simulations such as those Riccardo presented. We need to consider the differences among the beams (the center beam versus the outer ones) in setting the survey strategies. I have questions about how much integration time is needed; I think it might be up to a factor of 2 longer.

    The goal of a deep strip is to go an order of magnitude deeper than any previous survey. Therefore we would like 10 sigma detections of 10^6 solar masses out to 10 Mpc across a range of environments. A summary of our needs is:

    • Proposed is a narrow strip, probably out of the galactic plane, 1 degree x 300 degrees.
    • Mode would be drift scans, stepping by a single beam so a single pixel crosses each sky pixel to get uniform scan.
    • 80 sec per beam total.
    • We'd want a pretty simple data product: data cubes + a catalog, but not a unique one. Then different people could apply different detection algorithms to it.
    • Basic FITS format; that is pretty straightforward.
    • We would prefer not to do radar blanking. We might cut down to 50 MHz to reduce the RFI. That would be more efficient (overhead of blanking) and safer. We'd want to allow for repeated coverage for RFI identification.
    • We want 9 level sampling, and may want 2.5 km/s so we can hanning smooth.
    • We felt there is no pressure on this survey to go to very high z.
    • Roughly need 600-700 hours to cover this area.
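The time budget above can be roughly checked from the strip area, the quoted 80 s per beam, and the ALFA footprint. The beam FWHM (~3.5 arcmin), the treatment of the 7-beam footprint, and the oversampling factor below are all illustrative assumptions, not numbers from the discussion.

```python
# Rough mapping-time estimate for the proposed 1 deg x 300 deg strip.
# Beam size, footprint treatment, and oversampling factor are assumptions.

area = 1.0 * 300.0     # deg^2, the proposed strip
t_per_beam = 80.0      # s per beam, quoted above
fwhm = 3.5 / 60.0      # deg, assumed ALFA beam FWHM (~3.5 arcmin)
n_beams = 7            # ALFA pixels
oversample = 2.5       # assumed overlap factor for Nyquist-ish drift coverage

beam_area = 1.13 * fwhm**2        # deg^2, Gaussian beam solid angle
footprint = n_beams * beam_area   # instantaneous sky coverage, deg^2

hours = area / footprint * t_per_beam * oversample / 3600.0
print(f"Estimated total time: {hours:.0f} h")
```

With an overlap factor of ~2.5-3 this lands in the quoted 600-700 hour range, so the stated budget is plausible for Nyquist-sampled drift coverage.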
    Chris Radar blanking only adds 15-20%.
    DJ Does daytime vs nighttime matter?
    Steve There has been some success with drift scans during the daytime, doing Fourier subtraction of standing waves. So daytime may be possible.
    Noah Any specific declination you want?
    Steve We probably want to avoid Virgo to not cover it again. Likewise, we'd want to avoid the galactic plane. But, no particular spot is favored right now.

    Report of the subgroup on the "ZOA Survey"

    Trish In discussing this, we tried to think about what we would do if we were NOT piggybacking. The ZOA science goal is not to map terribly distant structures, but rather to look at relatively nearby structures, especially those responsible for the motion of the Milky Way.

    So the basic requirements are pretty simple:   100 MHz, 3 level is ok, 5 km/s resolution.
    Liese But isn't this exactly where you'd want to do something that optical surveys cannot do?
    Trish That would be nice, but not really essential for mapping the masses that are pulling around the Milky Way.
    Lister The high z universe is statistically the same everywhere, so it's not so important where you do it.
    Liese But you want to add something that optical surveys cannot touch?
    Trish Only if you think it's critical to map structures behind the ZOA at large z.
    Karen O. Don't you care about 9 level sampling? With anything less, you get more ringing, even in 100 MHz. I would recommend 9 level sampling for interference mitigation.
    Trish This survey need not be terribly deep. A few hundred seconds per beam would be great. What is critical is positional accuracy, because of the lack of optical information. So the mode would be drift scan, Nyquist sampled. One pass would require 330 hours, but a second pass would require of order 700 hours. A double pass of an ALFALFA-style survey but with longer integration time would be good.

    The goal is to get a factor of 2 below M* at 100 Mpc.
    Liese How much deeper would this be than HIPASS?
    Trish 5 mJy in one pass and 25 kHz resolution, so you could smooth.
    Karen O. Much better velocity resolution?
    Lister The Parkes Deep ZOA survey goes to about +20 deg and gets 5 mJy.

    Report of the subgroup on the "Shallow Survey" (ALFALFA)


    Jess Riccardo has already discussed in some detail the ideas and motivations behind a shallow all-sky survey.
    • 200 MHz would provide more science, but not primary science
    • How to go about doing this, drift vs drive
    • Drift scans are clearly the way to go, but a lot more time is required to do it.
      • Would take more time, so perhaps driven scans should be done.
      • The ideal survey would do the whole sky, 3 passes in drift.
    • Data product: fairly simple
      • also need a record of what is done to data
    • RFI: A multipass mode would not only provide transient detection (of interest to others) but also help with RFI identification.
    Mary Did you think about combining with a HVC survey to use the 200 MHz to get higher velocity resolution?
    Jess That is another option to get higher resolution. It certainly needs to be discussed. Our primary criterion would be 5 km/s but higher might be useful.
    Karen O. A lot of discussion has gone on about the importance of getting the low mass end of the HI mass function. How many more objects can you get with ALFALFA than HIPASS did? If you don't make a big enough improvement, then it might not be worth the investment of telescope time.
    Jess Riccardo did some of that in his presentation. Also, ALFALFA would have a lot better spatial and velocity resolution than HIPASS.
    Riccardo A question for Jim: would you rather have a single drift scan (12 sec) or 3 passes?
    Jim Personally I would vote for multi passes (for transients), but this is clearly not as high a priority.
    DJ How much time would be required for this survey?
    Jessica The original idea was 1200 hours, but it depends on the final adopted strategy.
    DJ Does daytime vs nighttime matter?
    Riccardo We would prefer nighttime but part of day could be used. We would want to avoid noon, sunrise and sunset, because of baselines.

    NAIC Requirements

    Bob Let me lay out what I see that NAIC needs from you. If we think about who the stakeholders are in this project, NAIC is one of the stakeholders. This project will be a legacy of the Observatory, so it is clearly critical to us that ALFA and its surveys be successful. But you are all stakeholders in this too. You see the goals, so we view you as critical to the success. Also, we must include among the stakeholders the people who are not here. It is very important to include all of the stakeholders in the discussions.

    So, whatever initial plan you come up with should be discussed with the wider community. We need to ensure that the final product is something that everyone will be happy with.

    There are 3 stakeholders in this enterprise: (1) NAIC, (2) those of us here today, and (3) the others who aren't here but are nonetheless interested. All three groups need to be happy with the outcome. So we must make sure there is plenty of opportunity for open discussion.

    So, the big question is: What are the next steps? Let me propose some:
    • try to understand synergies
    • muster the resources
    To go forward, we need to catalog the discussion, not just today, but as it develops. For that, we need a white paper (or whatever you want to call it). So, we need to lay the groundwork for the white paper, both to get information out so it can be widely discussed and to advise NAIC.

    So, what might the white paper look like? It's not just to define the science surveys but also to define where we are at the moment and to present ideas to others who might not be here now. Let me propose a possible outline, which we can then discuss.

    Outline of the E-ALFA "white paper" - status/plans report

    1. Science justification for surveys
      • What are the survey(s)?
    2. Observing strategies
      • need to be fairly specific
      • might involve survey simulations
    3. Early data products (NAIC responsibility)
      • Raw data clearly kept
      • Phase I: basically, NAIC is responsible for getting the data off the telescope and into the archive
      • Calibrated, passband-corrected, RFI-excised (?) data (more than the raw data; feeds the next step; the white paper needs to outline what/how)
    4. Source extraction algorithms
      • How will it be done, what options, what methods
    5. Phase II data products and standards (Community responsibility)
      • Deconvolution, clean data cube, source catalog
    6. Archive and Access
    7. Follow up observations
      • Do we need VLA, optical or more AO follow-up? What do we expect to need?
    8. Synergies with other consortia
    We need this white paper in a timely manner so that NAIC, the other consortia and the rest of the community can begin to understand what's coming with ALFA, so that we are all able to plan for it.

    Since the white paper needs to get written, let's see how we can organize ourselves to get that task done. Following the outline, let me propose a possible organization layout.

    1. Science and strategies (mostly community)
      • Science justification
      • Survey strategies
    2. Phase I data/products (primary NAIC involvement)
    3. Source detection and deconvolution (community responsibility)
      (A lot of work, a lot of thought to achieve science)
      • Detection algorithms
      • Phase II data products
    4. Follow up science (might come under science as well)
    5. Synergies with other surveys

    The organization then:
    • Needs leaders to make sure that each box gets written
    • Needs leader(s) for steering group
  • ... so that a document gets produced that contains these items and then gets distributed

  • Steve Phase 1 data reduction should be reversible, so I disagree with having automatic RFI removal.
    Bob The raw data won't go away. NAIC will archive it.
    Liese Does Phase 1 data reduction include deconvolution?
    Bob No, that belongs in the community domain.
    Liese Will NAIC help us characterize the beam pattern so that we can undertake the deconvolution?
    Bob Yes, of course.
    Karen O. Your draft white paper does not have anything about membership criteria, what constitutes a consortium, etc.
    Bob Yes. You need to decide how those issues are going to get written. That is one of the basic points.
    Riccardo Isn't it necessary for NAIC to be involved in the definition of needs and requirements and what NAIC expects?
    Bob Very much so. NAIC needs to understand how the surveys will function, but also the community needs to understand what they are committing themselves to and what NAIC expects of the teams.
    Riccardo Also, there may be different surveys, so a set of standards needs to be set so that final data products look the same in the end. It would be more useful to have an effort that might bridge across surveys/disciplines, so that the final products have a common face to the public.
    Bob We want to get a draft sooner rather than later, but it should be an iterative process.
    Martha We want to figure out how to allow communication of technical expertise amongst us.
    Karen O. Only half of the people who signed up for E-ALFA are actually here.
    Bob How do we establish community outreach, including community support, especially for those supported by NSF?
    Riccardo What is the timescale for a draft? What is NAIC's involvement in putting this together?
    Bob The sooner the better. I don't know the precise answer because I don't know how difficult it will prove to be.
    Wim Would 3 months be reasonable?
    Bob That seems reasonable.




    This page created and maintained by Karen Masters, Kristine Spekkens and Martha Haynes.

    Last modified: Mon Apr 28 10:51:47 EDT 2003