
Algorithms 2008 Workshop --- Steve Gull's Webpage

Email me: Steve Gull

Developments

I shall continue to update this site as I refurbish my suite of mapping and deconvolution programs. As of 11/12/08 I have the mapping and dirty-map-space MaxEnt working. Please let me know if you'd like to try it. I also have (somewhere) versions that do the deconvolution using the raw data via gridding/degridding (the old VLBMem program).

Prehistory

I was greatly influenced by a talk on MaxEnt by John Ponsonby (1972) and by a remark from Dave Rogstadt (1974), which he made whilst I was CLEANing a map of Kepler's SNR.

The Bayesian Radio Astronomer

A long time ago (possibly 1987?) I gave this talk at Jodrell (and probably at the VLA site too). We had all the Bayesian technology then, with MaxEnt (i.e. MemSys) and optimal gridding. My view of the fundamentals hasn't changed at all, so here it is... bayesradio.pdf

I'm still a few overheads short on this one, but I have a paper copy and will complete it soon. Some excellent cartoons by John Skilling will accompany it...

Bayesian Target Recognition

This is an essentially similar presentation from the early 1990s describing target recognition using data with a position-independent PSF. Fifteen years of MCMC program development have changed the conclusions, as we can now do multiple point-source fitting and detection thresholding properly. Here it is: target_recognition.pdf

Programs: MaxEnt and MassInf

From 1978 onwards we developed Maximum Entropy algorithms for deconvolution, with John Skilling taking over the programming in 1981 and leading the development of the commercial MemSys code. In its fully developed form (circa 1993) it is a fully Bayesian implementation, using conjugate gradient approximations to evaluate and provide error bounds on the all-important "Evidence" (which can be used to compare different prior models). Details of the theory and the algorithm are in the MemSys Users' Manual here: MemSys5_manual.pdf
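
In case the structure of the calculation is unfamiliar, here is a minimal sketch of entropy-regularised deconvolution: minimise chi^2/2 - alpha*S(f) over a positive image f, where S(f) = sum(f - m - f*log(f/m)) is the entropy relative to a default model m. The Gaussian toy beam, the default model and the fixed alpha below are my own illustrative choices, and a general-purpose optimiser stands in for the MemSys conjugate gradient machinery (which also chooses alpha from the evidence and reports error bounds on it):

    # Minimal entropy-regularised deconvolution sketch (not the MemSys algorithm).
    # All numbers below (beam width, noise, default model m, alpha) are invented.
    import numpy as np
    from scipy.optimize import minimize

    def neg_objective(f, d, R, sigma, m, alpha):
        """chi^2/2 - alpha*S(f), to be minimised over the positive image f."""
        f = np.maximum(f, 1e-12)                      # keep the log well defined
        chi2 = np.sum(((d - R @ f) / sigma) ** 2)
        entropy = np.sum(f - m - f * np.log(f / m))   # entropy relative to model m
        return 0.5 * chi2 - alpha * entropy

    # Toy problem: two point sources blurred by a Gaussian beam, plus noise.
    rng = np.random.default_rng(1)
    n = 64
    x = np.arange(n)
    R = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 3.0) ** 2)   # beam (PSF) matrix
    sky = np.zeros(n); sky[20], sky[40] = 5.0, 3.0
    sigma, m, alpha = 0.1, np.full(n, 1e-3), 1.0
    d = R @ sky + rng.normal(0.0, sigma, n)

    res = minimize(neg_objective, np.full(n, 0.1), args=(d, R, sigma, m, alpha),
                   bounds=[(1e-12, None)] * n, method="L-BFGS-B")
    print("brightest reconstructed pixels:", np.argsort(res.x)[-2:])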

Radioastronomical imaging from interferometer data is a slightly unusual application for MemSys. It's fine if you go back to the raw data via slow transforms or gridding/degridding, but if you want to use the dirty map and beam as data, the algorithm still fits into our standard "Opus/Tropus" structure for the conjugate gradient loop. Simpler algorithms (such as the one in AIPS) are coded up directly, but our MemSys version, known as RAMem, was not written until about 1994. It brought the full power of conjugate gradients to DirtyMapMem, and I have it running using FITS maps and beams on a Windows platform. RAMem is also a very good trainer for Bayesian Neural Nets, where we call it NeuroSys.
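
For anyone who hasn't seen the MemSys manual: Opus and Tropus are the user-supplied forward transform and its transpose, which is all the conjugate gradient loop needs from the measurement model. In the dirty-map formulation Opus is just convolution of the trial sky with the dirty beam, and Tropus is the adjoint operation. The FFT-based sketch below is my own illustration of that pair, not code from MemSys or RAMem:

    # Illustrative "Opus/Tropus" pair for the dirty-map formulation.
    # Assumption: Opus maps a trial sky to a predicted dirty map by convolution
    # with the dirty beam; Tropus is its transpose.  Shapes and FFT details are mine.
    import numpy as np

    def opus(sky, beam_ft):
        """Forward transform: trial sky -> predicted dirty map (beam convolution)."""
        return np.real(np.fft.ifft2(np.fft.fft2(sky) * beam_ft))

    def tropus(resid, beam_ft):
        """Transpose transform: dirty-map-space residual -> sky-space vector."""
        return np.real(np.fft.ifft2(np.fft.fft2(resid) * np.conj(beam_ft)))

    # Adjointness check: <opus(x), y> must equal <x, tropus(y)> for the CG loop to work.
    rng = np.random.default_rng(0)
    beam_ft = np.fft.fft2(rng.normal(size=(32, 32)))          # stand-in dirty beam
    x, y = rng.normal(size=(32, 32)), rng.normal(size=(32, 32))
    print(np.allclose(np.sum(opus(x, beam_ft) * y), np.sum(x * tropus(y, beam_ft))))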

From about 1989 onwards the "Massive Inference" prior was developed and implemented through Markov Chain Monte Carlo methods. These "atomic" models for the sky and other images were desperately slow, and although some applications were very successful, the method has never been applied to large radio images. The usual MCMC methods use "thermodynamic integration" to approximate the evidence, but there was no good theory that allowed one to compute error bars on it. The MassInf prior is these days incorporated into our workhorse MCMC program BayeSys, which is publicly available. The BayeSys manual contains all the details of the theory and algorithms: BayeSys_manual.pdf
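
For reference, thermodynamic integration estimates the evidence from log Z = integral over beta in [0,1] of <log L>_beta, averaging log L over samples drawn from prior * L^beta at each temperature. The one-dimensional Gaussian toy problem and the crude Metropolis sampler below are my own illustration of that identity, not the BayeSys machinery:

    # Thermodynamic integration on a toy model where the evidence is known exactly.
    # The prior/likelihood widths, the datum and the beta ladder are all invented.
    import numpy as np

    rng = np.random.default_rng(2)
    sigma_prior, sigma_like, datum = 2.0, 0.5, 1.3    # x ~ N(0, 4), datum | x ~ N(x, 0.25)

    def log_like(x):
        return -0.5 * ((datum - x) / sigma_like) ** 2 - 0.5 * np.log(2 * np.pi * sigma_like**2)

    def mean_loglike(beta, n_steps=20000):
        """Metropolis sampling of prior * L**beta; return the average of log L."""
        x, trace = 0.0, []
        for _ in range(n_steps):
            xp = x + rng.normal(0.0, 1.0)
            log_ratio = (-0.5 * (xp**2 - x**2) / sigma_prior**2
                         + beta * (log_like(xp) - log_like(x)))
            if np.log(rng.random()) < log_ratio:
                x = xp
            trace.append(log_like(x))
        return np.mean(trace[n_steps // 2:])          # discard the first half as burn-in

    betas = np.linspace(0.0, 1.0, 21) ** 3            # ladder crowded near beta=0, where <log L> varies fastest
    vals = np.array([mean_loglike(b) for b in betas])
    log_Z = np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(betas))   # trapezoidal rule

    # Analytic check for this toy model: Z = N(datum; 0, sigma_prior^2 + sigma_like^2).
    s2 = sigma_prior**2 + sigma_like**2
    print(log_Z, -0.5 * datum**2 / s2 - 0.5 * np.log(2 * np.pi * s2))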

Then it all changed....

Nested Sampling

John Skilling invented this in 2004, though he called it something else until David MacKay pointed out that the name could be misunderstood as a tribute to a thoroughly non-Bayesian statistician. The basic idea is to convert the integral required for the evidence into a one-dimensional graph of likelihood against prior range. I'd better let him tell you about it himself...  nested.pdf
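
For the impatient, the recipe is: hold a set of "live" points drawn from the prior, repeatedly discard the one with the lowest likelihood, credit it with the sliver of prior mass it is estimated to enclose, and replace it with a fresh prior sample at higher likelihood. The bare-bones sketch below, with a toy 2-D Gaussian likelihood, a flat prior and a brute-force rejection step for the replacement, is my own illustration rather than anything John would ship:

    # Bare-bones nested sampling on a toy problem with a known evidence.
    # The Gaussian likelihood, flat prior on [-5,5]^2 and all settings are invented.
    import numpy as np

    rng = np.random.default_rng(3)
    ndim, nlive, niter = 2, 50, 400

    def log_L(theta):                                  # unit 2-D Gaussian likelihood
        return -0.5 * np.sum(theta**2) - ndim * 0.5 * np.log(2 * np.pi)

    live = rng.uniform(-5, 5, size=(nlive, ndim))      # live points drawn from the flat prior
    live_logL = np.array([log_L(t) for t in live])
    log_weights, log_X = [], 0.0                       # log_X = log of enclosed prior mass

    for i in range(niter):
        worst = np.argmin(live_logL)                   # lowest-likelihood live point
        log_X_new = -(i + 1) / nlive                   # expected shrinkage: X_i ~ exp(-i/nlive)
        width = np.exp(log_X) - np.exp(log_X_new)      # sliver of prior mass it represents
        log_weights.append(np.log(width) + live_logL[worst])
        log_X = log_X_new
        L_min = live_logL[worst]
        while True:                                    # replace it: new prior draw with L > L_min
            trial = rng.uniform(-5, 5, size=ndim)      # brute-force rejection (the hard part in general)
            if log_L(trial) > L_min:
                live[worst], live_logL[worst] = trial, log_L(trial)
                break

    log_weights.append(np.log(np.mean(np.exp(live_logL))) + log_X)   # what the live points still hold
    log_Z = np.logaddexp.reduce(log_weights)
    # The flat prior density is 1/100 and the Gaussian integrates to ~1 inside the box,
    # so the true log-evidence is close to log(1/100) = -4.6.
    print("estimated log Z:", log_Z, " true value ~", -np.log(100.0))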

Here is a poster he gave earlier this year comparing MCMC and nested sampling: nestposter.pdf

This is Farhan Feroz's recent PhD thesis (under the supervision of Mike Hobson) describing the "MultiNest" implementation of nested sampling, together with recent applications from cosmology. MultiNest is the state-of-the-art Nested Sampling code -- I've used it and it works well. Farhan is currently coding John's conjugate gradient ideas into it: Feroz_thesis.pdf

Nested Sampling & Conjugate Gradient

Nested sampling tells you what to do once you have a new sample from the region with likelihood L > L_0, but it doesn't tell you how to find one. Here are John Skilling's latest thoughts on how you can apply all the conjugate gradient technology from MemSys to find that elusive new sample: nested_conjgrad.pdf
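
John's note explains how the conjugate gradient machinery can do this; for comparison, the commonest cheap alternative is a short random walk that starts from a copy of a surviving live point and rejects any move that leaves the constrained region. A sketch of that step, under the same flat-prior-in-a-box assumption as the toy example above (my own illustration, not John's scheme):

    # Constrained replacement step: random-walk Metropolis inside the hard
    # constraint log_L(x) > log_L_min, for a flat prior on the box [lo, hi]^ndim.
    import numpy as np

    def constrained_step(start, log_L, log_L_min, rng, n_walk=20, scale=0.5, lo=-5.0, hi=5.0):
        """Walk n_walk steps, accepting only moves that stay in the box and above the floor."""
        x = start.copy()
        for _ in range(n_walk):
            xp = x + rng.normal(0.0, scale, size=x.shape)
            inside = np.all((xp > lo) & (xp < hi))     # flat prior: stay in the box
            if inside and log_L(xp) > log_L_min:       # hard likelihood constraint
                x = xp
        return x

    # Usage: in the nested-sampling sketch above, replace the rejection loop with
    #     others = np.delete(np.arange(nlive), worst)
    #     start = live[rng.choice(others)].copy()      # copy a surviving live point
    #     live[worst] = constrained_step(start, log_L, L_min, rng)
    #     live_logL[worst] = log_L(live[worst])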

Bayesian Rotation Measure determinations

Here is a little study I did for Paul Alexander on Faraday rotation analysis for the SKA. The conclusion is that rotation measures of 200 or so are easily found using MCMC for sources with polarised flux greater than about 4 sigma. faraday.pdf
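
To give a flavour of the model being fitted: the complex polarisation of a Faraday-rotated source varies as P(lambda^2) = p exp(2i(chi0 + RM*lambda^2)), with Gaussian noise on Q and U in each channel. The band, source parameters, noise level and simple grid search below are my own invented illustration (the study itself used MCMC), but they show how sharply the data pick out the rotation measure:

    # Toy rotation-measure fit.  The frequency band, source parameters and noise
    # are invented; the grid search stands in for the MCMC used in the real study.
    import numpy as np

    rng = np.random.default_rng(4)
    freqs = np.linspace(0.7e9, 1.4e9, 64)              # Hz (made-up band)
    lam2 = (2.998e8 / freqs) ** 2                      # lambda^2 in m^2
    true_RM, true_chi0, true_p, sigma = 200.0, 0.3, 4.0, 1.0

    phase = 2.0 * (true_chi0 + true_RM * lam2)
    Q = true_p * np.cos(phase) + rng.normal(0, sigma, lam2.size)
    U = true_p * np.sin(phase) + rng.normal(0, sigma, lam2.size)
    P = Q + 1j * U                                     # observed complex polarisation

    def profile_loglike(RM):
        """Log-likelihood with the amplitude p and angle chi0 maximised out:
        up to a constant it is |sum_k P_k exp(-2i RM lam2_k)|^2 / (2 sigma^2 N)."""
        coherent = np.abs(np.sum(P * np.exp(-2j * RM * lam2)))
        return coherent**2 / (2.0 * sigma**2 * lam2.size)

    RM_grid = np.linspace(-1000.0, 1000.0, 4001)
    best = RM_grid[np.argmax([profile_loglike(r) for r in RM_grid])]
    print("recovered RM ~", best, "rad/m^2 (true value", true_RM, ")")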

Background Papers

This is my Bayesian "coming out" paper from 1979. I was writing under the influence of Geoff Daniell at that stage. He used to drip-feed me Ed Jaynes papers, sometimes saying "Hmm.. I don't think you're ready for that one yet...". gulldan2.pdf

This is the original "second level of inference" paper given at the 1986 MaxEnt meeting, stressing the use of what we now call the evidence.  inference.pdf

RANT: John Skilling on statisticians and why you should be a Bayesian

I suspect that I might have calmed down a bit after all these years, but John certainly hasn't... Enjoy:  bayes_rant.pdf

 

W3OH MassInf

Kester/Bontekoe

MORE SLIDES

Unfortunately there wasn't an overhead projector at the conference (at least that meant that the talk itself kept to time and it was only the questions that overran...). I'll place here some interesting images and some commentary as I prepare a presentation for the SKA meeting in Malta.