Preliminary List of
Plenary/Invited Talks
P. Bhat (FNAL), "RunII Physics at Fermilab and
Advanced Data Analysis Methods"
(Abstract)
N. Brook (LHCb, Bristol Univ.), "LHCb Computing and the GRID"
(Abstract)
R. Brun (ALICE, CERN), "Computing at ALICE"
(Abstract)
D. Buskulic (VIRGO, LAPP), "Data Analysis Software Tools used during VIRGO Engineering Runs, review
and future needs" (Abstract)
C. Charlot (CMS, LLR-Ecole Polytechnique CNRS), "CMS Software and
Computing"
(Abstract)
B. Denby (Versailles Univ.), "Swarm Intelligence for Optimization
Problems" (Abstract)
E. de Doncker (W. Michigan Univ.), "Methods for enhancing numerical
integration" (Abstract)
W. Dunin-Barkowski (IPPI/Texas Tech. Univ), "Great Brain Discoveries:
When White Spots Disappear?"
(Abstract)
D. Green (IBM EMEA), "IBM experience in GRID"
(Abstract)
F. James (CERN), "Summary of Recent Ideas and Discussions on
Statistics in HEP"
(Abstract)
R. Jones (ATLAS, Univ. of Lancaster),
"ATLAS Computing and the Grid"
(Abstract)
M. Kasemann (LCG, FNAL), "The LCG project: common solutions
for LHC" (Abstract)
C. Kesselman (USC/ISI), "GRID computing"
P. Krokovny (Belle, BINP), "Belle computing"
(Abstract)
P. Kunszt (CERN), "Status of the EU DataGrid Project"
(Abstract)
M. Kunze (CrossGRID, FZK), "The CrossGrid Project"
(Abstract)
L. Litov (NA48, JINR/Univ. of Sofia), "Particle
identification in the NA48 experiment using neural network"
(Abstract)
M. Neubauer (CDF, MIT), "Computing at CDF"
(Abstract)
G. Passarino (Univ. of Turin), "A Frontier in Multiscale Multiloop
Integrals: the Algebraic-Numerical Method"
(Abstract)
L. Robertson (LCG, CERN), "The LHC Computing Grid Project - Creating a Global
Virtual Computing Centre for Particle Physics"
(Abstract)
P. Shawhan (LIGO, Caltech) "LIGO Data Analysis"
(Abstract)
O. Tatebe (NIAIST, Tsukuba), "Grid Datafarm Architecture for Petascale
Data Intensive Computing"
(Abstract)
I. Terekhov (D0, FNAL), "Distributed Computing at D0"
(Abstract)
A. Vaniachine (ATLAS, ANL), "Data Challenges in ATLAS Computing"
(Abstract)
P. Bhat (FNAL),
"RunII Physics at Fermilab and Advanced Data Analysis Methods"
Collider Run II, now underway at the Fermilab Tevatron,
brings extraordinary opportunities for new discoveries, precision
measurements, and exploration of parameter spaces of theoretical
models. We hope to discover the Higgs boson and find evidence for new
physics beyond the Standard Model such as Supersymmetry or Technicolor
or something completely unexpected. We will pursue searches for hints
for the existence of extra dimensions and other exotic signals. These
opportunities, however, come with extraordinary challenges. In this
talk, I will describe the physics pursuits of the CDF and DZero
experiments in Run II and discuss why the use of multivariate and
advanced statistical techniques will be crucial in achieving the
physics goals.
N. Brook (LHCb, Bristol Univ.),
"LHCb Computing and the GRID"
The main requirements of the LHCb software environment in the context
of GRID computing will be presented. Emphasis will be given to the
preliminary experiences gained in the development of a distributed
Monte Carlo production system.
R. Brun (ALICE, CERN),
"Computing at ALICE"
The ALICE software is based on three major components, AliRoot, AliEn and
ROOT, which are in constant development, and on external packages such as
Geant3, Pythia and Fluka. The AliRoot framework is written entirely in C++
and includes classes for detailed detector simulation and reconstruction.
This framework has been used extensively to test the complete chain from
data acquisition to data storage and retrieval during several ALICE Data
Challenges. The software is GRID-aware via the AliEn system presented in
another talk.
The detector simulation part is based on the concept of a Virtual Monte
Carlo: the same detector geometry and hits/digits classes are used to run
with the Geant3 or Geant4 packages, and an interface to Fluka is in
preparation. When running with a very large number of classes (thousands)
it is important to minimize class dependencies. Access to the large
object collections is via the Folder mechanism available in ROOT. This
structure is not only more scalable but also allows a casual user to easily
browse and understand the various data structures. The fact that the ALICE
environment is based on a small number of components has greatly
facilitated the maintenance, the development and the adoption of the
system by all physicists in the collaboration.
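
The Virtual Monte Carlo concept above is essentially an abstract
transport-engine interface behind which Geant3, Geant4 or Fluka can be
swapped without touching the detector code. The C++ sketch below
illustrates the idea only; the interface and class names are hypothetical
stand-ins, not the actual AliRoot/ROOT VMC API.

    // Minimal sketch of the virtual transport-engine idea (hypothetical API).
    #include <iostream>
    #include <memory>
    #include <string>

    // Detector code is written once against this abstract interface,
    // so concrete transport backends become interchangeable.
    class VirtualMC {
    public:
        virtual ~VirtualMC() = default;
        virtual void DefineVolume(const std::string& name, double halfLenCm) = 0;
        virtual void TransportEvent() = 0;
    };

    // One possible concrete backend (a stub standing in for a Geant3 wrapper).
    class Geant3MC : public VirtualMC {
    public:
        void DefineVolume(const std::string& name, double halfLenCm) override {
            std::cout << "Geant3 volume " << name << " (" << halfLenCm << " cm)\n";
        }
        void TransportEvent() override { std::cout << "Geant3 transport\n"; }
    };

    // Geometry and hit-producing code depends only on the abstract interface.
    void BuildAndRun(VirtualMC& mc) {
        mc.DefineVolume("TPC", 250.0);
        mc.TransportEvent();
    }

    int main() {
        std::unique_ptr<VirtualMC> mc = std::make_unique<Geant3MC>();
        BuildAndRun(*mc);  // swapping in a Geant4 backend would change nothing here
    }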
D. Buskulic (VIRGO, LAPP),
"Data Analysis Software Tools used during VIRGO Engineering Runs, review
and future needs"
In recent years, the data flow and data storage needs of large
gravitational-wave interferometric detectors have reached an order of
magnitude similar to that of high energy physics experiments. Software tools
have been developed to handle and analyse these large amounts of data, with
the specificities associated with gravitational-wave searches.
We will review the experience acquired during engineering runs on the VIRGO
detector with the currently used data analysis software tools, pointing out
the peculiarities inherent in our type of experiment. We will also outline
possible future needs of the Virgo offline data analysis.
C. Charlot (CMS, LLR-Ecole Polytechnique CNRS),
"CMS Software and Computing"
CMS is one of the two general-purpose HEP experiments currently under
construction for the Large Hadron Collider at CERN. The handling of
multi-petabyte data samples in a worldwide context requires computing and
software systems with unprecedented scale and complexity. We describe how
CMS is meeting the many data analysis challenges in the LHC era. We cover
in particular the status of our object-oriented software, our system of
globally distributed regional centres and our strategies for Grid-enriched
data analysis.
B. Denby (Versailles Univ.),
"Swarm Intelligence for Optimization Problems"
It has long been known that ensembles of social insects such as bees and
ants exhibit intelligence far beyond that of the individual members.
More recently, optimisation algorithms which attempt to mimic this 'swarm
intelligence' have begun to appear, and have been applied with
considerable success to a number of real world problems. The talk will
first cite examples of naturally occurring swarm intelligence in bees and
ants before passing to a concrete application of Ant Colony Optimisation
to adaptive routing in a satellite telecommunications network. Analogies
to other types of optimisation such as gradient descent and simulated
annealing will also be given. Finally, some ideas for further applications
in scientific research will be suggested.
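
For readers unfamiliar with Ant Colony Optimisation, the C++ sketch below
shows the core mechanism on a toy shortest-round-trip problem: ants pick
moves with probability weighted by pheromone and inverse distance, shorter
tours deposit more pheromone, and evaporation keeps the colony exploring.
It is a generic textbook-style illustration, not the satellite-routing
algorithm from the talk.

    // Toy Ant Colony Optimisation: shortest round trip through 4 nodes.
    #include <iostream>
    #include <random>
    #include <vector>

    int main() {
        const int N = 4;
        double dist[N][N] = {{0,2,9,1},{2,0,6,4},{9,6,0,3},{1,4,3,0}};
        std::vector<std::vector<double>> tau(N, std::vector<double>(N, 1.0)); // pheromone
        std::mt19937 rng(42);

        double bestLen = 1e9;
        for (int iter = 0; iter < 200; ++iter) {
            for (int ant = 0; ant < 10; ++ant) {
                std::vector<bool> visited(N, false);
                std::vector<int> tour{0};
                visited[0] = true;
                double len = 0.0;
                for (int step = 1; step < N; ++step) {
                    int cur = tour.back();
                    // Next hop chosen with probability ~ pheromone / distance.
                    std::vector<double> w(N, 0.0);
                    for (int j = 0; j < N; ++j)
                        if (!visited[j]) w[j] = tau[cur][j] / dist[cur][j];
                    std::discrete_distribution<int> pick(w.begin(), w.end());
                    int nxt = pick(rng);
                    len += dist[cur][nxt];
                    visited[nxt] = true;
                    tour.push_back(nxt);
                }
                len += dist[tour.back()][0];  // close the tour
                if (len < bestLen) bestLen = len;
                // Deposit pheromone along the tour, more for shorter tours.
                for (size_t k = 0; k + 1 < tour.size(); ++k)
                    tau[tour[k]][tour[k + 1]] += 1.0 / len;
            }
            // Evaporation: old trails fade so the colony keeps exploring.
            for (auto& row : tau) for (double& t : row) t *= 0.9;
        }
        std::cout << "best tour length: " << bestLen << "\n";
    }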
E. de Doncker (W. Michigan Univ.),
"Methods for enhancing numerical integration"
As we consider common strategies for numerical integration (Monte Carlo,
quasi-Monte Carlo, adaptive), we can delineate their realm of
applicability. The inherent accuracy and error bounds for basic
integration methods are given via such measures as the degree of precision
of cubature rules, the index of a family of lattice rules, and the
discrepancy of (deterministic) uniformly distributed point sets.
Strategies incorporating these basic methods are built on paradigms to
reduce the error by, e.g., increasing the number of points in the domain
or decreasing the mesh size, locally or uniformly. For these processes
the order of convergence of the strategy is determined by the asymptotic
behavior of the error, and may be too slow in practice for the type of
problem at hand. For certain problem classes we may be able to improve the
effectiveness of the method or strategy by such techniques as
transformations, absorbing a difficult part of the integrand into a weight
function, suitable partitioning of the domain and extrapolation or
convergence acceleration processes. Situations warranting the use of
these techniques (possibly in an "automated" way) will be described and
illustrated by sample applications.
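
As a concrete illustration of the convergence behaviour discussed above,
the sketch below estimates the integral of exp(x) over [0,1], whose exact
value is e - 1, by plain Monte Carlo. The statistical error estimate
sigma/sqrt(N) shrinks only like N^(-1/2), exactly the slowness that
motivates the enhancement techniques the talk surveys. The example and
its parameters are illustrative, not taken from the talk.

    // Plain Monte Carlo integration of exp(x) on [0,1] with error estimate.
    #include <cmath>
    #include <cstdio>
    #include <random>

    int main() {
        std::mt19937 rng(1);
        std::uniform_real_distribution<double> u(0.0, 1.0);
        const double exact = std::exp(1.0) - 1.0;
        for (long n = 1000; n <= 1000000; n *= 10) {
            double sum = 0.0, sum2 = 0.0;
            for (long i = 0; i < n; ++i) {
                double f = std::exp(u(rng));
                sum  += f;
                sum2 += f * f;
            }
            double mean  = sum / n;
            double sigma = std::sqrt((sum2 / n - mean * mean) / n); // est. error
            std::printf("N=%8ld  I=%.6f  est.err=%.2e  true err=%.2e\n",
                        n, mean, sigma, std::fabs(mean - exact));
        }
    }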
W. Dunin-Barkowski (IPPI/Texas Tech. Univ), "Great Brain Discoveries:
When White Spots Disappear?"
The progress of knowledge about a particular object (e.g., the brain) has
the characteristics of exponential growth in a limited volume. As soon as
a visible part of the whole volume is filled (whether 1/2, 1/10, 1/1000 or
1/10000 hardly matters), the time when the volume will be filled completely
has almost come. The time scale here is in units of the total duration of
the filling process, counted from zero level.
Even a decade ago, we did not know how ignorant we were about the brain.
The whole brain was simply terra incognita. But recent progress in
computational neuroscience shows that at present we know about 1/10 (and
not less than 1/100000) of all brain network mechanisms. That is why we can
say that we are now dealing with white spots on the map of knowledge about
the brain, and not with terra incognita any more. The time of full
understanding of the brain is not far from now (several years, by cautious
estimates).
A couple of well-understood mechanisms of brain functioning (the work of
synchronous/asynchronous neuron ensembles in the cortex, the cerebellar
data prediction machinery, etc.) will be presented in the talk.
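
A small worked equation makes this argument quantitative, under the
illustrative assumption (not stated in the abstract) that the filled
fraction x(t) of the knowledge volume follows logistic growth:

    \[
      \frac{dx}{dt} = k\,x(1-x), \qquad
      x(t) = \frac{1}{1 + e^{-k(t - t_0)}},
    \]
    \[
      \Delta t\,(f \to 1-f) = \frac{2}{k}\,\ln\frac{1-f}{f}.
    \]

The time to go from a small visible fraction f to the fraction 1-f grows
only logarithmically as f decreases, so whether f is 1/2 or 1/10000 indeed
hardly matters on the scale of the whole process.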
D. Green (IBM EMEA),
"IBM experience in GRID"
To many industry watchers Grid Technology represents the next wave of
distributed computing in which companies can share IT infrastructure and
IT services within or between enterprises - some go as far as saying that it
will replace the internet. Grid Technology provides the answer to the
question facing many IT managers: "How will my organisation ensure that
its IT infrastructure is sufficiently flexible to support a rapidly
changing global market?". It tackles the challenges faced when users need
to access data/IT services from anywhere in the organisation and with the
added complexity of potential for mergers/acquisitions while at the same
time allowing for the possibility of embracing e-utility services. IBM was
the first major company to commit to support the Grid movement and
contribute to the open-source development community - some see this as a
visionary move, giving a potential for IBM to dominate the IT industry for
decades.
The presentation will arm you with an understanding of what IBM sees as 'Grid
Computing' and how it may change the way we use IT. The discussion will
provide an indication of the challenges facing an organisation wishing to
invest in grid technology and explain why IBM is so interested in
overcoming the many difficulties that remain to be solved.
F. James (CERN),
"Summary of Recent Ideas and Discussions on Statistics in HEP"
Starting with the Confidence Limits Workshop at CERN in January 2000, a
series of four meetings has brought together particle physicists to
discuss and try to settle some of the major outstanding problems of
statistical data analysis that continue to cause disagreement among
experts. These were the first international conferences devoted
exclusively to statistics in HEP, but they will not be the last. In this
talk, I will summarize the main ideas that have been treated, and in a few
cases, points that have been agreed upon.
R. Jones (ATLAS, Univ. of Lancaster),
"ATLAS Computing and the Grid"
ATLAS is building a Grid infrastructure using middleware tools from both
European and American Grid projects. As such, it plays an important role
in ensuring coherence between projects. Various Grid applications are
being built, some in collaboration with LHCb. These will be exercised and
refined, along with our overall computing model, by means of a series of
Data Challenges of increasing complexity.
M. Kasemann (FNAL),
"The LCG project - common solutions for LHC"
Four LHC experiments are developing software for all aspects of data
analysis. Joint efforts and common projects between the experiments and
the LHC Computing Grid Project are underway to minimize costs and risks.
However, since the experiments differ from one another, the right balance
between a single set of methods and tools and experiment-specific
solutions must be found. Data Challenges of increasing size and complexity
will be performed as milestones on the way to LHC start-up, to verify the
solutions found and to measure the readiness for data analysis.
P. Krokovny (Belle, BINP),
"Belle computing"
Belle is a high luminosity asymmetric e+/e- collider experiment designed
to investigate the origins of CP violation and other physics.
An important aspect of this experiment is its computing system.
The details of the Belle offline reconstruction and Monte Carlo production
scheme will be discussed at the conference.
P. Kunszt (CERN),
"Status of the EU DataGrid Project"
The aim of the EU DataGrid project is to develop a large-scale research
testbed for Grid computing. Three major application domains have already
been running demonstrations: particle physics, Earth observation and
biomedicine. The project is in the middle of its second year and has
successfully passed its first independent EU review. The DataGrid testbed
is up and running at several project sites and is growing in
functionality with each new release. We discuss the status of the project
and the evolution foreseen for the current year, especially in view of the
potential impact of the Globus migration to OGSA. We also present the
applications' plans for exploiting this technology in the future.
M. Kunze (CrossGRID, FZK),
"The CrossGrid Project"
There are many large-scale problems which require new approaches to
computing, such as earth observation, environmental management,
biomedicine, industrial and scientific modelling. The CrossGrid project
addresses realistic problems in medicine, environmental protection,
flood prediction, and physics analysis and is oriented towards specific
end-users:
medical doctors, who could obtain new tools to help them reach
correct diagnoses and to guide them during operations,
industries, which could be advised on the best timing for some
critical operations involving risk of pollution,
flood crisis teams, which could predict the risk of a flood on the
basis of historical records and current hydrological and meteorological
data,
physicists, who could optimise the analysis of massive volumes of data
distributed across countries and continents.
Corresponding applications will be based on Grid technology and could be
complex and difficult to use: the CrossGrid project therefore aims at
developing several tools which will make the Grid friendlier for average users.
Portals for specific applications will be designed, which should allow
for easy connection to the Grid, create a customised work environment,
and provide users with all necessary information to get their job done.
L. Litov (NA48, JINR),
"Particle identification in the NA48 experiment using neural network"
The NA48 detector, situated at the CERN SPS accelerator, is designed for
precise measurement of direct CP violation in the neutral kaon system.
A large programme for the investigation of rare Ks, K+/- and neutral
hyperon decays, and for the measurement of the CP-violating asymmetry in
charged kaon decays with unprecedented precision, is envisaged. In order
to suppress the background for some of the rare kaon and neutral hyperon
decays, good particle identification is required. The possibility of using
feed-forward neural networks to separate electrons from hadrons is
considered. To test the performance of the neural network, electrons and
pions from cleanly reconstructed experimental kaon decays have been used.
It is shown that the neural network can be a powerful tool for particle
identification. A significant suppression of the background can be
reached, allowing a precise measurement of rare-decay parameters.
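
As a minimal sketch of the technique, the C++ fragment below evaluates a
one-hidden-layer feed-forward network that turns a few discriminating
variables into an electron-vs-hadron score. The inputs (E/p and a
shower-width estimate) and the weights are hypothetical placeholders; the
actual NA48 network, its inputs and its training are those described in
the talk.

    // One-hidden-layer feed-forward network producing a score in (0,1).
    #include <cmath>
    #include <cstdio>
    #include <vector>

    static double sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }

    // Forward pass: inputs -> hidden layer -> single sigmoid output.
    double nnScore(const std::vector<double>& in,
                   const std::vector<std::vector<double>>& wHidden, // [hidden][in+1]
                   const std::vector<double>& wOut) {               // [hidden+1]
        std::vector<double> h;
        for (const auto& w : wHidden) {
            double s = w.back();                          // bias term
            for (size_t i = 0; i < in.size(); ++i) s += w[i] * in[i];
            h.push_back(sigmoid(s));
        }
        double s = wOut.back();                           // output bias
        for (size_t j = 0; j < h.size(); ++j) s += wOut[j] * h[j];
        return sigmoid(s);                                // near 1: electron-like
    }

    int main() {
        // Two inputs (say E/p and a shower-width estimate), three hidden nodes.
        // These weights are made up for illustration, not trained values.
        std::vector<std::vector<double>> wHidden = {
            {4.0, -2.0, -1.0}, {-3.0, 1.5, 0.5}, {2.0, 2.0, -2.5}};
        std::vector<double> wOut = {2.5, -2.0, 1.0, -0.5};
        std::printf("electron score = %.3f\n", nnScore({0.98, 0.3}, wHidden, wOut));
    }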
M. Neubauer (CDF, MIT), "Computing at CDF"
Run II at the Fermilab Tevatron Collider began in March 2001 and will
continue to probe the high energy frontier in particle physics until the
start of the LHC at CERN. It is expected that the CDF collaboration will
store up to 10 Petabytes of data onto tape by the end of Run II. Providing
efficient access to such a large volume of data for analysis by hundreds
of collaborators world-wide will require new ways of thinking about
computing in particle physics research. In this talk, I discuss the
computing model at CDF designed to address the physics needs of the
collaboration. Particular emphasis is placed on the current development of an
O(1000)-processor PC cluster accessing O(200 TB) of disk at Fermilab
serving as the Central Analysis Facility for CDF and the vision for
incorporating this into a decentralized (GRID-like) framework.
G. Passarino (Univ. of Turin),
"A Frontier in Multiscale Multiloop Integrals: the Algebraic-Numerical
Method."
Schemes for systematically achieving accurate numerical evaluation of
arbitrary multi-loop Feynman diagrams are discussed. The role of a
reliable approach to the direct and precise numerical treatment of these
integrals in producing a complete calculation for two-loop Standard Model
predictions is also reviewed.
L. Robertson (LCG, CERN),
"The LHC Computing Grid Project - Creating a Global Virtual Computing Centre
for Particle Physics"
Meeting the computing needs of the LHC will require enormous computational and
data storage resources, far beyond the possibilities of a single computing
centre. Grid technology offers a possible solution, tying together computing
resources available to particle physics in the different countries taking
part in LHC. A major activity of the LHC Computing Grid Project (LCG) is to
develop and operate a global grid service, capable of handling
multi-PetaByte data collections while providing levels of reliability,
usability and efficiency comparable with those available in scientific
computing centres.
P. Shawhan (LIGO, Caltech),
"LIGO Data Analysis"
The Laser Interferometer Gravitational-Wave Observatory (LIGO) project has
constructed two 'observatories' in the United States which are poised to
begin collecting scientifically interesting data. Members of the LIGO
Scientific Collaboration have been using data from recent 'engineering runs'
to develop and refine signal detection algorithms and data analysis
procedures. I will describe a few distinct LIGO data-analysis tasks which
vary greatly in their computational demands, and thus will be addressed in
different ways. I will also comment on some of the organization and
implementation challenges which have been encountered so far.
O. Tatebe (NIAIST, Tsukuba),
"Grid Datafarm Architecture for Petascale Data Intensive Computing"
The Grid Datafarm architecture is designed for global petascale
data-intensive computing. It provides a cluster-of-cluster parallel
filesystem with online petascale storage, scalable I/O bandwidth, and
fault tolerance. Gfarm parallel I/O APIs and file affinity scheduling
support scalable I/O bandwidth exploiting local I/O in a grid of
clusters with tens of thousands of nodes in a single filesystem image.
Fault tolerance and load balancing are automatically managed by file
duplication or re-computation using a command history log.
Preliminary performance evaluation has shown scalable disk I/O and
network bandwidth on 64 nodes of the Presto III Athlon cluster. The
Gfarm parallel I/O write and read operations have achieved data
transfer rates of 1.74 GB/s and 1.97 GB/s, respectively, using 64
cluster nodes. The Gfarm parallel file copy reached 443 MB/s with 23
parallel streams on the Myrinet 2000. The Gfarm architecture is
expected to enable petascale data-intensive Grid computing with an I/O
bandwidth that scales to the TB/s range and scalable computational power.
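
The file affinity scheduling mentioned above amounts to sending each task
to a node that already holds a replica of its input file, so reads stay
local. The C++ sketch below illustrates that policy with a hypothetical
replica table and node names; it is an illustration of the idea, not the
Gfarm API.

    // Illustrative file-affinity scheduler: run tasks where their data lives.
    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    int main() {
        // Replica catalogue: input file -> nodes holding a local copy.
        std::map<std::string, std::vector<std::string>> replicas = {
            {"run01.dat", {"node03", "node07"}},
            {"run02.dat", {"node01"}},
        };
        std::map<std::string, int> load;  // tasks already assigned per node

        std::vector<std::string> tasks = {"run01.dat", "run02.dat", "run01.dat"};
        for (const auto& file : tasks) {
            // Among the nodes holding the file, pick the least loaded one.
            std::string best;
            for (const auto& node : replicas[file])
                if (best.empty() || load[node] < load[best]) best = node;
            ++load[best];
            std::cout << file << " -> " << best << " (local I/O)\n";
        }
    }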
I. Terekhov (D0, FNAL),
"Distributed Computing at D0"
The D0 experiment at FNAL is one of the largest currently running
experiments in HEP. The volume of its data, the size of the collaboration,
and, most importantly, the degree to which the collaborators are
distributed around the world mandate a highly sophisticated, fully
distributed meta-computing system. At its heart is the advanced data
handling system SAM, which provides the high-level services of a data grid. The
areas of most rapid development are job and information management. Job
management includes the brokering, submission and execution of data analysis
jobs; the information services allow monitoring of jobs and of the system as
a whole. For these newer services, we actively deploy, integrate and
develop Grid technologies, while collaborating with computer scientists
and the various Grid efforts in both the USA and Europe. In this paper, we
present the current status of and plans for the D0 meta-computing system.
A. Vaniachine (ATLAS, ANL),
"Data Challenges in ATLAS Computing"
ATLAS computing is steadily progressing towards a highly functional
software suite, plus a worldwide computing model which gives all ATLAS
members equal access, of equal quality, to ATLAS data. A key component in
the period before the LHC is a series of Data Challenges of increasing scope
and complexity. These Data Challenges will use as much as possible the
Grid middleware being developed in Grid projects around the world. We are
committed to "common solutions" and look forward to the LHC Computing
Grid (LCG) being the vehicle for providing these in an effective way. In
the context of the CERN Review of LHC Computing, the scope and goals of the
ATLAS Data Challenges have been defined; they are executed at the prototype
tier centers, which will be built in Phase 1 of the LCG project.
In close collaboration between the Grid and Data Challenge communities,
ATLAS is testing large-scale testbed prototypes around the world,
deploying prototype components to integrate and test Grid software in a
production environment, and running Data Challenge 1 production in 26
prototype tier centers in 17 countries on four continents.