Newsgroups: comp.parallel,comp.sys.super
From: eugene@sally.nas.nasa.gov (Eugene N. Miya)
Reply-To: eugene@george.arc.nasa.gov (Eugene N. Miya)
Subject: [l/m 4/8/97] Who runs the ||ism-community? -- comp.parallel (12/28) FAQ
Organization: NASA Ames Research Center, Moffett Field, CA
Date: 12 Mar 1998 13:03:08 GMT
Message-ID: <6e8med$leq$1@cnn.nas.nasa.gov>

Archive-Name: superpar-faq
Last-modified: 8 Apr 1997

12 Who runs the ||ism-community?
14 References
16
18 Supercomputing and Crayisms
20 IBM and Amdahl
22 Grand challenges and HPCC
24 Suggested (required) readings
26 Dead computer architecture society
28 Dedications
2 Introduction and Table of Contents and justification
4 Comp.parallel news group history
6 parlib
8 comp.parallel group dynamics
10 Related news groups, archives, test codes, and other references


Why this panel?
---------------


One man's research is another man's application.



A significant undercurrent runs beneath high performance computing:
the lines of communication between computer system builders and applications
people are something like the "war" between men and women.
Alan Turing never met John Gray [Men Are from Mars, Women Are from Venus].

Users can't understand what's taking so long and why programming is so hard.
Programmers and architects, perennial optimists [except me, the Resident Cynic],
always promise more and tend to deliver late.

This panel needs a lot of work, because I have exposure to a limited set of
communities: three- and four-letter agencies, and people and friends
in the physics, chemistry, biology, and earth science communities in
academia and industry, etc. Add what you want.


How parallel computing is like an elephant (with blind men)
----------------------------------------------------------
Who runs the computer industry?
-------------------------------
A little road map
-----------------

Programmers are from Mars.
Users are from Venus.
--E. Miya, March 1996, DL'96

God didn't have an installed base.
--Datamation ??


This section attempts to cover topics relating to various sub-cultures
in the high-performance computing market. If you don't understand something,
you aren't alone. If you think you understand something, you don't.
These topics are long-standing (from net.arch in the pre-comp.parallel
and pre-comp.sys.super days).

If parallel computing is a business, are the customers always right?


Scale
-----

How would you like programs to run twice as fast? How about 10% faster?
Not as impressive? In this group, factors of 2-4 aren't impressive.
From my knowledge of agencies like the Dept. of Energy (formerly ERDA and AEC),
factors of 8-16 (around 10x) are what interest people. Keyword: EXPECTATION:
at roughly 3 MIPS, the CDC 6600 appeared about 50x faster than its predecessor,
the ERA/UNIVAC 1604. WE DO NOT SEE THIS DEGREE OF GAIN CONSISTENTLY.
With that reference, we proceed.

Clearly, smaller speed ups (percentage improvements or 2-4x) are useful
to some users, but this is illustrative of the nature of Super-scales.
Traditional computer science teaches about the time-space trade-off
in computation. SUPERCOMPUTING DOESN'T ALLOW TRADEOFFS.
We must distinguish between
ABSOLUTE performance (typically measured by wall-clock time)
RELATIVE performance (normalized or scaled percentage (%))
If you have a problem and you can't trade one for the other,
then it MIGHT be a supercomputing problem.
If the run time becomes either too small or too long, it might not be super
anymore. The definition is a moving wave.
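
A minimal sketch of the distinction, using made-up timings (nothing here
comes from any real machine):

    /* Absolute performance is the wall-clock time itself; relative
     * performance is a ratio against a baseline run. Hypothetical numbers. */
    #include <stdio.h>

    int main(void)
    {
        double t_baseline = 480.0;  /* assumed serial wall-clock time, seconds */
        double t_parallel =  60.0;  /* assumed time on 16 processors, seconds  */
        int    nproc      = 16;

        double speedup    = t_baseline / t_parallel;  /* relative: 8x here */
        double efficiency = speedup / nproc;          /* 0.5, i.e. 50%     */

        printf("absolute: %.0f s, speedup: %.1fx, efficiency: %.0f%%\n",
               t_parallel, speedup, efficiency * 100.0);
        return 0;
    }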

Problem scale: space: problems O(n^2) and O(n^3) are common. O(n^4),
from things like homogeneous coordinate systems, is increasingly common.
Remember: adding processors buys at best an O(n) factor against the growth above.

Problem scale: time: O(n^3) and greater, through NP-complete problems.
One popular FAQ line (sci.math) is proving P == NP.
That won't be covered here. (E.g., the Cray-XXX [choose a model number]
can execute an infinite loop in 10 seconds [some finite figure].
Yes, people have posted that joke here.)

Processors are typically scaled (added) at O(n) or at best O(n^2).
Any improvements must be viewed realistically, bordering on the skeptical.
This is why claims of superlinear speed up (properly super-unitary) should
be viewed with great skepticism. Clearly, people working in this area
need the proverbial pat on the back, but giddy claims only serve to hurt
the field in the long run.
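
A minimal sanity-check sketch (my own illustration, hypothetical timings):
a reported speedup greater than the processor count is "super-unitary" and
usually means the baseline, not the parallel code, deserves a second look.

    /* Flag super-unitary speedups. Hypothetical timings, not a benchmark. */
    #include <stdio.h>

    int main(void)
    {
        double t1 = 500.0, tp = 28.0;  /* assumed serial and parallel times (s) */
        int    p  = 16;                /* processor count */

        double s = t1 / tp;            /* ~17.9x on 16 processors */
        printf("speedup %.1fx on %d processors%s\n", s, p,
               s > p ? "  <-- super-unitary: recheck the baseline" : "");
        return 0;
    }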

Nothing like showing 2-D computational results,
when end-user customers work on 3-D problems.

Additionally, people tend to assume synchronous systems.
Asynchronous systems are even more "fun."

Let's bring cost into the discussion:
Since the 1970s, it has been realized that complete processor or memory
connectivity (the typical example given is a cross-bar) scales as O(n^2)
interconnections.
Over time, various multistage interconnection networks (MINs)
have scaled this down to variations around O(n ln n)
[interpreted: this is still more than O(n)].
This contrasts with the perceived dropping cost of electronics
(semiconductor substrate). See the Wulf quote about CPUs on an earlier panel.
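
A rough back-of-the-envelope sketch of that growth (illustrative only; the
numbers are made up and real networks differ in their constants):

    /* Crossbar crosspoints grow as n^2; a multistage network (butterfly/
     * omega style) grows roughly as n*log2(n). Both still outgrow O(n). */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        for (int n = 4; n <= 1024; n *= 4) {
            double crossbar   = (double)n * n;            /* O(n^2)     */
            double multistage = n * (log(n) / log(2.0));  /* ~O(n lg n) */
            printf("n=%4d  crossbar=%8.0f  multistage=%7.0f\n",
                   n, crossbar, multistage);
        }
        return 0;
    }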

This is not a problem for small scale (4 to 8-16 processors,
aren't you glad you got those numbers?) on a bus, but investigators
(users) want more power than that. It is VERY hard to justify these
non-linear costs to people like Congress:
"You mean I pay for 8 processors and I get 4x the performance?
You have some serious explaining to do son."

This brings up the superlinear speed up topic (more properly called
"superunitary speed up"). That is another panel.


What are some of the problems?
----------------------------------------

The technical problems interact with some of the economic/political problems.

First comes consistency. You take it for granted. (Determinism)

Say to yourself,
asymmetric
boundary conditions
exception handling

consistency
starvation
deadlock (a minimal sketch follows this list)

state of the art limits of semiconductors
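
As one concrete illustration of the list above, here is the classic two-lock
deadlock in POSIX threads. This is a deliberately broken toy of my own, not
anyone's production code; the usual cure is to impose a single global lock
ordering.

    /* Toy deadlock: two threads take the same two locks in opposite order.
     * With unlucky timing each holds one lock and waits forever for the other.
     * Compile with: cc -pthread deadlock.c */
    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t a = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t b = PTHREAD_MUTEX_INITIALIZER;

    static void *worker1(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&a);
        pthread_mutex_lock(&b);      /* may wait on worker2 forever */
        pthread_mutex_unlock(&b);
        pthread_mutex_unlock(&a);
        return NULL;
    }

    static void *worker2(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&b);      /* opposite order: the bug */
        pthread_mutex_lock(&a);
        pthread_mutex_unlock(&a);
        pthread_mutex_unlock(&b);
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker1, NULL);
        pthread_create(&t2, NULL, worker2, NULL);
        pthread_join(t1, NULL);      /* may never return */
        pthread_join(t2, NULL);
        puts("no deadlock this time");
        return 0;
    }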

If you are not in the computing industry, you might be confused by
time scales. Silicon Valley does not operate on the same wavelengths
or time scales as other parts of the world.
It is estimated that the US Government takes an average of 430 days (1990)
to purchase a supercomputer. The typical memory chip has a useful commercial
lifetime of two years before it is succeeded by better technology.

Fields like astronomy and glaciology may take a year or two to referee
some papers.
Refereed papers in computing are frequently the exception (and often obsolete
by the time they appear); seminars and conferences tend to carry more weight
(including personal followup). The speed at which some ideas are discarded
can be particularly fast.
Many parts of the computer community tend to assume their users' environments
behave very much like their own. This is usually not the case.
This is why I value my contacts outside the computer industry.


A funny relationship exists.
Traditional science has been characterized by theory and experiment.
In the late 1980s, several key Nobel Laureates (remember that there are
no Nobel Prizes for Math or Computing), starting with people
like Ken Wilson and continuing the tradition with Larry Smarr,
have argued for a third branch: computational science.

The silent majority in many sciences sometimes rebuts this, saying:
any field which has to use 'science' in its name isn't one.
[R. P. Feynman, Lectures on Computation]
Students should be particularly aware of this when they enter academic
mine fields. I first encountered it in my freshman year.
To quote Don Knuth:
Sometimes a name is very important.
Who's Don Knuth?
Most noted computer scientist and author of
The Art of Computer Programming (controversial title for
an incomplete book (3 vols. of 7 written)) as well as the TeX
typesetting system and the Metafont font design system.
D. E. Knuth
"Algorithmic Thinking and Mathematical Thinking"
Amer. Mathematical Monthly
March 1985
Differences:
Mathematical Thinking: Does not have the concept of Cost associated with it
[along with consequence and dependency].
Algorithmic Thinking: Does have cost associated with operations.
Etc.
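
A tiny illustration of the cost distinction (my example, not Knuth's):
mathematically (A*B)*x equals A*(B*x), but algorithmically one groups the
work as a matrix-matrix product and the other as two matrix-vector products.

    /* Operation-count sketch for n x n matrices and an n-vector.
     * (A*B)*x costs ~2*n^3 flops (dominated by forming A*B);
     * A*(B*x) costs ~4*n^2 flops (two matrix-vector products). */
    #include <stdio.h>

    int main(void)
    {
        double n = 1000.0;                 /* assumed matrix order */
        double mat_first = 2.0 * n * n * n;
        double vec_first = 4.0 * n * n;
        printf("(A*B)*x ~ %.1e flops,  A*(B*x) ~ %.1e flops\n",
               mat_first, vec_first);
        return 0;
    }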

The relationship between computer manufacturers and computer users
was characterized in 1978 and independently later by others:
"The VAX isn't a computer, it's marijuana." --GAM
"You guys are like drug dealers. First, you hook your users
[see, what fields use the term "user:" drugs and computers].
Then you addict them to the need for software. Then, just when
you think it's over, you need to do upgrades. Then you need to buy
a new machine. What a racket!"
--A noted geophysicist.


A trend?
--------

While one of the early justifications for the ARPAnet was promoting
access to scarce resources like supercomputers, a trend is appearing.
Quality organizations and people are taking a slightly stealthier
approach. At one time, an e-mail address on a business card or in the
employment section of a journal was a sign of advanced networking access
("Hey, we're hot stuff"). This is no more.

Knuth publicly announced that he was leaving email once and for all.
He did. The hot CS universities like CMU no longer advertise their email
address: you either know implicitly how to reach them, or you can remain
clueless. People are drowning in data.


Computer Industry and Education
-------------------------------

What do computer scientists do? Are they JUST programmers?
-----------------------------------------------------------


The computer industry is currently unique in the breadth of generality
with which its tools can be used. People distinguished in their fields
approach some computer people and get cut down to size when they find
themselves competing against other computing interests.
In contrast to some disciplines where careers only begin after a PhD is
reached, computing is distinguished by its youth.
While many computer scientists have PhDs,
it is notable that quite a few entrepreneurs (in contrast to pioneers)
have not even completed college. This lack of formal education,
and sometimes large amounts of "new" money (not just Bill Gates or
Steve Jobs or Steve Wozniak [although he went back and finished his BSEE]),
also tends to upset "old" money, rust-belt types of interests.

Where does an 800 lb gorilla sit?
Anywhere he wants. Where did the joke appear?
The New York Times Magazine in an article about Microsoft.
What does Microsoft care about supercomputers? Nothing.
Is that good? It depends on whom you talk with.



How does one characterize, to other sciences, what computer scientists do?

Apparently the usual classic breakdown includes:

Basic Programming and computation theory
Programming languages, compilers, data structures
Numerical methods and analysis
Operating systems, communications/networks
Database management systems
Architecture [bordering on electronics engineering]

Other topics
Software engineering
Computer graphics
It is incredibly easy to insult people. There, see I just did.
It turned out that the first breakdown in fact insulted
the second community.


See "Newt and the Apple" [as in "Sir Isaac" and "computer," not Gingrich].


Urgency
-------

See the Grand Challenge panel.



The Economic Model
------------------

Dollar value:
Supercomputing is clearly a business, but it is a very small business
(in dollar volume, roughly akin to the disposable diaper market).
But it's like research: it has the POTENTIAL for great consequences
in the future.

Specialty markets:
Signal and image processing
Flight simulation
Minisupercomputers: (basically as defunct as minicomputers)
Is the customer always right? Sometimes. But not always.


Vertically integrated manufacturers/markets.
They have their own fab-lines.

The customer? Industry (market force?)? Academia?


Emerging applications.
----------------------

Entertainment.

Special long term FAQ:
----------------------
The Entertainment Industry
--------------------------

The entertainment industry is frequently cited by naive readers
as a direction for funding parallel/supercomputing research and development.
Don't bother citing it.
It's not called show BUSINESS for nothing. It makes no more sense to do this
than to cite them as representative of the building/construction industry.
This is why movie companies make sets. The entertainment industry is largely
not interested in research and development (some limited research
(market research) is carried on by the larger companies);
their business is to make a buck for their investors.
The majority of film companies are not interested in long-term futures, nor in
objects (e.g., props, including computers) hanging around after production is completed.
Yes, some small amount of graphics research gets carried on at companies
like Lucasfilm and Pixar.

One FAQ concerns the use of supercomputers by specific companies:
1) No, Lucasfilm, Ltd. never bought a Cray. CRI offered the first Cray-1
for less than $1M (one dollar less, to be specific); they declined the offer
after Lucas saw the yearly maintenance cost [if you had to ask,
you could not afford it].
2) Yes, sure, Crays have been used in film generation:
the first was the short film "Andre and Wally B."
(unvectorized C on a COS system) with the help of about
20 Apollo workstations. Also MAGI (Mathematical Applications Group, Inc.)
and III (I^3, Information International, Inc.), Whitney/Demos Productions
(TRON). Some of this has been covered in some comp.graphics FAQs.
Yes, W/DP had a Connection Machine. But this is atypical.

Digital Productions had a Cray, although they couldn't keep it busy with
animation work and had to rent time out. I'm pretty sure W/Demos
postdated both DP and TRON. III did not, AFAIK, have a Cray;
they had the Foonly (about 1/5 of a Cray-1), which Omnibus inherited.


The nature of movie archiving is for specific purposes (i.e., residuals,
not the floating point kind).

This may change now that Danny Hillis is at Walt Disney.


Traditional applications.
-------------------------

Military
Artillery trajectory calculations

Cryptography
Nuclear calculations
hydrodynamics (including changes of state)



Military

Trajectory problems can all largely be done via small portable laptops.
Current military uses of high performance computers include simulation
and battle management software.

Cryptography

One of the most secretive uses of computers: Alan Turing cracking the Enigma.
US computer nerds, I mean citizens, are recommended to visit and carefully
play with the functioning (carefully!) Enigma at the
National Cryptologic Museum on the Washington-Baltimore Parkway.
This box is why we have computers. It would be a shame to break this
one machine or place it in safe storage (under glass). Treat with care.
Free. An X-MP/24 is also on display, as is a Storage Tek silo.
sci.crypt. (preferably research).


Nuclear calculations

The realm of the real quantum world (quantum as in small, in case you
have "quantum leaps" in the brain). This is the world of radiative transfer,
neutron and photon transport, shielding, special materials, etc.
Some codes (from reactor design) are clearly open, but many other codes are
classified. This is why we have supercomputers. Supercomputers
are generally credited with the push to stop above-ground nuclear testing and
the hazardous effects of accumulating fallout in human tissues.

I cannot recommend the general sci.physics.* news group due to the
poor signal-to-noise ratio, but don't take my word for it.


Computer Science: Math vs. EE
-----------------------------
Classic battle between
theory and application
CS clearly is a new science.
CS Depts. are frequently insulted by the claim they are basically a
"service science." Mathematics might also be claimed as a "service science."

The first parallel machines and supercomputers were constructed before
the acronyms PRAM and SIMD were in common use. Do not let theory fool you.

Personally, I did my undergrad work in math at a school before it had
a CS Dept. I watched the Math and EE Depts. fight for CS control.
The Math Dept. wanted the power and the money, but was not interested
in the intellectual discipline; the EE Dept. basically (not completely) won.
Math departments do win on other campuses.

Either way, CS Depts. create some grudging animosity between Math and EE.
You can't win.

Can you distinguish between computer engineering applications and
computer science research?


So what should education be doing?
----------------------------------
(Teaching and research)

The question is frequently asked in some circles: what should the curriculum cover?
Architecture: a basic taxonomic overview of differences
Things to make machines go way faster.
Algorithms
Things to make machines go way faster.
Special problems unique to size and speed
Checkpointing and recovery

Computational Science


The acceptance of new ideas is somewhat difficult to describe.
It is not enough to have a new idea. The tale goes that Cray would hear a
new idea and nod his head.


User groups/professional societies
==================================

This section is provided to contrast the different professional groups.

Publications are one way for one discipline to insult another:

Refereed papers:
Conference papers:
Work-in-progress sessions:
Poster Sessions:
Hallway and meal conversation:

Bibliographic citation:
Largely irrelevant; the field either doesn't care, or it has minor significance.
The field is diverse: some areas are sensitive, other areas are not.
Potential land mines:
Authors by last name alphabetic order
Authors by order of importance to work
Authors by first name initials
Authors by full name


ACM
IEEE/Computer Society [CS]

Refereed papers: The Transactions [of ...] are the highly regarded journals.
Conference papers: COMPCON
Work-in-progress sessions: COMPCON: none.
Poster Sessions: COMPCON: none.
Hallway and meal conversation:

Uniforum
Technical Committee on supercomputing

The technical community tends to have an interesting division of labor or
split. This is probably also due to the heavy presence of the US on the net
and the use of US machines in other countries. Is computing a mathematical
science or electrical engineering? The dominance of the latter is
more pronounced than many are willing to admit. This is why we look
at professional societies.


ACM: Association for Computing Machinery
Adm. Grace Hopper's Society

The ACM has SIGs (Special Interest Groups), and it had TIGs
(Technical Interest Groups), but they died out in favor of SIGs.
The two major SIGs for parallel computing, SIGARCH and SIGMETRICS, appear below.

Refereed papers: Communications of the ACM is the flagship (monthly).
Excellent papers in the quarterly Computing Surveys.
Many excellent Transactions journals.
Conference papers: The ACM had a yearly National Conference; it died.
The real work gets done in the SIGs.
Work-in-progress sessions:
Poster Sessions: Conference dependent
Hallway and meal conversation:

SIGARCH
SIGMETRICS

Refereed papers:
Conference papers:
Work-in-progress sessions: To me, perhaps some of the most fun sessions I have
ever attended.
Poster Sessions:
Hallway and meal conversation:


Additionally, other SIGs have peripheral interest like SIGGRAPH,
a SIG which is nearly 10 times the size of the rest of the ACM combined.
Planned years in advance. A separate Conference staff exists.
30-40K attendees are common.


What is/was SIGBIG?
===================

SIGBIG (once SICBIG) was a Bay Area local special interest committee (group)
of the Association for Computing Machinery. It was founded by Mary Fowler
(then with Technology Development of California (TDC)/Zero-One Systems/Sterling
Software, running the ILLIAC IV, then the Cray-1S and the Cray X-MP/22/48)
at the Ames Research Center, with the limited help and encouragement of
Marcie Smith (NASA Ames, Division Chief (ret.)), Sid Fernbach (CDC/LLNL), and
George Michael (LLNL). SIGBIG complemented the ACM's much larger, formal
SIGSMALL. SIGBIG had no national/international complement like SIGSMALL,
but was a SIC (later SIG) of the San Francisco and Peninsula Chapters of
the ACM.

SIGBIG's main feature was free monthly seminars on various topics and
discussions on supercomputing and other aspects of high performance computing.
The technical topics included almost every new machine to be released:
from the Elxsi to the Cray-2, surveys, algorithms, software (OS) wars,
systolic computing, etc.

My role was to provide technical support and even take over the organization
and run it if need be. I personally decided against this due to the
political climate, but I did take over and run a very successful Bay Area
ACM/SIGGRAPH organization which did include parallel computing and
supercomputing (with SIGBIG) in its program.

SIGBIG failed to reach critical mass as a professional organization and
passed quietly away in the late 1980s. Seminar announcements were posted
to net.arch and later comp.arch for a while. Unfortunately, SIGBIG did not
get the support it needed. Mary now supports computing for the Mayor's
Office of Oakland, CA on issues of handicapped computing.

TDC became Zero-One and was bought out by Sterling Software.
It was the first customer of Convex Computer.
Some of the premise of this work was flawed,
but the hardware and Convex software were good and very impressive for the time
(the fastest VAX-11 at the time was the 785).

Bay Area ACM/SIGGRAPH was formally disbanded in 1992 and a new Silicon Valley
ACM/SIGGRAPH was formed in 1993. It may take up some parallel computing/
supercomputing. TBD.

Special mention (credit): Bence Gerber (CRI), Simon Fok (TDC/O-1),
Ken Stevens and Cathy Schulbach, Cindy Weeks, Diane Smith, Ron Levine.
And probably many others who deserve mention.


IEEE: Institute of Electrical and Electronics Engineers
The hardball boys


SIAM: Society for Industrial and Applied Mathematics


AMS: American Mathematical Society (largely theory)


AAAI
Do we have "thinking" machines yet?
This is where large portions of the LISP community reside.
comp.ai.*.


AIAA: American Institute of Aeronautics and Astronautics
Many meetings don't produce proceedings. You buy the papers
as you see fit.
What's nice is that test pilots don't like to write extensive reports
(hating bureaucracy), so a minimum concession/incentive is the optional paper.

It has been said that "Only Pulliam could get away with giving a talk
without a suit and tie."


AIP/APS, etc.: American Institute of Physics, American Physical Society

Characterized by E. Feigenbaum as:
"The polo players of science....."
Shiny new building.
Big money folk.


ACS: American Chemical Society
Chemists are frequently regarded as second-class physicists in many
circles. At least, they "rate."


AGU: American Geophysical Union
Originally thought to have lots of power due to the Arab oil crisis.
Industries struggling to stay alive.
Accused of using benchmark time to accomplish real runs without paying.
You don't have to wear ties at their meetings (unless maybe you
are the paper presenter).
Very down to earth people.


God, help the biologists. 8^)
Genomes
To quote a friend: Two major communities exist:
the Molecular people and everyone else
(systematic, ecosystem, physio, etc.)

Of the molecular crowd:
It appears that the pharmaceutical drug companies are into supercomputing,
but the genetic crowd isn't.


Envision a spherical cow
------------------------

First we start with overly simple assumptions.
Then we throw even numbers out.
Then we start scaling: 1-dimension, 2-dimension, 3-dimension, 4....
It grows fast.
Then we throw symmetry out the window (asymmetric). See irregular grids.
Then we vary time. See time-varying problems.
Beware of keywords like sparse or dense.
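
A back-of-the-envelope sketch of how fast it grows, assuming (my own
illustrative number) a modest 1000 grid points per dimension:

    /* Grid points at 1000 points per axis: 1-D ~1e3, 2-D ~1e6,
     * 3-D ~1e9, 4-D (e.g., 3-D space plus time) ~1e12. */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double points_per_axis = 1000.0;   /* assumed resolution */
        for (int d = 1; d <= 4; d++)
            printf("%d-D grid: %.0e points\n", d, pow(points_per_axis, (double)d));
        return 0;
    }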


AAAS: American Association for the Advancement of Science

The American Association for the Advancement of Science (AAAS) is relatively
weak when it comes to computing in general. Every so often Science
(which has a Web page) has a special issue on computing.


Did you hear the one about the engineer during the French Revolution?



Suggested readings
------------------

This is a tricky issue. I personally recommend searching the parallelism
biblio, but I'm working on this FAQ.

Much of the above is covered more tactfully, if less completely, in

Computing the future: a broader agenda for computer science and engineering /
Juris Hartmanis and Herbert Lin, editors ;
Committee to Assess the Scope and Direction of Computer Science
and Technology, Computer Science and Telecommunications Board,
Commission on Physical Sciences, Mathematics, and Applications,
National Research Council. Washington, D.C. : National Academy Press, 1992

I recommend an obscure but interesting technical report by Rob Pike of AT&T
Bell Labs, published in a CERN conference proceedings and cited in the report
above, entitled:
Computer Science versus Physics.
I hope to republish this article shortly.

The typical comment is that many of the textbooks gloss over some topics
so quickly that they offer little practical information.

%A Kevin Dowd
%T High Performance Computing:
RISC Architectures, Optimization, & Benchmarks
%S Nutshell Books
%I O'Reilly & Associates, Inc.
%C Sebastopol, CA
%D June 1993
%K book, text,
%X I. Modern Computer Architectures
Ch. 1 What is High Performance Computing?
Ch. 2 RISC Computers
Ch. 3 Memory
II. Porting and Tuning Software
Ch. 4 What an Optimizing Compiler Does
Ch. 5 Clarity
Ch. 6 Finding Porting Problems
Ch. 7 Timing and Profiling
Ch. 8 Understanding Parallelism
Ch. 9 Eliminating Clutter
Ch. 10 Loop Optimizations
Ch. 11 Memory Reference Optimizations
Ch. 12 Language Support for Performance
III. Evaluating Performance
Ch. 13 Industry Benchmarks
Ch. 14 Running your own benchmarks
IV. Parallel Computing
Ch. 15 Large Scale Parallel Computing
Ch. 16 Shared Memory Multiprocessors
%X What?! Nothing about "massive parallelism?"

Articles to parallel@ctc.com (Administrative: bigrigg@ctc.com)
Archive: http://www.hensa.ac.uk/parallel/internet/usenet/comp.parallel