Parallel Programming

Description

This course is dedicated to the main principles, methods, and techniques of parallel programming aimed at solving resource-intensive problems in physics, and in nonlinear optics in particular. The course offers an in-depth treatment of popular parallel programming technologies such as OpenMP and MPI. Practical classes are conducted on a teaching cluster containing 14 Intel Xeon-based processors.

The course culminates in an individual project that draws together all of the material, and it gives students an excellent opportunity for further growth at the forefront of science.

Introduction

Modern scientific and applied research in physics often requires large-scale, resource-intensive computations. The rapid growth in performance of modern computing systems is achieved through parallel processors and multi-core architectures. Although traditional supercomputers with custom architectures deliver a considerable performance boost, their high cost is a significant disadvantage, especially in comparison with cluster systems. Alongside its considerably lower cost, the performance of a cluster system remains quite high across a wide spectrum of tasks. For instance, according to the latest list of the world's most powerful computers (www.top500.org), 81% of the most productive computational systems in the world are clusters built from standard computational nodes. At the same time, a standard computational node has long ceased to be a single-processor machine, and efficient use of a multiprocessor/multi-core architecture for solving a resource-intensive problem depends on the development tools and parallelization methods applied. OpenMP, which is supported by all modern compilers, has in fact become the standard in research-intensive applications.
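
As a concrete illustration, here is a minimal OpenMP sketch in C; the pi-integration example is ours, chosen for brevity, and is not taken from the course materials. A loop over independent iterations is split across threads by a single pragma, and the reduction clause prevents a data race on the accumulator.

    #include <stdio.h>
    #include <omp.h>

    /* Estimate pi by midpoint integration of 4/(1+x^2) over [0,1].
       The iterations are independent, so OpenMP may divide them
       among threads; reduction(+:sum) combines the partial sums. */
    int main(void) {
        const long n = 100000000;        /* number of subintervals */
        const double h = 1.0 / n;
        double sum = 0.0;

        #pragma omp parallel for reduction(+:sum)
        for (long i = 0; i < n; i++) {
            double x = (i + 0.5) * h;    /* midpoint of subinterval i */
            sum += 4.0 / (1.0 + x * x);
        }

        printf("pi ~= %.12f (up to %d threads)\n",
               sum * h, omp_get_max_threads());
        return 0;
    }

Built with, for example, gcc -fopenmp pi.c, the loop runs on all available cores without any further changes to the serial logic.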

The main goal of this course is to motivate physics students to apply "parallel thinking" when working on a computational solution to a problem in physics; the course also aims to equip students with the techniques and skills needed to parallelize tasks. Since not every task can be successfully parallelized, the course pays special attention to the analysis of physics problems and solution algorithms that do not admit the parallel use of several cluster nodes. Practical classes on a teaching cluster and on a modern supercomputer will allow students to acquire the skills of remote use of powerful computing resources, which broadens the scope of a scientific investigation.
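
To make the contrast with the shared-memory model concrete, the sketch below distributes the same toy integration over separate processes with MPI; again this is our illustrative code, not material from the course. The decomposition works only because the iterations are independent: an algorithm whose steps depend on one another cannot be split across cluster nodes this way, which is precisely the kind of limitation analyzed in the course.

    #include <stdio.h>
    #include <mpi.h>

    /* Each process integrates its own share of [0,1]; MPI_Reduce
       then combines the partial sums on rank 0. */
    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const long n = 100000000;
        const double h = 1.0 / n;
        double local = 0.0;

        /* round-robin distribution of iterations over processes */
        for (long i = rank; i < n; i += size) {
            double x = (i + 0.5) * h;
            local += 4.0 / (1.0 + x * x);
        }

        double total = 0.0;
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0,
                   MPI_COMM_WORLD);

        if (rank == 0)
            printf("pi ~= %.12f using %d processes\n", total * h, size);

        MPI_Finalize();
        return 0;
    }

On a cluster such a program is typically compiled with mpicc and launched with a command along the lines of mpirun -np 14 ./pi_mpi; the exact procedure depends on the cluster's job scheduler.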

Acquired knowledge and skills

As a result, by the end of this course students will have acquired knowledge of popular parallel programming technologies, an understanding of modern high-performance computing systems, and the practical skills to work with them.

Education technology

The course materials are available in electronic form. Practical classes are conducted in rooms equipped with modern networked projection equipment.

Software and online resources

Assessment

  • The midterm assessment takes place during the 8th week and is based on the student's current academic progress. The assessment criteria are completed practical tasks, reports, and library-research papers.
  • There is also a weekly assessment, whose criteria are attendance, participation in class, level of preparation for the seminars, and home assignments.

Syllabus

3rd semester

  Week(s)   Section
  1         Overview of parallel computing technologies
  2         Introduction to the architecture of high-performance systems
  3-5       The Linux operating system
  6         Introduction to parallel programming technologies
  7-9       OpenMP technology
  10-12     MPI (Message Passing Interface)
  13-17     Work on an individual task using MPI
  18        Pass/fail exam

4th semester

  Week(s)   Section
  1         Selecting solution methods for the coursework and developing the functional structure of the programme
  2-6       Work on the course project: developing the programme's functional blocks, running tests, checking the interaction between blocks, and demonstrating the basic functionality of the programme in development. Consultations.
  7         Midterm assessment
  8-15      Coursework: debugging and extending functionality. Consultations.
  16        Coursework presentation and defense. Examination.

References

  • Nemnyuguin, S.A., Stesik, O.L. Parallel Programming for Multicore Computation Systems. Saint Petersburg: Petersburg, 2002.
  • Kalitkin, N.N. Calculus of Approximations. Moscow: Nauka, 1978.
  • Voyevodin, V.V. Parallel Structures of Algorithms and Programs. Moscow: OVM at Academy of Sciences USSR, 1987.
  • Bukatov, A.A., Datsuk, V.N., Zhegulo, A.I. Programming Multicore Computation Systems. Rostov-na-Donu: OOO CVVR, 2003.
  • Antonov, A.S. Parallel Programming Using MPI. Moscow: MSU Press, 2004.
  • Korneyev, V.D. Parallel Programming on MPI. Moscow: Regular and Chaotic Dynamics, 2003.
  • Andrews, Gr. A. Foundations of Multithreaded, Parallel, and Distributed Programming. Addison-Wesley, 2000.
  • Bogachyov, K.Y. Basics of Parallel Programming. Moscow: Binom. Laboratory of Knowledge, 2003.
  • Buravlyov, A., Cheldiyev, M., Babyrin, A., et al. "High Performance Scalable Multicore Computation Systems" // Modern Computer-Aided Technologies (CTA), v. 3, pp. 72-77, 2009.
  • Boyde L., Chalut K.J. and Guck J. "Near- and far-field scattering from arbitrary three-dimensional aggregates of coated spheres using parallel computing" // Phys. Rev. E, v. 83, p. 026701, 2011.
  • Fujisawa A., Shimizu A., Itoh K., et al. "Wavelet analyses using parallel computing for plasma turbulence studies." // Phys. Plasmas, v. 17, p. 104503, 2010.
  • Chou, Yu-Ch., Nestinger, S.S., Cheng, H.H. "Ch MPI: Interpretive Parallel Computing in C" // Comput. Sci. Eng., v. 12, p. 54, 2010.
  • Raghunathan S. "Parallel Computing Algorithms and Applications: Scientific Parallel Computing" // Comput. Sci. Eng., v. 9, p. 64, 2007.