Parallel Programming (3:1)

Instructor: Sathish Vadhiyar

Meeting Hours: 14:00-15:30, Tuesday and Thursday; Room 202, SERC

The objective of this course is to give you confidence in parallel programming techniques, algorithms, and tools. At the end of the course, you should (we hope) be in a position to apply parallelization to your project areas and beyond, and to explore new avenues of research in parallel programming. The course covers parallel programming tools, constructs, models, algorithms, parallel matrix computations, parallel programming optimizations, scientific applications, and parallel system software. MPI, OpenMP, and CUDA will be covered.

Topics and Reading Material
  • Introduction: Grama et al. - Sections 2.4, 3.1, 3.5, 5.1, 5.6
  • MPI-1: Online tutorial "MPI: The Complete Reference". Google for it. (A minimal MPI + OpenMP sketch appears at the end of this list.)
  • OpenMP - Lecture slides, and OpenMP tutorial: http://www.llnl.gov/computing/tutorials/openMP
  • CUDA - lecture slides
  • In addition, follow the "Platform for Assignments" link on my HPC 2015 web page.
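
To give a flavor of the MPI and OpenMP material above, here is a minimal hybrid MPI + OpenMP sketch (illustrative only; it is not taken from the tutorials). Each MPI process spawns a team of OpenMP threads, and every thread reports where it is running:

    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int provided, rank;
        /* Request thread support so OpenMP threads can coexist with MPI */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        #pragma omp parallel
        printf("MPI rank %d, OpenMP thread %d of %d\n",
               rank, omp_get_thread_num(), omp_get_num_threads());
        MPI_Finalize();
        return 0;
    }

Compile with something like "mpicc -fopenmp hello.c" and launch with "mpirun -np 2 ./a.out"; the exact commands depend on the assignment platform.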

  • PRAM algorithms: Book "Parallel Computing: Theory and Practice" by Michael J. Quinn (available with me). Pages 25-32, 40-42, 256
  • Collective Communications: Lecture slides, and Google for the paper "Optimization of Collective Communication Operations in MPICH" by Thakur, Rabenseifner, and Gropp, IJHPCA 2005. (A recursive-doubling sketch appears after this reading list.)
  • Mapping to network topologies: Section 5.1 of "Parallel Computing: Theory and Practice" by Michael J. Quinn.
  • CUDA Optimizations - Chapter 5 of the CUDA Programming Guide, slides 30-34 of the advanced CUDA slides, and the CUDA Occupancy Calculator
  • MPI-2: Online tutorial: http://www-unix.mcs.anl.gov/mpi/mpi-standard/mpi-report-2.0/mpi2-report.htm
  • Parallel I/O Optimizations: Google for the paper: Rajeev Thakur, William Gropp, and Ewing Lusk, "A Case for Using MPI's Derived Datatypes to Improve I/O Performance," in Proceedings of SC98.
  • Parallel Algorithms
    • Divide and Conquer algorithms ppt
      • Solving tridiagonal systems
      • Prefix computations
      • Sample sort
      • FFT ppt
    • Mesh-based algorithm
    • Sort, Search and Graph Algorithms - ppt
      • Radix and histogram sort
      • APSP using Dijkstra's SSSP
  • Tridiagonal systems - lecture slides
  • Prefix computations - lecture slides (a distributed prefix-sum sketch appears after this reading list)
  • Sorting
    • Paper: On the versatility of parallel sorting by regular sampling. Li et al. Parallel Computing. 1993. (Pages 1-6)
    • Paper: Parallel Sorting by regular sampling. Shi and Schaeffer. JPDC 1992. (Pages 2-4)
    • Paper: Highly Scalable Parallel Sorting. Solomonik and Kale. IPDPS 2010. (Pages 1-7)
  • FFT Chapter in "Introduction to Parallel Computing" book
  • APSP: Book: Introduction to Parallel Computing by Grama et al. Sections 10.2-10.4
  • Graph algorithms
    • Lecture slides
    • Paper: A Scalable Distributed Parallel Breadth-First Search Algorithm on BlueGene/L. Yoo et al. SC 2005. (Pages 1-7)
    • Paper: Accelerating large graph algorithms on the GPU using CUDA. Harish and Narayanan. HiPC 2007. (Pages 5-8)
    • DFS: Book: Introduction to Parallel Computing by Grama et al. Sections 11.4.1-11.4.6
    • A parallel graph partitioning algorithm for a message-passing multiprocessor - Gilbert and Zmijewski - pages 427-433, 437-440.
    • Coloring: Maximal independent sets by Luby - "Introduction to Parallel Computing" book
    • Coloring: Paper "Scalable parallel graph coloring algorithms" by Gebremedhin and Manne.
  • Matrix computations
    • Dense Linear Algebra ppt
    • Sparse Linear Algebra ppt
  • Dense Linear Algebra
    • Lecture slides
    • Paper: Towards Dense Linear Algebra for Hybrid Accelerated Manycore Systems. Parallel Computing. 2010 (Section 4.1)
  • Sparse LA
    • Sparse matrix-vector multiplication: Paper: Efficient sparse matrix-vector multiplication on cache-based GPUs. Reguly and Giles. InPar 2012.
    • For Cholesky factorization and subsequent steps:
    • Sources [You can get these papers from me and take photocopies]
      • Parallel Algorithms for sparse linear systems - Heath, Ng and Peyton
      • Reordering sparse matrices for parallel elimination - Liu
      • Task scheduling for parallel sparse Cholesky factorization - Geist and Ng
      • Lecture slides
    • General steps of sparse matrix factorization: Heath, Ng and Peyton - pages 420-429
    • Parallel ordering
      • Heath, Ng and Peyton - pages 429-435, up to Kernighan-Lin
      • Liu - pages 75 and 89 (you can read other pages on reduction of elimination tree heights if interested)
    • For mapping: Heath, Ng and Peyton - 437-439, figures 9 and 10; Geist and Ng - sections 3 and 4
    • For numerical factorization: Heath, Ng and Peyton - 442-450
  • Molecular Dynamics - Paper: "A New Parallel Method for Molecular Dynamics Simulation of Macromolecular Systems" by Plimpton and Hendrickson. Sections 2-5.
  • Mesh applications
    • Paper: "Multilevel diffusion schemes for repartitioning of adaptive meshes" by Schloegel et al.
    • Paper: "Dynamic repartitioning of adaptively refined meshes" by Schloegel et al.
    • Paper: "Dynamic Octree Load Balancing Using Space-filling curves" by Campbell et al. - Section 2.5
    • Paper: "Irregularity in Multi-dimensional space-filling curves with applications in multimedia databases" by Mokbel and Aref - Section 4
  • N-body Simulations
    • The paper "Scalable parallel formulations of the Barnes-Hut method for n-body simulations" by Grama, Kumar and Sameh. In Parallel Computing 24(1998) 797-822.
    • The paper "Load balancing and data locality in adaptive hierarchical N-body methods: Barnes-Hut, Fast Multipole, and Dardiosity" by Singh, Holt, Totsuka, Gupta and Hennessey. In Journal of Parallel and Distributed Computing, 1994.
  • Scheduling
    • Paper: "Backfilling with lookahead to optimize the packing of parallel jobs" by Shmueli and Feitelson. JPDC 2005.
    • Paper: "A comparison study of eleven static mapping heuristics for a class of meta-tasks on heterogeneous computing systems" by Tracy Braun et al., HCW 1999.
  • Fault Tolerance:
    • Paper: "An overview of checkpointing in uniprocessor and distributed systems, focusing on implementation and performance" by James Plank

Important - Look at the Rules section, which contains important information on assignment deadlines and policies on plagiarism.

Platform for Assignments

Parallel Profilers

Assignments

  1. Assignment 1 - Collective communications and coalesced access

  2. Assignment 2 - Sample Sort on GPU

  3. Assignment 3 - MPI PageRank

Final Project

The final project has to clearly demonstrate the uniqueness of your work over existing work and show adequate performance improvements. You can work in a team of at most 2 members. It can be in

Sample Projects from Previous Years

Important Assignment Notes

Ethics
  1. Please do not exchange even ideas with your friends, since there is a thin line between exchanging ideas and code that ends up looking the same.
  2. Please do not look up the web or books for solutions.
  3. See Dr. Yogesh's writeup on plagiarism policies on his HPC page.
Deadlines

All assignments will be evaluated out of a maximum of 10 points. There is a penalty of -1 point for every additional day taken to submit after the assignment due date.

Thus, you will have to be judicious in deciding when to submit your assignments.

Example

Suppose you have completed 1/2 of the assignment by the due date.

Scenario 1:

You think it will take one more day to finish 3/4 of the assignment. In this scenario, if you submit by the due date, you will get a maximum score of 5, and if you submit a day after, you will get a maximum score of 6.5 (= 7.5 - 1, with -1 for the extra day). Thus, you will get a better score if you take the extra day, finish 3/4 of the assignment, and then submit.

Scenario 2:

You think it will take three more days to finish 3/4 of the assignment. In this scenario, if you submit by the due date, you will get a maximum score of 5, and if you submit three days after, you will get a maximum score of 4.5 (= 7.5 - 3, with -1 for each of the three extra days). Thus, you will get a better score by submitting the half-complete assignment by the due date than by submitting the 3/4-complete assignment three days later.
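
The arithmetic in both scenarios follows one formula. Here is a minimal sketch in C; the floor at zero is an assumption, since the rules above only state the -1 per day penalty:

    #include <stdio.h>

    /* Maximum attainable score: 10 * fraction completed, minus 1 per extra day. */
    double max_score(double fraction_completed, int extra_days) {
        double s = 10.0 * fraction_completed - extra_days;
        return s > 0.0 ? s : 0.0;  /* assumed floor of 0 */
    }

    int main(void) {
        printf("%.1f\n", max_score(0.50, 0)); /* 5.0: half done, on time */
        printf("%.1f\n", max_score(0.75, 1)); /* 6.5: Scenario 1 */
        printf("%.1f\n", max_score(0.75, 3)); /* 4.5: Scenario 2 */
        return 0;
    }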