Parallel Programming (3:1)

Instructor: Sathish Vadhiyar

Meeting Hours: 9:30-11:00 AM; Tuesday, Thursday; Room 202, SERC

The objective of this course is to give you some level of confidence in parallel programming techniques, algorithms and tools. At the end of the course, you should (we hope) be in a position to apply parallelization to your project areas and beyond, and to explore new avenues of research in parallel programming. The course covers parallel programming tools, constructs, models, algorithms, parallel matrix computations, parallel programming optimizations, scientific applications and parallel system software. MPI, OpenMP and CUDA will be covered.

Class

  1. Abhijith J
  2. Bhatt Shreyas Atul
  3. Bikash Chandra Sahoo
  4. Kartik Kharbanda
  5. Manjunath Hegde
  6. Meesala Raviteja
  7. Navchetan Awasthi
  8. Nikhil P Ranjanikar
  9. Nikhilesh Meghwal
  10. Pragati Sahebrao Chopade
  11. Satish Kumar
  12. Suguna M
  13. Tanmoy Mahajan
  14. Vijay Kumar M
  15. Vineetha Kondameedi
  16. Sabyasachi Sahoo
  17. Shilpa Babalad
  18. Ankit Kumar
  19. Akhil Gupta

Additional Reading

Reading material, by topic:

Prerequisites (Introduction, MPI, OpenMP)
  • Introduction: Grama et al., Sections 2.4, 3.1, 3.5, 5.1, 5.6
  • MPI-1: online tutorial "MPI: The Complete Reference" (Google for it)
  • OpenMP: lecture slides, and the LLNL OpenMP tutorial: http://www.llnl.gov/computing/tutorials/openMP
  • See Dr. Yogesh's HPC course slides for more details
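
To check that the MPI prerequisite is in place, here is a minimal sketch of the structure every MPI program in this course shares: initialize, query rank and size, communicate, finalize. The file name is ours; we assume an installation that provides mpicc and mpirun.

    /* hello_mpi.c - minimal MPI skeleton: init, rank/size query, one collective.
     * Build and run: mpicc hello_mpi.c -o hello && mpirun -np 4 ./hello */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, sum;

        MPI_Init(&argc, &argv);                  /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* my id in [0, size) */
        MPI_Comm_size(MPI_COMM_WORLD, &size);    /* number of processes */

        /* Reduce each process's rank to a global sum on process 0. */
        MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("%d processes, sum of ranks = %d\n", size, sum);

        MPI_Finalize();                          /* shut down cleanly */
        return 0;
    }
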
Parallel Programming tools/languages/models (MPI collective communication implementations; MPI communicator groups and process topologies; CUDA basics; PRAM algorithms)
  • MPI-1: online tutorial "MPI: The Complete Reference" (Google for it)
  • Collective communications: lecture slides, and the paper "Optimization of Collective Communication Operations in MPICH" by Thakur, Rabenseifner and Gropp, IJHPCA 2005 (Google for it)
  • PRAM algorithms: book "Parallel Computing: Theory and Practice" by Michael J. Quinn, pages 25-32, 40-42, 256. Available with me.
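
To give a flavor of what the Thakur et al. paper optimizes, below is a sketch of one classic MPI_Bcast algorithm, a binomial tree built from point-to-point calls. The function name my_bcast is ours, and the root is fixed at rank 0 for brevity; production MPI_Bcast implementations switch among several such algorithms by message size, as the paper describes.

    /* Binomial-tree broadcast from rank 0, built on point-to-point calls.
     * In round k, every rank below 2^k forwards the data to rank + 2^k,
     * so the broadcast completes in ceil(log2(size)) rounds. */
    #include <mpi.h>

    static void my_bcast(void *buf, int count, MPI_Datatype type, MPI_Comm comm)
    {
        int rank, size;
        MPI_Comm_rank(comm, &rank);
        MPI_Comm_size(comm, &size);

        for (int step = 1; step < size; step <<= 1) {
            if (rank < step && rank + step < size)
                MPI_Send(buf, count, type, rank + step, 0, comm);
            else if (rank >= step && rank < 2 * step)
                MPI_Recv(buf, count, type, rank - step, 0, comm,
                         MPI_STATUS_IGNORE);
        }
    }
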
Parallel Algorithms (list ranking and parallel prefix; sorting; graph algorithms; FFT)
  • List ranking: paper "Optimization of Linked List Prefix Computations on Multithreaded GPUs Using CUDA" (Section III)
  • Sorting
    • Book: Introduction to Parallel Computing by Grama et al., Sections 9.2.1, 9.2.2, 9.5, 9.6.2
    • Paper: "On the versatility of parallel sorting by regular sampling" by Li et al., Parallel Computing, 1993 (pages 1-6)
    • Paper: "Parallel sorting by regular sampling" by Shi and Schaeffer, JPDC, 1992 (pages 2-4)
  • Graph algorithms
    • Paper: "A Scalable Distributed Parallel Breadth-First Search Algorithm on BlueGene/L" by Yoo et al., SC 2005 (pages 1-7)
    • Paper: "Accelerating large graph algorithms on the GPU using CUDA" by Harish and Narayanan, HiPC 2007 (pages 5-8)
    • Book: Introduction to Parallel Computing by Grama et al., Sections 10.2-10.4 and 11.4.1-11.4.6
  • FFT: the FFT chapter in the Introduction to Parallel Computing book
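
The parallel prefix operation that underlies list ranking can be tried out directly with MPI's built-in scan. A small sketch with one value per process follows; for a block-distributed array, scan the per-process block sums the same way, then add each process's offset to its local elements.

    /* Inclusive prefix sum across processes with MPI_Scan: rank i
     * receives x_0 + x_1 + ... + x_i. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, x, prefix;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        x = rank + 1;             /* example local value: 1, 2, 3, ... */
        MPI_Scan(&x, &prefix, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

        printf("rank %d: inclusive prefix = %d\n", rank, prefix);
        MPI_Finalize();
        return 0;
    }
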
Matrix computations (dense and sparse)
  • Dense linear algebra
    • Lecture slides
    • Paper: "Towards Dense Linear Algebra for Hybrid Accelerated Manycore Systems", Parallel Computing, 2010 (Section 4.1)
  • Sparse linear algebra
    • Sparse matrix-vector multiplication: paper "Efficient sparse matrix-vector multiplication on cache-based GPUs" by Reguly and Giles, InPar 2012
    • For Cholesky factorization and the subsequent steps, the sources below (you can get these papers from me and take photocopies):
      • "Parallel algorithms for sparse linear systems" by Heath, Ng and Peyton
      • "Reordering sparse matrices for parallel elimination" by Liu
      • "A parallel graph partitioning algorithm for a message-passing multiprocessor" by Gilbert and Zmijewski
      • "Task scheduling for parallel sparse Cholesky factorization" by Geist and Ng
      • Lecture slides
    • General steps of sparse matrix factorization: Heath, Ng and Peyton, pages 420-429
    • Parallel ordering
      • Heath, Ng and Peyton, pages 429-435, up to Kernighan-Lin
      • Liu, pages 75 and 89 (you can read the other pages, on reduction of elimination-tree heights, if interested)
      • For Kernighan-Lin ordering: Gilbert and Zmijewski, pages 427-433 and 437-440, and your lecture slides
    • For mapping: Heath, Ng and Peyton, pages 437-439, Figures 9 and 10; Geist and Ng, Sections 3 and 4
    • For numerical factorization: Heath, Ng and Peyton, pages 442-450
    • Maximal independent sets (Luby's algorithm): Introduction to Parallel Computing book
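
As a concrete anchor for the sparse readings, here is a CSR sparse matrix-vector multiply in plain C with an OpenMP pragma. The function and field names are our own convention; the Reguly-Giles paper is essentially about mapping this same loop nest efficiently onto GPU threads and caches.

    /* y = A*x with A stored in Compressed Sparse Row (CSR) form:
     * row_ptr[i] .. row_ptr[i+1]-1 index the nonzeros of row i
     * in val[] (values) and col_idx[] (column indices). */
    #include <omp.h>

    void spmv_csr(int n, const int *row_ptr, const int *col_idx,
                  const double *val, const double *x, double *y)
    {
        /* Rows are independent, so the outer loop parallelizes directly. */
        #pragma omp parallel for schedule(static)
        for (int i = 0; i < n; i++) {
            double sum = 0.0;
            for (int k = row_ptr[i]; k < row_ptr[i + 1]; k++)
                sum += val[k] * x[col_idx[k]];
            y[i] = sum;
        }
    }
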
Advanced Parallel Programming Constructs, Models and Optimizations
  • MPI-IO
  • Parallel I/O optimizations
  • GPU programming: CUDA Optimizations ppt and Advanced CUDA slides
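
For the MPI-IO topic, the sketch below shows the core pattern: every process writes its own block of one shared file with a collective call, at an offset computed from its rank. The file name and block size are illustrative.

    /* Each process writes its block of a shared file with MPI-IO.
     * The collective write lets the library optimize the file access
     * (e.g., two-phase I/O). */
    #include <mpi.h>

    #define N 1024                     /* elements per process (example) */

    int main(int argc, char **argv)
    {
        int rank, buf[N];
        MPI_File fh;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        for (int i = 0; i < N; i++)
            buf[i] = rank;             /* fill the local block */

        MPI_File_open(MPI_COMM_WORLD, "out.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

        /* My block starts rank * N ints into the file. */
        MPI_Offset offset = (MPI_Offset)rank * N * sizeof(int);
        MPI_File_write_at_all(fh, offset, buf, N, MPI_INT, MPI_STATUS_IGNORE);

        MPI_File_close(&fh);
        MPI_Finalize();
        return 0;
    }
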
Scientific Applications (molecular dynamics; Game of Life; mesh applications)
  • Molecular dynamics: paper "A New Parallel Method for Molecular Dynamics Simulation of Macromolecular Systems" by Plimpton and Hendrickson, Sections 2-5
  • Mesh applications
    • Paper: "Multilevel diffusion schemes for repartitioning of adaptive meshes" by Schloegel et al.
    • Paper: "Dynamic repartitioning of adaptively refined meshes" by Schloegel et al.
    • Paper: "Dynamic Octree Load Balancing Using Space-Filling Curves" by Campbell et al., Section 2.5
    • Paper: "Irregularity in multi-dimensional space-filling curves with applications in multimedia databases" by Mokbel and Aref, Section 4
Parallel System Software (scheduling in parallel systems; fault tolerance for large systems)
  • Scheduling
    • Paper: "Backfilling with lookahead to optimize the packing of parallel jobs" by Shmueli and Feitelson, JPDC 2005
    • Paper: "A comparison study of eleven static mapping heuristics for a class of meta-tasks on heterogeneous computing systems" by Tracy Braun et al., HCW 1999
  • Fault tolerance
    • Paper: "An overview of checkpointing in uniprocessor and distributed systems, focusing on implementation and performance" by James Plank

Important - Look at the Rules section, which contains important information on assignment deadlines and policies on plagiarism.

Platforms for Assignments and Project: Tyrone, Fermi

Parallel Profilers

Assignments

  1. Assignment 1 - Jacobi iterations using MPI, and Sparse Matrix Vector Multiplication using CUDA (a sketch of the Jacobi kernel follows this list)

  2. Assignment 2 - PageRank using MPI, illustration of CUDA Optimizations

  3. Assignment 3 - Parallel Nested Dissection
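
For Assignment 1, the heart of an MPI Jacobi solver is a local five-point update plus a global convergence check. One way to structure a sweep is sketched below; the function name is ours, and ghost rows 0 and local+1 are assumed already filled by a halo exchange like the one shown under Scientific Applications.

    /* One Jacobi sweep over `local` owned rows of width `w` (ghost rows
     * already exchanged), followed by a global convergence test. */
    #include <math.h>
    #include <mpi.h>

    double jacobi_sweep(const double *u, double *unew, int local, int w,
                        MPI_Comm comm)
    {
        double local_diff = 0.0, global_diff;

        for (int i = 1; i <= local; i++)
            for (int j = 1; j < w - 1; j++) {
                unew[i * w + j] = 0.25 * (u[(i - 1) * w + j] + u[(i + 1) * w + j]
                                        + u[i * w + j - 1]  + u[i * w + j + 1]);
                local_diff = fmax(local_diff, fabs(unew[i * w + j] - u[i * w + j]));
            }

        /* All processes must agree on when to stop: take the global max. */
        MPI_Allreduce(&local_diff, &global_diff, 1, MPI_DOUBLE, MPI_MAX, comm);
        return global_diff;
    }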

Final Project

The final project has to clearly demonstrate the uniqueness of your work over existing work and show adequate performance improvements. You can work in a team of at most 2 members. It can be in

Sample Projects from Previous Years

Important Assignment Notes

Ethics
  1. Please do not exchange even ideas with your friends, since there is a thin line between exchanging ideas and code that ends up looking the same.
  2. Please do not look up the web or books for solutions.
  3. See Dr. Yogesh's nice writeup on plagiarism policies on his HPC page.
Deadlines

All assignments will be evaluated for a maximum of 10 points. There will be a penalty of 1 point for every additional day taken for submission after the assignment due date.

Thus, you will have to be judicious in deciding when to submit your assignments.

Example

Suppose you have completed 1/2 of the assignment by the due date.

Scenario 1:

You think that it will take one more day to finish 3/4 of the assignment. In this scenario, if you submit by the due date, you will get a maximum score of 5, and if you submit a day later, you will get a maximum score of 6.5 (= 7.5 - 1, a 1-point deduction for the extra day). Thus, you will get a better score if you take the extra day, finish 3/4 of the assignment and then submit.

Scenario 2:

You think that it will take another 3 days to finish 3/4 of the assignment. In this scenario, if you submit by the due date, you will get a maximum score of 5, and if you submit 3 days later, you will get a maximum score of 4.5 (= 7.5 - 3, a 3-point deduction for the three extra days). Thus, you will get a better score by submitting the half-complete assignment on the due date than by submitting the 3/4-complete assignment 3 days later.
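
The rule behind both scenarios reduces to one line of arithmetic; a tiny sketch (function name ours) makes the trade-off explicit:

    /* Maximum attainable score: 10 points times the completed fraction,
     * minus 1 point per extra day past the due date. */
    double max_score(double fraction_complete, int extra_days)
    {
        return 10.0 * fraction_complete - extra_days;
    }

    /* Scenario 1: max_score(0.75, 1) = 6.5 > max_score(0.5, 0) = 5.0 -> wait a day.
     * Scenario 2: max_score(0.75, 3) = 4.5 < max_score(0.5, 0) = 5.0 -> submit now. */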