Parallel Programming
- Instructor: Sathish Vadhiyar
- Course number: DS295
- Credits: 3:1
- Semester: Jan 2025
- Lecture: Tue/Thu 11:30 AM-1:00 PM (First class: Jan 2, 2025, 11:30 AM)
- Office Hours: Fridays, 5-6 PM
- Fill out this survey form
- Microsoft Teams Link
Overview
The objective of this course is to give you working confidence in parallel programming techniques, algorithms, and tools. At the end of the course, you should (we hope) be in a position to apply parallelization to your project areas and beyond, and to explore new avenues of research in parallel programming.
The course covers parallel architectures, parallelization principles, parallel programming standards and tools, parallel algorithms, numerical methods, scientific applications and parallel system software.
Pre-requisites
Basic knowledge of computer systems, data structures and programming, and algorithms.
Grading Scheme
- Sessionals
- Mid-term [1 No.] (February 18) – 20
- Assignments [3 Nos.] – 30
- Assignment 1 due: February 10
- Assignment 2 due: March 20
- Assignment 3 due: April 7
- Terminal
- Literature Study Report (due around mid-April) – 20
- Final project – 30
- Proposal: February 28
- Final presentation: Final exam day
- Final report: the day after the final presentation
Resources
- Introduction to Parallel Computing. Ananth Grama, Anshul Gupta, George Karypis, Vipin Kumar. Publisher: Addison-Wesley. ISBN: 0-201-64865-2. 2003.
- Parallel Computer Architecture: A Hardware/Software Approach. David Culler, Jaswinder Pal Singh. Publisher: Morgan Kaufmann. ISBN: 981-4033-103. 1999.
- Parallel Computing: Theory and Practice. Michael J. Quinn. Publisher: Tata McGraw-Hill. ISBN: 0-07-049546-7. 2002.
- An Introduction to Parallel Programming. Peter S. Pacheco. Publisher: Morgan Kaufmann. ISBN: 978-93-80931-75-3. 2011.
- Online references for OpenMP, MPI, CUDA
- Various publication materials and references that will be posted along with the lecture slides.
Lecture Slides
- Parallel Programming fundamentals
- Parallel programming standards and tools
- Parallel Algorithms
- Searching
- Sorting
- Machine and Deep Learning Algorithms
- Parallel linear algebra and numerical methods
- Matrix multiplication
- Fast Fourier Transforms
- Dense linear algebra
- Sparse linear algebra
- Parallelism in scientific applications
- System software
Reading Materials
For the portions for which no reading materials are given below, the lecture slides above are the only reading material.
- Parallel Programming fundamentals
- Parallel Architecture
- Grama et al. – 2.4
- For GPU architecture, Google for the NVIDIA A100 white paper
- Parallelization Principles
- Grama et al. – 3.1, 3.5, 5.1-5.6
- Book by Culler and Singh – 2.2, 2.3
- PRAM: Book by Michael J Quinn Pages 25-32, 40-42, 256
- Roofline performance model – paper by Williams et al., “Roofline: an insightful visual performance model for multicore architectures,” Communications of the ACM, 2009 (see the formula sketch after the reading table below)
- Parallel Programming standards and tools
- CUDA – Google for NVIDIA CUDA programming guide.
| Topic | Reading Materials |
| --- | --- |
| MPI | |
| Sparse LA | Sparse linear algebra (pdf), Direct linear algebra (pdf) |
| Scientific Applications | DFS and Dynamic Load Balancing (pdf), Mesh applications (pdf), N-Body Simulations (pdf), Bioinformatics (pdf) |
| System Software | |
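As a quick orientation for the roofline paper listed above: the model bounds a kernel's attainable performance by the lesser of the machine's peak compute rate and its peak memory bandwidth times the kernel's operational intensity. A minimal sketch of the core formula (notation follows Williams et al.):

```latex
% Roofline bound: attainable performance P for a kernel with operational
% intensity I (flops per byte of DRAM traffic), on a machine with peak
% compute rate P_peak and peak memory bandwidth B.
P = \min\left( P_{\text{peak}},\; B \times I \right)
```

A kernel is memory-bound when B × I < P_peak, and compute-bound otherwise.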
Assignments
- Turing cluster notes on OpenMP, MPI and CUDA for the assignments
- Assignment 1 –
- Assignment 2 –
- Assignment 3 –
Final Project
The final project must clearly demonstrate the uniqueness of your work over existing work and show adequate performance improvements. You can work in a team of at most 2 members. The project can be in:
- parallelizing well-known algorithms or problems. Examples:
- hybrid executions on both CPU and GPU cores.
- graph algorithms – spanning trees, shortest paths, satisfiability, mesh generation or refinement (e.g., Delaunay)
- sorting, searching
- clustering algorithms
- parallelization of fundamental algorithms you would encounter in an algorithm book. e.g.,
- Introduction to Algorithms. Third Edition. Cormen, Leiserson, Rivest, Stein
- The Design and Analysis of Computer Algorithms. Aho, Hopcroft, Ullman
- Data Structures and Algorithms. Aho, Hopcroft, Ullman
- parallelizing numerical methods. Examples:
- Dense matrix or sparse matrix computations, e.g., Cholesky, QR, inverse, SVD, conjugate gradient, etc.
- Transforms (FFT, wavelet, etc.)
- Any numerical method you would encounter in matrix/numerical methods book. e.g.,
- Introduction to Numerical Analysis. Second Edition. Hildebrand
- Elementary Numerical Analysis. Third Edition. Conte, de Boor
- System software. Examples:
- Techniques for scheduling or mapping parallel tasks to processors to achieve minimum makespan or maximum throughput
- Load balancing techniques for irregular computations
- Automatic techniques for splitting tasks among GPU and CPU cores.
- Automatic data management techniques for GPU cores.
- Proposing genetic programming abstractions
- Fault tolerance
Sample Projects from Previous Years
- Finding maximum clique in hybrid system – pdf
- Parallel hybrid implementation of SVM – pdf
- GPU TSP – pdf
- GPU implementation of implicit Runge-Kutta methods – pdf
- Parallel implementation of shared nearest neighbor clustering algorithm – pdf
- Hybrid implementation of the alternating least squares algorithm – pdf
- Approximate k-nearest neighbor search – pdf
- Incremental Delaunay triangulation – pdf
- k-maximum subarray problem – pdf
- Scale Invariant Feature Transform (SIFT) – pdf
- AKS algorithm for primality proving – pdf
- Betweenness centrality – pdf
Ethics and Rules
Ethics
- Please do not exchange even ideas with your friends, since there is a thin line between exchanging ideas and ending up with code that looks the same.
- Please do not look up web/books for solutions.
- See Dr. Yogesh's nice write-up on plagiarism policies on his HPC page
Deadlines
All assignments will be evaluated out of a maximum of 10 points. There is a penalty of -1 point for every additional day taken beyond the assignment due date.
You will therefore have to be judicious in deciding when to submit your assignments.
Example
Suppose you have completed 1/2 of the assignment by the due date.
Scenario 1:
You think that it will take another day to finish 3/4 of the assignment. In this scenario, if you submit by the due date, you will get a maximum score of 5; if you submit a day after, you will get a maximum score of 6.5 (= 7.5 - 1, -1 for the extra day). Thus, you will get a better score if you take the extra day, finish 3/4 of the assignment, and then submit.
Scenario 2:
You think that it will take another 3 days to finish 3/4 of the assignment. In this scenario, if you submit by the due date, you will get a maximum score of 5; if you submit 3 days after, you will get a maximum score of 4.5 (= 7.5 - 3, -3 for the three extra days). Thus, you will get a better score by submitting the half-complete assignment on the due date than by submitting the 3/4-complete assignment three days later.
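The trade-off above is simple arithmetic; here is a minimal Python sketch (a hypothetical helper, not an official grading tool) that reproduces both scenarios:

```python
# Sketch of the late-submission trade-off described above: the maximum
# achievable score is the completed fraction times 10, minus 1 point
# per day past the due date.
def max_score(completed_fraction: float, days_late: int) -> float:
    """Maximum achievable score out of 10 under the penalty rule."""
    return completed_fraction * 10 - days_late

# Scenario 1: taking one extra day to reach 3/4 completion pays off.
print(max_score(0.5, 0))   # 5.0 -> submit half on the due date
print(max_score(0.75, 1))  # 6.5 -> submit 3/4 one day late (better)

# Scenario 2: taking three extra days to reach 3/4 completion does not.
print(max_score(0.75, 3))  # 4.5 -> worse than the 5.0 for on-time submission
```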