DS221: Introduction to Scalable Systems (2018)

Department of Computational and Data Sciences

Introduction to Scalable Systems

  • Instructors: Sathish Vadhiyar (www | email), Yogesh Simmhan (www | email), and Matthew Jacob (www | email)
  • Teaching Assistant: Manasi Tiwari (email)
  • Course number: DS221
  • Credits: 3:1
  • Semester: Aug, 2018
  • Lecture: Tue/Thu 11:30 AM-1:00 PM (first class: Aug 7, 11:30 AM)
  • Lab: TBD
  • Room: CDS 202

[DS221 2017 Website]

Overview

This course covers computer systems topics that are essential for students engaging in computational and data sciences. It introduces architecture, operating systems, and data structures for students who do not have a Computer Science background, and then moves on to more advanced topics: tree and graph data structures, HPC/GPGPU programming, and Big Data platforms.

Some of the topics covered are:

  • Architecture: computer organization; single-core optimizations, including exploiting the cache hierarchy and vectorization; parallel architectures, including multi-core, shared memory, distributed memory, and GPU architectures
  • Algorithms and Data Structures: algorithmic analysis, overview of trees and graphs, algorithmic strategies, concurrent data structures
  • Parallelization Principles: motivation, challenges, metrics, parallelization steps, data distribution, PRAM model
  • Parallel Programming Models and Languages: OpenMP, MPI, CUDA (see the sketch below)
  • Distributed Computing: commodity cluster and cloud computing
  • Distributed Programming: MapReduce/Hadoop model
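
To give a flavor of the shared-memory programming listed above, here is a minimal OpenMP sketch in C; the array size and loop body are arbitrary placeholders, not course material.

      #include <stdio.h>
      #include <omp.h>

      #define N 1000000   /* illustrative array size */

      int main(void) {
          static double a[N];
          double sum = 0.0;

          /* Each thread initializes a chunk of the array in parallel. */
          #pragma omp parallel for
          for (int i = 0; i < N; i++)
              a[i] = 0.5 * i;

          /* The reduction clause combines per-thread partial sums safely. */
          #pragma omp parallel for reduction(+:sum)
          for (int i = 0; i < N; i++)
              sum += a[i];

          printf("sum = %f (max threads = %d)\n", sum, omp_get_max_threads());
          return 0;
      }

A sketch like this compiles with gcc -fopenmp, and the number of threads can be controlled through the OMP_NUM_THREADS environment variable.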

This course is a precursor to more advanced courses such as DS 295 (Parallel Programming) and DS 256 (Scalable Systems for Data Science), and it includes topics from the earlier courses DS 286 (Data Structures and Programming) and DS 292 (High Performance Computing).

Pre-requisites

This is an introductory crash course on computer systems, algorithms, HPC, and Big Data platforms, so the pre-requisites are minimal: basic knowledge of computer systems, data structures and programming, and algorithms. However, the course moves at a rapid pace, and students are expected to pick up the required skills quickly through self-learning.

Grading Scheme

  • Sessionals (50 points)
    • 3 assignments – 10 + (15 + 5) = 30 points
    • 2 mid-term exams – 2 x 10 = 20 points
  • Terminal (50 points)
    • 1 assignment – 20 points
    • Final exam – 30 points

Resources

  • Parallel Computer Architecture: A Hardware/Software Approach. David Culler and Jaswinder Pal Singh, with Anoop Gupta. Publisher: Morgan Kaufmann. ISBN: 981-4033-103. 1999.
  • Parallel Computing: Theory and Practice. Michael J. Quinn. Publisher: Tata McGraw-Hill. ISBN: 0-07-049546-7. 2002.
  • Computer Systems: A Programmer’s Perspective. Randal E. Bryant and David R. O’Hallaron. Publisher: Pearson Education. ISBN: 81-297-0026-3. 2003.
  • Introduction to Parallel Computing. Ananth Grama, Anshul Gupta, George Karypis, Vipin Kumar. Publisher: Addison-Wesley. ISBN: 0-201-64865-2. 2003.
  • An Introduction to Parallel Programming. Peter S. Pacheco. Publisher: Morgan Kaufmann. ISBN: 978-93-80931-75-3. 2011.
  • Online references for OpenMP, MPI, CUDA
  • Data Structures, Algorithms, and Applications in C++. 2nd Edition. Sartaj Sahni.
  • Lecture slides

Tentative Schedule

NOTE: In addition to these lectures, there will be several lab sessions and tutorials. Their schedule will be announced in class.

  • Architecture (7 lectures), MJT, starting Aug 7
    • Computer organization including cache hierarchy, locality
    • Assignment 1
  • Midterm Exam 1 (Aug 30)


  • Algorithms and Data Structures (7 lectures), YS, starting Sep 4
    • Algorithmic analysis and Lists
    • Basic Data Structures: Stacks, Queues, Trees
    • Searching: Hashmap, Search trees
    • Fundamentals of graphs
    • Algorithmic Strategies
    • Assignment 2 (posted online on Sep 18, 2018)
  • Big Data Systems (3 lectures), YS
    • Introduction to Big Data, Spark programming model
    • Big Data programming models: Spark Streaming and Giraph
    • Assignment 3
  • Midterm Exam 2 (Oct 16)


  • Parallel architectures pdf
    • Shared and Distributed memory architectures
    • Many-core architectures pdf
  • Parallelization Principles pdf
  • Parallel Programming Models and Languages
    • Shared memory: OpenMP pdf
    • Distributed Memory: MPI – pt2pt, collectives pdf
    • GPUs: CUDA pdf
  • Sample Parallel Algorithms
    • Prefix computations, Sorting, Searching pdf (see the MPI prefix-sum sketch after this schedule)
  • Final Exam (Fri Dec 7, Morning)
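
As a small illustration of how the MPI collectives and prefix computations in the schedule above fit together, the sketch below uses MPI_Scan to compute an inclusive prefix sum across processes; each rank's input value (rank + 1) is an arbitrary placeholder.

      #include <stdio.h>
      #include <mpi.h>

      int main(int argc, char **argv) {
          MPI_Init(&argc, &argv);

          int rank;
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);

          /* Each process contributes one value; rank + 1 is just a placeholder. */
          int local = rank + 1;
          int prefix = 0;

          /* MPI_Scan performs an inclusive prefix sum across ranks:
             rank i receives local_0 + local_1 + ... + local_i. */
          MPI_Scan(&local, &prefix, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

          printf("rank %d: prefix sum = %d\n", rank, prefix);

          MPI_Finalize();
          return 0;
      }

It can be compiled with mpicc and launched with, for example, mpirun -np 4; rank i then prints the sum of the inputs of ranks 0 through i.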

Reading Portions

  • Parallel architecture: Grama et al., Section 2.4; Many-core: Google for the NVIDIA Kepler white paper
  • Parallelization principles: Grama et al., Sections 3.1, 3.5, 5.1-5.6; Culler and Singh, Sections 2.2, 2.3
  • OpenMP tutorial: http://www.llnl.gov/computing/tutorials/openMP
  • MPI online reference: “MPI: The Complete Reference”. Google for it.
  • CUDA: Google for the NVIDIA CUDA Programming Guide.
  • Parallel algorithms
    • Parallel quicksort: the corresponding section in the textbook by Grama et al.
    • BFS
      • Paper: A Scalable Distributed Parallel Breadth-First Search Algorithm on BlueGene/L. Yoo et al. SC 2005. (Pages 1-7)
      • Paper: Accelerating large graph algorithms on the GPU using CUDA. Harish and Narayanan. HiPC 2007. (Pages 5-8)

Assignments

Mid-Term Answer Keys

  • Midterm 1
  • Midterm 2