Learn to Use HPC Systems and Supercomputers (Complete Guide)
3.6 (8 ratings)
77 students enrolled


Learn parallel programming with OpenMP and CUDA, distributed computing with MPI, and how to use HPC cluster systems with Slurm and PBS
Last updated 8/2017
30-Day Money-Back Guarantee
  • 1 hour on-demand video
  • 48 Articles
  • 1 Practice Test
  • Full lifetime access
  • Access on mobile and TV
  • Certificate of Completion
What Will I Learn?
  • Learn about Supercomputing
  • Basic components of an HPC system
  • HPC software stack
  • HPC job schedulers and batch systems (Slurm and PBS Pro)
  • Introduction to parallel programming concepts: OpenMP and MPI
  • GPU programming: CUDA
Requirements
  • Linux/ Unix command line
  • Computer programming skills in any language

The first course on HPC systems on Udemy. This basic course has been specially designed to enable you to utilize parallel and distributed programming and computing to accelerate the solution of complex problems with the help of High Performance Computing (HPC) systems and supercomputers.

Learn about Supercomputing

A little bit of supercomputing history, supercomputing examples, supercomputers vs. HPC clusters, HPC cluster computers, and the benefits of cluster computing.

Components of an HPC system

Components of a High Performance Computing (HPC) cluster: properties of login node(s), compute node(s), master node(s), storage node(s), HPC networks, and so on.

PBS - Portable Batch System

Introduction to PBS, PBS basic commands, the PBS `qsub`, `qstat`, `qdel` and `qalter` commands, PBS job states, PBS variables, PBS interactive jobs, PBS arrays, and a PBS MATLAB example
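A minimal PBS job script of the kind this section covers might look like the following sketch. The job name, queue, resource limits, and the `my_program` executable are all illustrative; real values are site-specific, and the script only does something useful when submitted on a PBS cluster.

```shell
#!/bin/bash
#PBS -N hello_job          # job name
#PBS -l nodes=1:ppn=4      # request 1 node with 4 processors
#PBS -l walltime=00:10:00  # 10-minute wall-clock limit
#PBS -q batch              # queue name (site-specific, assumed here)
#PBS -j oe                 # merge stdout and stderr into one file

cd "$PBS_O_WORKDIR"        # PBS starts jobs in $HOME; return to submit dir
echo "Running on $(hostname)"
./my_program               # hypothetical executable
```

You would submit this with `qsub job.pbs`, check its state with `qstat`, and cancel it with `qdel <jobid>`.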

Slurm - Workload Manager

Introduction to Slurm, Slurm commands, a simple Slurm job, Slurm distributed MPI and GPU jobs, Slurm multi-threaded OpenMP jobs, Slurm interactive jobs, Slurm array jobs, and Slurm job dependencies
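The Slurm equivalent of a simple batch job, as covered in this section, can be sketched like this. Again, the job name, time limit, and `my_program` are placeholders; partition names and resource syntax vary by cluster.

```shell
#!/bin/bash
#SBATCH --job-name=hello_job     # job name
#SBATCH --nodes=1                # one node
#SBATCH --ntasks=1               # one task
#SBATCH --cpus-per-task=4        # four cores, e.g. for OpenMP threads
#SBATCH --time=00:10:00          # 10-minute wall-clock limit
#SBATCH --output=hello_%j.out    # %j expands to the job ID

export OMP_NUM_THREADS="$SLURM_CPUS_PER_TASK"  # match threads to cores
srun ./my_program                # hypothetical executable
```

Typical workflow: submit with `sbatch job.sh`, monitor with `squeue -u $USER`, and cancel with `scancel <jobid>`.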

Parallel programming - OpenMP and MPI

OpenMP basics, OpenMP clauses and worksharing constructs, OpenMP Hello World!, reduction and parallel `for` loops, section parallelization, vector addition; MPI Hello World!, send/receive and `ping-pong`

Parallel programming - GPU and CUDA

Finally, the course gives you a concise, beginner-friendly guide to GPUs (graphics processing units), GPU programming with CUDA, a CUDA Hello World!, and so on!
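A data-parallel CUDA vector addition of the kind this section demonstrates can be sketched as below. This is an illustrative sketch, not the course's own code; it assumes `nvcc` and a CUDA-capable GPU, and uses unified (managed) memory to keep the host/device transfers out of the way.

```cuda
#include <cstdio>

// Each thread adds exactly one element: data parallelism in its simplest form.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                  // guard: the last block may overshoot n
        c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);   // unified memory, visible to CPU and GPU
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;  // enough blocks to cover n
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();                   // wait for the kernel to finish

    printf("c[0] = %f\n", c[0]);               // each element should be 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```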

HPC clusters typically have a large number of computers (often called 'nodes'), and, in general, most of these nodes are configured identically. Though from the outside the cluster may look like a single system, the internal workings that make this happen can be quite complex. This idea should not be confused with the more general client-server model of computing, as the idea behind clusters is quite distinct.

A cluster of computers joins the computational power of its compute nodes to provide greater combined computational power. Rather than a simple client making requests of one or more servers, as in the client-server model, cluster computing utilizes multiple machines to provide a more powerful computing environment, perhaps through a single operating system.

Who is the target audience?
  • Students, researchers and programmers from any discipline
  • Software developers and Big data analysts
Curriculum For This Course
65 Lectures
Supercomputers and HPC clusters
6 Lectures 16:57

Supercomputers play an important role in today’s research world. They aid us to solve compute-intensive problems such as physical simulation, climate research, molecular modeling and so on. Before we get into how to operate on a supercomputer, let’s revisit its history a bit.

Preview 02:14

Examples of Supercomputing Facilities

Introduction to HPC Systems

Components of an HPC System
7 Lectures 09:36
What are the HPC Node Types?

HPC Cluster Components

HPC Login Node(s)

HPC Master Node(s)

HPC Storage Node(s)

HPC Compute Nodes

HPC Access and Data Transfer
2 Lectures 04:31
Access to HPC

Data Transfer
HPC Software Modules
3 Lectures 05:49
HPC Jobs and Scheduling Software
3 Lectures 05:04
SLURM - Workload Manager
12 Lectures 13:57
Introduction to Slurm

What are the Most Common Slurm Commands?

A List of Slurm Commands

Useful Slurm Commands

Slurm Entities and Partitions

Example of a Simple Slurm Job

Slurm Job Submission Demonstration

Slurm distributed MPI and GPU jobs

Slurm Multi-threaded OpenMP Jobs

Slurm Interactive Jobs

Slurm Array Jobs

Slurm job dependencies

Slurm Commands Test
3 questions
PBS - Portable Batch System
12 Lectures 14:25
Introduction to PBS

PBS Command Examples

PBS basic commands

PBS command: qsub

PBS command: qstat

PBS command: qdel

PBS command: qalter

PBS Job Variables

PBS Job Script Example

PBS Interactive Jobs

PBS Arrays
Parallel Programming with OpenMP
9 Lectures 11:57
Introduction to OpenMP

OpenMP Components (Directives, Routines and Variables)

OpenMP Clauses

OpenMP - Worksharing Constructs

OpenMP- Hello world! Code Example

OpenMP Hello world! Demonstration

OpenMP - Reduction and Parallel `for-loop`

OpenMP - Section Parallelization Example

OpenMP Vector Add Example
Parallel and Distributed Programming with MPI (Message Passing Interface)
5 Lectures 09:12
Introduction to MPI

MPI Program Structure

MPI - Hello World! Example

MPI Send/ Receive
Data-Parallel Programming with GPUs (Graphics Processing Units)
4 Lectures 09:27

What is CUDA?

CUDA - Hello World! Example Code

CUDA Vector Addition Demonstration
About the Instructor
Ahmed Arefin, PhD
3.7 Average rating
42 Reviews
1,377 Students
3 Courses
Computational Scientist | Founder - Learn Scientific Programming

Ahmed Arefin, PhD is an enthusiastic computer programmer with more than a decade of well-rounded computational experience. He likes to code, but loves to write, research and teach. Following a PhD and postdoctoral research in the area of data parallelism, he moved forward to become a scientific computing professional, keeping up his research interests in the area of parallel, distributed and accelerated computing.

In his day job, he pets a few of the world's fastest Top500 supercomputers at a large Australian agency for scientific research.

Learn Scientific Programming
3.7 Average rating
42 Reviews
1,377 Students
3 Courses

Learn Scientific Programming is an innovative e-learning school that aims to demonstrate the use of scientific programming languages, e.g., Julia, OpenMP, MPI, C++, Matlab, Octave, Bash, Python, Sed and AWK (including RegEx), in processing scientific and real-world data.

We help you to solve large-scale scientific, biological, engineering, and humanities problems, gain adequate understanding through the analysis of mathematical models implemented on high-performance computers, and share the knowledge.

scientificprogramming.io