Learn to Use HPC Systems and Supercomputers (Complete Guide)
3.6 (8 ratings)
77 students enrolled

Learn parallel programming with OpenMP and CUDA, distributed computing with MPI, and how to use HPC cluster systems with Slurm and PBS
Last updated 8/2017
English
Current price: $10 Original price: $20 Discount: 50% off
30-Day Money-Back Guarantee
Includes:
  • 1 hour on-demand video
  • 48 Articles
  • 1 Practice Test
  • Full lifetime access
  • Access on mobile and TV
  • Certificate of Completion
What Will I Learn?
  • Learn about Supercomputing
  • Basic components of an HPC system
  • HPC software stack
  • HPC job schedulers and batch systems (Slurm and PBS Pro)
  • Introduction to parallel programming concepts: OpenMP and MPI
  • GPU programming: CUDA
Requirements
  • Linux/ Unix command line
  • Computer programming skills in any language
Description

The first course on HPC systems on Udemy. This basic course has been specially designed to enable you to use parallel and distributed programming and computing to accelerate the solution of complex problems with the help of High Performance Computing (HPC) systems and supercomputers.

Learn about Supercomputing

A little bit of supercomputing history, supercomputing examples, supercomputers vs. HPC clusters, and the benefits of using cluster computing.

Components of an HPC system

Components of a High Performance Computing (HPC) cluster: properties of login node(s), compute node(s), master node(s), storage node(s), HPC networks, and so on.

PBS - Portable Batch System

Introduction to PBS, PBS basic commands, the PBS `qsub`, `qstat`, `qdel` and `qalter` commands, PBS job states, PBS variables, PBS interactive jobs, PBS arrays, a PBS Matlab example
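To give a flavour of what the PBS material covers, a minimal job script might look like the following. This is a sketch only; the job name and resource values are illustrative assumptions, and the exact `-l` syntax varies between PBS Pro sites, so check your cluster's documentation.

```shell
#!/bin/bash
#PBS -N hello_pbs             # job name shown in qstat
#PBS -l select=1:ncpus=1      # one chunk with one CPU (illustrative)
#PBS -l walltime=00:05:00     # wall-clock limit
#PBS -j oe                    # merge stdout and stderr into one file

# Outside the scheduler the #PBS lines are ordinary comments,
# so this also runs as a plain bash script.
MESSAGE="hello from PBS job ${PBS_JOBID:-<interactive shell>}"
echo "$MESSAGE"
```

You would typically submit this with `qsub`, monitor it with `qstat`, and cancel it with `qdel <jobid>`.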

SLURM - Workload Manager

Introduction to Slurm, Slurm commands, a simple Slurm job, Slurm distributed MPI and GPU jobs, Slurm multi-threaded OpenMP jobs, Slurm interactive jobs, Slurm array jobs, Slurm job dependencies
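As a sketch of the kind of simple Slurm job the course walks through (job name and resource values here are illustrative assumptions; partitions and limits differ per cluster):

```shell
#!/bin/bash
#SBATCH --job-name=hello_slurm   # job name shown in squeue
#SBATCH --ntasks=1               # a single task
#SBATCH --time=00:05:00          # wall-clock limit

# Outside the scheduler the #SBATCH lines are ordinary comments,
# so this also runs as a plain bash script.
MESSAGE="hello from Slurm job ${SLURM_JOB_ID:-<interactive shell>}"
echo "$MESSAGE"
```

You would typically submit this with `sbatch`, monitor it with `squeue`, and cancel it with `scancel <jobid>`.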

Parallel programming - OpenMP and MPI

OpenMP basics, OpenMP clauses and worksharing constructs, OpenMP Hello World!, reduction and the parallel `for` loop, section parallelization, vector addition; MPI Hello World!, send/receive and `ping-pong`

Parallel programming - GPU and CUDA

Finally, the course gives you a concise, beginner-friendly guide to GPUs (graphics processing units) and GPU programming with CUDA: CUDA Hello World! and so on.
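To hint at what the CUDA material looks like, a minimal kernel launch might be sketched as follows (an illustrative example, not the course's exact code; it requires the CUDA toolkit and is compiled with `nvcc`):

```cuda
#include <cstdio>

// Each GPU thread prints its own coordinates; <<<2, 4>>> launches
// a grid of 2 blocks with 4 threads per block (8 threads total).
__global__ void hello_kernel(void) {
    printf("Hello from block %d, thread %d\n", blockIdx.x, threadIdx.x);
}

int main(void) {
    hello_kernel<<<2, 4>>>();
    cudaDeviceSynchronize();  // wait for the kernel to finish and flush printf
    return 0;
}
```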

HPC clusters typically have a large number of computers (often called 'nodes'), and in general most of these nodes are configured identically. Though from the outside the cluster may look like a single system, the internal workings that make this happen can be quite complex. This idea should not be confused with the more general client-server model of computing, as the idea behind clusters is quite distinct.

A cluster of computers joins the computational power of its compute nodes to provide greater combined performance. Rather than a single client making requests of one or more servers, as in the client-server model, cluster computing utilizes multiple machines together to provide a more powerful computing environment, perhaps presented through a single operating system image.
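The way programs harness those combined nodes is typically message passing: one copy of the program runs on each allocated core, and the copies coordinate by exchanging messages. The classic MPI Hello World below sketches this; it requires an MPI implementation (compiled with e.g. `mpicc` and launched with e.g. `mpirun -np 4 ./hello`, where the file and binary names are illustrative):

```c
#include <mpi.h>
#include <stdio.h>

/* Every process (rank) in the job reports its rank and the total
 * number of ranks, demonstrating the single-program, multiple-data
 * model used on clusters. */
int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total processes in the job */
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
```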

Who is the target audience?
  • Students, researchers and programmers from any discipline
  • Software developers and Big data analysts
Curriculum For This Course
65 Lectures
01:39:22
Supercomputers and HPC clusters
6 Lectures 16:57

Supercomputers play an important role in today’s research world. They aid us to solve compute-intensive problems such as physical simulation, climate research, molecular modeling and so on. Before we get into how to operate on a supercomputer, let’s revisit its history a bit.

Preview 02:14

Examples of Supercomputing Facilities
03:15


Introduction to HPC Systems
01:56

Components of an HPC System
7 Lectures 09:36
What Are the HPC Node Types?
02:13

HPC Cluster Components
00:43

HPC Login Node(s)
00:22

HPC Master Node(s)
01:28

HPC Storage Node(s)
00:27

HPC Compute Nodes
00:11

HPC Access and Data Transfer
2 Lectures 04:31
Access to HPC
03:12

Data Transfer
01:19
HPC Software Modules
3 Lectures 05:49
HPC Jobs and Scheduling Software
3 Lectures 05:04
SLURM - Workload Manager
12 Lectures 13:57
Introduction to Slurm
01:36

What are the Most Common Slurm Commands?
02:45

A List of Slurm Commands
01:21

Useful Slurm Commands
01:05

Slurm Entities and Partitions
01:22

Example of a Simple Slurm Job
00:29

Slurm Job Submission Demonstration
02:50

Slurm distributed MPI and GPU jobs
00:35

Slurm Multi-threaded OpenMP Jobs
00:24

Slurm Interactive Jobs
00:35

Slurm Array Jobs
00:32

Slurm job dependencies
00:22

Slurm Commands Test
3 questions
PBS - Portable Batch System
12 Lectures 14:25
Introduction to PBS
01:08

PBS Command Examples
01:44

PBS basic commands
00:15

PBS command: qsub
03:05

PBS command: qstat
00:28

PBS command: qdel
00:30

PBS command: qalter
00:21


PBS Job Variables
01:14

PBS Job Script Example
01:47

PBS Interactive Jobs
00:38

PBS Arrays
02:19
Parallel Programming with OpenMP
9 Lectures 11:57
Introduction to OpenMP
00:50

OpenMP Components (Directives, Routines and Variables)
00:59

OpenMP Clauses
00:52

OpenMP - Worksharing Constructs
00:43

OpenMP- Hello world! Code Example
00:36

OpenMP Hello world! Demonstration
06:17

OpenMP - Reduction and Parallel `for-loop`
00:47

OpenMP - Section Parallelization Example
00:30

OpenMP Vector Add Example
00:20
Parallel and Distributed Programming with MPI (Message Passing Interface)
5 Lectures 09:12
Introduction to MPI
01:45

MPI Program Structure
00:59

MPI - Hello World! Example
01:35


MPI Send/ Receive
02:22
Data-Parallel Programming with GPUs (Graphics Processing Units)
4 Lectures 09:27

What is CUDA?
02:15

CUDA - Hello World! Example Code
01:35

CUDA Vector Addition Demonstration
03:54
About the Instructor
Ahmed Arefin, PhD
3.7 Average rating
42 Reviews
1,377 Students
3 Courses
Computational Scientist | Founder - Learn Scientific Programming

Ahmed Arefin, PhD is an enthusiastic computer programmer with more than a decade of well-rounded computational experience. He likes to code, but loves to write, research and teach. Following a PhD and postdoctoral research in the area of data parallelism, he moved on to become a scientific computing professional, keeping up his research interests in parallel, distributed and accelerated computing.

In his day job, he pets a few of the world's fastest Top500 supercomputers at a large Australian agency for scientific research.

Learn Scientific Programming
3.7 Average rating
42 Reviews
1,377 Students
3 Courses

Learn Scientific Programming is an innovative e-learning school that aims to demonstrate the use of scientific programming languages and tools, e.g., Julia, OpenMP, MPI, C++, Matlab, Octave, Bash, Python, Sed and AWK (including RegEx), in processing scientific and real-world data.

We help you to solve large-scale scientific, biological, engineering, and humanities problems, gain adequate understanding through the analysis of mathematical models implemented on high-performance computers, and share the knowledge.

scientificprogramming.io