Python Parallel Programming Solutions

Master efficient parallel programming to build powerful applications using Python
3.5 (15 ratings)
287 students enrolled
Created by Packt Publishing
Last updated 3/2017
English
Current price: $12 Original price: $125 Discount: 90% off
30-Day Money-Back Guarantee
Includes:
  • 4 hours on-demand video
  • 1 Supplemental Resource
  • Full lifetime access
  • Access on mobile and TV
  • Certificate of Completion

What Will I Learn?
  • Synchronize multiple threads and processes to manage parallel tasks
  • Implement message passing communication between processes to build parallel applications
  • Program your own GPU cards to address complex problems
  • Manage computing entities to execute distributed computational tasks
  • Write efficient programs by adopting the event-driven programming model
  • Explore cloud technology with Django and Google App Engine
  • Apply parallel programming techniques that can lead to performance improvements
Requirements
  • This course is for software developers who are well versed with Python and want to use parallel programming techniques to write powerful and efficient code.
Description

This course will teach you parallel programming techniques using examples in Python and help you explore the many ways in which you can write code that allows more than one process to happen at once.

Starting with an introduction to the world of parallel computing, we move on to cover the fundamentals in Python. This is followed by an exploration of the thread-based parallelism model using the Python threading module, where you will synchronize threads and work with locks, mutexes, semaphores, queues, the GIL, and thread pools. Next, you will learn about process-based parallelism, where you will synchronize processes using message passing and learn about the performance of MPI Python modules.

Moving on, you’ll get to grips with the asynchronous parallel programming model using the Python asyncio module, and will see how to handle exceptions. You will discover distributed computing with Python, and learn how to install a broker, use the Celery Python module, and create a worker. You will also get to know PyCSP, the SCOOP framework, and other distributed computing modules in Python. Further on, you will get hands-on with GPU programming in Python using the PyCUDA module and will evaluate its performance limitations.

About the Author

Giancarlo Zaccone, a physicist, has been involved in scientific computing projects at firms and research institutions. He currently works at an IT company that designs software systems with high technological content.

Who is the target audience?
  • This course will help you master both the basics and the advanced levels of parallel computing.
Curriculum For This Course
64 Lectures
03:59:05
Getting Started with Parallel Computing and Python
9 Lectures 46:49

In this video, we will take a look at Flynn's taxonomy. 

Preview 06:12

Another aspect that we need to consider to evaluate a parallel architecture is memory organization. In this video you will understand this concept.

Memory Organization
06:58

This video is the continuation of the previous video where we will take a closer look at distributed memory systems.

Memory Organization (Continued)
05:30

In this video, you will get an overview of parallel programming models.

Parallel Programming Models
04:13

The design of algorithms that exploit parallelism is based on a series of operations. This video shows us how to design such parallel programs.

Designing a Parallel Program
06:18

The development of parallel programming created the need for performance metrics. This video will help us evaluate the performance of a parallel program.

Evaluating the Performance of a Parallel Program
05:19

Python is a powerful, dynamic, and interpreted programming language that is used in a wide variety of applications. In this video, we will get introduced to Python and its features.

Introducing Python
06:19

In this video, we simply demonstrate how to start a single new process from inside a Python program.

Working with Processes in Python
02:25

This video simply shows you how to create a single thread inside a Python program.

Working with Threads in Python
03:35
Thread-Based Parallelism
11 Lectures 32:13

The simplest way to use a thread is to instantiate it with a target function. This video shows us how to do that.

Preview 03:19
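
As a rough illustration of this lecture's topic, here is a minimal sketch (not taken from the course materials; function and thread names are illustrative) of instantiating threads with a target function:

```python
import threading

def worker(number):
    # Each thread runs this target function with its own argument
    print(f"Worker {number} running in {threading.current_thread().name}")

threads = [threading.Thread(target=worker, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()   # wait for every thread to finish
```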

This video helps us in determining the current thread.

Determining the Current Thread
01:04

This video will help us to implement a new thread using the threading module.

Using a Thread in a Subclass
01:57

In this video, we describe the Python threading synchronization mechanism called Lock().

Thread Synchronization with Lock
05:22
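
A minimal sketch, not from the course, of protecting a shared counter with threading.Lock(); without the lock the concurrent updates could race:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        lock.acquire()      # only one thread may enter this section at a time
        try:
            counter += 1
        finally:
            lock.release()

threads = [threading.Thread(target=increment, args=(100000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # always 400000 because the updates are serialized
```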

If we want only the thread that acquires a lock to release it, we must use an RLock() object. This video will get you introduced to RLock.

Thread Synchronization with RLock
01:45
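
A minimal sketch, not from the course, showing why an RLock() is needed when the same thread acquires the lock again through a nested call:

```python
import threading

class Box:
    def __init__(self):
        self._lock = threading.RLock()
        self.items = 0

    def add(self, n):
        with self._lock:          # acquired once here...
            for _ in range(n):
                self._add_one()

    def _add_one(self):
        with self._lock:          # ...and again here, by the same thread
            self.items += 1

box = Box()
threads = [threading.Thread(target=box.add, args=(1000,)) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(box.items)   # 3000; a plain Lock would deadlock on the nested acquire
```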

A semaphore is an abstract data type managed by the operating system. In this video, we will carry out thread synchronization with semaphores.

Thread Synchronization with Semaphores
04:50
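
A minimal sketch, not from the course, of a semaphore limiting how many threads may run a section concurrently:

```python
import threading
import time

slots = threading.Semaphore(2)    # at most two threads hold the semaphore at once

def worker(i):
    with slots:
        print(f"worker {i} acquired a slot")
        time.sleep(0.5)           # simulate work while holding the slot
    print(f"worker {i} released its slot")

threads = [threading.Thread(target=worker, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```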

A condition identifies a change of state in the application. In this video, we will carry out thread synchronization with a condition.

Thread Synchronization with a Condition
02:24

Events are objects that are used for communication between threads. In this video, we will carry out thread synchronization with an event.

Thread Synchronization with an Event
01:49

The "with" statement is useful when you have two related operations that must be executed as a pair with a block of code in-between. This video will show us how to use the "with" statement.

Using the "with" Statement
02:01
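
A minimal sketch, not from the course, of the "with" statement pairing the acquire and release of a lock automatically:

```python
import threading

lock = threading.Lock()
shared = []

def append_items(items):
    # "with" acquires the lock on entry and releases it on exit,
    # even if the block raises an exception
    with lock:
        shared.extend(items)

threads = [threading.Thread(target=append_items, args=([i] * 3,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(shared))
```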

Queues are much easier to deal with and make threaded programming considerably safer. In this video, we will take a look at thread communication using a queue.

Thread Communication Using a Queue
03:06
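
A minimal producer/consumer sketch, not from the course, using queue.Queue for thread communication; the sentinel value is an illustrative convention:

```python
import threading
import queue

q = queue.Queue()

def producer():
    for i in range(5):
        q.put(i)              # thread-safe: no explicit locking needed
    q.put(None)               # sentinel telling the consumer to stop

def consumer():
    while True:
        item = q.get()
        if item is None:
            break
        print("consumed", item)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
```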

In this video, we will verify the impact of the GIL, evaluating the performance of a multithread application.

Evaluating the Performance of Multithread Applications
04:36
Process-Based Parallelism
18 Lectures 44:09

Spawn means the creation of a process by a parent process. This video will show us how to spawn a process.

Preview 02:47
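
A minimal sketch, not from the course, of a parent spawning child processes with the multiprocessing module:

```python
import multiprocessing

def worker(number):
    print("Child process", number, "started")

if __name__ == "__main__":
    # Each Process object spawns a new Python process running the target function
    processes = [multiprocessing.Process(target=worker, args=(i,)) for i in range(3)]
    for p in processes:
        p.start()
    for p in processes:
        p.join()
```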

The procedure to name a process is similar to that described for the threading library. Let's check this out in this video.

Naming a Process
01:13

Running a process in the background is a typical mode of execution of laborious processes. This video will show you how to do that.

Running a Process in the Background
01:16

It's possible to kill a process immediately using the terminate() method. Let's see how to do that in this video.

Killing a Process
01:27
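
A minimal sketch, not from the course, of stopping a never-ending child process with terminate():

```python
import multiprocessing
import time

def endless():
    while True:
        time.sleep(0.1)       # a task that never finishes on its own

if __name__ == "__main__":
    p = multiprocessing.Process(target=endless)
    p.start()
    time.sleep(1)
    p.terminate()             # forcefully stop the child process
    p.join()
    print("Exit code:", p.exitcode)   # non-zero, since the child was killed
```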

In this video, we will see how to implement a process in a custom subclass.

Using a Process in a Subclass
01:21

The development of parallel applications requires the exchange of data between processes. We will see how to do that in this video.

Exchanging Objects between Processes
02:57
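
A minimal sketch, not from the course, of two processes exchanging objects through a multiprocessing.Queue:

```python
import multiprocessing

def producer(q):
    for i in range(5):
        q.put(i)              # objects are pickled and sent to the other process

def consumer(q):
    for _ in range(5):
        print("received", q.get())

if __name__ == "__main__":
    q = multiprocessing.Queue()
    p1 = multiprocessing.Process(target=producer, args=(q,))
    p2 = multiprocessing.Process(target=consumer, args=(q,))
    p1.start(); p2.start()
    p1.join(); p2.join()
```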

Synchronization primitives are quite similar to those encountered in the threading library. In this video, we will see how to synchronize processes.

Synchronizing Processes
02:42

Python multiprocessing provides a manager to coordinate shared information between all its users. In this video, we will see how to manage a state between processes.

Managing a State between Processes
01:26
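
A minimal sketch, not from the course, of sharing state between processes through a Manager dictionary:

```python
import multiprocessing

def record(shared, key, value):
    shared[key] = value          # the proxied write is visible to all processes

if __name__ == "__main__":
    with multiprocessing.Manager() as manager:
        state = manager.dict()
        processes = [multiprocessing.Process(target=record, args=(state, i, i * i))
                     for i in range(4)]
        for p in processes:
            p.start()
        for p in processes:
            p.join()
        print(dict(state))   # {0: 0, 1: 1, 2: 4, 3: 9}
```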

The multiprocessing library provides the Pool class for simple parallel processing tasks. In this video, we will see how to use it.

Using a Process Pool
02:21
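
A minimal sketch, not from the course, of distributing work across a pool of worker processes:

```python
import multiprocessing

def square(n):
    return n * n

if __name__ == "__main__":
    with multiprocessing.Pool(processes=4) as pool:
        results = pool.map(square, range(10))   # the work is split across 4 workers
    print(results)   # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```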

The Python programming language provides a number of MPI modules to write parallel programs. In this video, we will see how to use the mpi4py library.

Using the mpi4py Python Module
04:11

One of the most important features provided by MPI is point-to-point communication. We will check that out in this video.

Point-to-Point Communication
02:59
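
A minimal sketch, not from the course, of blocking point-to-point communication with mpi4py; the script must be launched with an MPI runner:

```python
# Run with: mpiexec -n 2 python point_to_point.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    data = {"greeting": "hello", "value": 42}
    comm.send(data, dest=1, tag=11)        # blocking send to process 1
elif rank == 1:
    data = comm.recv(source=0, tag=11)     # blocking receive from process 0
    print("Process 1 received:", data)
```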

A common problem we face is that of the deadlock in processes. This video will help us to avoid such problems.

Avoiding Deadlock Problems
03:06

In a broadcast collective communication, a single process sends the same data to all other processes.

Using Broadcast for Collective Communication
03:10
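
A minimal sketch, not from the course, of broadcasting data from the root process to all others with mpi4py:

```python
# Run with: mpiexec -n 4 python broadcast.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    data = [1, 2, 3, 4]               # only the root owns the data initially
else:
    data = None

data = comm.bcast(data, root=0)       # every process now holds the same list
print("Process", rank, "has", data)
```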

The scatter functionality sends the chunks of data in an array to different processes. This video will show us how to use scatter for collective communication.

Using Scatter for Collective Communication
02:08
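
A minimal sketch, not from the course, of scattering one chunk of data to each process with mpi4py:

```python
# Run with: mpiexec -n 4 python scatter.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

if rank == 0:
    chunks = [[i, i * 10] for i in range(size)]   # one chunk per process
else:
    chunks = None

chunk = comm.scatter(chunks, root=0)  # each process receives its own chunk
print("Process", rank, "got", chunk)
```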

With the gather function, all processes send data to a root process that collects the data received. Let's look at using gather for collective communication.

Using Gather for Collective Communication
01:39
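
A minimal sketch, not from the course, of gathering one value from every process at the root with mpi4py:

```python
# Run with: mpiexec -n 4 python gather.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

value = rank ** 2                      # each process computes a local result
values = comm.gather(value, root=0)    # the root collects one value per rank

if rank == 0:
    print("Root gathered:", values)
```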

The Alltoall collective communication combines the scatter and gather functionalities. In this video, we will see how to use Alltoall for collective communication.

Using Alltoall for Collective Communication
03:05

Reduction takes an array of input elements in each process and returns an array of output elements to the root process. We will take a look at this operation in this video.

The Reduction Operation
02:53
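
A minimal sketch, not from the course, of an element-wise sum reduction to the root process with mpi4py:

```python
# Run with: mpiexec -n 4 python reduce.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

local = np.array([rank + 1], dtype="i")         # each process contributes one element
total = np.zeros(1, dtype="i")

comm.Reduce(local, total, op=MPI.SUM, root=0)   # element-wise sum at the root

if rank == 0:
    print("Sum of contributions:", total[0])
```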

MPI allows us to assign a virtual topology to a communicator. In this video, we will see how to optimize communication using this mechanism.

Optimizing the Communication
03:28
Asynchronous Programming
5 Lectures 19:13

With the release of Python 3.2, the concurrent.futures module was introduced. In this video, we will see how to use this module.

Preview 05:21
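
A minimal sketch, not from the course, of running calls through a concurrent.futures thread pool:

```python
import concurrent.futures

def square(n):
    return n * n

# The executor runs the calls concurrently and returns the results in order
with concurrent.futures.ThreadPoolExecutor(max_workers=4) as executor:
    results = list(executor.map(square, range(8)))

print(results)   # [0, 1, 4, 9, 16, 25, 36, 49]
```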

In this video, the focus is on handling events with the help of Asyncio.

Event Loop Management with Asyncio
04:20

In this video, we will see how to use the coroutine mechanism of Asyncio to simulate a finite state machine with five states.

Handling Coroutines with Asyncio
04:06

The Asyncio module provides us with asyncio.Task(coroutine) to handle computations with tasks. In this video, we will see how to manipulate a task with Asyncio.

Manipulating a Task with Asyncio
02:22
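
A minimal sketch, not from the course, of wrapping coroutines in tasks so they run concurrently on the event loop; it uses the modern async/await syntax rather than the generator-based style of older Python releases:

```python
import asyncio

async def factorial(name, number):
    result = 1
    for i in range(2, number + 1):
        await asyncio.sleep(0.1)          # yield control to the event loop
        result *= i
    print(f"{name}: factorial({number}) = {result}")

async def main():
    # Wrapping the coroutines in tasks schedules them to run concurrently
    tasks = [asyncio.ensure_future(factorial(f"task-{n}", n)) for n in (3, 4, 5)]
    await asyncio.gather(*tasks)

asyncio.run(main())   # requires Python 3.7+
```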

Another key component of the Asyncio module is the Future class. This video will teach you how to deal with Asyncio and futures.

Dealing with Asyncio and Futures
03:04
Distributed Python
9 Lectures 39:59

Celery is a Python framework used to manage distributed tasks, following the object-oriented middleware approach. In this video, we will see how to use Celery to distribute tasks.

Preview 03:27

In this video, we'll learn to create and call a task using the Celery module.

Creating a Task with Celery
03:08
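
A minimal sketch, not from the course, of a Celery task definition; the Redis broker URL is an assumption, and any broker Celery supports (such as RabbitMQ) would work the same way:

```python
# tasks.py
from celery import Celery

# Broker and result backend URLs are illustrative assumptions
app = Celery("tasks", broker="redis://localhost:6379/0",
             backend="redis://localhost:6379/0")

@app.task
def add(x, y):
    return x + y
```

A worker would typically be started with `celery -A tasks worker`, after which the task can be invoked asynchronously from another Python session with `add.delay(4, 4)`.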

SCOOP is a Python module to distribute concurrent tasks (called Futures) on heterogeneous computational nodes. In this video, we will take a look at scientific computing with SCOOP.

Scientific Computing with SCOOP
04:54

The SCOOP Python module defines more than one map function, allowing asynchronous computations that can be propagated to its workers. In this video, we will see how to handle map functions with SCOOP.

Handling Map Functions with SCOOP
04:03

Python Remote Objects (Pyro4) is a library that resembles Java's Remote Method Invocation (RMI), which allows you to invoke a method of a remote object. In this video we will see how to do that.

Remote Method Invocation with Pyro4
05:27
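
A minimal server-side sketch, not from the course, of exposing an object for remote method invocation with Pyro4; the host and class names are illustrative and no name server is used:

```python
# server.py
import Pyro4

@Pyro4.expose
class Greeter(object):
    def greet(self, name):
        return "Hello, {0}".format(name)

daemon = Pyro4.Daemon(host="localhost")   # listens on a random free port
uri = daemon.register(Greeter())          # returns a PYRO:...@localhost:port URI
print("Object available at:", uri)
daemon.requestLoop()

# A client would then call the remote method through a proxy, for example:
#   proxy = Pyro4.Proxy(uri_printed_by_the_server)
#   print(proxy.greet("world"))
```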

In this video, we will implement a chain of objects with Pyro4 using Python scripts.

Chaining Objects with Pyro4
04:01

In this video, we will see how to build a simple client-server application with Pyro4.

Developing a Client-Server Application with Pyro4
04:18

PyCSP is a Python module based on communicating sequential processes, which is a programming paradigm developed to build concurrent programs via message passing. We will take a look at that in this video.

Communicating Sequential Processes with PyCSP
07:02

Remote Python Call (RPyC) is a Python module that is used for remote procedure calls as well as for distributed computing. In this video, we will see how to carry out a remote procedure call with RPyC.

A Remote Procedure Call with RPyC
03:39
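
A minimal sketch, not from the course, of a remote call with RPyC in classic mode; it assumes a classic server (the rpyc_classic.py script shipped with RPyC) is already running on localhost:

```python
import rpyc

conn = rpyc.classic.connect("localhost")   # default classic-mode port 18812
remote_math = conn.modules.math            # the math module on the server side
print(remote_math.sqrt(16))                # executed remotely, prints 4.0
conn.close()
```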
GPU Programming with Python
12 Lectures 56:42

PyCUDA is a Python wrapper for Compute Unified Device Architecture (CUDA), the software library developed by NVIDIA for GPU programming. In this video, we will see how to use PyCUDA.

Preview 07:32

The PyCUDA programming model is designed for the joint execution of a program on the CPU and GPU. This video will show us how to build a PyCUDA application.

Building a PyCUDA Application
07:32
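
A minimal sketch, not from the course, of a complete PyCUDA application: the host code compiles a CUDA kernel, copies an array to the GPU, and reads the result back (requires an NVIDIA GPU with CUDA installed):

```python
import numpy as np
import pycuda.autoinit                 # initializes a CUDA context
import pycuda.driver as drv
from pycuda.compiler import SourceModule

# CUDA kernel that doubles every element of the array
mod = SourceModule("""
__global__ void double_array(float *a)
{
    int idx = threadIdx.x + blockIdx.x * blockDim.x;
    a[idx] *= 2.0f;
}
""")

double_array = mod.get_function("double_array")
a = np.random.randn(256).astype(np.float32)
double_array(drv.InOut(a), block=(256, 1, 1), grid=(1, 1))   # copy in, run, copy out
print(a[:5])
```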

A CUDA-capable GPU card has four types of memory. In this video, we will take a look at them with the help of matrix manipulation.

Understanding the PyCUDA Memory Model with Matrix Manipulation
05:36

In this video, we will look at a common use case of GPU computation: invoking a kernel function on a GPU array.

Kernel Invocations with GPU Array
02:23

The pycuda.elementwise.ElementwiseKernel function allows us to execute a kernel on complex expressions. We will see how to do that in this video.

Evaluating Element-Wise Expressions with PyCUDA
03:20
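
A minimal sketch, not from the course, of evaluating a linear combination of two GPU arrays in a single pass with ElementwiseKernel:

```python
import numpy as np
import pycuda.autoinit
import pycuda.gpuarray as gpuarray
from pycuda.elementwise import ElementwiseKernel

lin_comb = ElementwiseKernel(
    "float a, float *x, float b, float *y, float *z",
    "z[i] = a * x[i] + b * y[i]",       # evaluated for every index i on the GPU
    "lin_comb")

x = gpuarray.to_gpu(np.random.randn(1000).astype(np.float32))
y = gpuarray.to_gpu(np.random.randn(1000).astype(np.float32))
z = gpuarray.empty_like(x)
lin_comb(np.float32(2), x, np.float32(3), y, z)
print(np.allclose(z.get(), 2 * x.get() + 3 * y.get()))
```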

PyCUDA provides functionality to perform reduction operations on the GPU. We will take a look at that in this video.

The MapReduce Operation with PyCUDA
03:42

NumbaPro is a Python compiler that provides a CUDA-based API to write CUDA programs. In this video we will demonstrate GPU programming with NumbaPro.

GPU Programming with NumbaPro
04:47

NumbaPro provides a Python wrapper for CUDA libraries for numerical computing. We will understand it with the help of this video.

Using GPU-Accelerated Libraries with NumbaPro
05:26

In this video, we'll examine the Python implementation of OpenCL called PyOpenCL.

Using the PyOpenCL Module
04:03

As with PyCUDA, the first step in building a PyOpenCL program is coding the host application. This video will show us how to build the application.

Building a PyOpenCL Application
04:58
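
A minimal sketch, not from the course, of a PyOpenCL host application that builds a vector-addition kernel, sets up device buffers, and reads the result back:

```python
import numpy as np
import pyopencl as cl

a = np.random.rand(1024).astype(np.float32)
b = np.random.rand(1024).astype(np.float32)

ctx = cl.create_some_context()          # pick an available OpenCL device
queue = cl.CommandQueue(ctx)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

prg = cl.Program(ctx, """
__kernel void vector_add(__global const float *a,
                         __global const float *b,
                         __global float *out)
{
    int gid = get_global_id(0);
    out[gid] = a[gid] + b[gid];
}
""").build()

prg.vector_add(queue, a.shape, None, a_buf, b_buf, out_buf)

out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_buf)
print(np.allclose(out, a + b))          # True
```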

PyOpenCL provides functionality in the pyopencl.elementwise class that allows us to evaluate complicated expressions in a single computational pass. We will see how to do that in this video.

Evaluating Element-Wise Expressions with PyOpenCL
03:10

In this video, we will test a GPU application by following the standard structure of a PyOpenCL application.

Testing Your GPU Application with PyOpenCL
04:13
About the Instructor
Packt Publishing
3.9 Average rating
8,274 Reviews
59,171 Students
687 Courses
Tech Knowledge in Motion

Packt has been committed to developer learning since 2004. A lot has changed in software since then - but Packt has remained responsive to these changes, continuing to look forward at the trends and tools defining the way we work and live. And how to put them to work.

With an extensive library of content - more than 4000 books and video courses - Packt's mission is to help developers stay relevant in a rapidly changing world. From new web frameworks and programming languages, to cutting-edge data analytics, and DevOps, Packt takes software professionals in every field to what's important to them now.

From skills that will help you to develop and future-proof your career to immediate solutions to everyday tech challenges, Packt is a go-to resource for making you a better, smarter developer.

Packt Udemy courses continue this tradition, bringing you comprehensive yet concise video courses straight from the experts.