Haskell is a lazy, purely functional programming language with a very precise type system. Each of these features makes Haskell quite different from mainstream object-oriented programming languages, which is where both Haskell's appeal and its difficulty lie.
In this course, you’ll discover different ways to structure interactions between the program and the outside world. We’ll look at some subtler aspects of the IO monad, such as lazy IO and unsafePerformIO. In addition to the IO monad, we’ll also check out two other structured forms of interaction: streaming libraries and functional reactive programming.
Then we explore parallel, concurrent, and distributed programming. Thanks to purity, Haskell is especially well-suited for the first two, and so there are a number of approaches to cover. As for distributed programming, we focus on the idea of splitting a large monolithic program into smaller microservices, asking whether doing so is a good idea. We’ll also consider a different way of interacting with other microservices, and explore an alternative to microservices.
By the end of this course, you’ll have an in-depth knowledge of various aspects of Haskell, allowing you to make the most of functional programming in Haskell.
About the Author
Samuel Gélineau is a Haskell developer with more than 10 years of experience in Haskell programming, and he has been blogging about Haskell for about as long. He has given many talks at Montreal's Haskell Meetup, and is now one of its co-organizers.
Samuel is a big fan of functional programming and spends an enormous amount of time answering Haskell questions on the Haskell subreddit. As a result, he has a good idea of the kinds of questions people have about Haskell, and has learned how to answer them clearly, even when the details are complicated. Apart from Haskell, he is a fan of Elm, Agda, Idris, and Rust.
In this video, we will install everything you will need in order to follow along with the code I'll be presenting during the course.
In this video, we will look at the wide variety of side-effects supported by the IO monad.
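As a small illustrative sketch (the names here are my own, not from the course), a single IO action can freely mix several kinds of side effects, from console output to mutable state to environment access:

```haskell
import Data.IORef (modifyIORef', newIORef, readIORef)
import System.Environment (getProgName)

-- One IO action combining several different kinds of side effects.
demo :: IO Int
demo = do
  name <- getProgName             -- environment access
  putStrLn ("running " ++ name)   -- console output
  ref  <- newIORef (0 :: Int)     -- mutable state
  modifyIORef' ref (+ 1)
  readIORef ref

main :: IO ()
main = demo >>= print
```

All of these effects share the single type `IO`, which is precisely the design question the video examines.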
In this video, you will learn the subtleties of exception handling in Haskell, including why throwing exceptions is one of the few side effects that isn't tracked by Haskell's type system.
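A quick sketch of the untracked nature of exceptions: `x `div` 0` has the pure type `Int`, yet evaluating it throws. Catching the exception, however, must happen in IO (helper name `safeDiv` is my own):

```haskell
import Control.Exception (ArithException, evaluate, try)

-- 'x `div` y' is pure as far as the types are concerned, but evaluating
-- it can throw; 'try' turns that hidden effect into an explicit Either.
safeDiv :: Int -> Int -> IO (Either ArithException Int)
safeDiv x y = try (evaluate (x `div` y))

main :: IO ()
main = do
  safeDiv 10 2 >>= print  -- Right 5
  safeDiv 10 0 >>= print  -- Left divide by zero
```

Note the use of `evaluate` rather than `pure`: without forcing the result inside `try`, laziness would let the exception escape the handler.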
In this video, we will try to understand why so many different kinds of side-effects live in the IO monad, as opposed to a more fine-grained system with multiple monads each responsible for tracking one effect. Then, we will create our own fine-grained effect-tracking monads.
In this video, you will learn how and when to use unsafePerformIO to execute side-effects within pure code.
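As a minimal (and deliberately simplistic) sketch of what `unsafePerformIO` does, here is a pure-looking function that secretly performs a side effect; the function name is my own invention:

```haskell
import System.IO.Unsafe (unsafePerformIO)

-- A pure-looking function hiding a side effect. Illustration only:
-- real uses need care with inlining and sharing, e.g. a
-- {-# NOINLINE #-} pragma on top-level values.
double :: Int -> Int
double x = unsafePerformIO $ do
  putStrLn ("doubling " ++ show x)  -- hidden side effect
  pure (x * 2)

main :: IO ()
main = print (double 21)
```

Because the compiler assumes `double` is pure, it may cache, duplicate, or skip the hidden effect, which is exactly why knowing when this is safe matters.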
In this video, we will examine how lazy IO is implemented, and how the same underlying mechanism can be used to express effectful stream transformations.
In this video, we introduce streams and demonstrate how laziness makes it easy to implement streaming algorithms in Haskell.
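For instance, an ordinary lazy list already behaves like a stream: we can define an infinite sequence and a transformation over it, and only the demanded prefix is ever computed (a minimal sketch, not code from the course):

```haskell
-- An infinite stream; laziness means only the consumed prefix is built.
nats :: [Integer]
nats = [0 ..]

-- A stream transformation, applied element by element on demand.
squares :: [Integer]
squares = map (^ 2) nats

main :: IO ()
main = print (take 5 squares)  -- [0,1,4,9,16]
```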
In this video, we explore the consequences of designs in which streams are allowed to, guaranteed to, or forbidden from terminating after a finite number of elements, and similarly for stream transformers terminating after a finite number of steps.
In this video, we explore two variants of callback-based APIs: push-based and pull-based.
In this video, we explore another alternative API in which the focus is on the streams instead of the stream transformers.
In this video, we will introduce the fundamental abstractions on which Functional Reactive Programming is based.
In this video, we look at the way in which FRP libraries handle state, and compare this approach to the way in which state is handled by traditional callback-based systems.
In this video, we look at FRP operators which allow the FRP network to be changed dynamically.
In this video, we look at three ways to specialize the type of the switch operator in order to eliminate time leaks.
In this video, we look at the properties of time which are either required or provided by FRP implementations.
What is "Parallel Programming in Haskell" about, and how does it differ from "Concurrent Programming in Haskell?" We answer those questions by clarifying the meaning of a few terms which are frequently confused.
How do we specify which parts of an algorithm should be executed in parallel?
In this video, we turn laziness from a source of bugs into a source of parallelism opportunities.
In this video, we see how purity and immutability make Haskell a great fit for writing parallel algorithms.
In this video, you will learn how to write programs whose outcome is deterministic even though the interleaving of their threads is not.
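One classic building block for this is the write-once variable, or IVar. The following is my own minimal sketch built on base's `MVar` (libraries such as monad-par provide a more principled version): because a second write is an error rather than an overwrite, readers see the same value no matter how threads interleave.

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar

-- A write-once variable: a second 'put' is an error, never an overwrite,
-- which is what keeps the program's outcome deterministic.
newtype IVar a = IVar (MVar a)

newIVar :: IO (IVar a)
newIVar = IVar <$> newEmptyMVar

putIVar :: IVar a -> a -> IO ()
putIVar (IVar v) x = do
  ok <- tryPutMVar v x
  if ok then pure () else error "IVar written twice"

getIVar :: IVar a -> IO a
getIVar (IVar v) = readMVar v  -- blocks until the value is available

main :: IO ()
main = do
  iv <- newIVar
  _  <- forkIO (putIVar iv (6 * 7))
  getIVar iv >>= print  -- 42
```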
In this video, we look at a generalization of IVars which preserves determinism while allowing more than one thread to modify each variable.
In this video, we look at a few gotchas when writing code which forks threads.
In this video, you will learn how to turn an asynchronous API into a synchronous API and vice versa.
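A bare-bones sketch of the idea, using only base (the names `async` and `await` are my own; the async package offers a richer, exception-safe version): forking a thread that delivers its result through an `MVar` makes a synchronous action asynchronous, and blocking on that `MVar` goes the other way.

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar

-- Synchronous -> asynchronous: run the action in its own thread
-- and return a handle to the eventual result.
async :: IO a -> IO (MVar a)
async action = do
  result <- newEmptyMVar
  _ <- forkIO (action >>= putMVar result)
  pure result

-- Asynchronous -> synchronous: block until the result arrives.
await :: MVar a -> IO a
await = takeMVar

main :: IO ()
main = do
  handle <- async (pure (2 + 2))
  await handle >>= print  -- 4
```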
If our problem is not composed of smaller identical subproblems, can we still benefit from concurrency?
In this video, we draw inspiration from laziness in order to delay blocking on a child thread until the last possible moment.
In this video, we introduce a way for threads to sleep until another thread wakes them up.
In this video, we introduce software transactional memory, a radically simpler way to write concurrent programs.
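As a taste of the STM style (a minimal sketch assuming the `stm` library that ships with GHC; the account setup is my own example), a transfer between two `TVar`s runs atomically, so no other thread can ever observe the money in flight:

```haskell
import Control.Concurrent.STM

-- Both updates happen in one atomic transaction: other threads never
-- see a state where the amount has left 'from' but not reached 'to'.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
  modifyTVar' from (subtract amount)
  modifyTVar' to   (+ amount)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 30)
  balances <- atomically ((,) <$> readTVar a <*> readTVar b)
  print balances  -- (70,30)
```

The radical simplification is that `transfer` composes: wrapping two transfers in one `atomically` yields a bigger transaction, with no locks to order or deadlocks to reason about.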
In this video, we see how purity and effect-tracking make Haskell a great fit for writing concurrent algorithms.
In this video, we show that while writing code in the style of a combinator library has advantages, making it easier to write distributed programs is not one of them.
In this video, we investigate whether writing code in the style of a monad transformer stack also makes it harder to write distributed programs.
How can we use more than one architectural style, such as combinator libraries or monad transformers, in the same Haskell program?
In this video, we examine whether microservices are a good fit for a distributed Haskell application.
In this video, we optimize the set of requests being sent to a number of other services.
In this video, we introduce a different way to structure a distributed application, by launching remote threads instead of performing remote calls.
This video introduces data structures which are designed to cope with the uncertainties of communication in a distributed application.
Packt has been committed to developer learning since 2004. A lot has changed in software since then, but Packt has remained responsive to these changes, continuing to look forward at the trends and tools defining the way we work and live, and how to put them to work.
With an extensive library of content - more than 4000 books and video courses - Packt's mission is to help developers stay relevant in a rapidly changing world. From new web frameworks and programming languages to cutting-edge data analytics and DevOps, Packt takes software professionals in every field to what's important to them now.
From skills that will help you to develop and future-proof your career to immediate solutions to everyday tech challenges, Packt is a go-to resource to make you a better, smarter developer.
Packt Udemy courses continue this tradition, bringing you comprehensive yet concise video courses straight from the experts.