Tuning Apache Spark: Powerful Big Data Processing Recipes
What you'll learn
- How to attain a solid foundation in the most powerful and versatile technologies involved in data streaming: Apache Spark and Apache Kafka
- Form a robust and clean architecture for a data streaming pipeline
- Ways to choose and implement the right tools to bring your data streaming architecture to life
- How to create robust processing pipelines by testing Apache Spark jobs
- How to create highly concurrent Spark programs by leveraging immutability
- How to solve repeated problems by leveraging the GraphX API
- How to solve long-running computation problems by leveraging lazy evaluation in Spark
- Tips to avoid memory leaks by understanding the internal memory management of Apache Spark
- Troubleshoot real-time pipelines written in Spark Streaming
- You don’t need to be a Spark expert to take this course, but you should be familiar with Java or Scala.
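The lazy-evaluation point above can be illustrated without a cluster: Java streams defer work in much the same way Spark defers transformations (such as map and filter) until an action is called. This is a minimal stdlib sketch of the idea, not actual Spark code, and the names in it are illustrative only:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Stream;

public class LazyEvalSketch {
    static final AtomicInteger evaluations = new AtomicInteger();

    // Builds a lazy pipeline; like Spark's map/filter transformations,
    // nothing executes when the pipeline is defined.
    static Stream<Integer> pipeline() {
        return List.of(1, 2, 3, 4, 5).stream()
                .map(n -> { evaluations.incrementAndGet(); return n * n; })
                .filter(n -> n > 5);
    }

    public static void main(String[] args) {
        Stream<Integer> squaresOverFive = pipeline();
        // No element has been processed yet.
        System.out.println("work done before the action: " + evaluations.get()); // 0

        long count = squaresOverFive.count(); // terminal operation = Spark "action"
        System.out.println("count=" + count + ", work done after: " + evaluations.get()); // 5
    }
}
```

In Spark the payoff is larger: because the whole chain of transformations is known before any action runs, the engine can reorder and fuse steps, and long-running computations are only paid for when a result is actually demanded.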
Video Learning Path Overview
A Learning Path is a specially tailored course that brings together two or more different topics that lead you to achieve an end goal. Much thought goes into the selection of the assets for a Learning Path, and this is done through a complete understanding of the requirements to achieve a goal.
Today, organizations struggle to work with large datasets. In addition, big data must be processed and analyzed in real time to yield valuable insights quickly. This is where data streaming and Spark come in.
In this well-thought-out Learning Path, you will not only learn how to use Spark to analyze massive amounts of data for your organization, but also how to tune it for performance. Beginning with a step-by-step approach, you’ll get comfortable using Spark and will learn how to implement practical, proven techniques to improve particular aspects of programming and administration in Apache Spark. You’ll be able to perform tasks and get the best out of your data much faster.
Moving further and accelerating the pace a bit, you’ll learn some of the lesser-known techniques to squeeze the best out of Spark, and then you’ll learn to overcome several problems you might come across when working with Spark, without having to break a sweat. The simple, practical solutions provided will get you back in action in no time!
By the end of the course, you will be well-versed in using Spark in your day-to-day projects.
From blueprint architecture to complete code solution, this course covers every important aspect of architecting and developing a data streaming pipeline.
Test Spark jobs using unit, integration, and end-to-end testing techniques to make your data pipeline robust and bulletproof.
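One common unit-level technique behind this idea is to factor a job's transformation logic into pure functions that can be tested without starting a SparkSession or a cluster. The sketch below is a hedged, stdlib-only illustration of that pattern; `normalizeIds` is a hypothetical helper, not a function from the course:

```java
import java.util.List;
import java.util.stream.Collectors;

public class PipelineLogic {
    // Core transformation kept as a pure function so it can be unit-tested
    // in isolation; in a real job this logic would be applied inside a
    // Spark map/filter over a Dataset.
    static List<String> normalizeIds(List<String> rawIds) {
        return rawIds.stream()
                .map(String::trim)
                .filter(s -> !s.isEmpty())
                .map(String::toLowerCase)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // A unit test in miniature: known input, asserted expected output.
        List<String> out = normalizeIds(List.of(" AB1 ", "", "cd2"));
        System.out.println(out); // [ab1, cd2]
    }
}
```

Integration and end-to-end tests then exercise the same logic against a local SparkSession and real input sources, but keeping the core pure makes the fast unit layer possible.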
Solve several painful issues like slow-running jobs that affect the performance of your application.
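One frequent cause of slow-running jobs is recomputing the same intermediate result for every action, which Spark addresses with cache()/persist(). As a hedged stdlib analogy (plain Java standing in for Spark, with illustrative names), compare re-running a costly computation with materializing it once:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Collectors;

public class CachingSketch {
    static final AtomicInteger expensiveCalls = new AtomicInteger();

    // Stand-in for a costly per-record transformation.
    static int expensive(int n) {
        expensiveCalls.incrementAndGet();
        return n * n;
    }

    static List<Integer> compute(List<Integer> input) {
        return input.stream().map(CachingSketch::expensive).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Integer> input = List.of(1, 2, 3);

        // Without caching, every "action" re-runs the whole computation,
        // just as an uncached RDD is recomputed for each Spark action.
        compute(input);
        compute(input);
        System.out.println("calls without caching: " + expensiveCalls.get()); // 6

        // With caching, materialize once and reuse the result -- the idea
        // behind Spark's cache()/persist().
        expensiveCalls.set(0);
        List<Integer> cached = compute(input);
        int reusedOnce = cached.get(0);
        int reusedTwice = cached.get(0); // reuse: no recomputation
        System.out.println("calls with caching: " + expensiveCalls.get()); // 3
    }
}
```

In Spark the decision also interacts with memory management: cached data occupies executor memory, so caching everything can be as harmful as caching nothing.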
Anghel Leonard is currently a Java chief architect. He is a member of the Java EE Guardians with 20+ years’ experience. He has spent most of his career architecting distributed systems. He is also the author of several books, a speaker, and a big fan of working with data.
Tomasz Lelek is a software engineer, programming mostly in Java and Scala. He has been working with the Spark and ML APIs for the past 5 years, with production experience in processing petabytes of data. He is passionate about nearly everything associated with software development and believes that we should always consider different solutions and approaches before solving a problem. Recently, he has spoken at conferences in Poland (Confitura and JDD, Java Developers Day) and at the Krakow Scala User Group, and he has conducted a live coding session at the Geecon Conference. He is a co-founder of initlearn, an e-learning platform built with the Java language, and has also written articles about everything related to the Java world.
Who this course is for:
- Application developers, data scientists, analysts, statisticians, big data engineers, and anyone who works with large amounts of data on a day-to-day basis will feel perfectly comfortable with the topics presented. Prior experience with Spark is not required, but it is an added advantage.
Packt is an established, trusted, and innovative global technical learning publisher, founded in Birmingham, UK, with over eighteen years’ experience delivering rich premium content from ground-breaking authors and lecturers on a wide range of emerging and established technologies for professional development.
Packt’s purpose is to help technology professionals advance their knowledge and support the growth of new technologies by publishing vital, user-focused, knowledge-based content faster than any other tech publisher. With a growing library of over 9,000 titles in book, e-book, audio, and video learning formats, our multimedia content is valued as a vital learning tool and offers exceptional support for the development of technology knowledge.
We publish on topics that are at the very cutting edge of technology, helping IT professionals learn about the newest tools and frameworks in a way that suits them.