CCA175 Exam Prep Qs pt A (With Spark 2.4 Hadoop Cluster VM)
What you'll learn
- Students will get hands-on experience working in a Spark Hadoop environment
- Students will get to practice loading data from HDFS for use in Spark applications
- Students will get to practice writing the results back into HDFS using Spark
- Students will get to practice reading and writing files in a variety of file formats
- Students will get to practice performing standard extract, transform, load (ETL) processes on data using the Spark API
- Students will also get to practice working with Zeppelin notebooks
Requirements
- A basic knowledge of SQL queries and the Spark API, or a willingness to learn, in order to pass the certification exams.
- A PC or laptop with a minimum of 8 GB RAM and 20 GB of free space.
Prepare for the transform, stage and store section of the CCA Spark & Hadoop Developer certification and put yourself in a position to pass the CCA175 exam.
Students enrolling in this course can be 100% confident that, after working through the problems contained here, they will be in a great position to pass the transform, stage and store section of the CCA175 exam.
As the number of vacancies for big data, machine learning and data science roles continues to grow, so too will the demand for qualified individuals to fill those roles.
It’s often the case that to stand out from the crowd, it’s necessary to get certified.
This exam preparation series has been designed to help YOU pass the Cloudera CCA175 certification, a hands-on, practical exam whose primary focus is on using Apache Spark to solve big data problems.
After solving the problems contained here, you’ll have the skills and the confidence to handle any transform, stage & store questions that come your way in the exam.
(a) There are 30 problems in this part of the exam preparation series, all of which are directly related to the transform, stage & store component of the CCA175 exam syllabus.
(b) Fully worked out solutions to all the problems.
(c) Also included is the Verulam Blue virtual machine, an environment with a Spark Hadoop cluster already installed, so that you can practice working on the problems.
• The VM contains a Spark stack which allows you to read and write data to & from the Hadoop file system, as well as to store tables in the Hive metastore.
• All the datasets you need for the problems are already loaded onto HDFS, so you don’t have to do any extra work.
• The VM also has Apache Zeppelin installed with fully executed Zeppelin notebooks that contain solutions to the problems.
Who this course is for:
- This material is ideally suited for students looking to pass the CCA175 certification exam, or anyone who simply wants to practice working in a big data environment using Spark-SQL.
Verulam Blue is a UK based start-up founded by Matthew Barr, a data scientist.
Matthew has worked on a number of projects, ranging from data cleansing through to working on developing prediction models for clinical use.
Matthew is now focused on running Verulam Blue, a start-up which aims to train and help prepare individuals to pass big data-related certification exams.
Prior to transitioning to the world of data science, Matthew worked for nearly a decade in the financial services sector as an actuarial analyst in London.
Matthew also holds two master’s degrees from University College London, one in data science and the other in mathematics.