An increasing amount of data is produced every day, which has led to a growing demand for skilled professionals who can analyze it and make decisions. R is one of the most popular tools used by data analysts to perform analysis on real-world data.
This Learning Path covers the complete process of working with data. You will start with the most basic importing techniques, such as downloading compressed data from the Web. You will be introduced to how CRAN packages work and why you should use them.
Next, you will learn to create static plots. Then, you will understand how to plot spatial data on interactive web platforms such as Google Maps and OpenStreetMap.
You will learn advanced data analysis concepts such as cluster analysis, time-series analysis, association mining, PCA, handling missing data, sentiment analysis, spatial data analysis with R and QGIS, and advanced data visualization with R’s ggplot2 library.
Finally, you will apply the various topics you have learned to analyze real-world datasets from various industry sectors.
By the end of this Learning Path, you will be able to perform data analysis on real-world data.
For this course, we have combined the best works of these esteemed authors:
Fabio Veronesi obtained a Ph.D. in digital soil mapping from Cranfield University and then moved to ETH Zurich, where he has been working as a postdoc for the past three years. In his career, Dr. Veronesi has worked on several topics related to environmental research: digital soil mapping, cartography and shaded relief, renewable energy, and transmission line siting. During this time, Dr. Veronesi specialized in the application of spatial statistical techniques to environmental data.
Dr. Bharatendra Rai is Professor of Business Statistics and Operations Management in the Charlton College of Business at UMass Dartmouth. He teaches courses on topics such as Analyzing Big Data, Business Analytics and Data Mining, Twitter and Text Analytics, Applied Decision Techniques, Operations Management, and Data Science for Business.
Accessing and importing open access environmental data is a crucial skill for data scientists. This section teaches you how to download data from the Web, import it into R, and check it for consistency.
Oftentimes, datasets are provided for free, but on FTP sites, and practitioners need to be able to access them. R is perfectly capable of downloading and importing data from FTP sites.
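A minimal sketch of how this can look (the FTP URL and file layout are hypothetical):

```r
# Download a compressed file from an FTP site and import it
url <- "ftp://ftp.example.org/pub/data/stations.csv.gz"  # hypothetical URL
destfile <- tempfile(fileext = ".csv.gz")
download.file(url, destfile, mode = "wb")
stations <- read.csv(gzfile(destfile))  # read.csv reads through the gzip connection
head(stations)
```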
Not all text files can be opened easily with read.table. The fixed-width format is still popular but requires a bit more work in R.
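For instance, a minimal sketch with read.fwf, where the file name and column widths are hypothetical and would normally come from the dataset's format documentation:

```r
# Import a fixed-width file: each column is defined by its width in characters
dat <- read.fwf("stations.txt",
                widths = c(6, 2, 30, 8, 9),
                col.names = c("id", "state", "name", "lat", "lon"),
                strip.white = TRUE)
str(dat)
```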
Some data files are simply too difficult to be imported with simple functions. Luckily R provides the readLines function that allows importing of even the most difficult tables.
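A minimal sketch of this approach, assuming a hypothetical file whose data block starts after a "BEGIN DATA" marker line:

```r
# Read the raw lines, locate the start of the data, and rebuild a data frame
lines <- readLines("difficult_table.txt")
start <- grep("^BEGIN DATA", lines) + 1
records <- lines[start:length(lines)]
fields <- strsplit(trimws(records), "\\s+")  # split on runs of whitespace
dat <- as.data.frame(do.call(rbind, fields), stringsAsFactors = FALSE)
```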
Most open data is generated automatically and therefore may contain NA or other values that need to be removed. R has various functions to deal with this problem.
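A minimal sketch; the -9999 sentinel is an assumption, since the code used for missing values varies by provider:

```r
# Replace the provider's missing-value code with NA, then handle the NAs
dat$temp[dat$temp == -9999] <- NA
summary(dat$temp)             # reports how many NAs remain
mean(dat$temp, na.rm = TRUE)  # skip NAs in a computation
clean <- na.omit(dat)         # or drop incomplete rows entirely
```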
To follow the exercises, viewers will need to install several important packages. This video will explain how to do so and where to find information about them.
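A minimal sketch (the exact package list used in the course is longer; these three appear throughout this Learning Path):

```r
# Install once, then load in each session
install.packages(c("sp", "raster", "ggplot2"))
library(sp)
help(package = "sp")  # each package ships documentation and vignettes
```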
Vector data are very popular and widespread, and require some thought before importing. R has dedicated tools to import and work with these data.
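For example, a minimal sketch of importing a shapefile with rgdal (the folder and layer names are hypothetical):

```r
# Read a shapefile: dsn is the folder, layer the file name without extension
library(rgdal)
borders <- readOGR(dsn = "data", layer = "borders")
plot(borders)
```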
Oftentimes, spatial data is provided in tables and needs to be transformed before it can be used for analysis. This can be done simply with the sp package.
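A minimal sketch, with hypothetical lon/lat columns:

```r
# Promote a plain data frame to a SpatialPointsDataFrame
library(sp)
tab <- data.frame(lon = c(8.54, 7.45), lat = c(47.37, 46.95),
                  temp = c(14.2, 13.1))
coordinates(tab) <- ~ lon + lat  # tab is now a SpatialPointsDataFrame
class(tab)
```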
Geographical projections are very important and need to be handled carefully. R provides robust functions to do so successfully.
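A minimal sketch of assigning a projection to the points created above (WGS84 geographic coordinates, as an example):

```r
# Declare the CRS the coordinates are already expressed in
library(sp)
proj4string(tab) <- CRS("+proj=longlat +datum=WGS84")
```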
Many datasets have a temporal component and practitioners need to know how to deal with it. R provides functions to do that in a very easy way.
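A minimal sketch of handling the temporal component, assuming a hypothetical character column called timestamp:

```r
# Parse timestamps and subset by time
dat$date <- as.POSIXct(dat$timestamp, format = "%Y-%m-%d %H:%M", tz = "UTC")
jan <- dat[format(dat$date, "%m") == "01", ]  # keep only January records
```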
Raster data is fundamentally different from vector data, since its values refer to specific areas (cells) rather than single locations. This video will clearly explain this difference and teach users how to import such data in R.
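A minimal sketch (the file name is hypothetical; GeoTIFF is a common distribution format):

```r
# Import a single-layer raster
library(raster)
r <- raster("elevation.tif")
r        # printing shows extent, resolution, and CRS
plot(r)
```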
The NetCDF format is becoming very popular, since it allows storing 4D datasets. Accessing it requires some technical skills, and this video will teach viewers how to open and import NetCDF files.
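A minimal sketch, assuming a hypothetical file and variable name:

```r
# Read one NetCDF variable into a multi-layer object (needs the ncdf4 package)
library(raster)
temp <- brick("air_temperature.nc", varname = "tas")
nlayers(temp)  # one layer per time step
```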
Many raster datasets we download from the web are distributed in tiles, meaning a single raster for each subset of the area. To obtain a full raster for the study area we are interested in covering, we can create a mosaic.
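A minimal sketch with two hypothetical tiles; the fun argument decides how overlapping cells are combined:

```r
# Merge adjacent tiles into one raster
library(raster)
tile1 <- raster("tile_N47E008.tif")
tile2 <- raster("tile_N47E009.tif")
full <- mosaic(tile1, tile2, fun = mean)
```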
Mosaicking involves merging rasters based on location. Spatio-temporal datasets also include multiple rasters for the same location but different times. To merge these, we need to use the stacking function.
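A minimal sketch (the directory name is hypothetical):

```r
# Stack monthly rasters for the same area into one multi-layer object
library(raster)
files <- list.files("monthly", pattern = "\\.tif$", full.names = TRUE)
s <- stack(files)  # extents and resolutions must match
```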
Once we complete our analysis we often need to export our results and share them with colleagues. Popular formats are CSV and TXT files, which we learn how to export in this video.
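A minimal sketch (the results object is assumed):

```r
# Export a results table as CSV and as tab-separated text
write.csv(results, "results.csv", row.names = FALSE)
write.table(results, "results.txt", sep = "\t", row.names = FALSE, quote = FALSE)
```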
If we work with vector data and we want to share the same format with our co-workers, we need to learn how to export in vector formats. This will be covered here.
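A minimal sketch with rgdal (folder and layer names are examples):

```r
# Export a Spatial* object as a shapefile
library(rgdal)
writeOGR(tab, dsn = "output", layer = "stations", driver = "ESRI Shapefile")
```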
Nowadays WebGIS applications are extremely popular. However, to use our data for WebGIS, we first need to export them in the correct format. This video will show how to do that.
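A minimal sketch of one such export: KML, a format most WebGIS platforms accept (and which expects WGS84 coordinates):

```r
# Export vector data as KML for use in WebGIS applications
library(rgdal)
writeOGR(tab, dsn = "stations.kml", layer = "stations", driver = "KML")
```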
In the previous volume we explored the basic R functions and syntax to import various types of data. In this video we will put these functions together, and overcome some unexpected challenges, to import a full year of NOAA data.
Before we can start analyzing our data, we first need to properly understand what we are dealing with. The first step in this direction is to describe our data with simple statistical indexes.
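A minimal sketch (the column name is an assumption):

```r
# Simple statistical indexes for one variable
summary(dat$temp)                                # min, quartiles, mean, max
sd(dat$temp, na.rm = TRUE)                       # spread
quantile(dat$temp, c(0.05, 0.95), na.rm = TRUE)  # tails
```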
Numerical summaries are very useful, but hardly ideal for giving us a direct feel for the dataset in hand. Plots are much more informative, and being able to produce them is a crucial skill for data analysts.
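A minimal sketch with base graphics (column names assumed):

```r
# Quick exploratory plots
hist(dat$temp, main = "Temperature", xlab = "Value")
boxplot(dat$temp)
plot(dat$date, dat$temp, type = "l")  # uses the parsed date column
```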
For multivariate data we are often interested in assessing correlation between variables. This can be done in R very easily, and ggplot2 can also be used to produce more informative plots.
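A minimal sketch, assuming a second hypothetical variable:

```r
# A correlation coefficient, plus a ggplot2 scatter plot with a fitted line
cor(dat$temp, dat$humidity, use = "complete.obs")
library(ggplot2)
ggplot(dat, aes(temp, humidity)) + geom_point() + geom_smooth(method = "lm")
```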
Detecting outliers is another basic skill that every data analyst should master. R provides a lot of technical tools to help us find outliers.
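For example, a minimal sketch of the boxplot rule (values beyond 1.5 times the IQR from the quartiles):

```r
# Flag candidate outliers and inspect the corresponding rows
out <- boxplot.stats(dat$temp)$out
dat[dat$temp %in% out, ]
```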
This section will be dedicated entirely to manipulating vector data. However, viewers first need to familiarize themselves with some basic concepts, otherwise they may not be able to understand the rest of the section.
In volume 1 we learned how to set the projection of our spatial data. However, in many cases we have to change this projection to successfully complete our analysis, and this requires some specific knowledge.
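A minimal sketch of changing projection (the target CRS is an example; choose the one your analysis requires):

```r
# Reproject vector data to UTM zone 32N
library(sp); library(rgdal)
tab_utm <- spTransform(tab, CRS("+proj=utm +zone=32 +datum=WGS84"))
```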
In many cases we may be interested in understanding the relations between spatial objects. One such relation is intersection, where we first want to know whether two objects intersect, and then extract the part of one object that lies inside (or outside) the other.
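A minimal sketch with rgeos, assuming two hypothetical polygon objects:

```r
# Test and compute intersections
library(rgeos)
gIntersects(poly1, poly2)              # do the objects intersect at all?
common <- gIntersection(poly1, poly2)  # the shared geometry
outside <- gDifference(poly1, poly2)   # the part of poly1 outside poly2
```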
Other important GIS operations that users have to master involve creating buffers and calculating distances between objects.
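A minimal sketch; widths and distances are expressed in CRS units, so a projected CRS (e.g., metres) is assumed:

```r
# Buffers and pairwise distances with rgeos
library(rgeos)
buf <- gBuffer(tab_utm, width = 1000, byid = TRUE)  # 1 km around each point
gDistance(tab_utm, byid = TRUE)                     # distance matrix
```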
The last two GIS functions that anybody should master are used to merge different geometries and to overlay spatial objects.
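A minimal sketch (the polygon objects are assumptions):

```r
# Merge geometries and overlay points on polygons
library(rgeos); library(sp)
merged <- gUnion(poly1, poly2)  # dissolve two geometries into one
over(tab_utm, polys_utm)        # which polygon each point falls in
```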
Raster objects are imported in R as rectangular matrices. Users need to be aware of this to work properly with these data, otherwise issues may arise during the analysis.
In many cases open data are not distributed directly in raster formats and they need to be converted. This can be easily done with the right functions.
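A minimal sketch of one such conversion, from points to a raster (the cell size is an example):

```r
# Rasterize points: average the point values falling in each cell
library(raster)
template <- raster(ext = extent(tab_utm), resolution = 1000)
r_pts <- rasterize(tab_utm, template, field = "temp", fun = mean)
```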
Working with raster data often means extracting data for particular locations for further analysis, or cropping the data to reduce their size. These are essential skills for any data analyst to master.
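A minimal sketch (the cropping extent is an example):

```r
# Extract values at point locations and crop to a sub-area
library(raster)
vals <- extract(r, tab)
small <- crop(r, extent(8, 9, 47, 48))
```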
Sometimes we may need to filter out some values of our raster. It may seem tricky at first, but it only requires a few specific skills.
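A minimal sketch (the threshold is an example):

```r
# Mask out unwanted cells; NA cells are skipped in plots and statistics
r[r < 0] <- NA
```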
Creating new rasters by calculating their values is extremely important for spatial data analysis. Doing so is simple, but it can be difficult to understand at first.
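A minimal sketch of raster algebra (the layer names are assumptions):

```r
# Arithmetic is applied cell by cell
library(raster)
celsius <- kelvin - 273.15
anomaly <- overlay(celsius, climatology, fun = function(x, y) x - y)
```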
Syntactically, plotting spatial data in R is no different from plotting other types of data. Therefore, users need to know the basics of plotting before they can start making maps.
Creating a multilayer plot can be difficult because we need to take care of several different aspects at once. However, the technique is easy to learn.
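A minimal sketch of the layering idea (the objects are assumptions):

```r
# Draw the raster first, then add vector layers on top
plot(r)
plot(polys, add = TRUE, border = "grey40")
points(tab, pch = 16, col = "red")
```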
When plotting spatial data we are often interested in using colors to show the values of some variables. This can be done manually, but producing the right color scale may be difficult. This issue can be solved by employing automatic methods.
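A minimal sketch of an automatic color scale:

```r
# Build a 10-step color ramp and map binned values onto it
pal <- colorRampPalette(c("blue", "yellow", "red"))(10)
bins <- cut(tab$temp, breaks = 10)
plot(tab, col = pal[bins], pch = 16)
```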
Creating multivariate plots not only means adding layers, but also using legends so that the viewer understands what the plot is showing. Creating legends in R is tricky because it requires a lot of tweaking, which will be explained here.
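A minimal sketch of a legend matching the color scale sketched above:

```r
# Legends are drawn separately and need manual placement and sizing
legend("bottomright", legend = levels(bins), col = pal, pch = 16,
       cex = 0.7, title = "Temperature")
```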
Temporal data need to be treated with specific procedures to highlight this additional component. This may be done in different ways depending on the scope of the analysis and R provides the right platform for this.
Being able to plot spatial data on web maps is certainly a crucial skill to have, but it can be difficult since it requires knowledge of different technologies. R makes this process very easy, with dedicated functions that make plotting on web GIS services a breeze.
Plotting data with the function plotGoogleMaps is not as easy as using the function plot. With a simple step-by-step guide we can achieve good command of the function, so that users can plot whatever data they choose.
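A minimal sketch (the data object and file name are assumptions):

```r
# Write an HTML page that shows the points on a Google Maps base layer
library(plotGoogleMaps)
map <- plotGoogleMaps(tab, zcol = "temp", filename = "map.html")
```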
An interactive map with just one layer is hardly useful for our purposes. Many times we are faced with the challenge of plotting several datasets at once. This requires some additional work and understanding, but it is definitely not hard in R.
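A minimal sketch of chaining layers (the second dataset is an assumption):

```r
# The first call sets add = TRUE; the second receives it as previousMap
m1 <- plotGoogleMaps(tab, zcol = "temp", add = TRUE)
m2 <- plotGoogleMaps(tab2, zcol = "rain", previousMap = m1,
                     filename = "multilayer.html")
```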
Plotting raster data on Google Maps can be tricky. The function plotGoogleMaps does not handle rasters very well, and if not done correctly, the visualization will fail. This video will show users how to plot rasters successfully.
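A minimal sketch of one workaround, converting the raster to a class plotGoogleMaps understands:

```r
# plotGoogleMaps works with Spatial* classes, so coerce the raster first
sgdf <- as(r, "SpatialGridDataFrame")
plotGoogleMaps(sgdf, filename = "raster_map.html")
```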
Plotting on Google Maps is easy, but Google Maps is a commercial product, so if we want to use it on a commercial website, we would need to pay. OpenStreetMap is free to use, so knowing how to use it is certainly an advantage.
Using open data for our analysis requires a deep knowledge of the data provider and the actual data we are using. Without this knowledge we may end up with erroneous results.
Downloading data from the World Bank can be difficult since it requires users to know the acronyms used to refer to these data. However, with some help this process becomes very easy.
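A minimal sketch using the WDI package, one common way to query the World Bank API; EN.ATM.CO2E.PC is the indicator code (one such "acronym") for CO2 emissions per capita:

```r
# Look up an indicator code by keyword, then download the series
library(WDI)
WDIsearch("co2 emissions")
co2 <- WDI(indicator = "EN.ATM.CO2E.PC", start = 2000, end = 2010)
```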
To create a spatial map of the World Bank data we just downloaded, we need to transform them into spatial data. However, the dataset contains no coordinates or other information that may help us do that. The solution is to use geocoding information from another dataset for this purpose.
Using the World Bank data just to plot a static spatial map is very limiting. There is plenty more that researchers can do with these data, and this video provides some guidance on these additional avenues of research.
Executing a point pattern analysis is technically easy in R. However, it is extremely important that practitioners understand the theory behind a point pattern analysis to ensure the correctness of the results. This video illustrates this theory.
In many cases practitioners start their analysis by applying complex statistics without even looking at their data. This is a problem that may affect the correctness of their results. This video will teach the correct order of steps for starting a point pattern analysis.
Calculating the intensity and density of a point pattern can be done in many ways. Finding the best one for the dataset in hand can be challenging. The package spatstat and the literature provide some tips to do it correctly.
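A minimal sketch with spatstat (the coordinates and window are assumptions):

```r
# A kernel density estimate with a cross-validated bandwidth
library(spatstat)
pp <- ppp(x = xs, y = ys, window = owin(c(0, 1), c(0, 1)))
plot(density(pp, sigma = bw.diggle(pp)))
```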
By looking at the plots we created in the previous videos, we started to understand the spatial distribution of our data. However, we now need to prove quantitatively that our ideas are correct.
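For example, a minimal sketch of a quadrat test against complete spatial randomness:

```r
# A small p-value suggests the pattern is not random
library(spatstat)
quadrat.test(pp, nx = 4, ny = 4)
```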
In many cases we may want to model a point pattern to try and explain its intensity in a way that would allow us to predict it outside our study area. This requires a general understanding of the modelling process, which will be explained here.
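A minimal sketch with spatstat's ppm, modelling intensity as log-linear in the coordinates:

```r
# Fit and inspect a simple point process model
fit <- ppm(pp ~ x + y)
plot(predict(fit))  # the fitted intensity surface
```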
Cluster analysis is commonly used in many fields. The problem is that in order to use it correctly we need to understand the clustering process, which is what this video is about.
As in every data analysis, data preparation plays a crucial role in guaranteeing success. This video will prepare the data to be used for clustering.
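A minimal sketch of typical preparation steps:

```r
# Keep numeric columns, drop incomplete rows, and standardise the scales
num <- na.omit(dat[sapply(dat, is.numeric)])
scaled <- scale(num)  # so no variable dominates the distance computations
```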
Clustering algorithms are extremely simple to apply. The challenge is interpreting their results and trying to understand what the algorithm is telling us in terms of insights into our data.
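A minimal sketch of k-means and a first look at its output:

```r
# k-means starts from random centroids, so set a seed for reproducibility
set.seed(42)
km <- kmeans(scaled, centers = 3)
km$centers         # one profile per cluster
table(km$cluster)  # cluster sizes
```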
When applying the k-means algorithm we need to specify the number of clusters into which we want our dataset to be divided. However, since it is often used as an exploratory tool, we may not know the optimal number of clusters.
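A minimal sketch of the common "elbow" heuristic for choosing k:

```r
# Total within-cluster sum of squares for k = 1..10; look for the bend
wss <- sapply(1:10, function(k)
  kmeans(scaled, centers = k, nstart = 10)$tot.withinss)
plot(1:10, wss, type = "b", xlab = "k", ylab = "Total within-cluster SS")
```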
Hierarchical clustering allows us to see how all of our points are related to each other with a bottom-up approach. However, determining the optimal number of clusters is not so trivial with this method.
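A minimal sketch; the dendrogram shows the bottom-up merges, and cutree() extracts a chosen number of clusters:

```r
# Hierarchical clustering on the standardised data
hc <- hclust(dist(scaled), method = "ward.D2")
plot(hc)
groups <- cutree(hc, k = 3)
```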
Determining the best clustering algorithm for our data is probably the most challenging part of such an analysis. This video will show the sort of reasoning users will need to make that decision.
Time series analysis is another important technique to master. However, it requires some specific knowledge to understand the process and what this technique can actually do.
Time-series can be imported and analyzed using two formats: ts and xts. Both have their pros and cons and users need to be able to master both if they want to perform the best time-series analysis.
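A minimal sketch of both classes (the values and dates objects are assumptions):

```r
# ts: regular series defined by start and frequency (here monthly)
x <- ts(values, start = c(2010, 1), frequency = 12)
# xts: values indexed by explicit timestamps, so irregular series work too
library(xts)
y <- xts(values, order.by = as.Date(dates))
```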
Dealing with time-series sometimes means extracting data according to their location along the timeline. This can be done in R, but requires some explanation to do it correctly.
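A minimal sketch of time-based extraction for both classes:

```r
# ts objects use window(); xts objects accept date-range strings
window(x, start = c(2012, 1), end = c(2012, 12))
y["2012"]             # everything in 2012
y["2012-06/2012-08"]  # June through August 2012
```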
Another important aspect of time-series analysis is decomposition and correlation. This allows us to draw important conclusions about our data. Technically this is not difficult to do, but it requires careful consideration if we want to do it right.
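A minimal sketch for a regular ts object:

```r
# Split the series into components and inspect autocorrelation
plot(decompose(x))  # trend, seasonal, and remainder
acf(x)              # correlation of the series with its own lags
```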
The final step of time-series analysis is forecasting, where we try to simulate future events. This is extremely useful, but requires adequate knowledge of the available methods and their pros and cons.
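A minimal sketch with the forecast package, one common choice:

```r
# auto.arima selects a model; forecast() adds prediction intervals
library(forecast)
fit <- auto.arima(x)
plot(forecast(fit, h = 12))  # 12 steps ahead
```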
There are numerous geostatistical interpolation techniques that can be used to map environmental data. Kriging is probably the most famous, but it is not the only one available. It is important to know each technique to understand which to use when.
The first challenge of any geostatistical analysis is data preparation. We cannot just download data; we need to clean them and prepare them for analysis.
Simple interpolation is easy to use and easy to interpret, therefore it is still commonly used. The package gstat allows us to use inverse distance, but to do so we need to follow some simple but precise rules.
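A minimal sketch; grid is an assumed SpatialPixels object covering the study area:

```r
# Inverse distance weighted interpolation; idp is the distance exponent
library(gstat)
idw_out <- idw(temp ~ 1, tab_utm, newdata = grid, idp = 2)
spplot(idw_out["var1.pred"])
```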
Before we can interpolate our data using kriging, we need to take care of some important steps. For example, we need to check if our data has a trend and then test for normality, because kriging can only be applied to normally distributed data.
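A minimal sketch of these checks:

```r
# Normality and a crude trend check
hist(tab_utm$temp)                             # visual check for skewness
shapiro.test(tab_utm$temp)                     # formal normality test
plot(coordinates(tab_utm)[, 1], tab_utm$temp)  # does the variable drift east-west?
```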
The variogram is the keystone of kriging interpolation, and users need to know how to compute it and fit a model to it. These steps require careful consideration, which we are going to explore here.
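A minimal sketch; the vgm() parameters are starting values for the fit, not final answers:

```r
# Empirical variogram plus a fitted spherical model
library(gstat)
v <- variogram(temp ~ 1, tab_utm)
v_fit <- fit.variogram(v, vgm(psill = 1, model = "Sph",
                              range = 50000, nugget = 0.1))
plot(v, v_fit)
```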
In this video, all concepts learned previously will be merged to perform a kriging interpolation. The problem in this case is making sure that everything works correctly and the process is smooth.
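A minimal sketch of that final step, using the variogram model fitted above:

```r
# Ordinary kriging; "var1.var" holds the kriging variance
kr <- krige(temp ~ 1, tab_utm, newdata = grid, model = v_fit)
spplot(kr["var1.pred"])
```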
There are numerous statistical learning algorithms that can be used to map environmental data. It is important to know each technique to understand which to use when.
Once again, getting to know our data is the most important thing to do when we start an analysis. This can be done by looking at the data provider and using some exploratory techniques.
Many users start a data analysis by testing complex methods. This is a problem, because many times a simpler method can help us understand our data better. This video shows how to fit these simple models.
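A minimal sketch (the predictor names are assumptions):

```r
# A linear model as a baseline for the more complex methods below
fit_lm <- lm(temp ~ elevation + lat, data = dat)
summary(fit_lm)
```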
Regression trees are extremely powerful algorithms, but they are sometimes treated as black boxes whose output only expert users can understand. This can change simply by understanding how these algorithms work.
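A minimal sketch with rpart; plotting the tree exposes the splits, which is what makes the "black box" readable (variable names assumed):

```r
# Fit and draw a regression tree
library(rpart)
tree <- rpart(temp ~ elevation + lat + lon, data = dat)
plot(tree); text(tree, cex = 0.8)
```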
Support vector machines are another important algorithm that is sometimes difficult to train. In this video we will look at the methods in the package caret to do so using additional cross-validation.
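A minimal sketch of such a training run (the formula and data are assumptions):

```r
# Tune an RBF-kernel SVM with 10-fold cross-validation
library(caret)
ctrl <- trainControl(method = "cv", number = 10)
svm_fit <- train(temp ~ ., data = dat, method = "svmRadial",
                 trControl = ctrl, tuneLength = 5)
svm_fit  # cross-validated performance over the tuning grid
```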
The aim of this video is to introduce R/RStudio to those using it for the first time.
The aim of this video is to introduce commonly used visualization tools in R.
The aim of this video is to introduce the interactive visualization package “plotly” in R.
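A minimal sketch using the built-in mtcars data:

```r
# An interactive scatter plot; hovering shows the values
library(plotly)
plot_ly(mtcars, x = ~wt, y = ~mpg, type = "scatter", mode = "markers")
```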
The aim of this video is to introduce the “googleVis” package in R.
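A minimal sketch with toy data; plot() opens the chart in the browser:

```r
# A googleVis column chart rendered as an HTML widget
library(googleVis)
df <- data.frame(brand = c("A", "B", "C"), sales = c(10, 14, 8))
plot(gvisColumnChart(df, xvar = "brand", yvar = "sales"))
```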
The aim of this video is to introduce visualization with ggplot2, d3heatmap, and googleVis packages.
The aim of this video is to introduce the idea of regression, logistic regression, and data partitioning.
The aim of this video is to introduce data partitioning.
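A minimal sketch of an 80/20 split with caret (the data and outcome column are assumptions):

```r
# Stratified train/test partition
library(caret)
set.seed(123)
idx <- createDataPartition(dat$y, p = 0.8, list = FALSE)
train <- dat[idx, ]; test <- dat[-idx, ]
```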
The aim of this video is to present the steps for multiple linear regression.
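A minimal sketch of those steps (variable names are assumptions):

```r
# Fit, inspect, and predict
fit <- lm(y ~ x1 + x2 + x3, data = train)
summary(fit)                # coefficients and fit statistics
pred <- predict(fit, test)  # out-of-sample predictions
```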
The aim of this video is to introduce multicollinearity issues with regression models.
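A minimal sketch of one common check, variance inflation factors; values above roughly 10 are a frequent warning sign:

```r
# VIFs for the regression fitted above
library(car)
vif(fit)
```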
The aim of this video is to introduce logistic regression using R.
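A minimal sketch; churn is an assumed binary (0/1) outcome:

```r
# Logistic regression via glm
logit <- glm(churn ~ x1 + x2, data = train, family = binomial)
summary(logit)
```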
The aim of this video is to provide a logistic model interpretation.
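A minimal sketch of the usual interpretation step: exponentiated coefficients are odds ratios, the multiplicative change in odds per unit of a predictor:

```r
# Odds ratios with confidence intervals
exp(coef(logit))
exp(confint(logit))
```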
The aim of this video is to show the calculation of the confusion matrix and the misclassification error.
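A minimal sketch (the 0.5 cutoff is the conventional default):

```r
# Classify, cross-tabulate, and compute the error rate
p <- predict(logit, test, type = "response")
pred_class <- ifelse(p > 0.5, 1, 0)
cm <- table(Predicted = pred_class, Actual = test$churn)
cm
1 - sum(diag(cm)) / sum(cm)  # misclassification error
```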
The aim of this video is to show how to create ROC curves in R.
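A minimal sketch with the ROCR package, one common option:

```r
# True positive rate against false positive rate across all cutoffs
library(ROCR)
perf <- performance(prediction(p, test$churn), "tpr", "fpr")
plot(perf); abline(0, 1, lty = 2)  # the diagonal is random guessing
```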
The aim of this video is to provide an overall view of prediction and model assessment.
The aim of this video is to introduce multinomial logistic regression using R.
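A minimal sketch with nnet::multinom; category is an assumed factor with three or more levels:

```r
# Multinomial logistic regression
library(nnet)
mn <- multinom(category ~ x1 + x2, data = train)
summary(mn)
```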
The aim of this video is to provide the interpretation to the multinomial logistic model.
The aim of this video is to show the calculation of the confusion matrix and the misclassification error.
The aim of this video is to provide an overall view of prediction and model assessment.
The aim of this video is to introduce ordinal logistic regression using R.
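A minimal sketch with MASS::polr; the response must be an ordered factor (rating is assumed):

```r
# Ordinal logistic regression
library(MASS)
train$rating <- factor(train$rating, ordered = TRUE)
ord <- polr(rating ~ x1 + x2, data = train, Hess = TRUE)
summary(ord)
```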
The aim of this video is to provide ordinal logistic model interpretation.
The aim of this video is to show the calculation of the confusion matrix and the misclassification error.
The aim of this video is to provide an overall view of the prediction and model assessment.
Packt has been committed to developer learning since 2004. A lot has changed in software since then - but Packt has remained responsive to these changes, continuing to look forward at the trends and tools defining the way we work and live. And how to put them to work.
With an extensive library of content - more than 4000 books and video courses - Packt's mission is to help developers stay relevant in a rapidly changing world. From new web frameworks and programming languages, to cutting-edge data analytics and DevOps, Packt takes software professionals in every field to what's important to them now.
From skills that will help you develop and future-proof your career to immediate solutions to everyday tech challenges, Packt is a go-to resource for becoming a better, smarter developer.
Packt Udemy courses continue this tradition, bringing you comprehensive yet concise video courses straight from the experts.