R is one of the most comprehensive statistical tools for managing and manipulating data. With the ever-increasing volume of data, there is very high demand for professionals with the skills to analyze it. If you're looking forward to becoming an expert data analyst, then go for this Learning Path.
Packt’s Video Learning Paths are a series of individual video products put together in a logical and stepwise manner such that each video builds on the skills learned in the video before it.
The highlights of this Learning Path are:
Let’s take a quick look at your learning journey! This Learning Path begins by familiarizing you with the programming and statistics aspects of R. You will learn how CRAN works and why you should use it, and acquire the ability to conduct data analysis in practical contexts with R, using core language packages and tools. You will then generate various plots using R's basic plotting techniques, learning how to make plots, charts, and maps in a step-by-step manner, and utilizing R packages to add context and meaning to your data.
Moving ahead, the Learning Path will gradually take you through creating interactive maps using the googleVis package. Finally, you will generate choropleth maps, contour maps, bubble plots, and pie charts.
By the end of this Learning Path, you will be equipped with essential data analysis and visualization techniques and will have built a strong foundation for moving into data science.
About the Author:
We have combined the best works of the following esteemed authors to ensure that your learning journey is smooth:
Dr. Samik Sen is a theoretical physicist who loves thinking about hard problems. After his Ph.D., in which he developed computational methods to solve problems for which no solutions existed, he began thinking about how to tackle math problems while lecturing. He runs a YouTube channel on data science, which also provides valuable engagement with people around the world who look at problems from a different perspective.
Fabio Veronesi obtained a Ph.D. in digital soil mapping from Cranfield University and then moved to ETH Zurich, where he has been working for the past three years as a postdoc. In his career, Dr. Veronesi has worked on several topics related to environmental research: digital soil mapping, cartography and shaded relief, renewable energy, and transmission line siting. During this time, he specialized in the application of spatial statistical techniques to environmental data.
Atmajit Singh Gohil works as a senior consultant at a consultancy firm in New York City. After graduating, he worked in the financial industry as a Fixed Income Analyst. He writes about data manipulation, data exploration, visualization, and basic R plotting functions on his blog. He has a master's degree in financial economics from the State University of New York (SUNY), Buffalo. He also graduated with a Master of Arts degree in economics from the University of Pune, India.
The aim of this video is to introduce the section and give an overview of the R language.
We need to have the core programs before we can begin, and in this video, we show where to get them.
In this video, we look at where to begin, so that we can get started.
In this video, you will learn how RStudio's packages avoid these problems and how we'll work with them.
In this video, we see more familiar things in R.
In this video, we are now ready to write programs.
In this video, we will look at R data types that may be new to you.
In this video, we will introduce some key commands to study data.
In this video, we will introduce various commands to help us pick out the elements we are interested in.
In this video, we will investigate the Titanic dataset to see what it says.
In this video, we will add value by processing our data.
In this video, we will download football results from a web page.
In this video, we will use R to do some statistics.
In this video, we will work with distributions using R.
In this video, we will see some of R's graphical power.
In this video, we will use the plotting package, ggplot2.
In this video, we will see another plotting technique known as Facets.
Accessing and importing open access environmental data is a crucial skill for data scientists. This section teaches you how to download data from the Web, import it in R and check it for consistency.
Oftentimes, datasets are provided for free but on FTP sites, and practitioners need to be able to access them. R is perfectly capable of downloading and importing data from FTP sites.
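As a minimal sketch, downloading and importing such a file might look like this. The FTP URL here is hypothetical; base R's `download.file` handles `ftp://` addresses out of the box:

```r
# Download a CSV file from an FTP server (hypothetical URL) and import it
url  <- "ftp://ftp.example.org/data/stations.csv"
dest <- tempfile(fileext = ".csv")
download.file(url, destfile = dest)  # base R supports ftp:// URLs
stations <- read.csv(dest)
str(stations)
```

The same pattern works for HTTP URLs; only the address changes.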
Not all text files can be opened easily with read.table. The fixed-width format is still popular but requires a bit more work in R.
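A minimal sketch of reading a fixed-width file with base R's `read.fwf`, using toy data written to a temporary file; the `widths` argument gives the number of characters in each column:

```r
# A small fixed-width file: columns occupy fixed character positions
lines <- c("2020Berlin  12.5",
           "2021Madrid  15.1")
tmp <- tempfile()
writeLines(lines, tmp)

# widths = characters per column: 4 for year, 8 for city, 4 for temp
dat <- read.fwf(tmp, widths = c(4, 8, 4),
                col.names = c("year", "city", "temp"))
dat
```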
Some data files are simply too difficult to be imported with simple functions. Luckily R provides the readLines function that allows importing of even the most difficult tables.
Most open data is generated automatically and therefore may contain NA or other values that need to be removed. R has various functions to deal with this problem.
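For instance, with base R on toy data (the `-9999` sentinel is a common convention in automatically generated datasets):

```r
df <- data.frame(temp = c(12.5, NA, 9.8, -9999),
                 wind = c(3.1, 4.0, NA, 2.2))
df$temp[which(df$temp == -9999)] <- NA  # recode sentinel values as NA
clean <- df[complete.cases(df), ]       # keep only rows with no NA
# na.omit(df) is an equivalent shortcut
clean
```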
To follow the exercises in the book, viewers will need to install several important packages. This video will explain how to do so and where to find information about them.
Vector data are very popular and widespread and require some thought before importing. R has dedicated tools to import and work with these data.
Oftentimes, spatial data is provided in tables and needs to be transformed before it can be used for analysis. This can be done simply with the sp package.
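A minimal sketch with the sp package (assuming it is installed); the coordinates and values here are made up:

```r
library(sp)

# A plain table with longitude/latitude columns and one variable
df <- data.frame(lon  = c(8.54, 7.45),
                 lat  = c(47.37, 46.95),
                 temp = c(14.2, 13.1))

# Promote the data.frame to a spatial object by declaring its coordinates
coordinates(df) <- ~ lon + lat
class(df)  # now a SpatialPointsDataFrame
```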
Geographical projections are very important and need to be handled carefully. R provides robust functions to do so successfully.
Many datasets have a temporal component and practitioners need to know how to deal with it. R provides functions to do that in a very easy way.
Raster data is fundamentally different from vector data, since its values refer to specific areas (cells), not single locations. This video will clearly explain this difference and teach users how to import such data into R.
The NetCDF format is becoming very popular, since it allows storing 4D datasets. Accessing it requires some technical skill, and this video will teach viewers to open and import NetCDF files.
Many raster datasets we download from the web are distributed in tiles, meaning a single raster for each subset of the area. To obtain a full raster for the study area we are interested to cover we can create a mosaic.
Mosaicking involves merging rasters based on location. Spatio-temporal datasets also include multiple rasters for the same location but different times. To merge these, we need to use the stacking function.
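With the raster package (assuming it is installed), stacking might look like this; the toy rasters stand in for two time steps over the same area:

```r
library(raster)

# Two rasters with the same extent and resolution, e.g. two months
r1 <- raster(matrix(runif(100), 10, 10))
r2 <- raster(matrix(runif(100), 10, 10))

# Combine them into one multi-layer object
s <- stack(r1, r2)
nlayers(s)  # 2
```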
Once we complete our analysis, we often need to export our results and share them with colleagues. Popular formats are CSV and TXT files, which we learn how to export in this video.
If we work with vector data and we want to share the same format with our co-workers, we need to learn how to export in vector formats. This will be covered here.
Nowadays WebGIS applications are extremely popular. However, to use our data for WebGIS, we first need to export them in the correct format. This video will show how to do that.
In the previous volume we explored the basic R functions and syntax to import various types of data. In this video we will put these functions together, and overcome some unexpected challenges, to import a full year of NOAA data.
Before we can start analyzing our data, we first need to properly understand what we are dealing with. The first step in this direction is to describe our data with simple statistical indexes.
Numerical summaries are very useful but certainly not ideal for giving us a direct feel for the dataset at hand. Plots are much more informative, and being able to produce them is a crucial skill for data analysts.
For multivariate data we are often interested in assessing correlation between variables. This can be done in R very easily, and ggplot2 can also be used to produce more informative plots.
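A brief sketch using base `cor` and ggplot2 (assuming ggplot2 is installed) on the built-in `iris` dataset:

```r
library(ggplot2)

# Pearson correlation between two variables
cor(iris$Sepal.Length, iris$Petal.Length)

# A more informative plot: scatter, colored by group, with linear fits
p <- ggplot(iris, aes(Sepal.Length, Petal.Length, colour = Species)) +
  geom_point() +
  geom_smooth(method = "lm", se = FALSE)
p
```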
Detecting outliers is another basic skill that every data analyst should master. R provides many technical tools to help us find outliers.
This section is dedicated entirely to manipulating vector data. However, viewers first need to familiarize themselves with some basic concepts, otherwise they may not be able to understand the rest of the section.
In volume 1 we learned how to set the projection of our spatial data. However, in many cases we have to change this projection to successfully complete our analysis, and this requires some specific knowledge.
In many cases we may be interested in understanding the relations between spatial objects. One such relation is the intersection, where we first want to know how two objects intersect, and then extract only the part of one object that lies inside or outside the other.
Other important GIS operations that users have to master involve creating buffers and calculating distances between objects.
The last two GIS functions that anybody should master are used to merge different geometries and to overlay spatial objects.
Raster objects are imported into R as rectangular matrices. Users need to be aware of this to work properly with these data, otherwise issues may arise during the analysis.
In many cases open data are not distributed directly in raster formats and they need to be converted. This can be easily done with the right functions.
Working with raster data often means extracting data for particular locations for further analysis, or cropping the data to reduce their size. These are essential skills for any data analyst to master.
Sometimes we may need to filter out some values of our raster. This may seem tricky, but only because it requires a few specific skills.
Creating new rasters by calculating their values is extremely important for spatial data analysis. Doing so is simple but can be difficult to understand at first.
Syntactically plotting spatial data in R is no different than plotting other types of data. Therefore, users need to know the basics of plotting before they can start making maps.
Creating multilayer plots can be difficult because we need to take care of several different aspects at once. However, it is easy to learn.
When plotting spatial data we are often interested in using colors to show the values of some variables. This can be done manually, but producing the right color scale may be difficult. This issue can be solved by employing automatic methods.
Creating multivariate plots not only means adding layers, but also using legends so that the viewer understands what the plot is showing. Creating legends in R is tricky because it requires a lot of tweaking, which will be explained here.
Temporal data need to be treated with specific procedures to highlight this additional component. This may be done in different ways depending on the scope of the analysis and R provides the right platform for this.
Being able to plot spatial data on web maps is certainly a crucial skill to have, but it can be difficult since it requires knowledge of different technologies. R makes this process very easy, with dedicated functions that make plotting on web GIS services a breeze.
Plotting data with the plotGoogleMaps function is not as easy as using the plot function. With a simple step-by-step guide we can achieve good command of the function, so that users can plot whatever data they choose.
An interactive map with just one layer is hardly useful for our purposes. Many times we are faced with the challenge of plotting several datasets at once. This requires some additional work and understanding, but it is definitely not hard in R.
Plotting raster data on Google maps can be tricky. The function plotGoogleMaps does not handle rasters very well and if not done correctly the visualization will fail. This video will show users how to plot rasters successfully.
Plotting on Google Maps is easy, but Google Maps is a commercial product, so if we want to use it on a commercial website we would need to pay. OpenStreetMap is free to use, so knowing how to use it is certainly an advantage.
Using open data for our analysis requires a deep knowledge of the data provider and the actual data we are using. Without this knowledge we may end up with erroneous results.
Downloading data from the World Bank can be difficult since it requires users to know the acronyms used to refer to these data. However, with some help this process becomes very easy.
To create a spatial map of the World Bank data we just downloaded, we need to transform them into spatial data. However, the dataset contains no coordinates or other information that may help us do that. The solution is to use geocoding information from another dataset for this purpose.
Using the World Bank data just to plot a static spatial map is very limiting. There are many other uses researchers can make of these data, and this video provides some guidance into these additional avenues of research.
Executing a point pattern analysis is technically easy in R. However, it is extremely important that practitioners understand the theory behind a point pattern analysis to ensure the correctness of the results. This video illustrates this theory.
In many cases practitioners start their analysis by applying complex statistics without even looking at their data. This is a problem that may affect the correctness of their results. This video will teach the correct order in which to start a point pattern analysis.
Calculating the intensity and density of a point pattern can be done in many ways, and finding the best one for the dataset at hand can be challenging. The spatstat package and the literature provide some tips to do it correctly.
By looking at the plot we created in the previous videos, we started understanding the spatial distribution of our data. However, we now need to prove quantitatively that our ideas are correct.
In many cases we may want to model a point pattern to try and explain its location intensity in a way that would allow us to predict it outside our study area. This requires a general understanding of the modelling process, which will be explained here.
Cluster analysis is commonly used in many fields. The problem is that in order to use it correctly we need to understand the clustering process, which is what this video is about.
As in every data analysis the data preparation plays a crucial role in guaranteeing its success. This video will prepare the data to be used for clustering.
Clustering algorithms are extremely simple to apply. The challenge is to interpret their results and understand what the algorithm is telling us in terms of insights into our data.
When applying the k-means algorithm, we need to specify the number of clusters into which we want our dataset to be divided. However, since k-means is often used as an exploratory test, we may not know the optimal number of clusters.
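One common exploratory device is the "elbow" plot of the total within-cluster sum of squares against the number of clusters; a base-R sketch on toy data:

```r
set.seed(1)
# Two well-separated groups of 50 points each
x <- rbind(matrix(rnorm(100, mean = 0), ncol = 2),
           matrix(rnorm(100, mean = 4), ncol = 2))

# Run k-means for k = 1..8 and record total within-cluster sum of squares
wss <- sapply(1:8, function(k)
  kmeans(x, centers = k, nstart = 10)$tot.withinss)

# Look for the "elbow" where adding clusters stops helping much
plot(1:8, wss, type = "b",
     xlab = "Number of clusters k",
     ylab = "Total within-cluster SS")
```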
Hierarchical clustering allows us to see how all of our points are related to each other with a bottom-up approach. However, determining the optimal number of clusters is not so trivial with this method.
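A minimal base-R sketch of hierarchical clustering on toy data, including cutting the dendrogram at a chosen number of clusters:

```r
set.seed(1)
x <- matrix(rnorm(40), ncol = 2)  # 20 points in 2D

hc <- hclust(dist(x))         # bottom-up; complete linkage by default
plot(hc)                      # draw the dendrogram
groups <- cutree(hc, k = 3)   # cut the tree into 3 clusters
table(groups)
```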
Determining the best clustering algorithm for our data is probably the most challenging part of such an analysis. This video will show the sort of reasoning users will need to make that decision.
Time series analysis is another important technique to master. However, it requires some specific knowledge to understand the process and what this technique can actually do.
Time series can be imported and analyzed using two formats: ts and xts. Both have their pros and cons, and users need to master both if they want to perform the best time-series analysis.
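A small sketch of the base `ts` format; the `xts` lines are commented out and assume that package is installed:

```r
# Monthly series starting January 2015 (toy data)
x <- ts(rnorm(36), start = c(2015, 1), frequency = 12)

# Extract one calendar year using the series' own time line
window(x, start = c(2016, 1), end = c(2016, 12))

# The xts package instead indexes observations by actual dates:
# library(xts)
# y <- xts(rnorm(3), order.by = as.Date("2015-01-01") + 0:2)
```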
Dealing with time series sometimes means extracting data according to their location along the timeline. This can be done in R but requires some explanation to do it correctly.
Another important aspect of time-series analysis is decomposition and correlation. This allows us to draw important conclusions about our data. Technically this is not difficult to do, but it requires careful consideration if we want to do it right.
The final step of time-series analysis is forecasting, where we try to simulate future events. This is extremely useful but requires adequate knowledge of the methods available, their pros and cons.
There are numerous geostatistical interpolation techniques that can be used to map environmental data. Kriging is probably the most famous, but it is not the only one available. It is important to know every technique to understand where to use what.
The first challenge of any geostatistical analysis is data preparation. We cannot just download data; we need to clean and prepare them for analysis.
Simple interpolation is easy to use and easy to interpret, therefore it is still commonly used. The package gstat allows us to use inverse distance, but to do so we need to follow some simple but precise rules.
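A sketch of inverse distance weighting with gstat on its bundled meuse dataset (assuming the sp and gstat packages are installed); `idp` is the inverse distance power:

```r
library(sp)
library(gstat)

data(meuse)        # point observations (zinc concentrations)
data(meuse.grid)   # prediction grid
coordinates(meuse) <- ~ x + y
coordinates(meuse.grid) <- ~ x + y
gridded(meuse.grid) <- TRUE

# Inverse distance weighted interpolation of zinc onto the grid
zn.idw <- idw(zinc ~ 1, meuse, meuse.grid, idp = 2)
spplot(zn.idw["var1.pred"])
```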
Before we can interpolate our data using kriging, we need to take care of some important steps. For example, we need to check if our data has a trend and then test for normality, because kriging can only be applied to normally distributed data.
The variogram is the keystone of kriging interpolation, and users need to know how to compute it and fit a model to it. These steps require the careful considerations we are going to explore here.
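A sketch of computing an empirical variogram and fitting a model with gstat on the bundled meuse data (assuming the sp and gstat packages are installed):

```r
library(sp)
library(gstat)

data(meuse)
coordinates(meuse) <- ~ x + y

v <- variogram(log(zinc) ~ 1, meuse)          # empirical semivariogram
v.fit <- fit.variogram(v, model = vgm("Sph")) # fit a spherical model
plot(v, v.fit)
v.fit  # fitted nugget, partial sill and range
```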
In this video, all concepts learned previously will be merged to perform a kriging interpolation. The problem in this case is making sure that everything works correctly and the process is smooth.
There are numerous statistical learning algorithms that can be used to map environmental data. It is important to know every technique to understand where to use what.
Once again, getting to know our data is the most important first step of any data analysis. This can be done by looking at the data provider and using some exploratory techniques.
Many users start a data analysis by testing complex methods. This is a problem though, because many times a simpler method can help us better understand our data. This video shows how to fit these simple models.
Regression trees are extremely powerful algorithms, but they are sometimes considered black boxes. This is a problem because only expert users can understand their output, and it can be overcome simply by understanding how these algorithms work.
R comes loaded with some basic packages, but the R community is rapidly growing and active R users are constantly developing new packages for R.
Everything in R is in the form of objects. Objects can be manipulated in R.
We will dive into R's capability with regard to matrices and edit (add, delete, or replace) elements of a matrix.
One of the useful and widely used functions in R is the data.frame() function.
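A quick illustration with made-up data:

```r
# Build a data frame from vectors of equal length
df <- data.frame(city = c("Pune", "Buffalo"),
                 pop  = c(3.1, 0.28),       # population in millions
                 stringsAsFactors = FALSE)

df$pop       # extract a column
df[1, ]      # extract a row
summary(df)  # quick overview of each column
```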
Once we have processed our data, we need to save it to an external device or send it to our colleagues. It is possible to export data in R in many different formats.
Most of the tasks in R are performed using functions. A function in R has the same utility as a function in mathematics.
If we want to perform an action repeatedly in R, we can utilize the loop functionality.
R has some very handy functions, such as apply, sapply, tapply, and mapply, that can be used to reduce the task of writing complicated statements.
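A few examples with base R, using the built-in `iris` dataset for `tapply`:

```r
m <- matrix(1:6, nrow = 2)

apply(m, 1, sum)          # apply over rows: 9 12
apply(m, 2, max)          # apply over columns
sapply(c(1, 4, 9), sqrt)  # apply over a vector: 1 2 3

# tapply applies a function within groups defined by a factor
tapply(iris$Sepal.Length, iris$Species, mean)
```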
One quick and easy way to edit a plot is by generating the plot in R and then using Inkscape or any other software to edit it.
Scatter plots are used primarily to conduct a quick analysis of the relationships among different variables in our data.
We will display multivariate data on a scatter plot and also introduce interactive scatter plots.
The advantage of using the Google Chart API in R is the flexibility it provides in making interactive plots.
Line plots are simply lines connecting all the x and y dots. They are very easy to interpret and are widely used to display an upward or downward trend in data.
Gantt charts are used to track the progress of a project displayed against time.
Plot a histogram using the googleVis package and merge more than one histogram on the same page.
The advantage of the Google Chart API is its interactivity and the ease with which charts can be attached to a web page.
Waterfall plots, or staircase plots, are observed mostly in financial reports.
This video introduces you to the concept of dendrograms.
This video teaches you to create a plot that is easy to study and more informative.
Heat maps are a visual representation of data, wherein each value in a matrix is represented with a color. This video shows you how to create a heat map.
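A minimal sketch with base R's `heatmap` on the built-in `mtcars` data; `scale = "column"` standardizes each variable before coloring so that differently scaled variables are comparable:

```r
# Select a few numeric variables and convert to a matrix
m <- as.matrix(mtcars[, c("mpg", "hp", "wt", "qsec")])

# Draw the heat map; rows and columns are also reordered by clustering
heatmap(m, scale = "column")
```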
This video dives into plotting a heat map by customizing colors.
This video teaches you to integrate a dendrogram and heat map into a single plot.
R allows us to plot three-dimensional interactive heat maps using the heat map package.
Tree maps are basically rectangles placed adjacent to each other. The size of each rectangle is directly proportional to the data being used in the visualization.
We encounter maps on a daily basis, be it for directions or to infer information regarding the distribution of data. Maps have been widely used to plot various types of data in R.
Choropleth maps can be state level as well as county level. In this video, we will plot well-being data on a state level.
Contour maps are used to display data related to temperature or topographic information.
For each region, a bubble or a pie chart is used to represent a percentage.
Overlaying maps with text is not a very prominent medium of displaying information.
The shapefile package in R can be used to read a shapefile, add the processed data to our shapefile, and then save it in the shapefile format.
The idea of a cartogram is to show the gravity of the issue or data being studied.
Pie charts are a great visualization technique to represent data and help viewers understand statistical data.
Labels are important as they give the information about the sections of the pie chart. We will include labels inside the pie chart in this video.
Donut charts have advantages over pie charts with respect to the area and efficiency in visualizing information.
Instead of using multiple pie charts for comparing data, we can use slope charts.
Fan plots are an alternative to pie charts and are useful in differential and comparative analysis.
Packt has been committed to developer learning since 2004. A lot has changed in software since then - but Packt has remained responsive to these changes, continuing to look forward at the trends and tools defining the way we work and live. And how to put them to work.
With an extensive library of content - more than 4000 books and video courses - Packt's mission is to help developers stay relevant in a rapidly changing world. From new web frameworks and programming languages to cutting-edge data analytics and DevOps, Packt takes software professionals in every field to what's important to them now.
From skills that will help you to develop and future proof your career to immediate solutions to every day tech challenges, Packt is a go-to resource to make you a better, smarter developer.
Packt Udemy courses continue this tradition, bringing you comprehensive yet concise video courses straight from the experts.