Data Mining with Rattle

Learn to use the comprehensive, GUI-based Data Miner data mining software suite, implemented as the rattle package in R
3.7 (27 ratings)
757 students enrolled
$19 (regular price $50; 62% off)
  • Lectures: 82
  • Length: 15 hours
  • Skill Level: All Levels
  • Languages: English
  • Includes: Lifetime access, 30-day money-back guarantee, available on iOS and Android, certificate of completion


About This Course

Published 8/2015 · English

Course Description

Data Mining with Rattle is a unique course that teaches both the concepts of data mining and the hands-on use of a popular, contemporary data mining tool: Data Miner, better known as the Rattle package in R. Rattle is a popular GUI-based tool that sits on top of R.

The course focuses on the life-cycle issues, processes, and tasks involved in supporting a 'cradle-to-grave' data mining project. These include: data exploration and visualization; testing data for random-variable family characteristics and distributional assumptions; transforming data by scale or by data type; performing cluster analyses; creating, analyzing, and interpreting association rules; and creating and evaluating predictive models that may use regression, generalized linear models (GLMs), decision trees, recursive partitioning, random forests, boosting, and/or support vector machines (SVMs).

The course is both conceptual and practical: it teaches the ideas behind data mining and provides ample demonstrations of carrying out data mining tasks with the Rattle package. It is ideal for undergraduate students seeking in-demand analytical job skills to offer a prospective employer, suitable for graduate students who need a range of techniques for analyzing research data, and useful for practicing quantitative analysts who want to broaden their skill set. The material is organized into 10 distinct sections, each intended to be the focus of roughly one week of study.

What are the requirements?

  • Students will need to install the R console and RStudio software (instructions are provided).

What am I going to get from this course?

  • Perform and support life-cycle data mining tasks and activities using the popular Data Miner ("Rattle") software suite.
  • Understand the functionality of the Data, Explore, Test, Transform, Cluster, Associate, Model, Evaluate, and Log tabs in the Data Miner ("Rattle") GUI software platform.
  • Know how to explore, visualize, transform, and summarize data sets in Rattle.
  • Know how to create advanced, interactive GGobi visualizations of data.
  • Know how to use, estimate, and interpret cluster analyses, association rule mining, decision trees, random forests, boosting, and support vector machines in Rattle.

What is the target audience?

  • Anyone interested in data mining who wants to master a powerful, popular, contemporary (and no-cost) data mining software suite.
  • Data analytics professionals seeking to augment their data mining skill sets with a popular and useful data mining package.
  • Undergraduate and graduate students seeking to attain in-demand data mining skills to offer prospective employers.


Curriculum

Section 1: Introduction, Orientation, and Demos
Course Overview
Preview
01:52
12:37

Generally, data mining (sometimes called data or knowledge discovery) is the process of analyzing data from different perspectives and summarizing it into useful information - information that can be used to increase revenue, cut costs, or both. Data mining software is one of a number of analytical tools for analyzing data. It allows users to analyze data from many different dimensions or angles, categorize it, and summarize the relationships identified. Technically, data mining is the process of finding correlations or patterns among dozens of fields in large relational databases.

Explanation of Class Materials
07:43
11:34

Rattle - the R Analytical Tool To Learn Easily - is a popular GUI for data mining using R. It presents statistical and visual summaries of data, transforms data so that it can be readily modelled, builds both unsupervised and supervised models from the data, presents the performance of models graphically, and scores new datasets.
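
As a rough sketch of how Rattle is typically obtained and started from the R console (assuming the rattle package installs cleanly on your platform):

    install.packages("rattle")   # one-time install from CRAN
    library(rattle)              # load the package
    rattle()                     # launch the Rattle GUI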

More Rattle Demonstrations
07:00
Exercise for Introduction Section
03:27
Section 2: Rattle Interface Tabs and Introductory Script Demonstrations
Session Agenda
03:15
13:39

Rattle has a tab-oriented user interface, similar to Microsoft Office's ribbon, which makes getting started with data mining in R very easy. The GUI performs a myriad of data mining functions through a "point-and-click" style of interaction, but Rattle also generates the underlying R code that actually drives each action. As a result, Rattle appeals both to people seeking the ease of use that base R lacks and to people looking to learn R programming.

10:53

What the Tabs do:

Data: The Data tab allows you to select your data source and import from a variety of file formats.

Explore: The Explore tab contains various things for performing exploratory work on your data to help understand distribution.

Test: The Test tab allows you to perform various statistical tests, from the t-test and F-test to a number of less common tests.

Transform: The Transform tab lets you clean up or modify your data set, using techniques such as ranking or rescaling.

Cluster: The Cluster tab lets you do various forms of clustering, from numeric k-means clustering to hierarchical clustering and biclustering.

Associate: The Associate tab lets you do association rule data mining, which would be great for doing market basket analysis for retail data mining.

Model: The Model tab lets you create decision tree models, random forests, neural nets and other sophisticated data models.

Evaluate: The Evaluate tab is crucial because it helps you determine how well your model has worked. It provides an error matrix showing true outcomes versus the predicted outcomes.

Log: Lastly, the Log tab records the R code generated by every action you perform in Rattle, which helps you see exactly what was done, reproduce it, and track down errors.
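
To make the link between the tabs and the underlying R concrete, here is a hedged sketch of plain-R equivalents of a few of these actions; it assumes the weather demo data set bundled with the rattle package:

    library(rattle)

    data(weather, package = "rattle")     # Data tab: load a data set
    str(weather)                          # Explore tab: inspect variable types
    t.test(Humidity3pm ~ RainTomorrow,
           data = weather)                # Test tab: a two-sample t-test
    weather$MaxTemp_std <-
      as.numeric(scale(weather$MaxTemp))  # Transform tab: rescale a variable

Everything the GUI does in these tabs is also echoed as R code in the Log tab, which is what makes Rattle useful for learning R itself.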

Rattle Interface and Tabs (part 3)
13:27
Script Demonstrations (part 1)
13:46
Script Demonstrations (part 2)
13:37
Script Demonstrations (part 3)
10:32
Section 3: Loading and Exploring Data
Loading and Describing Data in Rattle
14:50
13:43

We explore the shape or distribution of our data before we begin mining.

Through this exploration we begin to understand the "lay of the land," just as a miner works to understand the terrain before blindly digging for gold. Through this exploration we may identify problems with the data, including missing values, noise and erroneous data, and skewed distributions. This will then drive our choice of tools for preparing and transforming our data and for mining it.

Rattle provides tools ranging from textual summaries to visually appealing graphical summaries, tools for identifying correlations between variables, and a link to the very sophisticated GGobi tool for visualising data. The Explore tab provides an opportunity to understand our data in various ways.
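
As a hedged sketch of the kinds of summaries the Explore tab produces, expressed as plain R (again assuming the weather demo data bundled with rattle):

    library(rattle)
    data(weather, package = "rattle")

    summary(weather$MaxTemp)                      # textual summary of one variable
    num <- sapply(weather, is.numeric)            # pick out the numeric columns
    round(cor(weather[, num],
              use = "pairwise.complete.obs"), 2)  # correlation matrix
    hist(weather$MaxTemp, main = "MaxTemp",
         xlab = "Degrees C")                      # simple distribution plot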

Exploring the Data in Rattle
12:52
Exploring Data with Plots in Rattle
15:28
Script to Load Data and Read Files
14:45
More Data Visualization with Scripts
08:49
Continue Plotting with Scripts
05:39
Section 4: Data Visualizations with Ggobi and Data Transformation in Rattle
09:12

In statistics, interactive data exploration is an applied form of exploratory data analysis (EDA), an approach to analyzing data sets to summarize their main characteristics, often with visual methods. A statistical model can be used or not, but primarily EDA is for seeing what the data can tell us beyond the formal modeling or hypothesis testing task. Exploratory data analysis was promoted by John Tukey to encourage statisticians to explore the data, and possibly formulate hypotheses that could lead to new data collection and experiments. EDA is different from initial data analysis (IDA), which focuses more narrowly on checking assumptions required for model fitting and hypothesis testing, handling missing values, and making transformations of variables as needed. EDA encompasses IDA.

11:23

GGobi is an open source visualization program for exploring high-dimensional data. It provides highly dynamic and interactive graphics such as tours, as well as familiar graphics such as the scatterplot, bar chart, and parallel coordinates plot. Plots are interactive and linked with brushing and identification.
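
Rattle links to GGobi through the rggobi bridge package, as far as I recall. As a rough sketch, assuming both GGobi and rggobi are installed on your system (rggobi is no longer on CRAN, so this may require a manual install):

    library(rggobi)
    g <- ggobi(iris)   # open the iris data in an interactive GGobi session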

Data Transformation in Rattle (part 1)
14:30
Data Transformation in Rattle (part 2)
13:43
15:02

Reshaping data is a common task in real-life data analysis, and it is usually tedious and frustrating. You've struggled with this task in Excel, in SAS, and in R: how do you get your clients' data into the form that you need for summary and analysis? The reshape package for R (R Development Core Team 2007) presents a new approach that aims to reduce the tedium and complexity of reshaping data.
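
A minimal sketch of the melt/cast workflow described above, assuming the reshape package (in newer code, reshape2's melt and dcast play the same roles):

    library(reshape)

    # A small wide-format data frame: one row per subject, one column per time point
    wide <- data.frame(subject = c("a", "b"),
                       time1   = c(5, 7),
                       time2   = c(6, 9))

    long <- melt(wide, id = "subject")     # melt: wide -> long ("molten") form
    cast(long, subject ~ variable, mean)   # cast: back to wide, aggregating if needed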

Section 5: Cluster Analysis
07:46

Cluster analysis or clustering is the task of grouping a set of objects in such a way that objects in the same group (called a cluster) are more similar (in some sense or another) to each other than to those in other groups (clusters). It is a main task of exploratory data mining, and a common technique for statistical data analysis, used in many fields, including machine learning, pattern recognition, image analysis, information retrieval, and bioinformatics.
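
A minimal sketch of the most common flavour, k-means clustering, in plain R (the same kind of analysis the Cluster tab drives), using the built-in iris measurements:

    x <- scale(iris[, 1:4])                 # standardise the four numeric variables
    set.seed(42)                            # k-means uses random starting centres
    km <- kmeans(x, centers = 3, nstart = 25)
    table(km$cluster, iris$Species)         # compare clusters with the known species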

13:49

Connectivity based clustering, also known as hierarchical clustering, is based on the core idea of objects being more related to nearby objects than to objects farther away. This is a form of "similarity." These algorithms connect "objects" to form "clusters" based on their distance. A cluster can be described largely by the maximum distance needed to connect parts of the cluster. At different distances, different clusters will form, which can be represented using a dendrogram, which explains where the common name "hierarchical clustering" comes from: these algorithms do not provide a single partitioning of the data set, but instead provide an extensive hierarchy of clusters that merge with each other at certain distances. In a dendrogram, the y-axis marks the distance at which the clusters merge, while the objects are placed along the x-axis such that the clusters don't mix.

Connectivity based clustering is a whole family of methods that differ by the way distances are computed. Apart from the usual choice of distance functions, the user also needs to decide on the linkage criterion (since a cluster consists of multiple objects, there are multiple candidates to compute the distance to) to use. Popular choices are known as single-linkage clustering (the minimum of object distances), complete linkage clustering (the maximum of object distances) or UPGMA ("Unweighted Pair Group Method with Arithmetic Mean", also known as average linkage clustering). Furthermore, hierarchical clustering can be agglomerative (starting with single elements and aggregating them into clusters) or divisive (starting with the complete data set and dividing it into partitions).
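
A hedged sketch of hierarchical (connectivity-based) clustering in base R, trying two of the linkage criteria mentioned above:

    d <- dist(scale(iris[, 1:4]))                  # pairwise Euclidean distances
    hc_single   <- hclust(d, method = "single")    # single linkage (minimum distance)
    hc_complete <- hclust(d, method = "complete")  # complete linkage (maximum distance)
    plot(hc_complete, labels = FALSE,
         main = "Complete-linkage dendrogram")     # y-axis marks the merge distance
    cutree(hc_complete, k = 3)                     # cut the tree into three clusters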

Distance-based Cluster Analysis Demos using Scripts (part 2)
14:08
Data Exploration Options (part 1)
09:16
Data Exploration Options (part 2)
08:25
Cluster Analysis Example: Ancient Pottery Shards
15:06
Cluster Analysis Example: Classifying Exoplanets (part 1)
12:02
Cluster Analysis Example: Classifying Exoplanets (part 2)
11:03
Section 6: Association Analysis
Cluster Analysis Exercise Solution
13:28
06:29

Affinity analysis is a form of association analysis: a type of data analysis and data mining technique that discovers co-occurrence relationships among activities performed by (or recorded about) specific individuals or groups. In general, this can be applied to any process where agents can be uniquely identified and information about their activities can be recorded. In retail, affinity analysis is used to perform market basket analysis, in which retailers seek to understand the purchase behavior of customers. This information can then be used for purposes of cross-selling and up-selling, in addition to influencing sales promotions, loyalty programs, store design, and discount plans.

Introduction to Association Analysis using R Script
09:22
Introduction to Association Analysis using Rattle
Preview
07:05
08:48

Association rule learning is a popular and well-researched method for discovering interesting relations between variables in large databases. It is intended to identify strong rules discovered in databases using different measures of interestingness. Based on the concept of strong rules, Rakesh Agrawal et al. introduced association rules for discovering regularities between products in large-scale transaction data recorded by point-of-sale (POS) systems in supermarkets. For example, a rule found in the sales data of a supermarket might indicate that customers who buy onions and potatoes together are also likely to buy hamburger meat. Such information can be used as the basis for decisions about marketing activities such as promotional pricing or product placements. Beyond market basket analysis, association rules are employed today in many application areas, including Web usage mining, intrusion detection, continuous production, and bioinformatics. In contrast with sequence mining, association rule learning typically does not consider the order of items either within a transaction or across transactions.
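
A hedged sketch of association rule mining with the arules package (which, to the best of my knowledge, is the package behind Rattle's Associate tab), using the Groceries transactions bundled with arules:

    library(arules)
    data(Groceries)                          # market-basket transaction data

    rules <- apriori(Groceries,
                     parameter = list(supp = 0.01,
                                      conf = 0.5))  # support/confidence thresholds
    top5 <- head(sort(rules, by = "lift"), 5)       # five rules with the highest lift
    inspect(top5)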

Visualizing Association Rules (part 1)
10:47
Visualizing Association Rules (part 2)
Preview
10:37
Visualizing Association Rules (part 3)
10:01
Association Analysis Exercise
00:36
Section 7: Decision Trees and Recursive Partitioning
Association Analysis Exercise Solution
11:06
16:13

A decision tree is a decision support tool that uses a tree-like graph or model of decisions and their possible consequences, including chance event outcomes, resource costs, and utility. It is one way to display an algorithm.

Introduction to Decision Trees and Rattle Demo (part 1)
10:49
Introduction to Decision Trees and Rattle Demo (part 2)
Preview
10:51
Introduction to Decision Trees and Rattle Demo (part 3)
10:01
Introduction to Decision Trees and Rattle Demo (part 4)
11:51
10:07

Recursive partitioning is a statistical method for multivariable analysis. Recursive partitioning creates a decision tree that strives to correctly classify members of the population by splitting it into sub-populations based on several dichotomous independent variables. The process is termed recursive because each sub-population may in turn be split an indefinite number of times, until the splitting process terminates after a particular stopping criterion is reached.

Recursive partitioning methods have been developed since the 1980s. Well-known methods of recursive partitioning include Ross Quinlan's ID3 algorithm and its successors C4.5 and C5.0, as well as Classification and Regression Trees (CART). Ensemble learning methods such as random forests help to overcome a common criticism of these methods - their vulnerability to overfitting the data - by fitting multiple models and combining their output in some way.
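
A minimal sketch of recursive partitioning with the rpart package (the implementation that, as far as I know, underlies Rattle's Tree model), using the built-in iris data:

    library(rpart)

    fit <- rpart(Species ~ ., data = iris, method = "class")  # classification tree
    print(fit)                                # text view of the splits
    plot(fit); text(fit, use.n = TRUE)        # quick plot of the tree
    predict(fit, head(iris), type = "class")  # predicted classes for a few rows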

Recursive Partitioning Demo with Bodyfat Data (part 2)
10:40
Recursive Partitioning Demo with Bodyfat Data (part 3)
Preview
14:37
Recursive Partitioning Demo with Glaucoma Data (part 1)
10:44
Recursive Partitioning Demo with Glaucoma Data (part 2)
Preview
12:13
Recursive Partitioning Demo with Glaucoma Data (part 3)
12:30
Section 8: Random Forests
Recursive Partitioning Exercise Solutions
13:21
11:59

Random forests are an ensemble learning method for classification, regression, and other tasks that operates by constructing a multitude of decision trees at training time and outputting the class that is the mode of the classes (classification) or the mean prediction (regression) of the individual trees. Random forests correct for decision trees' habit of overfitting to their training set.

The algorithm for inducing a random forest was developed by Leo Breiman and Adele Cutler, and "Random Forests" is their trademark. The method combines Breiman's "bagging" idea with the random selection of features, introduced independently by Ho and by Amit and Geman, in order to construct a collection of decision trees with controlled variance.
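
A minimal sketch of fitting a random forest with the randomForest package (the implementation Rattle's Forest option typically calls, as far as I know), again on the iris data:

    library(randomForest)

    set.seed(42)
    rf <- randomForest(Species ~ ., data = iris,
                       ntree = 500, importance = TRUE)
    print(rf)         # out-of-bag error estimate and confusion matrix
    importance(rf)    # variable importance measures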

Random Forest Rattle Tutorial with Weather Data (part 1)
14:15
Random Forest Rattle Tutorial with Weather Data (part 2)
14:55
Random Forest Rattle Tutorial with Weather Data (part 3)
Preview
08:14
Random Forest Modeling with R Weather Data (part 1)
14:30
09:59

Bootstrap aggregating, also called bagging, is a machine learning ensemble meta-algorithm designed to improve the stability and accuracy of machine learning algorithms used in statistical classification and regression. It also reduces variance and helps to avoid overfitting. Although it is usually applied to decision tree methods, it can be used with any type of method. Bagging is a special case of the model averaging approach.
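
A rough illustration of bagging "by hand" - fitting many trees on bootstrap samples and combining their predictions by majority vote (for illustration only; in practice a packaged implementation would be used):

    library(rpart)

    set.seed(42)
    n <- nrow(iris)
    votes <- replicate(25, {
      boot <- iris[sample(n, n, replace = TRUE), ]        # bootstrap sample
      tree <- rpart(Species ~ ., data = boot, method = "class")
      as.character(predict(tree, iris, type = "class"))   # predictions for all rows
    })
    bagged <- apply(votes, 1,
                    function(v) names(which.max(table(v))))  # majority vote per row
    mean(bagged == iris$Species)   # training-set agreement (optimistic; illustration only)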

Random Forest Modeling with R Weather Data (part 3)
08:21
Decision Tree Iris Data
12:11
Random Forest Iris Data (part 1)
Preview
12:16
Random Forest Iris Data (part 2)
13:27
Random Forest Exercise
06:45
Section 9: Boosting
Random Forest Exercise Solution (part 1)
09:10
Random Forest Exercise Solution (part 2)
09:28
08:34

Boosting is a machine learning ensemble meta-algorithm primarily for reducing bias, and also variance, in supervised learning, and a family of machine learning algorithms which convert weak learners to strong ones. Boosting is based on the question posed by Kearns and Valiant (1988, 1989): can a set of weak learners create a single strong learner? A weak learner is defined to be a classifier which is only slightly correlated with the true classification (it can label examples better than random guessing). In contrast, a strong learner is a classifier that is arbitrarily well-correlated with the true classification.
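
A hedged sketch of boosting a two-class classifier with the ada package (an AdaBoost implementation; I believe Rattle's Boost option is built on it), using the binary kyphosis data shipped with rpart:

    library(rpart)   # provides the kyphosis data set
    library(ada)

    set.seed(42)
    fit <- ada(Kyphosis ~ ., data = kyphosis, iter = 50)  # 50 boosting iterations
    print(fit)                                            # training error summary
    predict(fit, head(kyphosis))                          # predicted classes for a few rows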

Boosting Tutorial using Rattle
12:21
Basics of Boosting Demo using R
09:01
Replicating Adaboost using Rpart (part 1)
11:10
Replicating Adaboost using Rpart (part 2)
10:48
Boosting Extensions and Variants
14:06
Boosting Exercise
06:24
Section 10: Support Vector Machines
06:27

In machine learning, support vector machines (SVMs, also support vector networks) are supervised learning models with associated learning algorithms that analyze data and recognize patterns, used for classification and regression analysis. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that assigns new examples into one category or the other, making it a non-probabilistic binary linear classifier. An SVM model is a representation of the examples as points in space, mapped so that the examples of the separate categories are divided by a clear gap that is as wide as possible. New examples are then mapped into that same space and predicted to belong to a category based on which side of the gap they fall on.
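
A minimal sketch of an SVM fit with the kernlab package (which, as far as I know, is what Rattle's SVM option uses), on a two-class subset of the iris data to match the binary setting described above:

    library(kernlab)

    two <- droplevels(subset(iris, Species != "setosa"))  # keep only two classes
    fit <- ksvm(Species ~ ., data = two,
                kernel = "rbfdot", C = 1, cross = 5)      # RBF kernel, 5-fold CV
    print(fit)               # training error and cross-validation error
    predict(fit, head(two))  # predicted classes for a few rows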

Boosting Exercise Solution
13:07
Demonstrate Basis of SVM using R Scripts
10:58
SVM Tutorial in Rattle
10:00
SVM Model Evaluation (part 1)
11:36
SVM Model Evaluation (part 2)
11:08
SVM Model Evaluation (part 3)
08:58


Instructor Biography

Geoffrey Hubona, Ph.D., Professor of Information Systems

Dr. Geoffrey Hubona held full-time tenure-track, and tenured, assistant and associate professor faculty positions at 3 major state universities in the Eastern United States from 1993-2010. In these positions, he taught dozens of various statistics, business information systems, and computer science courses to undergraduate, master's and Ph.D. students. He earned a Ph.D. in Business Administration (Information Systems and Computer Science) from the University of South Florida (USF) in Tampa, FL (1993); an MA in Economics (1990), also from USF; an MBA in Finance (1979) from George Mason University in Fairfax, VA; and a BA in Psychology (1972) from the University of Virginia in Charlottesville, VA. He was a full-time assistant professor at the University of Maryland Baltimore County (1993-1996) in Catonsville, MD; a tenured associate professor in the department of Information Systems in the Business College at Virginia Commonwealth University (1996-2001) in Richmond, VA; and an associate professor in the CIS department of the Robinson College of Business at Georgia State University (2001-2010). He is the founder of the Georgia R School (2010-2014) and of R-Courseware (2014-Present), online educational organizations that teach research methods and quantitative analysis techniques. These research methods techniques include linear and non-linear modeling, multivariate methods, data mining, programming and simulation, and structural equation modeling and partial least squares (PLS) path modeling. Dr. Hubona is an expert of the analytical, open-source R software suite and of various PLS path modeling software packages, including SmartPLS. He has published dozens of research articles that explain and use these techniques for the analysis of data, and, with software co-development partner Dean Lim, has created a popular cloud-based PLS software application, PLS-GUI.
