Find online courses made by experts from around the world.
Take your courses with you and learn anywhere, anytime.
Learn and practice real-world skills and achieve your goals.
Linear Mixed-Effects Models with R is a 7-session course that teaches the knowledge and skills necessary to fit, interpret, and evaluate the estimated parameters of linear mixed-effects models using R software. Alternatively referred to as nested, hierarchical, longitudinal, repeated-measures, or temporal and spatial pseudoreplication models, linear mixed-effects models are a form of least-squares model-fitting procedure. They are typically characterized by two (or more) sources of variance, and thus have multiple correlational structures among the predictor (independent) variables, which affect their estimated effects, or relationships, with the predicted (dependent) variables. These multiple sources of variance and correlational structures must be taken into account in estimating the "fit" and parameters of linear mixed-effects models.
The structure of mixed-effects models may be additive, nonlinear, exponential, binomial, or assume various other ‘families’ of modeling relationships with the predicted variables. However, in this "hands-on" course, coverage is restricted to linear mixed-effects models, and especially, how to: (1) choose an appropriate linear model; (2) represent that model in R; (3) estimate the model; (4) compare (if needed), interpret, and report the results; and (5) validate the model and the model assumptions. Additionally, the course explains the fitting of different correlational structures to both temporally and spatially pseudoreplicated models to appropriately adjust for the lack of independence among the error terms. The course does address the relevant statistical concepts, but mainly focuses on implementing mixed-effects models in R with ample R scripts, ‘real’ data sets, and live demonstrations. No prior experience with R is necessary to successfully complete the course, as the entire first course section consists of a "hands-on" primer for executing statistical commands and scripts using R.
Not for you? No problem.
30-day money-back guarantee.
Forever yours.
Lifetime access.
Learn on the go.
Desktop, iOS and Android.
Get rewarded.
Certificate of completion.
Section 1: Introduction to R as a Statistical Environment  

Lecture 1 
Introduction to the Course
Preview

01:32  
Lecture 2  10:00  
RStudio Integrated Development Environment (IDE) is a powerful and productive user interface for R. It’s free and open source, and works great on Windows, Mac, and Linux. 

Lecture 3 
Basic Quantitative Operations in R (part 1)
Preview

05:53  
Lecture 4 
Basic Quantitative Operations in R (part 2)

06:59  
Lecture 5 
More R Scripting and Plotting (part 1)

09:45  
Lecture 6 
More R Scripting and Plotting (part 2)

04:52  
Lecture 7  06:18  
One of the great strengths of R is the user's ability to add functions. In fact, many of the functions in R are actually functions of functions.
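The code that originally accompanied this blurb is not included in the listing; a minimal sketch of the general structure of an R function (the names `my_function`, `arg1`, and `arg2` are purely illustrative) is:

```r
# General structure of a user-defined R function
my_function <- function(arg1, arg2 = 10) {  # arguments; arg2 has a default value
  result <- arg1 + arg2                     # body: any sequence of R statements
  return(result)                            # value handed back to the caller
}

my_function(5)        # arg2 falls back to its default: returns 15
my_function(5, 20)    # returns 25
```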


Lecture 8 
Functions in R (part 2)
Preview

07:39  
Lecture 9  09:54  
R has a wide variety of data types, including scalars, vectors (numeric, character, logical), matrices, data frames, and lists. All columns in a matrix must have the same mode (numeric, character, etc.) and the same length. 
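A brief illustration of these types (the variable names are arbitrary):

```r
num_vec <- c(1.5, 2.0, 3.5)       # numeric vector
chr_vec <- c("a", "b", "c")       # character vector
lgl_vec <- c(TRUE, FALSE, TRUE)   # logical vector
mode(num_vec)                     # "numeric"

# A matrix: one shared mode, equal-length columns
m <- matrix(1:6, nrow = 2, ncol = 3)
dim(m)                            # 2 3
mode(m)                           # "numeric"
```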

Lecture 10  11:12  
A data frame is more general than a matrix, in that different columns can have different modes (numeric, character, factor, etc.). This is similar to SAS and SPSS datasets. 
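A small example of a data frame mixing column modes (the data are made up for illustration):

```r
df <- data.frame(
  id    = 1:3,                        # numeric column
  name  = c("ann", "bob", "cat"),     # character column
  group = factor(c("A", "B", "A"))    # factor column
)
str(df)       # shows a different mode per column, unlike a matrix
df$name[2]    # "bob"
```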

Lecture 11 
Exercises: Getting Started with R as a Statistical Environment

05:39  
Section 2: Basic Linear Mixed-Effects Concepts  
Lecture 12 
Solution to Getting Started with R as a Statistical Environment Exercise

13:32  
Lecture 13 
Some LME Background from MVA Package (scripts, part 1)

06:05  
Lecture 14 
Some LME Background (scripts, part 2)

09:27  
Lecture 15 
Some LME Background (scripts, part 3)

07:27  
Lecture 16 
Some LME Background (scripts, part 4)

08:40  
Lecture 17 
Temporal Pseudoreplication Fertilizer Exercise

3 pages  
Lecture 18 
Solution to Temporal Pseudoreplication Fertilizer Exercise

04:26  
Lecture 19  07:27  
A mixed model is a statistical model containing both fixed effects and random effects. These models are useful in a wide variety of disciplines in the physical, biological and social sciences. They are particularly useful in settings where repeated measurements are made on the same statistical units (longitudinal study), or where measurements are made on clusters of related statistical units. Because of their advantage in dealing with missing values, mixed-effects models are often preferred over more traditional approaches such as repeated-measures ANOVA. 
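One way to sketch the idea in R is with the `nlme` package (used throughout the course) and its bundled `Orthodont` repeated-measures data set, which is not a course data set but serves as a stand-in here:

```r
library(nlme)

# Orthodont: a dental distance measured at 4 ages for each Subject
m <- lme(fixed = distance ~ age,   # population-level (fixed) effect of age
         random = ~ 1 | Subject,   # subject-specific (random) intercepts
         data = Orthodont)
summary(m)   # fixed-effect estimates plus the Subject variance component
```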

Lecture 20 
Basic LME Concepts (slides, part 2)
Preview

08:19  
Lecture 21 
Basic LME Concepts (slides, part 3)

07:32  
Lecture 22 
Mixed-Effects Fertilized Plants Example using nlme Package (part 1)
Preview

08:04  
Lecture 23 
Finish Fertilized Plants Example and Begin Zuur Material

06:58  
Lecture 24 
TwoStage Beaches Example (part 1)
Preview

07:39  
Lecture 25 
TwoStage Beaches Example (part 2)

09:25  
Lecture 26 
Random Intercepts Model Example

07:33  
Lecture 27 
Random Intercepts and Slopes Model Example

08:36  
Section 3: Timber and Plasma Data Examples  
Lecture 28  09:23  
Split-plot designs result when a particular type of restricted randomization has occurred during the experiment. A simple factorial experiment can result in a split-plot type of design because of the way the experiment was actually executed. 
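A minimal simulated sketch of fitting a split-plot design with `nlme` (all variable names here are hypothetical, not the course's exercise data):

```r
library(nlme)

set.seed(1)
# Illustrative split-plot layout: 'irrigation' is randomized to whole
# fields; 'variety' is randomized to subplots within each field
crops <- expand.grid(field = factor(1:8), variety = c("v1", "v2"))
crops$irrigation <- ifelse(as.integer(crops$field) <= 4, "low", "high")
crops$yield <- 10 + rnorm(8)[crops$field] + rnorm(16)

# The field-level random intercept supplies the whole-plot error stratum
m <- lme(yield ~ irrigation * variety, random = ~ 1 | field, data = crops)
anova(m)   # irrigation is tested against field-to-field variation
```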

Lecture 29 
Split-Plot Exercise Solution (part 2)

08:25  
Lecture 30 
Split-Plot Exercise Solution (part 3)

09:46  
Lecture 31  07:44  
A random intercepts model is a model in which intercepts are allowed to vary: the scores on the dependent variable for each individual observation are predicted by an intercept that varies across groups. This model assumes that slopes are fixed (the same across different contexts). In addition, this model provides information about intraclass correlations, which are helpful in determining whether multilevel models are needed in the first place. 
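A sketch of a random-intercepts fit and the intraclass correlation it implies, again using `nlme`'s bundled `Orthodont` data as a stand-in:

```r
library(nlme)

# Random intercepts, common slope: each Subject gets its own baseline
m_ri <- lme(distance ~ age, random = ~ 1 | Subject, data = Orthodont)

# Intraclass correlation: between-subject variance / total variance
vc  <- as.numeric(VarCorr(m_ri)[, "Variance"])
icc <- vc[1] / sum(vc)
icc   # a large value suggests a multilevel model is warranted
```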

Lecture 32  06:25  
A random slopes model is a model in which slopes are allowed to vary, and therefore, the slopes are different across groups. This model assumes that intercepts are fixed (the same across different contexts). 
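In `nlme` syntax, removing the intercept from the `random` formula leaves only group-specific slopes (sketched here on the bundled `Orthodont` data):

```r
library(nlme)

# Random slopes with a common intercept: each Subject's growth rate
# varies, but all share one baseline ('- 1' drops the random intercept)
m_rs <- lme(distance ~ age, random = ~ age - 1 | Subject, data = Orthodont)
summary(m_rs)   # a single variance component, for the age slope
```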

Lecture 33  06:08  
A model that includes both random intercepts and random slopes is likely the most realistic type of model, although it is also the most complex. In this model, both intercepts and slopes are allowed to vary across groups, meaning that they are different in different contexts. 
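A sketch of the combined model, and a likelihood-based comparison against the simpler random-intercepts fit (using `nlme`'s `Orthodont` data as a stand-in):

```r
library(nlme)

# Both intercepts and slopes vary across Subjects (and may be correlated)
m_both <- lme(distance ~ age, random = ~ age | Subject, data = Orthodont)

# Compare against the simpler random-intercepts fit
m_ri <- lme(distance ~ age, random = ~ 1 | Subject, data = Orthodont)
anova(m_ri, m_both)   # likelihood-ratio test of the extra random terms
```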

Lecture 34 
Plasma Data Example (part 1)

07:49  
Lecture 35 
Plasma Data Example (part 2)

06:50  
Lecture 36 
Variance Components Analysis Exercise

03:14  
Section 4: Selecting LME Model Structures  
Lecture 37  10:23  
In statistics, a random effects model, also called a variance components model, is a kind of hierarchical linear model. It assumes that the data set being analysed consists of a hierarchy of different populations whose differences relate to that hierarchy. In econometrics, random effects models are used in the analysis of hierarchical or panel data when one assumes no fixed effects (i.e., it allows for individual effects). The random effects model is a special case of the fixed effects model. 
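A classic variance components illustration is `nlme`'s bundled `Rail` data set (not a course data set), where the only model terms are an overall mean and the hierarchy itself:

```r
library(nlme)

# Rail: ultrasonic travel times measured 3 times on each of 6 rails
m_vc <- lme(travel ~ 1, random = ~ 1 | Rail, data = Rail)
VarCorr(m_vc)   # splits variance into rail-to-rail and within-rail parts
```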

Lecture 38 
Variance Components Analysis Exercise Solution (part 2)

10:14  
Lecture 39 
Variance Components Analysis Exercise Solution (part 3)

08:50  
Lecture 40 
LME Model Structure Selection (part 1)
Preview

13:04  
Lecture 41 
LME Model Structure Selection (part 2)

13:07  
Lecture 42 
LME Model Structure Selection (part 3)

07:04  
Lecture 43 
LME Model Structure Selection (part 4)

06:55  
Lecture 44 
LME Model Structure Selection (part 5)

05:10  
Lecture 45 
Regression versus Fixed Effects Bias Exercise

01:44  
Section 5: Compare LM and LME Parameters  
Lecture 46  09:55  
In econometrics and statistics, a fixed effects model is a statistical model that represents the observed quantities in terms of explanatory variables that are treated as if the quantities were non-random. This is in contrast to random effects models and mixed models, in which either all or some of the explanatory variables are treated as if they arise from random causes. 
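The two views can be contrasted on `nlme`'s bundled `Rail` data (a stand-in, not the course's exercise data): treating each rail as a named fixed level versus as a draw from a population of rails.

```r
library(nlme)

# Fixed-effects view: each Rail is a non-random level with its own mean
m_fixed <- lm(travel ~ Rail, data = Rail)

# Random-effects view: rails are a random sample from a population
m_random <- lme(travel ~ 1, random = ~ 1 | Rail, data = Rail)

coef(m_fixed)     # one estimated mean shift per individual rail
fixef(m_random)   # a single population mean; rail effects become a variance
```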

Lecture 47 
Regression versus Fixed Effects Exercise Solution (part 2)

09:29  
Lecture 48 
Regression versus Fixed Effects Exercise Solution (part 3)

07:16  
Lecture 49 
Regression versus Fixed Effects Exercise Solution (part 4)

06:02  
Lecture 50 
Nestling Barn Owls (part 1)

05:55  
Lecture 51 
Nestling Barn Owls (part 2)

07:08  
Lecture 52 
Nestling Barn Owls: Find Optimal Structure (part 1)
Preview

06:42  
Lecture 53 
Nestling Barn Owls: Find Optimal Structure (part 2)

06:47  
Lecture 54 
Nestling Barn Owls: Find Optimal Structure (part 3)

06:35  
Lecture 55 
Exercise: Beat the Blues I

02:30  
Section 6: 10-Step Protocol for Optimal Structure  
Lecture 56 
Beat the Blues I Exercise Solution (part 1)

09:00  
Lecture 57 
Beat the Blues I Exercise Solution (part 2)

09:36  
Lecture 58 
Beat the Blues I Exercise Solution (part 3)

10:15  
Lecture 59 
10-Step Protocol for Optimal Structure (part 1)

10:13  
Lecture 60 
10-Step Protocol for Optimal Structure (part 2)

10:07  
Lecture 61 
10-Step Protocol for Optimal Structure (part 3)

09:47  
Lecture 62 
10-Step Protocol for Optimal Structure (part 4)

08:36  
Lecture 63  12:13  
One application of multilevel modeling (MLM) is the analysis of repeated measures data. Multilevel modeling for repeated measures data is most often discussed in the context of modeling change over time (i.e. growth curve modeling for longitudinal designs); however, it may also be used for repeated measures data in which time is not a factor. The issue of subjects leaving the study ("dropouts") midway through the periodic intervals of data collection is a perennial problem with these types of studies. 

Lecture 64 
The Problem of Dropouts in Longitudinal Studies (part 2)

09:23  
Lecture 65 
Beat the Blues II Exercises

01:45  
Section 7: Violation of Independence of Errors  
Lecture 66 
Beat the Blues II Exercise Solutions

07:23  
Lecture 67  12:28  
Just as with ordinary least-squares linear regression, the practice of linear mixed-effects modeling assumes that the error terms (residuals) are normally distributed and statistically independent of one another. 
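These assumptions are commonly checked graphically from a fitted `lme` object; a sketch using `nlme`'s bundled `Orthodont` data as a stand-in:

```r
library(nlme)

m <- lme(distance ~ age, random = ~ 1 | Subject, data = Orthodont)
r <- resid(m, type = "normalized")   # residuals standardized by the fit

qqnorm(r); qqline(r)    # normality check of the residuals
plot(fitted(m), r,      # look for patterns that contradict independence
     xlab = "Fitted values", ylab = "Normalized residuals")
```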

Lecture 68 
TimeBased Residual Patterns (part 1)

06:32  
Lecture 69 
TimeBased Residual Patterns (part 2)

07:15  
Lecture 70 
Independence and Compound Symmetry

10:55  
Lecture 71  07:39  
In the statistical analysis of time series, autoregressive–moving-average (ARMA) models provide a parsimonious description of a (weakly) stationary stochastic process in terms of two polynomials, one for the autoregression and the second for the moving average. 
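In `nlme`, these residual correlation structures are supplied through the `correlation` argument; a sketch on the bundled `Orthodont` data (a stand-in for the course examples):

```r
library(nlme)

# AR(1): each residual is correlated with its predecessor within a Subject
m_ar1 <- lme(distance ~ age, random = ~ 1 | Subject, data = Orthodont,
             correlation = corAR1(form = ~ 1 | Subject))
summary(m_ar1)   # reports the estimated lag-1 correlation (Phi)

# corARMA(p = 1, q = 1, form = ~ 1 | Subject) would specify an
# ARMA(1,1) residual structure instead
```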

Lecture 72 
AR1 and ARMA Residual Dependence (part 2)

06:37  
Lecture 73 
Introduction to Spatial Dependence

07:59  
Lecture 74 
Prepare Data and Run Bubble Plot
Preview

11:22  
Lecture 75 
Candidate Spatial Correlative Structures

11:26  
Lecture 76 
Irish Rivers Acid Sensitivity

09:07  
Lecture 77 
Using Spatial Correlational Structures

10:53 
Dr. Geoffrey Hubona held full-time tenure-track, and tenured, assistant and associate professor faculty positions at 3 major state universities in the Eastern United States from 1993 to 2010. In these positions, he taught dozens of various statistics, business information systems, and computer science courses to undergraduate, master's and Ph.D. students. He earned a Ph.D. in Business Administration (Information Systems and Computer Science) from the University of South Florida (USF) in Tampa, FL (1993); an MA in Economics (1990), also from USF; an MBA in Finance (1979) from George Mason University in Fairfax, VA; and a BA in Psychology (1972) from the University of Virginia in Charlottesville, VA. He was a full-time assistant professor at the University of Maryland Baltimore County (1993-1996) in Catonsville, MD; a tenured associate professor in the department of Information Systems in the Business College at Virginia Commonwealth University (1996-2001) in Richmond, VA; and an associate professor in the CIS department of the Robinson College of Business at Georgia State University (2001-2010). He is the founder of the Georgia R School (2010-2014) and of RCourseware (2014-Present), online educational organizations that teach research methods and quantitative analysis techniques. These techniques include linear and nonlinear modeling, multivariate methods, data mining, programming and simulation, and structural equation modeling and partial least squares (PLS) path modeling. Dr. Hubona is an expert in the analytical, open-source R software suite and in various PLS path modeling software packages, including SmartPLS. He has published dozens of research articles that explain and use these techniques for the analysis of data, and, with software co-development partner Dean Lim, has created a popular cloud-based PLS software application, PLSGUI.