Tutorial Files
Before we begin, you may want to download the dataset (.csv) used in this tutorial. Be sure to right-click and save the file to your R working directory. This dataset contains a hypothetical sample of 300 responses on 6 items from a survey of college students' favorite subject matter. Each item is rated on a scale from 1 (Strongly Dislike) to 5 (Strongly Like). Our 6 items asked students to rate their liking of different college subject matter areas, including biology (BIO), geology (GEO), chemistry (CHEM), algebra (ALG), calculus (CALC), and statistics (STAT). This is where our tutorial ends, because all students rated all of these content areas as Strongly Dislike, thereby rendering insufficient variance for conducting EFA (just kidding).
Beginning Steps
To begin, we need to read our dataset into R and store its contents in a variable.
- > #read the dataset into R variable using the read.csv(file) function
- > data <- read.csv("dataset_EFA.csv")
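If you don't have the file handy, you can still follow along with a simulated stand-in of the same shape. The simulation below is illustrative only: it is not the tutorial's dataset_EFA.csv, and random ratings carry no factor structure.

```r
# Simulate a stand-in with the same shape as the tutorial data: 300
# responses on 6 items, each scored 1 to 5 (illustrative only -- random
# ratings carry no factor structure, unlike the real dataset)
set.seed(1)
items <- c("BIO", "GEO", "CHEM", "ALG", "CALC", "STAT")
data <- as.data.frame(matrix(sample(1:5, 300 * 6, replace = TRUE),
                             nrow = 300, dimnames = list(NULL, items)))
str(data)      # 300 obs. of 6 numeric variables
summary(data)  # every item spans 1 to 5
```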
Psych Package
Next, we need to install and load the psych package, which I prefer to use when conducting EFA. In this tutorial, we will make use of the package's fa() function.
- > #install the package
- > install.packages("psych")
- > #load the package
- > library(psych)
Number of Factors
For this tutorial, we will assume that the appropriate number of factors has already been determined to be 2, such as through eigenvalues, scree tests, and a priori considerations. Most often, you will want to test solutions above and below the determined number to ensure that the optimal number of factors was selected.
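As a quick sanity check on that choice, you can inspect the eigenvalues of the item correlation matrix (the Kaiser criterion retains factors with eigenvalues greater than 1); the psych package also offers scree() and fa.parallel() for scree plots and parallel analysis. A minimal base-R sketch on a hypothetical two-block correlation matrix (not the tutorial's data):

```r
# Hypothetical two-factor structure: items 1-3 intercorrelate at 0.6,
# items 4-6 intercorrelate at 0.6, and the two triads correlate at 0.2
R <- matrix(0.2, nrow = 6, ncol = 6)
R[1:3, 1:3] <- 0.6
R[4:6, 4:6] <- 0.6
diag(R) <- 1

ev <- eigen(R)$values
round(ev, 2)  # 2.8 1.6 0.4 0.4 0.4 0.4
sum(ev > 1)   # two eigenvalues exceed 1, consistent with two factors
```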
Factor Solution
To derive the factor solution, we will use the fa() function from the psych package, which takes the following primary arguments.
- r: the correlation matrix
- nfactors: number of factors to be extracted (default = 1)
- rotate: one of several matrix rotation methods, such as "varimax" or "oblimin"
- fm: one of several factoring methods, such as "pa" (principal axis) or "ml" (maximum likelihood)
In this tutorial, we will use oblique rotation (rotate = "oblimin"), which recognizes that there is likely to be some correlation between students' latent subject matter preference factors in the real world. We will use principal axis factoring (fm = "pa"), because we are most interested in identifying the underlying constructs in the data.
- > #calculate the correlation matrix
- > corMat <- cor(data)
- > #display the correlation matrix
- > corMat
- > #use fa() to conduct an oblique principal-axis exploratory factor analysis
- > #save the solution to an R variable
- > solution <- fa(r = corMat, nfactors = 2, rotate = "oblimin", fm = "pa")
- > #display the solution output
- > solution
By looking at our factor loadings, we can begin to assess our factor solution. We can see that BIO, GEO, and CHEM all have high factor loadings around 0.8 on the first factor (PA1). Therefore, we might call this factor Science and consider it representative of a student's interest in science subject matter. Similarly, ALG, CALC, and STAT load highly on the second factor (PA2), which we might call Math. Note that STAT has a much lower loading on PA2 than ALG or CALC and that it has a slight loading on factor PA1. This suggests that statistics is less related to the concept of Math than algebra and calculus.
Just below the loadings table, we can see that each factor accounted for around 30% of the variance in responses, leading to a factor solution that accounted for 66% of the total variance in students' subject matter preference. Lastly, notice that our factors are correlated at 0.21 and recall that our choice of oblique rotation allowed for the recognition of this relationship.
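When reading a loadings table, it often helps to hide small loadings so the simple structure stands out; the print method for loadings objects accepts a cutoff argument, so with the solution above you could call print(solution$loadings, cutoff = 0.3). The same idea is sketched below in a self-contained form, using base R's factanal() on a hypothetical two-block correlation matrix rather than the psych solution:

```r
# Hypothetical correlation matrix with a Science triad and a Math triad
R <- matrix(0.2, nrow = 6, ncol = 6)
R[1:3, 1:3] <- 0.6
R[4:6, 4:6] <- 0.6
diag(R) <- 1
rownames(R) <- colnames(R) <- c("BIO", "GEO", "CHEM", "ALG", "CALC", "STAT")

# Base-R factor analysis fitted from the correlation matrix alone
fit <- factanal(covmat = R, factors = 2, rotation = "varimax", n.obs = 300)
print(fit$loadings, cutoff = 0.3)  # loadings below 0.3 print as blank
```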
Of course, there are many other considerations to be made in developing and assessing an EFA that will not be presented here. The intent with this tutorial was simply to demonstrate the basic execution of EFA in R. For a detailed and digestible overview of EFA, I recommend the Factor Analysis chapter of Multivariate Data Analysis by Hair, Black, Babin, and Anderson.