People Analytics Deconstructed
Are you responsible for understanding employees' experiences? Have you tried to incorporate people analytics in your organization but struggled? Have you ever wondered what it means to have a data culture? Would you like to make more data-driven decisions? These are the kinds of discussions you can expect to hear on People Analytics Deconstructed. Co-hosts Ron Landis and Jennifer Miller are co-founders of Millan Chicago, a data science consulting company dedicated to helping organizations make the most of their data. Each week, they 'deconstruct' contemporary topics in the people analytics space.
Analytics in Practice: Using Factor Analysis to Evaluate Engagement Surveys, Part 2
Millan Chicago • Season 1 • Episode 26
In this episode, co-hosts Jennifer Miller and Ron Landis continue their conversation about developing a measure of employee engagement. This episode is the second part of a discussion about how to use a statistical technique called factor analysis to examine the dimensions of employee engagement.
In this episode, we discussed the following questions:
- What is rotation in factor analysis?
- How do we determine the number of factors to retain?
- How do we identify which items load on which factors?
- How do we interpret results?
Key Takeaways:
- We should look at the results after rotating the initial solution. Factor rotation can be orthogonal or oblique. Oblique rotations allow the factors to correlate, while orthogonal rotations force them to remain independent. In most situations, we would likely start with an oblique rotation.
- We can determine the number of factors using a few different approaches. Kaiser's criterion retains factors with eigenvalues greater than 1.00. A scree plot visualizes the eigenvalues of each factor from largest to smallest; we look for where the plot "flattens out" to determine the number of factors to retain. A parallel analysis simulates results from random data with the same structure as our focal data (i.e., the same number of observations and items) and produces the associated eigenvalues. We compare our observed eigenvalues to those produced by the parallel analysis and retain the factors for which our observed eigenvalues are greater.
- When we associate a given item with a particular factor, we are looking for the largest loading. We generally set a cutoff of at least .30 or .40 to associate an item with a factor. If an item has no loadings higher than our cutoff, we say the item doesn't load on any factor and discard it from further analysis. If an item loads highly on multiple factors, we seek to understand why and may choose to either drop or retain the item based on the context.
- We ended the episode by briefly talking about confirmatory factor analysis (CFA) as an alternative to exploratory factor analysis (EFA).
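The factor-retention rules above (Kaiser's criterion and parallel analysis) can be sketched in a few lines of Python. This is not code from the episode; it uses simulated survey data with a made-up two-factor structure (six items, 500 respondents) purely to illustrate the logic.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical survey data: 500 respondents x 6 items built from two latent factors.
n_obs, n_items = 500, 6
latent = rng.normal(size=(n_obs, 2))
pattern = np.array([
    [0.8, 0.0], [0.7, 0.0], [0.6, 0.0],   # items 1-3 driven by factor 1
    [0.0, 0.8], [0.0, 0.7], [0.0, 0.6],   # items 4-6 driven by factor 2
])
data = latent @ pattern.T + rng.normal(scale=0.5, size=(n_obs, n_items))

# Observed eigenvalues of the item correlation matrix, largest first.
obs_eigs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]

# Kaiser's criterion: retain factors with eigenvalues greater than 1.00.
kaiser_k = int((obs_eigs > 1.0).sum())

# Parallel analysis: average eigenvalues from random data of the same shape,
# then retain factors until an observed eigenvalue fails to beat its random
# counterpart.
n_sims = 200
rand_eigs = np.zeros(n_items)
for _ in range(n_sims):
    rand = rng.normal(size=(n_obs, n_items))
    rand_eigs += np.sort(np.linalg.eigvalsh(np.corrcoef(rand, rowvar=False)))[::-1]
rand_eigs /= n_sims

failures = np.flatnonzero(obs_eigs <= rand_eigs)
parallel_k = int(failures[0]) if failures.size else n_items

print(kaiser_k, parallel_k)  # both recover the two-factor structure here
```

The sorted `obs_eigs` array is also exactly what a scree plot displays, so plotting it and looking for the "elbow" gives the third retention check discussed in the episode.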
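The item-assignment rule (largest loading, subject to a cutoff) can likewise be sketched directly. The loading matrix below is invented for illustration; the last item is deliberately given no loading above the cutoff, mirroring the "doesn't load on any factor" case from the takeaways.

```python
import numpy as np

# Hypothetical rotated loading matrix: 5 items x 2 factors (made-up numbers).
loadings = np.array([
    [0.72, 0.10],
    [0.65, 0.22],
    [0.05, 0.81],
    [0.18, 0.58],
    [0.25, 0.28],   # no loading reaches the cutoff: candidate for removal
])
CUT = 0.40  # common cutoffs are .30 or .40

assignments = []
for item, row in enumerate(loadings, start=1):
    best = int(np.argmax(np.abs(row)))          # factor with the largest loading
    if abs(row[best]) >= CUT:
        assignments.append((item, best + 1))    # (item number, factor number)
    else:
        assignments.append((item, None))        # item doesn't load on any factor

print(assignments)  # -> [(1, 1), (2, 1), (3, 2), (4, 2), (5, None)]
```

A fuller version would also flag cross-loading items (more than one loading above the cutoff) for the case-by-case judgment described above, rather than assigning them automatically.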