Partial Measurement Invariance
This note illustrates how to use semTools to implement measurement invariance. Before going to the details, here are the functions in semTools that can help you run measurement invariance.
- measurementInvariance: Run measurement invariance across groups for continuous variables
- measurementInvarianceCat: Run measurement invariance across groups for ordered categorical variables
- partialInvariance: Automatically find a partial invariance model across groups for continuous variables
- partialInvarianceCat: Automatically find a partial invariance model across groups for categorical variables
- longInvariance: Run measurement invariance across time (and groups) for continuous variables
Users may consult the help page of each function in R for details and examples. This page shows an example that uses the full potential of the partialInvariance function.
After installing semTools version 0.4-5 and updating lavaan to the most recent version, load the package into the workspace:
library(semTools)
I will use the HolzingerSwineford1939 data to demonstrate partial invariance across schools. The measurementInvariance function automatically runs a series of measurement-invariance models and compares them:
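The call below assumes that HW.model has been defined as the standard three-factor CFA for these data, as in the lavaan documentation:

```r
# Standard three-factor Holzinger & Swineford CFA (lavaan model syntax)
HW.model <- ' visual  =~ x1 + x2 + x3
              textual =~ x4 + x5 + x6
              speed   =~ x7 + x8 + x9 '
```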
models2 <- measurementInvariance(HW.model, data=HolzingerSwineford1939, group="school", std.lv = TRUE)

The results indicate that metric invariance can be established: the chi-square difference test was not significant and the change in CFI is only .002. However, scalar invariance cannot be established: the chi-square difference test was significant and the change in CFI was .038. Thus, we search for a partial scalar invariance model in this example.
Note that std.lv = TRUE is used because the fixed-factor method of scale identification is used. The results of the measurementInvariance function are saved to an object called models2 because we will use this object in the next function.
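For comparison, leaving std.lv at its default (FALSE) would instead use marker-variable identification, in which the first loading of each factor is fixed to 1 rather than standardizing the latent variances (a sketch; models.marker is a hypothetical object name):

```r
# Marker-variable identification (the lavaan default): the first loading
# of each factor is fixed to 1 instead of fixing latent variances to 1
models.marker <- measurementInvariance(HW.model,
                                       data = HolzingerSwineford1939,
                                       group = "school")
```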
Two partial invariance strategies can be implemented here. First, we can start with the least-restricted model (the weak invariance model in this example) and add one constraint at a time (e.g., equating one intercept across groups). I will refer to this as the build-up strategy. Second, we can start with the most-restricted model (the scalar invariance model in this example) and relax one constraint at a time (e.g., freeing one intercept across groups). I will refer to this as the tear-down strategy. The weakness of the tear-down strategy is that the initial model does not fit the data in the first place, so the subsequent model-comparison statistics are biased; however, the tear-down strategy is not sensitive to the method of scale identification. Conversely, the weakness of the build-up strategy is that it is very sensitive to the method of scale identification, although its initial model does not impose untenable constraints. If users choose the build-up strategy, I recommend the fixed-factor method of scale identification because it does not assume any item to be invariant in advance.
Now, the partialInvariance function is used to free or fix constraints one at a time:
partialInvariance(models2, type = "scalar", p.adjust = "holm")

models2 is the result from the measurementInvariance function. type specifies the type of partial invariance model, which can be "metric", "scalar", "strict", or "means"; "scalar" is used here because partial scalar invariance is our goal. p.adjust is the method of adjusting multiple p values to prevent an inflated Type I error rate. I use Holm's method here; "bonferroni" is also an option (see the help page for further details). If p.adjust is not specified, p values will not be adjusted.
The p values are printed in scientific notation. Let's round them to decimal form:
round(partialInvariance(models2, type = "scalar", p.adjust = "holm"), 4)

There are three sets of results: free, fix, and wald. free is the likelihood-ratio test comparing the most-restricted model to the same model with one constraint relaxed. fix is the likelihood-ratio test comparing the least-restricted model to the same model with one constraint imposed. wald is similar to fix but uses a multivariate Wald test instead, which is less accurate than the likelihood-ratio test. Thus, free is appropriate for the tear-down strategy and fix (or wald) is appropriate for the build-up strategy.
Let's start with the build-up strategy. We want to find nonsignificant constraints, such that adding the constraint does not change the fit of the model. We will see that the intercepts of X1, X2, X8, and X9 do not differ significantly between groups. I pick X1 because it yields the smallest chi-square value and fix it to be equal across groups. Then, we can compare the other intercepts with X1 fixed:
round(partialInvariance(models2, type = "scalar", fix = "x1", p.adjust = "holm"), 4)

X2 and X4 yield nonsignificant results with equal chi-squares. I arbitrarily choose to fix X2:
round(partialInvariance(models2, type = "scalar", fix = c("x1", "x2"), p.adjust = "holm"), 4)

X9 yields the only nonsignificant result, so I fix X9 as well:
round(partialInvariance(models2, type = "scalar", fix = c("x1", "x2", "x9"), p.adjust = "holm"), 4)

All remaining item intercepts are significantly different across groups, so the partial invariance model is established. We can inspect the details of the partial invariance model with the following script:
result <- partialInvariance(models2, type = "scalar", fix = c("x1", "x2", "x9"), p.adjust="holm",
return.fit = TRUE)
summary(result$models$parent)
In the first line, the result of partialInvariance is assigned to an object called result. Note that I specify return.fit = TRUE to return all of the fitted models. The result object is a list with two parts: the table of chi-square statistics we have seen before, and all models fitted by the function. The list of models includes the nested and parent models (with X1, X2, and X9 fixed). We want the parent model because the other items are not fixed across groups in it; thus, I use result$models$parent to extract the final model.
Let's try the tear-down strategy. We start with the partialInvariance function without any additional specifications:
partialInvariance(models2, type = "scalar", p.adjust = "holm")

Now, we look at the free results. We want to free significant constraints. We will see that X3 and X7 yield significant results. Because X3 yields the largest chi-square value, I free X3 across groups using the free argument:
round(partialInvariance(models2, type = "scalar", free="x3", p.adjust = "holm"), 4)

X7 yields the only remaining significant result, so X7 is freed across groups:
round(partialInvariance(models2, type = "scalar", free=c("x3", "x7"), p.adjust = "holm"), 4)

Now no significant items remain, so we have the final model. To check the result of the final model, the following script can be used:
result <- partialInvariance(models2, type = "scalar", free=c("x3", "x7"), p.adjust="holm",
return.fit = TRUE)
summary(result$models$nested)
We want the nested model because the other items (except X3 and X7) are fixed across groups in it; thus, I use result$models$nested to extract the final model.
Users may use measurementInvarianceCat and partialInvarianceCat to do the same things for categorical items.
Here is a summary of the whole script used in this example.
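Collecting the commands above in one place (with HW.model assumed to be the standard three-factor CFA for these data):

```r
library(semTools)

# Standard three-factor Holzinger & Swineford CFA (lavaan model syntax)
HW.model <- ' visual  =~ x1 + x2 + x3
              textual =~ x4 + x5 + x6
              speed   =~ x7 + x8 + x9 '

# Omnibus invariance tests across schools (fixed-factor identification)
models2 <- measurementInvariance(HW.model, data = HolzingerSwineford1939,
                                 group = "school", std.lv = TRUE)

# Build-up strategy: fix intercepts one at a time
round(partialInvariance(models2, type = "scalar", p.adjust = "holm"), 4)
round(partialInvariance(models2, type = "scalar", fix = "x1",
                        p.adjust = "holm"), 4)
round(partialInvariance(models2, type = "scalar", fix = c("x1", "x2"),
                        p.adjust = "holm"), 4)
round(partialInvariance(models2, type = "scalar", fix = c("x1", "x2", "x9"),
                        p.adjust = "holm"), 4)
result <- partialInvariance(models2, type = "scalar",
                            fix = c("x1", "x2", "x9"),
                            p.adjust = "holm", return.fit = TRUE)
summary(result$models$parent)

# Tear-down strategy: free intercepts one at a time
round(partialInvariance(models2, type = "scalar", free = "x3",
                        p.adjust = "holm"), 4)
round(partialInvariance(models2, type = "scalar", free = c("x3", "x7"),
                        p.adjust = "holm"), 4)
result <- partialInvariance(models2, type = "scalar",
                            free = c("x3", "x7"),
                            p.adjust = "holm", return.fit = TRUE)
summary(result$models$nested)
```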