6.3 Tau-Parallel Models#
Essentially tau-parallel measurement model#
The Essentially tau-parallel measurement model relaxes one restriction of the Tau-equivalent model by allowing the intercepts to vary; in exchange, it constrains the items to be equally reliable. It assumes that
items differ in their difficulty
items are equivalent in their discrimination power
items are equivalently reliable
We therefore only obtain item-specific estimates for the difficulty of the items (Intercepts section); discrimination power (Latent variables section) and reliability (Variances section) are constrained to be equal across items.
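In equation form, and in generic notation (a sketch, not a formula taken from the lecture), the Essentially tau-parallel model for item \(i\) can be written as
\[
Y_i = \alpha_i + \eta + \varepsilon_i, \qquad \operatorname{Var}(\varepsilon_i) = \sigma^2_{\varepsilon} \;\text{ for all } i,
\]
so the intercepts \(\alpha_i\) may differ across items while the loadings (all fixed to 1) and the residual variances are equal.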
Fit the model#
Usage#
This notebook compares the essentially tau-parallel and tau-parallel models using the items from Data_EmotionalClarity.dat. After reading the data, each model is specified in lavaan and the model fits are contrasted.
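If you are starting from a fresh session, the setup might look roughly like this (a sketch: the file name comes from the text, but the delimiter, the header argument and the object name dat2 are assumptions based on how the data are used below, and the exact rpy2 conversion settings may differ in your session).
# Load rpy2 and make lavaan available in the embedded R session
import rpy2.robjects as ro
ro.r('library(lavaan)')
# Read the item data into R as dat2 (assumed to be whitespace-delimited with a header row)
ro.r('dat2 <- read.table("Data_EmotionalClarity.dat", header=TRUE)')
print(ro.r('head(dat2)'))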
# Specify the model: all loadings fixed to 1 (equal discrimination),
# residual variances constrained to be equal via the label b (equal reliability),
# intercepts left free (difficulties may differ)
ro.r("""
metp <- 'eta =~ item_1 + 1*item_2 + 1*item_3 + 1*item_4 + 1*item_5 + 1*item_6
item_1 ~~ b*item_1
item_2 ~~ b*item_2
item_3 ~~ b*item_3
item_4 ~~ b*item_4
item_5 ~~ b*item_5
item_6 ~~ b*item_6'
""")
# Fit the model
ro.r('fitmetp <- sem(metp, data=dat2, meanstructure=TRUE, estimator="ML")')
# Print the output of the model for interpretation
summary_fitmetp = ro.r("summary(fitmetp, fit.measures=TRUE, standardized=TRUE)")
print(summary_fitmetp)
lavaan 0.6-19 ended normally after 13 iterations
Estimator ML
Optimization method NLMINB
Number of model parameters 13
Number of equality constraints 5
Number of observations 238
Model Test User Model:
Test statistic 19.886
Degrees of freedom 19
P-value (Chi-square) 0.401
Model Test Baseline Model:
Test statistic 435.847
Degrees of freedom 15
P-value 0.000
User Model versus Baseline Model:
Comparative Fit Index (CFI) 0.998
Tucker-Lewis Index (TLI) 0.998
Loglikelihood and Information Criteria:
Loglikelihood user model (H0) -437.339
Loglikelihood unrestricted model (H1) -427.396
Akaike (AIC) 890.677
Bayesian (BIC) 918.455
Sample-size adjusted Bayesian (SABIC) 893.098
Root Mean Square Error of Approximation:
RMSEA 0.014
90 Percent confidence interval - lower 0.000
90 Percent confidence interval - upper 0.059
P-value H_0: RMSEA <= 0.050 0.884
P-value H_0: RMSEA >= 0.080 0.003
Standardized Root Mean Square Residual:
SRMR 0.059
Parameter Estimates:
Standard errors Standard
Information Expected
Information saturated (h1) model Structured
Latent Variables:
Estimate Std.Err z-value P(>|z|) Std.lv Std.all
eta =~
item_1 1.000 0.254 0.667
item_2 1.000 0.254 0.667
item_3 1.000 0.254 0.667
item_4 1.000 0.254 0.667
item_5 1.000 0.254 0.667
item_6 1.000 0.254 0.667
Intercepts:
Estimate Std.Err z-value P(>|z|) Std.lv Std.all
.item_1 1.504 0.025 60.923 0.000 1.504 3.949
.item_2 1.423 0.025 57.637 0.000 1.423 3.736
.item_3 1.392 0.025 56.392 0.000 1.392 3.655
.item_4 1.305 0.025 52.849 0.000 1.305 3.426
.item_5 1.346 0.025 54.537 0.000 1.346 3.535
.item_6 1.306 0.025 52.890 0.000 1.306 3.428
Variances:
Estimate Std.Err z-value P(>|z|) Std.lv Std.all
.item_1 (b) 0.081 0.003 24.393 0.000 0.081 0.556
.item_2 (b) 0.081 0.003 24.393 0.000 0.081 0.556
.item_3 (b) 0.081 0.003 24.393 0.000 0.081 0.556
.item_4 (b) 0.081 0.003 24.393 0.000 0.081 0.556
.item_5 (b) 0.081 0.003 24.393 0.000 0.081 0.556
.item_6 (b) 0.081 0.003 24.393 0.000 0.081 0.556
eta 0.064 0.007 9.001 0.000 1.000 1.000
In the output we see the constrained parameters in the Latent variables and Variances sections: the loadings and the residual variances are identical across items. The intercepts (i.e. item difficulties), however, are allowed to vary, as shown in the Intercepts section. item_1 has the highest difficulty (1.504) and item_4 the lowest (1.305). Thus, a person with a latent score of 0 (or at the mean, if centered) is expected to score lowest on item_4 and highest on item_1. Once more, refer to the Tau Congeneric section for the interpretation of the fit indices.
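If you want to inspect the estimated difficulties in isolation, one option is to pull them out of the parameter table (a sketch using lavaan's parameterEstimates(); the filter below is an assumption about how the intercept rows are labelled).
# Extract only the item intercepts (rows with operator "~1") from the fitted model
item_intercepts = ro.r('subset(parameterEstimates(fitmetp), op == "~1" & lhs != "eta")')
print(item_intercepts)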
Compare model fit#
As before, we can use the anova()
function to compare the model fits.
# Perform anova and print indexes
anova_mete_metp = ro.r("anova(fitmete, fitmetp)")
print(anova_mete_metp)
Chi-Squared Difference Test
Df AIC BIC Chisq Chisq diff RMSEA Df diff Pr(>Chisq)
fitmete 14 897.74 942.88 16.949
fitmetp 19 890.68 918.46 19.886 2.9369 0 5 0.7097
Note that we compare the Essentially tau-parallel measurement model with the Essentially tau-equivalent model (and not with the Tau-equivalent model). The latter comparison would be invalid because only nested models, i.e. models that result from one another by adding or relaxing restrictions, can be contrasted in this way. The Essentially tau-parallel model restricts discrimination power and reliability, whereas the Tau-equivalent model restricts discrimination power and difficulty. Since neither set of restrictions contains the other, the two models are not nested. In contrast, the Essentially tau-parallel model arises from the Essentially tau-equivalent model by additionally imposing equal reliability, so these two models can be compared.
The comparison yields an ambiguous result. While AIC and BIC favor the Essentially tau-parallel model, the \(\chi^2\) statistic points toward a better fit for the Essentially tau-equivalent model. However, the \(\chi^2\) difference is not significant (\(\Delta\chi^2(5) = 2.94\), \(p = .71\)), so the additional restriction of equal reliability is not contradicted by the data.
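To see which indices drive this pattern, you can also request selected fit measures for both models directly (a sketch; it assumes the fit object fitmete from the Essentially tau-equivalent model is still available in the R session).
# Selected fit measures for the Essentially tau-equivalent and Essentially tau-parallel models
print(ro.r('fitMeasures(fitmete, c("chisq", "df", "aic", "bic"))'))
print(ro.r('fitMeasures(fitmetp, c("chisq", "df", "aic", "bic"))'))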
Tau-parallel measurement model#
Out of the models we looked at today, the Tau-parallel measurement model is the most restrictive one. It assumes that
items are equivalent in their difficulty
items are equivalent in their discrimination power
items are equivalently reliable
Therefore, all parameters (Intercepts, Latent variables and Variances sections) are restricted.
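In the same generic notation as above (again only a sketch), the Tau-parallel model additionally equates the intercepts:
\[
Y_i = \alpha + \eta + \varepsilon_i, \qquad \operatorname{Var}(\varepsilon_i) = \sigma^2_{\varepsilon} \;\text{ for all } i .
\]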
Fit the model#
# Specify the model: all loadings fixed to 1 (equal discrimination),
# intercepts constrained to be equal via the label a (equal difficulty),
# residual variances constrained to be equal via the label b (equal reliability)
ro.r("""
mtp <- 'eta =~ item_1 + 1*item_2 + 1*item_3 + 1*item_4 + 1*item_5 + 1*item_6
item_1 ~ a*1
item_2 ~ a*1
item_3 ~ a*1
item_4 ~ a*1
item_5 ~ a*1
item_6 ~ a*1
item_1 ~~ b*item_1
item_2 ~~ b*item_2
item_3 ~~ b*item_3
item_4 ~~ b*item_4
item_5 ~~ b*item_5
item_6 ~~ b*item_6'
""")
# Fit the model
ro.r('fitmtp <- sem(mtp, data=dat2, meanstructure=TRUE, estimator="ML")')
# Print the output of the model for interpretation
summary_fitmtp = ro.r("summary(fitmtp, fit.measures=TRUE, standardized=TRUE)")
print(summary_fitmtp)
lavaan 0.6-19 ended normally after 13 iterations
Estimator ML
Optimization method NLMINB
Number of model parameters 13
Number of equality constraints 10
Number of observations 238
Model Test User Model:
Test statistic 104.462
Degrees of freedom 24
P-value (Chi-square) 0.000
Model Test Baseline Model:
Test statistic 435.847
Degrees of freedom 15
P-value 0.000
User Model versus Baseline Model:
Comparative Fit Index (CFI) 0.809
Tucker-Lewis Index (TLI) 0.881
Loglikelihood and Information Criteria:
Loglikelihood user model (H0) -479.627
Loglikelihood unrestricted model (H1) -427.396
Akaike (AIC) 965.254
Bayesian (BIC) 975.670
Sample-size adjusted Bayesian (SABIC) 966.161
Root Mean Square Error of Approximation:
RMSEA 0.119
90 Percent confidence interval - lower 0.096
90 Percent confidence interval - upper 0.142
P-value H_0: RMSEA <= 0.050 0.000
P-value H_0: RMSEA >= 0.080 0.997
Standardized Root Mean Square Residual:
SRMR 0.109
Parameter Estimates:
Standard errors Standard
Information Expected
Information saturated (h1) model Structured
Latent Variables:
Estimate Std.Err z-value P(>|z|) Std.lv Std.all
eta =~
item_1 1.000 0.252 0.650
item_2 1.000 0.252 0.650
item_3 1.000 0.252 0.650
item_4 1.000 0.252 0.650
item_5 1.000 0.252 0.650
item_6 1.000 0.252 0.650
Intercepts:
Estimate Std.Err z-value P(>|z|) Std.lv Std.all
.item_1 (a) 1.379 0.018 76.247 0.000 1.379 3.561
.item_2 (a) 1.379 0.018 76.247 0.000 1.379 3.561
.item_3 (a) 1.379 0.018 76.247 0.000 1.379 3.561
.item_4 (a) 1.379 0.018 76.247 0.000 1.379 3.561
.item_5 (a) 1.379 0.018 76.247 0.000 1.379 3.561
.item_6 (a) 1.379 0.018 76.247 0.000 1.379 3.561
Variances:
Estimate Std.Err z-value P(>|z|) Std.lv Std.all
.item_1 (b) 0.087 0.004 24.393 0.000 0.087 0.577
.item_2 (b) 0.087 0.004 24.393 0.000 0.087 0.577
.item_3 (b) 0.087 0.004 24.393 0.000 0.087 0.577
.item_4 (b) 0.087 0.004 24.393 0.000 0.087 0.577
.item_5 (b) 0.087 0.004 24.393 0.000 0.087 0.577
.item_6 (b) 0.087 0.004 24.393 0.000 0.087 0.577
eta 0.063 0.007 8.858 0.000 1.000 1.000
As you can see, loadings, intercepts and error variances are all restricted. The interpretation of the fit indices is analogous to the Tau Congeneric measurement model (see above). You might notice that model fit (as indicated by the \(\chi^2\) value) declines as more restrictions are added: while the (least restrictive) Tau Congeneric measurement model has a \(\chi^2\) value of 9.568, the (most restrictive) Tau-parallel measurement model has a \(\chi^2\) value of 104.462.
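To make this pattern visible at a glance, you could collect the \(\chi^2\) values (and a few other measures) of all models in one table (a sketch; it assumes the fit objects fitmtc, fitmete, fitmte, fitmetp and fitmtp from this and the previous sections are still in the R session).
# Chi-square, degrees of freedom and selected fit indices for all models, side by side
print(ro.r("""
sapply(list(congeneric = fitmtc, ess_tau_equivalent = fitmete, tau_equivalent = fitmte,
            ess_tau_parallel = fitmetp, tau_parallel = fitmtp),
       fitMeasures, fit.measures = c("chisq", "df", "cfi", "rmsea"))
"""))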
Compare model fit#
Since the Tau-parallel measurement model results from further restricting either the Essentially tau-parallel measurement model or the Tau-equivalent model, we can test the Tau-parallel measurement model against both.
# Perform anova and print indexes
anova_metp_mtp = ro.r("anova(fitmetp, fitmtp)") #Tau-parallel measurement model vs. Essentially tau-parallel measurement model
print(anova_metp_mtp)
anova_mte_mtp = ro.r("anova(fitmte, fitmtp)") #Tau-parallel measurement model vs. Tau-equivalent measurement model
print(anova_mte_mtp)
Chi-Squared Difference Test
Df AIC BIC Chisq Chisq diff RMSEA Df diff Pr(>Chisq)
fitmetp 19 890.68 918.46 19.886
fitmtp 24 965.25 975.67 104.462 84.576 0.25859 5 < 2.2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Chi-Squared Difference Test
Df AIC BIC Chisq Chisq diff RMSEA Df diff Pr(>Chisq)
fitmte 19 970.91 998.69 100.12
fitmtp 24 965.25 975.67 104.46 4.3457 0 5 0.5008
The first comparison shows that the Essentially tau-parallel measurement model fits the data significantly better than the Tau-parallel measurement model, as indicated by the \(\chi^2\) difference test, AIC and BIC. Further, there appears to be no significant difference in model fit between the Tau-parallel measurement model and the Tau-equivalent model, although AIC and BIC slightly favor the Tau-parallel measurement model. This advantage is not due to a better fit to the data but to the Tau-parallel measurement model having fewer free parameters (i.e. it is the simpler model).
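The parsimony point can be made concrete with the information criteria themselves. With \(k\) free parameters, log-likelihood \(\ln L\) and sample size \(n\), lavaan reports
\[
\mathrm{AIC} = 2k - 2\ln L, \qquad \mathrm{BIC} = k\ln(n) - 2\ln L .
\]
For the Tau-parallel model, \(k = 13 - 10 = 3\), \(\ln L = -479.627\) and \(n = 238\), giving \(\mathrm{AIC} = 6 + 959.254 \approx 965.25\) and \(\mathrm{BIC} = 3\ln(238) + 959.254 \approx 975.67\), which matches the output above. Its smaller penalty term, not a larger likelihood, is what pushes its AIC and BIC below those of the Tau-equivalent model.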
Conclusions#
One might ask what we should conclude or infer from these models.
Let's look again at the last comparisons.
anova_metp_mtp = ro.r("anova(fitmetp, fitmtp)") #Tau-parallel measurement model vs. Essentially tau-parallel measurement model
print(anova_metp_mtp)
Chi-Squared Difference Test
Df AIC BIC Chisq Chisq diff RMSEA Df diff Pr(>Chisq)
fitmetp 19 890.68 918.46 19.886
fitmtp 24 965.25 975.67 104.462 84.576 0.25859 5 < 2.2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
You should remember that the Tau-parallel measurement model restricts discrimination power (loadings), difficulty (intercepts) and reliability (errors). The Essentially tau-parallel measurement model only restricts discrimination power and reliability. We see in the model comparison that the Essentially tau-parallel measurement model provides a significantly better fit to the data. From this we can infer that freely estimating the intercepts provides a significantly better fit as compared to assuming them to be equivalent. In other words, the data suggests that our items are not equally difficult.
anova_mte_mtp = ro.r("anova(fitmte, fitmtp)") #Tau-parallel measurement model vs. Tau-equivalent measurement model
print(anova_mte_mtp)
Chi-Squared Difference Test
Df AIC BIC Chisq Chisq diff RMSEA Df diff Pr(>Chisq)
fitmte 19 970.91 998.69 100.12
fitmtp 24 965.25 975.67 104.46 4.3457 0 5 0.5008
Let's look at the other comparison. Again, remember that the Tau-parallel measurement model restricts discrimination power (loadings), difficulty (intercepts) and reliability (errors), whereas the Tau-equivalent measurement model only restricts discrimination power and difficulty. The model comparison shows that the more flexible model (the Tau-equivalent measurement model) does not provide a significantly better fit to the data, meaning there is no significant difference in model fit between freely estimating the reliabilities and assuming equal reliability across all items.
Think of it this way: when the more flexible model already yields (nearly) equal error variances for all items, constraining them to be equal in the less flexible model changes little, hence there is no significant difference in model fit. In other words, from this comparison we can conclude that our items are equally reliable.
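For intuition: under these models an item's reliability is the share of its variance that is due to the latent variable. With loadings fixed to 1 this is, in the generic notation used above,
\[
\mathrm{Rel}(Y_i) = \frac{\operatorname{Var}(\eta)}{\operatorname{Var}(\eta) + \operatorname{Var}(\varepsilon_i)},
\]
which for the Tau-parallel estimates gives \(0.063 / (0.063 + 0.087) \approx .42\), i.e. the squared standardized loading (\(0.650^2\)).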
According to this approach, would you say our items are equal in discrimination power?
Extract factor scores#
Lastly, we can also extract person coefficients (i.e. factor scores) using the predict() function, here applied to fitmtc, the Tau Congeneric model fitted earlier.
ppar = ro.r("predict(fitmtc)")
print(ppar)
[[ 0.12073671]
[ 0.2399859 ]
[-0.00106554]
[-0.0080816 ]
[-0.20787835]
[-0.18320165]
[ 0.39365629]
[ 0.27002988]
[-0.0353721 ]
[-0.11238712]
[-0.03732313]
[ 0.27998964]
[-0.08850881]
[ 0.09130247]
[-0.10579118]
[ 0.11052924]
[ 0.10778979]
[-0.32484031]
[ 0.20636155]
[ 0.17023445]
[ 0.26021663]
[ 0.27197654]
[-0.17114061]
[-0.1423634 ]
[ 0.36606071]
[-0.07478522]
[-0.08269706]
[ 0.06433197]
[ 0.15894493]
[ 0.04430302]
[-0.50563021]
[ 0.02755227]
[ 0.0045081 ]
[ 0.05691382]
[ 0.06471399]
[ 0.11226692]
[ 0.01009598]
[ 0.03662523]
[ 0.0371341 ]
[-0.07487552]
[ 0.06196375]
[-0.08277975]
[ 0.11805626]
[ 0.16415042]
[-0.14367311]
[ 0.02914086]
[ 0.12025572]
[ 0.11489691]
[-0.21777332]
[ 0.15941502]
[ 0.22347693]
[ 0.06400319]
[ 0.01480666]
[-0.06774294]
[ 0.37054273]
[ 0.06061693]
[ 0.2043271 ]
[-0.21180907]
[ 0.02470239]
[ 0.22176066]
[-0.20330248]
[ 0.04473016]
[ 0.0934721 ]
[-0.23556783]
[ 0.01584813]
[ 0.2483084 ]
[-0.38746985]
[-0.07089748]
[ 0.21088851]
[ 0.07334115]
[ 0.17564439]
[-0.05144462]
[-0.07089966]
[ 0.24499844]
[ 0.10151722]
[-0.06901617]
[ 0.0456448 ]
[ 0.16319454]
[ 0.17989838]
[-0.13421496]
[-0.24170618]
[-0.46505992]
[ 0.03180718]
[ 0.0664597 ]
[ 0.03337448]
[-0.06008466]
[-0.28387795]
[ 0.08465731]
[ 0.13409875]
[ 0.05692584]
[-0.20550244]
[ 0.08144463]
[-0.04605298]
[ 0.21947592]
[-0.05537016]
[ 0.24685093]
[-0.17619315]
[ 0.05007634]
[ 0.13997748]
[ 0.07674666]
[-0.18686077]
[ 0.1622401 ]
[ 0.38272082]
[-0.14227858]
[-0.23937826]
[ 0.10285074]
[-0.1722375 ]
[ 0.08698164]
[-0.03229419]
[ 0.23368831]
[ 0.02511868]
[ 0.20433532]
[ 0.16165382]
[-0.24442257]
[ 0.18407926]
[-0.12782877]
[-0.48683382]
[-0.0370809 ]
[ 0.14700427]
[ 0.1175338 ]
[-0.02603344]
[ 0.01843564]
[ 0.14197215]
[-0.16042558]
[-0.12501578]
[ 0.39887482]
[-0.05839251]
[-0.27333906]
[ 0.09378877]
[-0.2859779 ]
[-0.06329123]
[ 0.28809748]
[ 0.02208326]
[ 0.02926573]
[-0.07572882]
[ 0.05136836]
[-0.13451622]
[-0.28482411]
[-0.05971757]
[-0.00696119]
[-0.2056506 ]
[ 0.03240509]
[ 0.31028298]
[ 0.00434217]
[-0.08589899]
[-0.11646003]
[-0.15879802]
[ 0.02091822]
[-0.0185422 ]
[ 0.19232778]
[ 0.21641285]
[-0.17676105]
[-0.13089798]
[ 0.08564287]
[ 0.29361047]
[-0.30384829]
[-0.15604862]
[ 0.22365418]
[-0.01888986]
[ 0.2558285 ]
[-0.0931723 ]
[-0.14190415]
[-0.19480303]
[-0.23080607]
[ 0.01103917]
[-0.05970047]
[-0.13721802]
[-0.22745276]
[-0.08843777]
[-0.25607412]
[-0.29596978]
[-0.21146914]
[ 0.18616632]
[ 0.07984204]
[ 0.10033143]
[ 0.16198517]
[-0.19764113]
[-0.1945785 ]
[ 0.01118844]
[-0.16381194]
[-0.11198058]
[-0.2099628 ]
[-0.02347381]
[ 0.24895667]
[-0.03948673]
[-0.1448985 ]
[ 0.31304109]
[-0.09518704]
[-0.16975589]
[-0.12045449]
[ 0.1817603 ]
[ 0.02893207]
[-0.15751139]
[ 0.11845567]
[-0.09532234]
[-0.78347861]
[ 0.10470245]
[ 0.12724124]
[ 0.50450487]
[ 0.16642292]
[-0.00875061]
[ 0.32828081]
[ 0.22736694]
[-0.20620184]
[ 0.36810431]
[ 0.25601399]
[ 0.04549933]
[ 0.13245698]
[-0.06223831]
[ 0.02825288]
[ 0.2033174 ]
[-0.17282055]
[ 0.11156832]
[ 0.26545191]
[-0.29160059]
[ 0.04155404]
[ 0.24093035]
[ 0.04186059]
[-0.29186336]
[-0.93529234]
[-0.58976593]
[ 0.18656583]
[ 0.24061333]
[-0.03180274]
[-0.00312996]
[-0.07429995]
[-0.01376578]
[ 0.30334668]
[ 0.05238216]
[-0.02575791]
[-0.36423244]
[ 0.03893627]
[-0.11997552]
[ 0.30172281]
[ 0.12893153]
[ 0.25059732]
[-0.15500374]
[-0.88093103]]
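If you want to work with these scores further, e.g. to summarize them or merge them back with the item data, a possible next step looks like this (a sketch; it converts the result to a flat NumPy array and assumes numpy is available and that the rpy2 conversion returns an array-like object, as the output above suggests).
# Convert the factor scores to a flat NumPy array and summarize them
import numpy as np
scores = np.array(ppar).ravel()
print(scores.mean(), scores.std(), scores.min(), scores.max())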