Predictive Model Assessment in PLS-SEM: Extensions and Guidelines: An Abstract

Author:
Publisher/Corporate body:
Hamburg University of Technology
Year of publication:
2022
Media type:
Text
Keywords:
  • Assessment
  • Cross-validation
  • Evaluation
  • Model selection
  • PLS-SEM
  • Prediction
Description:
  • Model comparisons are an essential research tool for documenting the evolution of theoretical models and assessing whether the understanding of a phenomenon has improved (Sharma et al., 2019). Comparing alternative models places theories under sharp scrutiny and enables comparative testing to confirm or falsify theories (Gray & Cooper, 2009; Popper, 1959). Empirically, two different approaches have been proposed for conducting model comparisons. The first is explanation, which compares how well the observed data fit the competing models via in-sample measures such as goodness-of-fit, explained variance (e.g., R²), and information criteria such as the Akaike and Bayesian information criteria (Hair et al., 2017; Sharma et al., 2021). The second is prediction, which flips the relationship between the models and the data, so that the main focus is on comparing the accuracy (or error) of predictions for data the models have not previously analyzed. This approach relies on out-of-sample prediction techniques such as cross-validation (Hofman et al., 2017; Yarkoni & Westfall, 2017). In other words, whereas the goal of explanation-oriented model comparison is to assess which model best explains the sample relationships at hand, the goal of the predictive approach is to assess which model more accurately predicts (holdout) data not previously used to optimize model fit (Shmueli, 2010). Partial least squares structural equation modeling (PLS-SEM) was developed as a “causal-predictive” approach (Jöreskog & Wold, 1982, p. 270) to enable simultaneous explanation- and prediction-oriented model assessment (Chin et al., 2020; Hair et al., 2019b). Despite this dual focus, PLS-SEM has primarily been used in explanation-oriented studies that pay only lip service to prediction, owing to a lack of suitable prediction-oriented tools (Shmueli et al., 2016). To address this issue, Liengaard et al. (2021) recently proposed the cross-validated predictive ability test (CVPAT). However, the CVPAT currently lacks two critical capabilities: (1) comparing a single proposed model against a naïve benchmark to ensure that it meets a minimum standard of predictive accuracy, and (2) comparing two models based on the prediction accuracy of specific constructs rather than of all constructs simultaneously. The ability to compare two models at the construct level (i.e., based on the prediction of a subset of constructs) is essential for developing managerial strategies designed to optimize the model's key outcomes. This research fills this gap by extending the CVPAT framework to enable predictive benchmarking as well as construct-level comparisons of models' predictive accuracy.
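  • A minimal sketch of the predictive benchmarking logic described above: a proposed model's cross-validated (out-of-sample) prediction errors are compared against those of a naïve benchmark by testing the per-case loss differences. The Python sketch below is illustrative only and is not the published CVPAT procedure; an ordinary least-squares regression stands in for the PLS-SEM structural model, the naïve benchmark predicts the training-sample mean of the outcome, and the function name cvpat_style_benchmark_test as well as the scikit-learn and SciPy helpers are assumptions made for this example.

    import numpy as np
    from scipy import stats
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import KFold

    def cvpat_style_benchmark_test(X, y, n_splits=10, random_state=42):
        """Compare cross-validated squared prediction errors of a stand-in model
        against a naive benchmark that predicts the training-sample mean of y.

        Returns the average loss difference (benchmark minus model) and a one-sided
        p-value; a positive difference with a small p-value suggests the model
        predicts the holdout cases better than the naive benchmark.
        """
        kf = KFold(n_splits=n_splits, shuffle=True, random_state=random_state)
        loss_diff = np.empty(len(y))
        for train_idx, test_idx in kf.split(X):
            # Ordinary least squares stands in for the PLS-SEM structural model here.
            model = LinearRegression().fit(X[train_idx], y[train_idx])
            pred_model = model.predict(X[test_idx])
            # Naive benchmark: predict the training-fold mean for every holdout case.
            pred_naive = np.full(len(test_idx), y[train_idx].mean())
            loss_diff[test_idx] = (y[test_idx] - pred_naive) ** 2 - (y[test_idx] - pred_model) ** 2
        # Paired test on per-case loss differences (one-sided: model beats benchmark).
        t_stat, p_two_sided = stats.ttest_1samp(loss_diff, popmean=0.0)
        p_one_sided = p_two_sided / 2 if t_stat > 0 else 1 - p_two_sided / 2
        return loss_diff.mean(), p_one_sided

    # Purely synthetic example data, for illustration only.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    y = X @ np.array([0.5, 0.3, 0.0]) + rng.normal(size=200)
    avg_diff, p_value = cvpat_style_benchmark_test(X, y)
    print(f"Average loss difference (benchmark - model): {avg_diff:.3f}, p = {p_value:.4f}")

    Comparing two competing theoretical models instead of a model and a benchmark would follow the same pattern, with the second model's holdout predictions replacing pred_naive; restricting the loss computation to the indicators of selected constructs would correspond to the construct-level comparison described above.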
Relations:
DOI 10.1007/978-3-031-24687-6_30
Source system:
TUHH Open Research

Internal metadata
Source record
oai:tore.tuhh.de:11420/15164