Going beyond the untold facts in PLS–SEM and moving forward

Joe F. Hair (Department of Marketing, University of South Alabama, Mobile, Alabama, USA)
Marko Sarstedt (Institute for Marketing, Ludwig-Maximilians-Universität München, München, Germany and Faculty of Economics and Business Administration, Babeș-Bolyai University, Cluj-Napoca, Romania)
Christian M. Ringle (Department of Management Sciences and Technology, Hamburg University of Technology, Hamburg, Germany)
Pratyush N. Sharma (Department of Information Systems, Statistics and Management Science, Culverhouse College of Business, The University of Alabama, Tuscaloosa, Alabama, USA)
Benjamin D. Liengaard (Department of Economics and Business Economics, Aarhus University, Aarhus, Denmark)

European Journal of Marketing

ISSN: 0309-0566

Article publication date: 28 May 2024

Abstract

Purpose

This paper aims to discuss recent criticism related to partial least squares structural equation modeling (PLS-SEM).

Design/methodology/approach

Using a combination of literature reviews, empirical examples, and simulation evidence, this research demonstrates that critical accounts of PLS-SEM paint an overly negative picture of PLS-SEM’s capabilities.

Findings

Criticisms of PLS-SEM often generalize from boundary conditions with little practical relevance to the method’s general performance, and disregard the metrics and analyses (e.g., Type I error assessment) that are important when assessing the method’s efficacy.

Research limitations/implications

We believe the alleged “fallacies” and “untold facts” have already been addressed in prior research and that the discussion should shift toward constructive avenues by exploring future research areas that are relevant to PLS-SEM applications.

Practical implications

All statistical methods, including PLS-SEM, have strengths and weaknesses. Researchers need to consider established guidelines and recent advancements when using the method, especially given the fast pace of developments in the field.

Originality/value

This research addresses criticisms of PLS-SEM and offers researchers, reviewers, and journal editors a more constructive view of its capabilities.

Citation

Hair, J.F., Sarstedt, M., Ringle, C.M., Sharma, P.N. and Liengaard, B.D. (2024), "Going beyond the untold facts in PLS–SEM and moving forward", European Journal of Marketing, Vol. 58 No. 13, pp. 81-106. https://doi.org/10.1108/EJM-08-2023-0645

Publisher

Emerald Publishing Limited

Copyright © 2024, Joe F. Hair, Marko Sarstedt, Christian M. Ringle, Pratyush N. Sharma and Benjamin Dybro Liengaard.

License

Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial & non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


Introduction

Partial least squares structural equation modeling (PLS-SEM; Lohmöller, 1989; Wold, 1982) has long been a widely used method for estimating complex interrelationships between constructs and indicator variables in marketing (e.g. Guenther et al., 2023; Ramos et al., 2023; Sarstedt et al., 2022a, 2022b) and other fields of research (e.g. Nitzl and Chin, 2017; Russo and Stol, 2021; Zeng et al., 2021). Recent methodological advancements (e.g. Hair et al., 2024a; Richter et al., 2020; Sharma et al., 2023a) have further expanded PLS-SEM’s areas of application across disciplines, but its adoption and use have also invited criticism (e.g. Rönkkö et al., 2023). While constructive criticism helps to advance researchers’ understanding of a method’s strengths and limitations (e.g. Cook and Forzani, 2023; Rigdon, 2012), some methodological studies have taken a very critical stance against PLS-SEM (e.g. Rönkkö et al., 2016a, 2023), in what Petter (2018, p. 10) refers to as “anti-PLS rhetoric.”

This paper is particularly concerned with Rönkkö et al.’s (2023) “fallacies” and “untold facts” about PLS-SEM’s primary statistical objective (i.e. residual variance minimization) and assessment metrics including estimates of weights and potential biases triggered by correlated error terms. Some of these claims have already been addressed in prior research (e.g. Cook and Forzani, 2020; Henseler et al., 2014; Rigdon, 2012; Sarstedt et al., 2016) or rest on outdated concepts and understandings of composite-based SEM in general. For example, rather than considering the most recent research on PLS-SEM, the criticisms focus on the early writings on the PLS-SEM method, such as Hair et al.’s (2011) “Indeed a silver bullet” article, reflecting an understanding of the method from more than a decade ago, which the original study’s authors themselves have acknowledged as deficient (Sarstedt et al., 2023). Similarly, the first edition of the Primer on PLS-SEM (Hair et al., 2014) is used as the basis for the criticism despite the availability of newer editions that reflect updated practices in PLS-SEM use (e.g. Hair et al., 2017a, 2022). This is problematic, because PLS-SEM has experienced rapid methodological developments over the last decade, including updated use guidelines and numerous new model evaluation features (for an overview of PLS-SEM advances see, for example, Cepeda-Carrión et al., 2022 and Richter et al., 2022). For instance, the use of the necessary condition analysis (NCA) in combination with PLS-SEM (e.g. Hauff et al., 2024; Richter et al., 2020; Sukhov et al., 2022) “offers a unique contribution by comparing and combining approaches” (Bergh et al., 2022, p. 1842).

In other cases, critics highlight issues with limited relevance for applied research by showcasing unrealistic simulation set-ups and misspecified empirical models that lack measurement theory support. For example, Rönkkö et al. (2023) focus on structural models which represent at best extreme boundary conditions with little practical relevance for the models that researchers work with (see also Sharma et al., 2023b). Furthermore, ignoring quality criteria established in the PLS-SEM literature, such as discriminant validity assessment using the HTMT criterion (Henseler et al., 2015) and the PLSpredict procedure for predictive model assessment (Shmueli et al., 2016), fails to provide a complete and balanced picture of the method’s performance.

The purpose of this article is to provide an alternative perspective to help editors, reviewers, and researchers understand the value of PLS-SEM, as well as its limitations, and to contribute toward a more constructive and balanced discussion about the method’s appropriate use and future developments. We first focus on evaluating the three “fallacies” of PLS-SEM presented by Rönkkö et al. (2023). In doing so, we expand on Hair et al.’s (2024b) recent discussion of the shortcomings of equal weights estimation and the composite equivalence index (CEI) that Rönkkö et al. (2023) proposed. We then reflect on the current state of the PLS-SEM approach and end with several concluding observations regarding its future [1].

“Fallacy #1”: PLS-SEM maximizes explained variance or R2

In “Fallacy #1”, the critics claim that prior research maintains that PLS-SEM maximizes a structural model’s explained variance (R2), which is implicitly taken to mean that the method yields optimal indicator weights in this respect. At the same time, the critics note that it is unclear what PLS-SEM maximizes, and thereby call for a global optimization criterion. Finally, they question why a method should maximize R2 at all, yet then show that a different technique (i.e. canonical correlation analysis, CCA; e.g. Thompson, 1984; Thorndike, 2000) yields even higher R2 values than PLS-SEM. Despite the ambiguities in this reasoning, these claims require closer scrutiny and clarification [2].

The PLS-SEM algorithm executes partial regressions to obtain composite scores that minimize the residual variances in the relationships between composites and indicators (i.e. in the measurement models) as well as between composites (i.e. in the structural model; Tenenhaus et al., 2005) [3]. This characteristic has been emphasized by decades of research on PLS-SEM, starting with the early writing of the original proponents Jöreskog and Wold (1982, p. 270), who note that:

The PLS procedure is partial LS in the sense that each step of the estimation minimizes a residual variance with respect to a subset of the parameters, given proxy or final estimates for other parameters. At the limit of convergence, the PLS estimates are coherent in the sense that all the residual variances are minimized jointly [emphasis added].

At the same time, we acknowledge that not all earlier literature on PLS-SEM described this characteristic accurately—including some of the prior writings of co-authors of this paper (e.g. Hair et al. 2014, Chap. 1). However, this issue has been rectified in later writings (e.g. Guenther et al., 2023; Sarstedt et al., 2023).

The objective of minimizing residuals jointly in both the measurement models and the structural model via a sequence of partial regressions is intended to establish a balance between these two key objectives when determining the parameters for the entire model (i.e. the measurement model as well as the structural model)—as extensively documented in the literature (e.g. Chin, 1998; Lohmöller, 1989, Chap. 2; Tenenhaus et al., 2005). After the composites have been established from a set of weights, the estimation of the structural model coefficients proceeds by applying ordinary least squares (OLS) regressions. Hence, the latter stage of the PLS-SEM estimation has the optimality property of the widely used OLS regression algorithm, namely the minimum distance property of orthogonal projections (Hanafi, 2007).
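
To make these alternating estimation steps concrete, the following minimal base-R sketch implements the basic iterative scheme for a hypothetical three-composite chain, using Mode A (correlation) outer weights and the centroid inner weighting scheme. The toy data, block names, and convergence settings are illustrative assumptions, not any package’s implementation.

```r
# Minimal sketch of the basic PLS-SEM estimation loop (illustrative set-up)
set.seed(1)
n <- 300
eta1 <- rnorm(n)                               # latent chain driving the toy data
eta2 <- 0.6 * eta1 + rnorm(n, sd = 0.8)
eta3 <- 0.6 * eta2 + rnorm(n, sd = 0.8)
make_block <- function(eta, p)                 # p indicators loading on eta
  scale(sapply(1:p, function(i) 0.8 * eta + rnorm(n, sd = 0.6)))
blocks <- list(C1 = make_block(eta1, 3), C2 = make_block(eta2, 3),
               C3 = make_block(eta3, 2))
adj <- rbind(c(0, 1, 0), c(1, 0, 1), c(0, 1, 0))  # C1 - C2 - C3 adjacency

w <- lapply(blocks, function(X) rep(1 / sqrt(ncol(X)), ncol(X)))  # start weights
for (iteration in 1:500) {
  Y <- sapply(seq_along(blocks), function(j) scale(blocks[[j]] %*% w[[j]]))
  E <- sign(cor(Y)) * adj                      # centroid inner weighting scheme
  Z <- scale(Y %*% E)                          # inner proxies for each composite
  w_new <- lapply(seq_along(blocks), function(j) {
    v <- drop(cor(blocks[[j]], Z[, j]))        # Mode A: correlation weights
    v / sqrt(sum(v^2))                         # normalized for convergence check
  })
  converged <- max(abs(unlist(w_new) - unlist(w))) < 1e-10
  w <- w_new
  if (converged) break
}
Y <- sapply(seq_along(blocks), function(j) scale(blocks[[j]] %*% w[[j]]))
coef(lm(Y[, 2] ~ Y[, 1]))[2]   # OLS path estimate C1 -> C2
coef(lm(Y[, 3] ~ Y[, 2]))[2]   # OLS path estimate C2 -> C3
```

Production-grade implementations (e.g. SmartPLS or the cSEM package) add further weighting schemes, convergence safeguards, and standardization details; the sketch only mirrors the alternation between inner and outer estimation described above.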

Critics also argue that the standard PLS-SEM method does not offer a single optimization criterion. The continued criticism based on the lack of a single optimization criterion is quite surprising. First, multi-objective optimization is well established in other disciplines, particularly in machine learning (e.g. Gunantara, 2018). Second, this is a defining characteristic of the PLS-SEM method, implemented deliberately by design: “[…] the PLS procedure remains ‘partial’ in the sense that there is no total residual variance or other overall optimum criterion that is strictly optimized” (Wold, 1982, p. 270). This is because PLS-SEM focuses on achieving a tradeoff between estimation bias and reduction of standard errors (Wold, 1982). While PLS-SEM applies a regression system that converges towards a stable, but not necessarily a formally defined optimal solution, research has continuously explored the core statistical properties underlying the method’s optimization process. For example, Hanafi (2007) confirmed that the PLS-SEM algorithm is monotonically convergent for the centroid and factorial weighting schemes under Mode B. Third, and most importantly, in addition to composite modeling alternatives such as generalized structured component analysis (GSCA; Hwang and Takane, 2004), which offers an optimization criterion, more recent research has developed a full-information extension of PLS-SEM, referred to as global least squares path modeling (GLSPM), which consistently minimizes a single least squares criterion via an iterative algorithm that simultaneously estimates all model parameters (i.e. component weights, loadings, and path coefficients, under both Mode A and Mode B; Hwang and Cho, 2020). Hence, the associated claim that “no amount of ad-hoc retrofitting will remove them” (Rönkkö et al., 2023, p. 1613; i.e. limitations of the original PLS-SEM method, such as the lack of a single optimization criterion) not only provides an incomplete picture of the literature, but also reflects an overly pessimistic view of scientific progress.

Despite their concern about whether R2 maximization is useful for estimating parameters in a complex model with multiple equations, Rönkkö et al. (2023) rely on the R2 to show that a CCA of a revised version of the European Customer Satisfaction Index (ECSI) model produces a higher R2 value compared to PLS-SEM. They show that the CCA produces an R2 value that is 11% and 3% higher, respectively, compared to PLS-SEM’s Mode A and Mode B estimations.

Reproducing their analysis confirms their result, but also shows that this increase in R2 comes at the expense of bewildering weight estimates in the CCA: all but one of the indicators of the multi-item constructs receive negative weights. For example, their analysis yields an indicator weight of −0.238 for the fifth image indicator (imag5; Figure 1, right panel), which incorrectly implies that innovative and forward-looking companies should have a more unfavorable image. Similarly, being perceived as stable and firmly established (imag2), making social contributions to society (imag3), and being concerned about customers (imag4) translate into a lower image for a company. These flawed CCA implications should deter readers from giving much weight to the empirical comparison between PLS-SEM and CCA results.

An even more fundamental question still remains: Does comparing PLS-SEM and CCA make sense in the first place? The rational answer is “no,” since CCA considers a different statistical model compared to PLS-SEM. Specifically, PLS-SEM processes the ECSI model configuration, which focuses on relationships among the Complaints, Image, Loyalty, and Satisfaction constructs (Figure 1, left panel), and relies on piecewise estimations of the model elements. In contrast, the CCA quantifies the amount of linear relationship between two sets of variables (Benesty and Cohen, 2018, Chap. 2). Hence, the CCA postulates a simple two-construct model structure, which uses cusa1 to cusa3, cusco, and imag1 to imag5 for the indicators of block X; and cusl1 to cusl3 for the indicators of block Y in their example (Figure 1, right panel)—the indicators are simply separated ex post in their results presentation. In doing so, the CCA ignores the structural relationships of the original PLS path model [4]. Clearly, the demonstration of higher R2 is based on a lack of structural theory and goes against PLS-SEM’s goal of testing a causal-predictive model structure (e.g. Chin et al., 2020; Wold, 1982) postulated on the grounds of theory as well as logic.
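
To see what the CCA actually optimizes, consider the following minimal sketch using base R’s cancor() function on illustrative stand-in data (the variable names are hypothetical, not the ECSI indicators): the method picks one weight vector per block so as to maximize the correlation between the two resulting composites, with no structural model involved.

```r
# Sketch of the two-block analysis that CCA performs (illustrative data)
set.seed(2)
n <- 250
X <- matrix(rnorm(n * 4), n, dimnames = list(NULL, paste0("x", 1:4)))
s <- drop(X[, 1:2] %*% c(0.7, 0.3))              # signal shared with block Y
Y <- cbind(y1 = s + rnorm(n, sd = 0.8), y2 = s + rnorm(n, sd = 0.8))
cc <- cancor(X, Y)        # maximizes the correlation between the two blocks
cc$cor[1]^2               # squared first canonical correlation: the maximal R2
cc$xcoef[, 1]             # block-X weights: chosen purely to maximize R2,
cc$ycoef[, 1]             # with no structural model behind them
```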

To summarize, it is hard to put much faith in “Fallacy #1.” First, calling for a single objective criterion adds little to the debate, especially since Hwang and Cho’s (2020) GLSPM extension of the original PLS-SEM method already achieves this objective. Second, contrasting PLS-SEM results with those from the canonical correlations obtained by the CCA amounts to a comparison of two different analytical techniques with different goals and objectives—and is akin to comparing apples and oranges (also see Marcoulides et al., 2012).

“Fallacy #2”: PLS-SEM weights do not improve reliability

“Fallacy #2” states that PLS-SEM-based weights do not improve reliability, a claim that has already been raised by Rönkkö and Evermann (2013)—despite contrary evidence in prior research. For example, Rigdon (2012) has analytically shown that PLS-SEM-based weights adjust for unreliability due to the nature of weighted composites. In addition, Cook and Forzani (2023) characterize PLS-SEM as a method for estimating envelope models, which facilitates studying the method’s performance in terms of bias and small sample size behavior in a traditional model-based context. Based on their discussions, Cook and Forzani (2023) conclude that PLS-SEM effectively reduces the effect of the indicators’ measurement error.

In their descriptions, Rönkkö et al. (2023) acknowledge that “a few recent articles have presented simulations where PLS weights make a difference” (p. 1604), noting that “[t]hese studies appear not to be designed to be representative of real data sets but simply to find scenarios where indicator weights make a maximal difference” (p. 1604). As an example, one of the two studies they cite is Hair et al. (2017b), which presents a comparative evaluation of composite-based SEM methods. In fact, the Hair et al. (2017b) simulation design relies on a model that mirrors the structure of the American Customer Satisfaction Index (ACSI; Fornell et al., 1996), which ranks among the most prominent models in marketing research (Fornell et al., 2020, Chap. 1). The ACSI model is also one of the cornerstones of the CFI group’s activities—a highly successful market research firm that specializes in customer, citizen, and employee satisfaction studies (Morgeson et al., 2023). In their simulation study, Hair et al. (2017b) consider different numbers of indicators, indicator weights, data distributions, and sample sizes, for 120 factor-level combinations and 36,000 data sets in their assessment of PLS-SEM’s performance. Moreover, this simulation study investigates four sets of unequal indicator weights that range between 0.075 and 0.9, depending on each measurement model’s number of indicators. Thus, the claim that Hair et al. (2017b) were solely focusing on situations where weights make a maximal difference is hard to defend in light of the complexity and comprehensiveness of their study’s simulation design.

Hair et al. (2024b) recently illustrated the problems that emerge from ignoring differential indicator weights and applying sum scores. Their study draws on Rönkkö et al.’s (2023) application of the ECSI example to show the consequences of having an unreliable indicator in a construct. Specifically, the second Loyalty indicator (cusl2) has a very low loading of 0.202, suggesting that the construct explains only about 4% of this indicator’s variance and that the indicator should be removed from the measurement model. Hair et al. (2024b) show that, unlike equal weights estimation, PLS-SEM enables researchers to identify the unreliable indicator. In addition, the method is less affected by the inclusion of the unreliable indicator, as the PLS-SEM algorithm puts a low (correlation) weight on cusl2. More specifically, estimating the model using PLS-SEM with the standard data set (n = 250) produces a path coefficient between Satisfaction and Loyalty of 0.485 (Table 1; column: PLS-SEM with cusl2). In contrast, equal weights estimation yields a considerably lower path coefficient of 0.406 (Table 1; column: Equal weights with cusl2)—see Hair et al. (2024b) for further details.

While this example used a realistic setting that researchers may encounter in practice, the problems associated with equal weights estimation can also be highlighted using Rönkkö et al.’s (2023) approach involving randomly generated indicators in their discussion of “chance correlations” (see below). For this illustration, three randomly generated indicators, die1 to die3, were assigned to the Satisfaction construct. The PLS-SEM results show that, besides cusl2, these three indicators have loadings close to zero (−0.014, −0.001, and −0.014) and are therefore clearly unreliable. Moreover, PLS-SEM produces the same path coefficient estimate of 0.489 for the relationship between Satisfaction and Loyalty as in the model where all unreliable indicators have been removed (Table 1; column: PLS-SEM with random indicators). Thus, PLS-SEM not only robustly estimates the relationship between Satisfaction and Loyalty in both situations but also reveals the problems with unreliable indicators (i.e. the random die1 to die3 indicators and cusl2). In contrast, estimating the model with equal weights produces a path coefficient of only 0.133 for this relationship (Table 1; column: Equal weights with random indicators).
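
The mechanics behind this robustness can be sketched in a few lines of base R. In the following illustrative set-up (our own simplifying assumptions, not Rönkkö et al.’s exact data), a block contains three reliable indicators and one random indicator; correlation-based weights shrink the random indicator’s contribution toward zero, whereas equal weights do not. For simplicity, the weights are computed against the true score, whereas PLS-SEM builds them iteratively from composite proxies.

```r
# Sketch: why differentiated weights improve composite reliability
set.seed(3)
n   <- 10000
eta <- rnorm(n)                                  # true construct score
X   <- scale(cbind(0.8 * eta + rnorm(n, sd = 0.6),
                   0.8 * eta + rnorm(n, sd = 0.6),
                   0.8 * eta + rnorm(n, sd = 0.6),
                   rnorm(n)))                    # unreliable random indicator
w_cor   <- drop(cor(X, eta))                     # near zero for the junk column
comp_w  <- drop(X %*% w_cor)                     # differentiated-weights composite
comp_eq <- rowMeans(X)                           # equal-weights composite
cor(comp_w, eta)^2                               # shared variance with eta, ~0.84
cor(comp_eq, eta)^2                              # equal weights lose ground, ~0.73
```

In this set-up, the weighted composite shares roughly 84% of its variance with the true score versus roughly 73% for the equal-weights composite; the gap widens as indicator quality becomes more uneven.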

Apart from reliability concerns, it is difficult to conceive why scholars would prefer equal weights over differentiated weights, since the latter offer practitioners concrete guidance on how to improve target constructs, particularly when formatively measured constructs are involved (Hair et al., 2024b). In light of continuing concerns regarding the relevance of marketing research for business practice (e.g. Jedidi et al., 2021; Schauerte et al., 2023), discarding such additional information further removes academia from offering concrete guidance to managerial decision makers.

In short, the call to utilize equal weights turns back the clock concerning advances made in multivariate data analysis techniques. In fact, the need to account for measurement error when estimating relationships among latent (as opposed to observed) variables was the primary motivation for the development of SEM (Jöreskog, 1973)—it constitutes the primary advantage of indicator-weighting techniques like PLS-SEM over regression analysis when estimating relationships among constructs (Haenlein and Kaplan, 2004). Even if one may not view PLS as an SEM method (Rönkkö and Evermann, 2013), its estimates undeniably account for measurement error (e.g. Cook and Forzani, 2023; Henseler et al., 2014; Rigdon, 2012).

“Untold fact”: PLS-SEM weights can bias correlations

Under “untold facts,” Rönkkö et al. (2023, p. 1604) claim that “if there are cross-loadings or correlated errors between different scales, PLS tends to inflate the resulting biases.” This is not an untold fact, but the result of a violation of a methodological assumption that has been well documented in decades of research on PLS-SEM (e.g. Chin, 1998; Hanafi et al., 2021; Lohmöller, 1989, Chap. 2; Tenenhaus et al., 2005). In other words, this claim boils down to the observation that a violation of specified prerequisites of the PLS-SEM method leads to negative consequences. This is not surprising—otherwise, there would be no point behind such an assumption. While Rönkkö et al. (2023) focus on the violation of the assumption of uncorrelated errors between scales, they do not treat the case of correlated errors within scales. This is unfortunate, as Cook and Forzani (2023) note that the latter is precisely the case where PLS-SEM has the greatest potential. Cook and Forzani’s (2023) observation aligns with Wold (1982), who also noted that uncorrelated errors within scales are not a prerequisite for PLS-SEM estimation.

Another concern being raised is that PLS-SEM inflates structural model coefficients when two constructs are only weakly correlated—another claim which is not new. For example, Rönkkö and Evermann (2013) argued that in a two-construct model with a zero relationship, the parameter estimate’s distribution is not normal but rather bimodal in shape, which violates the requirements of the t-tests used for statistical inference. This two-construct model is not, however, a nomological net of related constructs as required for a PLS-SEM analysis (e.g. Henseler et al., 2014; Wold, 1982), but simply an ensemble of two standalone constructs. As Rigdon (2016, p. 602) notes, Rönkkö and Evermann (2013) specified a model “that violated the known conditions under which the PLS path modeling estimation algorithm works. This algorithm requires that every composite proxy must be correlated with at least one other composite,” and their “simulation showed what happens when you ‘break’ a statistical method, asking it to work outside of its boundary conditions.” Follow-up research demonstrated that the issue identified by Rönkkö and Evermann (2013) does not arise when the standardized path coefficient is increased to a moderate level, the sample size is increased, or the latent variables are embedded in a nomological net with moderate effects (Henseler et al., 2014).

While research efforts to shed light on PLS-SEM’s behavior in extreme conditions are laudable, one has to call a spade a spade: a two-construct model with zero relationship is clearly a boundary condition with little practical relevance. For example, Paxton et al. (2001) note that the design of any simulation study needs to closely resemble setups commonly encountered in applied research to ensure external validity. While it is conceivable for a PLS-SEM application to employ a model in which a construct has zero correlations with all other constructs simultaneously, such a scenario is highly improbable in practice. This is because theory and logic determine which constructs should be included in a model’s nomological network, and the inclusion of a construct with zero correlations with all other constructs in the model signals a catastrophic failure of the researcher’s theory and logic—a concern that is ideally addressed in the theory design and descriptive analysis stages of a study. Not surprisingly, Sharma et al.’s (2023b) review of seminal path models in information systems research shows that none of the models meet these conditions.

Yet, the same arguments—which have already been addressed extensively in the past—are used to argue that PLS-SEM estimates are not trustworthy because they “capitalize on chance” [5]. To make their case, Rönkkö et al. (2023) propose three variants of the ECSI model by including an additional random construct (named Die) with different relationships to Loyalty and Satisfaction. For unclear reasons, their models omit the Complaints, Expectations, Image, Quality, and Value constructs. In Models 1 and 3, the pronounced relationship between Satisfaction and Loyalty is supplemented by additional relationships from Loyalty to Die (Model 1) and Satisfaction to Die (Model 3), respectively. Model 2, however, has three standalone constructs, each with a null or close to null relationship with the other constructs in the model. This model is used to show that the indicator weights and path coefficient estimates deviate from those in Models 1 and 3, where the estimates are more stable. This result is far from surprising, as Model 2 does not offer sufficient context for PLS-SEM to reliably estimate the relationships, as already shown in Henseler et al.’s (2014) conceptual replication of Rönkkö and Evermann (2013). The critics simply took the two-construct, zero-relationship example and extended it into a three-construct, zero-relationship setting. By having pronounced relationships between Satisfaction and Loyalty, Models 1 and 3 offer a context that produces stable loadings and weights estimates, as shown in the replication of Rönkkö et al.’s (2023) analysis (Figure 2). Nevertheless, a chainlike model with three constructs, one of which is randomly generated, clearly does not resemble setups commonly encountered in applied research (Paxton et al., 2001).

More importantly, what Rönkkö et al. (2023) do not report is whether the relationships between Satisfaction, Loyalty and Die are actually statistically significant in PLS-SEM. Replicating their analyses by computing confidence intervals based on bootstrapping (10,000 subsamples, percentile approach) shows that none of the relationships involving the randomly generated Die construct are statistically significant at the 5% level. That is, PLS-SEM correctly identifies the relationships between the Satisfaction, Loyalty, and Die constructs as not differing from zero in the population (Figure 2). As in Henseler et al. (2014), the PLS-SEM estimation does not lead to false positives (Type I errors). It might very well be that in some cases with weak correlations between constructs, the path coefficient estimates fluctuate more. However, Figure 6 in Rönkkö et al. (2023) actually shows that this behavior is well captured by the bootstrap distribution. This is precisely the purpose of using bootstrapping—to approximate the distribution of a statistic for significance testing (Cameron and Trivedi, 2005, Chap. 11).
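
The following minimal sketch illustrates this testing logic with the cSEM package (Rademaker and Schuberth, 2022), which we also used for the computations reported below. The data-generating set-up and variable names are illustrative stand-ins for the ECSI data, and the resample count is reduced so the sketch runs quickly.

```r
library(cSEM)

set.seed(4)
n   <- 250
sat <- rnorm(n)                                  # stand-in Satisfaction score
df  <- data.frame(sat1 = 0.8 * sat + rnorm(n, sd = 0.6),
                  sat2 = 0.8 * sat + rnorm(n, sd = 0.6),
                  sat3 = 0.8 * sat + rnorm(n, sd = 0.6),
                  die1 = sample(1:6, n, replace = TRUE),  # random 'Die' indicators
                  die2 = sample(1:6, n, replace = TRUE),
                  die3 = sample(1:6, n, replace = TRUE))

model <- "
  DIE ~ SAT                  # structural path; its true value is zero
  SAT <~ sat1 + sat2 + sat3  # composite specification
  DIE <~ die1 + die2 + die3
"
res <- csem(.data = df, .model = model,
            .resample_method = "bootstrap", .R = 1000)
infer(res, .quantity = "CI_percentile")  # the percentile CI should cover zero
```

Wrapping this estimation in a loop over freshly simulated data sets then yields the Monte Carlo Type I error assessment reported next.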

Further, and more importantly, their demonstration (i.e. in their Figure 5) does not allow general conclusions about PLS-SEM, as their analysis rests on a single replication. Therefore, we reran the analysis in a Monte Carlo simulation with 10,000 replications for each of the three models to systematically assess PLS-SEM’s Type I error rate [6]. The R package cSEM (Rademaker and Schuberth, 2022) was used for these computations. The results in Figure 3 provide three key takeaways. First, the average path coefficients linking the Die construct and the Satisfaction or Loyalty constructs are close to zero for all three models—thus, contrary to the claims, their example does not show a bias due to chance correlations. Second, the correlation between the Die construct and an adjacent construct is not always higher in PLS-SEM compared to equal weights. Third, testing the significance of the path coefficients related to the Die construct via bootstrapping shows that PLS-SEM falsely rejects the true null hypothesis of no relationship in only approximately 5% of the simulations. That is, PLS-SEM closely aligns with the expected Type I error rate. Therefore, even if PLS-SEM produces somewhat elevated path coefficient estimates in this situation compared to equal weights (possibly due to sampling variation in a single analysis), these estimates are neither statistically significant nor subject to false positive rates that are much different from the expected error rate of 5%. In fact, in some cases PLS-SEM is more conservative.

To conclude, the criticisms stem from the use of an extreme situation with limited practical relevance, rendering any sweeping generalizations inappropriate (Petter, 2018). But even in this extreme situation, PLS-SEM does not produce false positive rates that are much different from the expected error rate of 5%—an aspect neglected in their analysis. Nevertheless, it is commendable that the critics recommend that researchers avoid chainlike models where an endogenous construct is related to only one other construct—as also called for in Hair et al.’s (2012, p. 421) discussion of “focused models.”

“Fallacy #3”: using AVE and composite reliability with PLS-SEM to validate measurement models

“Fallacy #3” attempts to empirically demonstrate the limitations of the average variance extracted (AVE), composite reliability ρc, and the Fornell-Larcker criterion to validate construct measures—a discussion which is also not new. A decade ago, Evermann and Tate (2010) as well as Rönkkö and Evermann (2013) presented simulation evidence showing that these statistics do not reliably detect misspecified models in a PLS-SEM framework. Rönkkö et al. (2016a) and Evermann and Rönkkö (2023) reiterated these findings. Apart from similar concerns in factor-based SEM (e.g. Franke and Sarstedt, 2019; Yang and Green, 2010), the question is whether applied researchers would fail to detect these model misspecifications in practice. The answer is a resounding “no,” as we describe below.
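
Both statistics are simple functions of a block’s standardized indicator loadings, as the following base-R sketch with an illustrative loading vector makes explicit.

```r
# AVE and composite reliability rho_c from standardized loadings (illustrative
# values): AVE is the mean squared loading; rho_c relates the squared summed
# loadings to the squared summed loadings plus the summed error variances.
lambda <- c(0.82, 0.79, 0.75)
ave    <- mean(lambda^2)
rho_c  <- sum(lambda)^2 / (sum(lambda)^2 + sum(1 - lambda^2))
c(AVE = ave, rho_c = rho_c)
```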

In their misspecified Model #1, Rönkkö et al. (2023) assign several indicators to the wrong constructs. Their analysis of the AVE and composite reliability ρc as well as the Fornell-Larcker criterion does not indicate any problems, erroneously providing support for the measures’ reliability and validity. However, their analysis rests on outdated measurement validation metrics. While early writings indeed recommended the use of these metrics (e.g. Hair et al., 2014, Chap. 4), more recent research clearly acknowledges their limitations and calls for more accurate methods (e.g. Hair et al., 2022, Chap. 4; Hair et al., 2019). In terms of internal consistency reliability assessment, Rönkkö et al. (2023) only consider ρc, which has long been identified as a liberal reliability measure (e.g. Hair et al., 2014, Chap. 4; Hair et al., 2017a, Chap. 4; Sarstedt et al., 2017). At the same time, they do not report Cronbach’s alpha, which is a conservative measure of reliability, or ρA, which recent research recommends (e.g. Dijkstra and Henseler, 2015; Sarstedt et al., 2021). Similarly, their discriminant validity assessment relies exclusively on the Fornell-Larcker criterion, which has been shown to be ineffective (Franke and Sarstedt, 2019; Henseler et al., 2015). Instead, recent PLS-SEM guidelines unequivocally call for using Henseler et al.’s (2015) HTMT criterion or its recent extensions (Ringle et al., 2023; Roemer et al., 2021) for discriminant validity assessment. Furthermore, their concern that the HTMT is not a “PLS-specific method” is irrelevant to the debate. All methodological techniques borrow measures that were not part of their initial design (Sharma et al., 2023b)—to name one example, the computation of Cronbach’s alpha draws on the (average) correlations of indicators in a measurement model, independent of the actual model estimates. This is a normal part of methodological toolkit advancement [7].
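
For readers who want to see what the HTMT adds, the following base-R sketch states its core formula (Henseler et al., 2015) for two indicator blocks, assuming positively keyed items. The simulated data are an illustrative worst case in which both blocks measure the same construct, so the statistic should approach 1.

```r
# HTMT: mean heterotrait-heteromethod correlation relative to the geometric
# mean of the average monotrait-heteromethod correlations of the two blocks.
htmt <- function(X, Y) {
  R  <- cor(cbind(X, Y))
  px <- ncol(X); py <- ncol(Y)
  hetero <- R[1:px, px + (1:py)]                 # between-block correlations
  mono_x <- R[1:px, 1:px][lower.tri(diag(px))]   # within-block, block X
  mono_y <- R[px + (1:py), px + (1:py)][lower.tri(diag(py))]
  mean(hetero) / sqrt(mean(mono_x) * mean(mono_y))
}
set.seed(5)
f <- rnorm(500)                                  # one factor drives both blocks
X <- sapply(1:3, function(i) 0.8 * f + rnorm(500, sd = 0.6))
Y <- sapply(1:3, function(i) 0.8 * f + rnorm(500, sd = 0.6))
htmt(X, Y)                                       # close to 1: no discriminant validity
```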

When we consider these recently proposed, and now widely accepted metrics, we arrive at a fundamentally different picture. The results in Table 2 confirm that the misassignment of indicators leads to a drop in the internal consistency reliability of the COMP construct below the recommended 0.7 threshold for both Cronbach’s alpha and ρA. More importantly, the misspecification triggers severe discriminant validity problems, as indicated by the HTMT statistic (Table 2). Because of the model misspecification, the HTMT values of CUSL and COMP as well as LIKE and COMP fail even the most liberal threshold, as their confidence intervals include 1.

To summarize, Rönkkö et al.’s (2023) objections are grounded in outdated model evaluation practices that were updated years ago. They mention the HTMT statistic in their discussion but fail to report it in their measurement model assessment, despite having conducted related research on this topic (Rönkkö and Cho, 2022). Similarly, considering the published research on ρA (Rönkkö et al., 2016b), this advancement in PLS-SEM-based internal consistency reliability assessment could also have been utilized in their analysis, but was not. These and other criteria recommended in the literature should be applied together, because they are intended to assess different aspects of the PLS path model.

Besides these issues, there are several face and content validity issues in Rönkkö et al.’s (2023) analyses. Why would a researcher use the item “I regard [the company] as a likeable company” (Hair et al., 2022, Chap. 2) as a single-item measure of customer satisfaction, as done in the misspecified Model #1? Similarly, why would a researcher consider “I will remain a customer of [the company] in the future” (Hair et al., 2022, Chap. 2) as a measure of a company’s Competence, rather than Customer Loyalty, especially as the latter scale has been validated in numerous studies (e.g. Zeithaml et al., 1996)? While empirical concerns are important, content validity is imperative when applying PLS-SEM, as is the case with any research method. Ideally, such issues should be dealt with based on face validity during the scale-design phase. A purely data-driven approach is not what SEM methods have been designed for, or what researchers in the social sciences advocate. For example, Roberts and Thatcher (2009, p. 9) note that measurement theory specifies a relationship between constructs and indicators and seeks to bridge the gap between abstract theoretical constructs and measured phenomena, without which “the mapping of theoretical constructs onto empirical phenomena is ambiguous, and theories cannot be meaningfully tested”. Similarly, Petter et al. (2012, p. 147) note that:

It is critically important for researchers to achieve correspondence between the measurement specification and the conceptual meaning of the construct so as to not alter the theoretical meaning of the construct at the operational layer of the model. Such alignment between theory and measurement will safeguard against threats to construct and statistical conclusion validity.

Similar concerns apply to misspecified Model #2, where all measures of a company’s Likeability are assigned to the Competence construct. In this case, extant reliability and validity statistics do not give rise to concern, but the construct measure violates the unidimensionality requirement, the assessment of which should precede any SEM analysis. To illustrate this point, we computed Revelle’s (1979) beta metric using the hierarchical item clustering implemented in the R package psych (Revelle, 2024) on the one-construct solution used in misspecified Model #2, as well as on the two-construct solution used in the original model. This analysis produces an average beta of 0.75 for the two-construct solution (betaCOMP = 0.70, betaLIKE = 0.81), which is higher than the beta (0.74) of the one-construct solution. In addition, the one-construct solution’s beta value is 0.12 units lower than the scale’s alpha (0.86), which indicates the scale is not unidimensional (Cooksey and Soutar, 2006). Considering that measurement theory would hardly support measuring a company’s likeability (the affective dimension of corporate reputation) using items that represent cognitive aspects (e.g. “[The company] is a top competitor in its market”; Hair et al., 2022, Chap. 2), there is no reason why one would choose to merge these two item sets, especially as the original measurement instrument has been validated using both factor-based (e.g. Schwaiger, 2004) and composite-based methods (Hair et al., 2024a, Chap. 5).
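
A minimal sketch of this check, assuming a data frame items that holds the indicator columns of the merged scale (a hypothetical name), uses the psych package’s iclust() and alpha() functions:

```r
library(psych)

ic <- iclust(cor(items), plot = FALSE)  # hierarchical item clustering (ICLUST)
ic$beta                                 # Revelle's beta: worst split-half reliability
alpha(items)$total$raw_alpha            # Cronbach's alpha for comparison
# A beta value clearly below alpha flags a scale that is not unidimensional.
```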

In the misspecified Model #3, the variables are randomly permuted across all constructs, showing that the model evaluation metrics considered in their study indicate a well-fitting model in 50% of the cases. But instead of using selected random examples, the critics should have generated results for all 8,400 combinations of indicators that can be assigned to the four constructs, each with three (COMP, CUSL, and LIKE) or one (CUSA) indicator(s), to produce a more complete picture of PLS-SEM’s performance.

We do so here and find that the results of these computations for the full set of model evaluation criteria again paint a very different picture (Table 3). Specifically, the HTMT, assuming a threshold of 0.90, identifies issues in at least one pair of constructs in 99.0% of the cases. Similarly, Cronbach’s alpha raises a red flag in at least one construct in 86.7% of the cases, and ρA does so in 78.4% of the cases. When considering all three criteria jointly, the models are rejected in 99.26% of the cases. This changes to 99.23% when relying only on the HTMT and ρA.

Overall, these results clearly demonstrate that researchers using PLS-SEM would confidently reject Model #3 when considering the full set of recommended criteria [8]. Extending this perspective, the PLS-SEM literature has proposed other criteria that may also effectively disclose issues such as the one raised in Model #3. For example, evaluating the model’s SRMR values (Schuberth et al., 2023) would reject 99.9% of the permutations, assuming the common threshold of 0.08 for this metric (Table 3). Rather than relying on a limited set of outdated criteria, a more constructive approach would have considered the efficacy of SRMR and related metrics that recent research recommends.

One can, of course, always find misspecifications of a model that achieve sufficient levels of reliability and validity. For example, the corporate reputation model used in the illustration allows for 8,400 configurations of the ten indicators across the four constructs, given a prespecified (fixed) number of indicators per construct. For the complete reputation model—as used in, for example, Hair et al. (2014), Hair et al. (2017a), and Hair et al. (2022)—with eight constructs, 31 indicators, and a varying number of indicators per construct (with at least one indicator per construct), the number of combinations based on the Stirling number of the second kind is 2.152 · 10²³. Not surprisingly, if one searches hard enough, one will always find model misspecifications that do not raise a red flag, provided any substantiation based on measurement theory considerations is left aside.
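
The quoted figure can be reproduced with a short recursion; the sketch below computes the Stirling number of the second kind S(n, k) via the standard recurrence S(n, k) = k·S(n−1, k) + S(n−1, k−1) and evaluates it for 31 indicators and 8 constructs.

```r
# Stirling numbers of the second kind via dynamic programming; rows/columns
# are offset by one because R indexing starts at 1 (S[1, 1] = S(0, 0) = 1).
stirling2 <- function(n, k) {
  S <- matrix(0, n + 1, k + 1)
  S[1, 1] <- 1
  for (i in 1:n)
    for (j in 1:min(i, k))
      S[i + 1, j + 1] <- j * S[i, j + 1] + S[i, j]
  S[n + 1, k + 1]
}
stirling2(31, 8)   # approx. 2.152e+23, the figure quoted above
```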

Despite these issues, the discussion underlines the need for dimensionality assessment. Existing PLS-SEM assessment guidelines—including our own (e.g. Sarstedt et al., 2021)—do not emphasize this kind of analysis, which typically needs to be run outside PLS-SEM, for example, by applying Revelle’s (1979) beta metric—rather than the Kaiser criterion, which is well known to produce inflated eigenvalues [9].

Discussion and reflections on PLS-SEM

PLS-SEM has enjoyed substantial development in the last decade (Sarstedt et al., 2022a, 2022b). As the technique has evolved rapidly, so has the appreciation of its strengths and limitations. As this knowledge has accumulated and continued to evolve, it is not surprising that past misapplications have come to light (Petter and Hadavi, 2021). For example, the fact that PLS-SEM generally converges and provides a solution at smaller sample sizes has led to the method’s (mis)application in underpowered studies (for a discussion of this issue, see Marcoulides and Saunders, 2006). Should this fact be used to criticize the technique per se, or rather the weak research design of the study, as Marcoulides and Saunders (2006) correctly note? Clearly, robust statistical conclusions rest on the quality of the sample, and no technique can guard against weak research designs (Kock and Hadaya, 2018; Rigdon, 2016; Sarstedt et al., 2016). In the same way, questionable research practices, such as p-hacking and hypothesizing after the results are known (i.e. HARKing), can affect the replicability of results regardless of the techniques used for data analysis (Adler et al., 2023a).

The philosophical lesson of the “no free lunch” theorem is that a reasonable academic debate should begin by describing why a method has been successful and which real-world assumptions explain its success (Forster, 2005). It is not enough to merely point out one or two selective instances where a method underperforms, as eventually all statistical methods will be found to underperform in certain situations; the more important question is why the method succeeds in so many practical situations. This understanding creates useful knowledge and moves the field forward. On the positive side, Rönkkö et al. (2023) did suggest a metric to assess the relative efficacy of equal weights over differentiated weights, referred to as the composite equivalence index (CEI). However, Hair et al. (2024b) show that the CEI suffers from serious shortcomings (e.g. lack of discriminatory power), which severely limit its usefulness. Table 4 summarizes the main points raised in their article and our responses, based on the empirical demonstrations, simulation evidence, and sound theoretical reasoning presented in this commentary.

Concluding observations

Reflecting on the various articles, commentaries, and rejoinders published over the last few years, one may ask why critics and proponents of the PLS-SEM method arrive at fundamentally different conclusions. Apart from the issues described in this article (e.g. in terms of simulation model design or choice of model evaluation metrics), different assumptions regarding the nature of the concepts may explain these inconsistencies—that is, whether one assumes that theoretical concepts can validly be measured using composites, factors, or both (e.g. Rigdon et al., 2017). These assumptions have tangible consequences for the model estimation, since composite-based and factor-based methods estimate different population parameters (Cook et al., 2023). The essential point here is that, by clarifying the assumptions underlying construct measurement, many of the “fallacies” and “untold facts” are no longer an issue. It is also worth noting that common factors obtained through factor-based SEM are not inherently more relevant than composites in the measurement of theoretical concepts. This aspect has been thoroughly examined in recent research discussions (e.g. Hair and Sarstedt, 2019; Rhemtulla et al., 2020; Rigdon, 2012, 2016; Rigdon et al., 2017, 2019; Rossiter, 2011; Sarstedt et al., 2016). As a recent example, Rigdon and Sarstedt (2022) conceptually show that the common factor model is rarely correct in the population and often does not correspond to the quantity the researcher intends to measure.

No single statistical method holds carte blanche when it comes to complex multivariate data. Every technique has strengths and weaknesses that depend on its specific assumptions. One can always showcase a particular method’s limited performance by probing the boundary conditions where it fails. Instead, future research should move the debate to more constructive grounds, focusing on challenges that PLS-SEM researchers face in realistic settings and with data they frequently encounter. Indeed, as indicated in Table 5, quite a few issues regarding PLS-SEM and other analytical techniques warrant attention—some address general conceptual concerns, while others relate to specific methodological problems. Where possible, we hope our comments will serve as a starting point for further reading and development.

Despite the debates, PLS-SEM has been instrumental in advancing social sciences research by helping to create seminal theories and models, such as the ACSI, ECSI, technology acceptance model (Davis, 1989), and unified theory of acceptance and use of technology (Venkatesh et al., 2003), which have become cornerstones in their respective disciplines. These models have been replicated in numerous settings using various techniques, pointing to the robustness of their original PLS-SEM analyses. We agree with the critics that the PLS-SEM literature certainly requires more clarity in exposition to aid robust application of the technique by researchers, especially as the volume of research related to the method’s ecosystem has rapidly evolved and expanded (Ciavolino et al., 2022; Hwang et al., 2020; Khan et al., 2019). In this context, the continuous review of open science practices and their application to PLS-SEM (Sarstedt et al., 2024), for example, by using a method-specific preregistration template that researchers can use to foster transparency (Adler et al., 2023b), is important for the appropriate use of the method across disciplines in high-ranking journals (Petter and Hadavi, 2023). We also agree with Cook and Forzani (2023) on the necessity of an in-depth mathematical exploration of the statistical properties of PLS-SEM. Such investigations can help establish a common ground, allowing for constructive development and delineation of the methodology. For example, Cook and Forzani (2023) show that PLS-SEM estimates different population parameters compared to factor-based SEM, rendering direct comparisons between these methods less meaningful. This critical insight, if identified earlier, might have avoided much of the debate surrounding PLS-SEM. We welcome new research that takes a constructive stance in developing and critically investigating the PLS-SEM methodology in this regard. Since previous “rules of thumb” may no longer be relevant, textbooks and guidelines need to be updated continually as new information emerges. The rapid pace of progress in the field can make it difficult for users of the technique, reviewers, and journal editors to keep up with the latest developments. Readers are urged to adopt a long-term scientific perspective when assessing the trajectory of PLS-SEM developments. PLS-SEM has come a long way, and it still has a long way to go, as do other emerging analytical approaches.

Figures

Figure 1. PLS-SEM and CCA results comparison

Figure 2. Replication of the ECSI demonstration with additional results

Figure 3. Simulation study results

Table 1. Model estimates for different configurations of the ECSI model

Path relationship PLS-SEM with cusl2 Equal weights with cusl2 PLS-SEM without cusl2 Equal weights without cusl2 PLS-SEM with random indicators* Equal weights with random indicators*
Complaints → loyalty 0.067 0.088 0.058 0.054 0.064 0.179
Expectation → quality 0.557 0.553 0.557 0.553 0.557 0.553
Expectation → satisfaction 0.063 0.076 0.063 0.076 0.061 0.078
Expectation → value 0.050 0.062 0.050 0.062 0.050 0.062
Image → expectation 0.505 0.508 0.505 0.508 0.505 0.508
Image → loyalty 0.196 0.189 0.195 0.206 0.194 0.354
Image → satisfaction 0.179 0.172 0.179 0.172 0.179 0.141
Quality → satisfaction 0.512 0.513 0.512 0.513 0.510 0.392
Quality → value 0.558 0.538 0.558 0.538 0.558 0.538
Satisfaction → complaints 0.528 0.519 0.528 0.519 0.530 0.318
Satisfaction → loyalty 0.485 0.406 0.489 0.464 0.489 0.133
Value → satisfaction 0.195 0.187 0.195 0.187 0.198 0.096
R2 (loyalty) 0.457 0.365 0.454 0.427 0.458 0.299
Notes:

The grey shaded rows highlight relationships with Loyalty that demonstrate particularly strong variations across different model estimations. *This demonstration assigns three additional randomly generated indicators to the Satisfaction construct

Source: Authors’ own work

Table 2. Results assessment (misspecified Model #1)

Cronbach’s α ρA ρC AVE
Reliability and validity measures
COMP 0.613 0.626 0.791 0.560
CUSA (single-item construct)
CUSL 0.829 0.832 0.898 0.746
LIKE 0.746 0.788 0.855 0.665
Discriminant validity: HTMT
COMP CUSA CUSL LIKE
COMP
CUSA 0.721 [0.642; 0.802]
CUSL 0.952 [0.868; 1.039] 0.561 [0.478; 0.638]
LIKE 1.090 [1.025; 1.170] 0.791 [0.734; 0.845] 0.758 [0.682; 0.830]
Note:

Numbers in brackets represent the 90% confidence intervals (10,000 subsamples)

Source: Authors’ own work

Table 3. Model evaluation metrics for Model #3

Criterion Assessment Analysis Result
ρA Should be ≥ 0.7 Fraction of times that the ρA is below 0.7 in at least one construct 0.784
Cronbach’s alpha Should be ≥ 0.7 Fraction of times that the Cronbach’s alpha is below 0.7 in at least one construct 0.867
HTMT Should be ≤ 0.90 Fraction of times that the HTMT is above 0.90 in at least one construct 0.990
All criteria above together Each criterion should meet its threshold Fraction of times that the permuted model failed at least one of the criteria’s thresholds 0.993
SRMR Should be ≤ 0.08 Fraction of times that the SRMR is above 0.08 0.999

Source: Authors’ own work

Table 4. Summary of conclusions

Aspect Rönkkö et al.’s (2023) position Our response
“Fallacy #1”:
PLS-SEM maximizes explained variance or R²
PLS-SEM’s optimization criterion is ambiguous. The method does not maximize the R2, and canonical correlations achieve higher levels of explained variance PLS-SEM seeks to minimize the residuals in the relationships between composites and indicators (i.e., in the measurement models) as well as the relationships between composites (i.e., in the structural model). While related, the CCA and PLS-SEM rely on different models, making the authors’ empirical comparison of the two methods meaningless. Specifically, the methods produce equivalent results for a two-construct model estimated via PLS-SEM Mode B
“Fallacy #2”:
PLS-SEM weights improve reliability
PLS-SEM-based weights do not improve reliability and using equal weights is a simpler and more robust solution PLS-SEM’s ability to improve reliability has been shown both analytically and through simulation studies. The assumption of equal weights overlooks the associated reliability and validity issues and limits the model’s practical utility
“Untold fact”: PLS-SEM weights can bias correlations When two constructs are only weakly correlated, PLS-SEM inflates path coefficients. Cross-loadings further inflate these biases PLS-SEM only inflates path coefficient estimates in models where the constructs are perfectly uncorrelated. Such a setting constitutes a well-known boundary condition for PLS-SEM, which is extremely unlikely to occur in empirical applications. More importantly, this feature has no consequences for inference testing, as it does not trigger false positive rates much different from the expectation (e.g., 5%). Researchers should avoid models where an endogenous construct is related to only one other construct (e.g., chainlike models). Cross-loadings violate a fundamental requirement of the PLS-SEM method. Future research should assess the impact of cross-loadings on model estimates and establish measures to assess the severity of their effect
The composite equivalence index (CEI) Researchers should routinely use the CEI to assess whether the indicator weighting provides any value-added beyond equal weights We do not respond in this article on this aspect but refer to Hair et al. (2024b). Their article shows that the CEI lacks discriminatory power, conceals reliability concerns in reflective measurement models as well as differences in relative indicator contributions in formative measurement models. Researchers should therefore not use the CEI as such a step would have adverse consequences on the validity of results
“Fallacy #3”: using AVE and composite reliability with PLS-SEM to validate measurement The AVE, the Fornell-Larcker criterion, and the composite reliability (ρc) do not disclose model misspecifications The critics selectively use metrics and settings in which PLS-SEM does not identify misspecified models. Considering the standard range of model evaluation metrics discloses the misspecifications in all cases. In addition, content validity concerns would prevent any researcher from using the model set-ups the authors considered
General conclusion PLS-SEM use should generally be avoided PLS-SEM fits perfectly into the marketing research landscape, which not only aims to test theories, but also to derive managerial implications that are predictive in nature. PLS-SEM works well in achieving this objective, as the method follows a causal-predictive paradigm, where the aim is to test the predictive power within the confines of a model carefully developed on the grounds of theory and logic

Source: Authors’ own work

Table 5. Examples of future research areas

Research area Research question and potential areas to advance PLS-SEM References
Measurement-theoretic foundations Can composites be assumed to have the same significance as factors in representing conceptual variables? How does metrological uncertainty contribute to this assessment? Under which conditions should composites or factors be preferred for measuring conceptual variables? Rhemtulla et al. (2020), Rigdon et al. (2019), Rigdon and Sarstedt (2022), Rigdon et al. (2020)
Statistical assumptions of the standard PLS-SEM algorithm Assessing the impact of violating a method’s statistical assumptions (e.g., cross-loadings) on parameter bias and predictive performance Lohmöller (1989, Chap. 2)
Modeling capabilities Extending the modeling capabilities, for example by allowing for relationships of an indicator to multiple composites, setting model constraints, and implementing circular and bidirectional relationships. Further extensions include different forms of moderated mediation analyses and hierarchical component models Lohmöller (1989, Chaps. 2 and 3), Sarstedt et al. (2019; 2020)
Big data analytics How can PLS-SEM support big data and machine learning research? Akter et al. (2017), Richter and Tudoran (2024)
Model specification search Improve the model specification search based, for instance, on Cohen’s path method to explore path directionality (Callaghan et al., 2007) and the fuzzy-set qualitative comparative analysis (fsQCA) in PLS-SEM (Rasoolimanesh et al., 2021). Thereby, researchers can benchmark their theoretically established model against model alternatives with, for example, the best predictive capabilities Cho et al. (2022), Marcoulides and Drezner (2001), Marcoulides and Drezner (2003), Marcoulides et al. (1998)
Model misspecification assessment Extending the set of model evaluation criteria, for example to identify measurement model misspecifications Gudergan et al. (2008)
Congruence assessment Introduce congruence assessment to examine whether constructs in the nomological network have proportional correlations Franke et al. (2021)
Striking a balance between explanation and prediction How can explanatory and predictive goals be best accommodated in PLS-SEM-based modeling, especially when considering model selection? When considering out-of-sample prediction, should the focus be on predicting certain specific constructs or the overall model? Liengaard et al. (2021), Sharma et al. (2019; 2021)
Robustness checks Robustness checks of the estimated model, including common method bias, endogeneity, nonlinear relationships, impact of collinearity in formative measurement models, necessary condition analysis, and fuzzy-set qualitative comparative analysis in PLS-SEM Chin et al. (2013), Hult et al. (2018), Rasoolimanesh et al. (2021), Richter et al. (2020)
Latent class analysis Improve the validity of latent class techniques by including explanatory variables as covariates in the model estimation and by analyzing the heterogeneity of intercepts and unstandardized coefficients Bray et al. (2015), Sarstedt et al. (2022a, 2022b)
Longitudinal data analysis How can researchers compare models across time in longitudinal analysis? Jung et al. (2012), Lohmöller (1989, Chap. 6), Roemer (2016)
Multilevel modeling How can PLS-SEM be used for multilevel modelling when we are analyzing data that are drawn from a number of different levels. For instance, levels such as a country’s gross domestic income and gender may be used for PLS path models on job satisfaction (Drabe et al., 2015), sustainable consumption behavior (Saari et al., 2021), and circular innovation (Saari et al., 2024) Hwang et al. (2007), Jung et al. (2015)

Source: Authors’ own work

Notes

1.

The following link allows you to download the R code and the SmartPLS (Ringle et al., 2024) projects used in this research paper: https://osf.io/zrnjm/?view_only=None

2.

The long-established abbreviation CCA for canonical correlation analysis (e.g., Holbrook and Moore, 1982) must not be confused with the CCA abbreviation that Hair et al. (2018, Chap. 13) and Schuberth et al. (2018) introduced for Henseler et al.’s (2014) confirmatory composite analysis.

3.

Note that we use the terms composites and components interchangeably throughout this research (see also Hwang et al., 2020).

4.

Nevertheless, one can replicate Rönkkö et al.’s (2023) CCA results using the PLS-SEM algorithm (Lohmöller, 1989, Chap. 3; Tenenhaus and Esposito Vinzi, 2005). In the example, one creates a single Mode B construct with all block X indicators (e.g., cusa1 to cusa3, cusco, and imag1 to imag5) as well as a dependent Mode B construct with all block Y indicators (e.g., cusl1 to cusl3). The PLS-SEM results of this model return outer weights that are identical to the canonical correlation weights, along with the same R² value (see Figure 1, panel B). Chin (1998) already established the relation between CCA and PLS-SEM estimation for a two-block analysis (i.e., a model with two constructs), showing that the methods produce equivalent results under Mode B estimation: “Thus, indicators for each block are weighted optimally in order to maximize the correlation between the two LV component scores […] Therefore, [in this two-block case] the results from applying the PLS algorithm are equivalent to performing a canonical correlation analysis.” (Chin, 1998, p. 307).
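
As a hypothetical illustration of this two-block equivalence, the following R sketch contrasts cancor() with a two-block Mode B PLS path model. It assumes the cSEM package (Rademaker and Schuberth, 2022) and uses simulated stand-in data rather than the original example’s indicators.

```r
# A hedged sketch: simulated stand-in data; construct and indicator names
# are ours, not those of the example discussed above.
library(cSEM)

set.seed(1)
n <- 500
X <- matrix(rnorm(n * 4), n, 4, dimnames = list(NULL, paste0("x", 1:4)))
Y <- X %*% matrix(runif(4 * 3, 0.2, 0.8), 4, 3) + matrix(rnorm(n * 3), n, 3)
colnames(Y) <- paste0("y", 1:3)
dat <- data.frame(X, Y)

# Canonical correlation analysis on the two indicator blocks
cca <- cancor(scale(X), scale(Y))
cca$cor[1]^2                  # squared first canonical correlation

# The same analysis as a two-block PLS path model; cSEM estimates
# composites specified with "<~" using Mode B by default
model <- "
  XB <~ x1 + x2 + x3 + x4     # exogenous Mode B composite (block X)
  YB <~ y1 + y2 + y3          # endogenous Mode B composite (block Y)
  YB ~ XB
"
res <- csem(.data = dat, .model = model)
summarize(res)                # path coefficient = first canonical correlation
```

In this two-block case, the squared path coefficient (i.e., the R² of the dependent composite) reproduces the squared first canonical correlation, and the outer weights match the canonical weights up to scaling and sign.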

5.

Figure 4 in Rönkkö et al. (2023) is practically identical to Figure 1 in Rönkkö et al. (2016a).

6.

Rönkkö et al. (2023) roll a six-sided die to generate the data for the indicators and the latent variable, as well as the resulting errors. Hence, even though this is not directly obvious from Rönkkö et al.’s (2023) explications, they prespecify an equal-weights population model for which they generate the data. This data-generating process almost perfectly matches the equal-weights estimation method. PLS-SEM, on the other hand, carries out additional computations in the measurement model. This extra computational work does not pay off in this special setting. The picture changes in favor of PLS-SEM as soon as the data are no longer generated from a population model with equal weights (e.g., Hair et al., 2017b; Sarstedt et al., 2016).
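
A base R sketch (ours, not Rönkkö et al.’s simulation code) illustrates the point: with an equal-weights population, sum scores already recover the composite perfectly, whereas they lose information once the population weights differ. The unequal weight vector below is an arbitrary assumption for illustration.

```r
set.seed(7)
n <- 1000

# "Dice-roll" indicators, in the spirit of Rönkkö et al.'s (2023) demonstration
x <- replicate(3, sample(1:6, n, replace = TRUE))

# Population 1: the true composite has strictly equal weights
true_equal <- drop(x %*% c(1, 1, 1) / 3)
cor(rowSums(x), true_equal)      # = 1: sum scores are the population weights

# Population 2: the true composite has unequal weights (arbitrary choice)
true_unequal <- drop(x %*% c(0.7, 0.2, 0.1))
cor(rowSums(x), true_unequal)    # < 1: equal weights now lose information
```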

7.

For example, airbag and lane-assist technologies were developed independently of, and have no relation to, car engine designs. However, no one would argue against using these technologies in tandem to drive safely. Relatedly, criticizing the use of HTMT and other methods for not being “PLS-specific” is beside the point.

8.

One might, of course, complain that HTMT does not draw on the PLS-SEM estimates, as Rönkkö et al. (2023) do. Yet the metric is an integral part of any PLS-SEM analysis, as called for in all recent guidelines (e.g., Hair et al., 2022, Chap. 12; Hair et al., 2019; Sarstedt et al., 2021; Wong, 2019, Chap. 4).

9.

On the contrary, Hwang et al.’s (2023) primer on integrated GSCA treats dimensionality assessment as an integral element of the analysis.

References

Adler, S.J., Röseler, L. and Schöniger, M.K. (2023a), “A toolbox to evaluate the trustworthiness of published findings”, Journal of Business Research, Vol. 167, p. 114189.

Adler, S.J., Sharma, P.N. and Radomir, L. (2023b), “Toward open science in PLS-SEM: assessing the state of the art and future perspectives”, Journal of Business Research, Vol. 169, p. 114291.

Akter, S., Fosso Wamba, S. and Dewan, S. (2017), “Why PLS-SEM is suitable for complex modelling? An empirical illustration in big data analytics quality”, Production Planning and Control, Vol. 28 Nos 11/12, pp. 1011-1021.

Benesty, J. and Cohen, I. (2018), Canonical Correlation Analysis in Speech Enhancement, Springer, Cham.

Bergh, D.D., Boyd, B.K., Byron, K., Gove, S. and Ketchen, D.J. (2022), “What constitutes a methodological contribution?”, Journal of Management, Vol. 48 No. 7, pp. 1835-1848.

Bray, B.C., Lanza, S.T. and Tan, X. (2015), “Eliminating bias in classify-analyze approaches for latent class analysis”, Structural Equation Modeling: A Multidisciplinary Journal, Vol. 22 No. 1, pp. 1-11.

Callaghan, W., Wilson, B., Ringle, C.M. and Henseler, J. (2007), “Exploring causal path directionality for a marketing model: using Cohen’s path method”, in Martens, H. and Naes, T. (Eds), PLS'07: The 5th International Symposium on PLS and Related Methods, Ås, Norway, pp. 57-61.

Cameron, A.C. and Trivedi, P.K. (2005), Microeconometrics: Methods and Applications, Cambridge University Press, Cambridge.

Cepeda-Carrión, G., Hair, J.F., Ringle, C.M., Roldán, J.L. and García-Fernández, J. (2022), “Guest editorial: sports management research using partial least squares structural equation modeling (PLS-SEM)”, International Journal of Sports Marketing and Sponsorship, Vol. 23 No. 2, pp. 229-240.

Chin, W.W. (1998), “The partial least squares approach to structural equation modeling”, in Marcoulides, G.A. (Ed.), Modern Methods for Business Research, Lawrence Erlbaum, Mahwah, NJ, pp. 295-358.

Chin, W., Cheah, J.-H., Liu, Y., Ting, H., Lim, X.-J. and Cham, T.H. (2020), “Demystifying the role of causal-predictive modeling using partial least squares structural equation modeling in information systems research”, Industrial Management and Data Systems, Vol. 120 No. 12, pp. 2161-2209.

Chin, W.W., Thatcher, J.B., Wright, R.T. and Steel, D. (2013), “Controlling for common method variance in PLS analysis: the measured latent marker variable approach”, in Abdi, H., Chin, W.W., Esposito Vinzi, V., Russolillo, G. and Trinchera, L. (Eds), New Perspectives in Partial Least Squares and Related Methods, Springer New York, NY, pp. 231-239.

Cho, G., Hwang, H., Sarstedt, M. and Ringle, C.M. (2022), “A prediction-oriented specification search algorithm for generalized structured component analysis”, Structural Equation Modeling: A Multidisciplinary Journal, Vol. 29 No. 4, pp. 611-619.

Ciavolino, E., Aria, M., Cheah, J.-H. and Roldán, J.L. (2022), “A tale of PLS structural equation modelling: episode I — a bibliometrix citation analysis”, Social Indicators Research, Vol. 164 No. 3, pp. 1323-1348.

Cook, R.D. and Forzani, L. (2020), “Fundamentals of path analysis in the social sciences”, arXiv, No. 2011.06436.

Cook, R.D. and Forzani, L. (2023), “On the role of partial least squares in path analysis for the social sciences”, Journal of Business Research, Vol. 167, p. 114132.

Cook, R.D., Forzani, L. and Liu, L. (2023), “Partial least squares for simultaneous reduction of response and predictor vectors in regression”, Journal of Multivariate Analysis, Vol. 196, p. 105163.

Cooksey, R.W. and Soutar, G.N. (2006), “Coefficient beta and hierarchical item clustering: an analytical procedure for establishing and displaying the dimensionality and homogeneity of summated scales”, Organizational Research Methods, Vol. 9 No. 1, pp. 78-98.

Davis, F.D. (1989), “Perceived usefulness, perceived ease of use, and user acceptance of information technology”, MIS Quarterly, Vol. 13 No. 3, pp. 319-340.

Dijkstra, T.K. and Henseler, J. (2015), “Consistent partial least squares path modeling”, MIS Quarterly, Vol. 39 No. 2, pp. 297-316.

Drabe, D., Hauff, S. and Richter, N.F. (2015), “Job satisfaction in aging workforces: an analysis of the USA, Japan and Germany”, The International Journal of Human Resource Management, Vol. 26 No. 6, pp. 783-805.

Evermann, J. and Rönkkö, M. (2023), “Recent developments in PLS”, Communications of the Association for Information Systems, Vol. 52 No. 1, pp. 663-667.

Evermann, J. and Tate, M. (2010), “Testing models or fitting models? Identifying model misspecification in PLS”, 2010 International Conference on Information Systems (ICIS), St. Louis, paper 21.

Fornell, C.G., Johnson, M.D., Anderson, E.W., Cha, J. and Bryant, B.E. (1996), “The American customer satisfaction index: nature, purpose, and findings”, Journal of Marketing, Vol. 60 No. 4, pp. 7-18.

Fornell, C., Morgeson, F.V., Hult, G.T.M. and VanAmburg, D. (2020), The Reign of the Customer: Customer-Centric Approaches to Improving Satisfaction, Palgrave Macmillan, Cham.

Forster, M.R. (2005), “Notice: no free lunches for anyone, Bayesians included”, Department of Philosophy, University of Wisconsin–Madison, USA, available at: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.161.4878&rep=rep1&type=pdf

Franke, G.R. and Sarstedt, M. (2019), “Heuristics versus statistics in discriminant validity testing: a comparison of four procedures”, Internet Research, Vol. 29 No. 3, pp. 430-447.

Franke, G.R., Sarstedt, M. and Danks, N.P. (2021), “Assessing measure congruence in nomological networks”, Journal of Business Research, Vol. 130, pp. 318-334.

González, I. and Déjean, S. (2021), “R package CCA: canonical correlation analysis (version 1.2.1)”, available at: https://cran.r-project.org/web/packages/CCA/

Gudergan, S.P., Ringle, C.M., Wende, S. and Will, A. (2008), “Confirmatory tetrad analysis in PLS path modeling”, Journal of Business Research, Vol. 61 No. 12, pp. 1238-1249.

Guenther, P., Guenther, M., Ringle, C.M., Zaefarian, G. and Cartwright, S. (2023), “Improving PLS-SEM use for business marketing research”, Industrial Marketing Management, Vol. 111, pp. 127-142.

Gunantara, N. (2018), “A review of multi-objective optimization: methods and its applications”, Cogent Engineering, Vol. 5 No. 1, p. 1502242.

Haenlein, M. and Kaplan, A.M. (2004), “A beginner's guide to partial least squares analysis”, Understanding Statistics, Vol. 3 No. 4, pp. 283-297.

Hair, J.F. and Sarstedt, M. (2019), “Factors versus composites: guidelines for choosing the right structural equation modeling method”, Project Management Journal, Vol. 50 No. 6, pp. 619-624.

Hair, J.F., Black, W.C., Babin, B.J. and Anderson, R.E. (2018), Multivariate Data Analysis, Cengage Learning, London.

Hair, J.F., Hult, G.T.M., Ringle, C.M. and Sarstedt, M. (2014), A Primer on Partial Least Squares Structural Equation Modeling (PLS-SEM), Sage, Thousand Oaks, CA.

Hair, J.F., Hult, G.T.M., Ringle, C.M. and Sarstedt, M. (2017a), A Primer on Partial Least Squares Structural Equation Modeling (PLS-SEM), 2nd edition, Sage, Thousand Oaks, CA.

Hair, J.F., Hult, G.T.M., Ringle, C.M. and Sarstedt, M. (2022), A Primer on Partial Least Squares Structural Equation Modeling (PLS-SEM), 3rd edition, Sage, Thousand Oaks, CA.

Hair, J.F., Hult, G.T.M., Ringle, C.M., Sarstedt, M. and Thiele, K.O. (2017b), “Mirror, mirror on the wall: a comparative evaluation of composite-based structural equation modeling methods”, Journal of the Academy of Marketing Science, Vol. 45 No. 5, pp. 616-632.

Hair, J.F., Ringle, C.M. and Sarstedt, M. (2011), “PLS-SEM: indeed a silver bullet”, Journal of Marketing Theory and Practice, Vol. 19 No. 2, pp. 139-151.

Hair, J.F., Risher, J.J., Sarstedt, M. and Ringle, C.M. (2019), “When to use and how to report the results of PLS-SEM”, European Business Review, Vol. 31 No. 1, pp. 2-24.

Hair, J.F., Sarstedt, M., Ringle, C.M. and Gudergan, S.P. (2024a), Advanced Issues in Partial Least Squares Structural Equation Modeling (PLS-SEM), 2nd edition, Sage, Thousand Oaks, CA.

Hair, J.F., Sarstedt, M., Ringle, C.M. and Mena, J.A. (2012), “An assessment of the use of partial least squares structural equation modeling in marketing research”, Journal of the Academy of Marketing Science, Vol. 40 No. 3, pp. 414-433.

Hair, J.F., Sarstedt, M., Ringle, C.M., Sharma, P.N. and Liengaard, B.D. (2024b), “The shortcomings of equal weights estimation and the composite equivalence index in PLS-SEM”, European Journal of Marketing, Vol. 58 No. 13, pp. 30-55.

Hanafi, M. (2007), “PLS path modelling: computation of latent variables with the estimation mode B”, Computational Statistics, Vol. 22 No. 2, pp. 275-292.

Hanafi, M., Dolce, P. and El Hadri, Z. (2021), “Generalized properties for Hanafi–Wold’s procedure in partial least squares path modeling”, Computational Statistics, Vol. 36 No. 1, pp. 603-614.

Hauff, S., Richter, N.F., Sarstedt, M. and Ringle, C.M. (2024), “Importance and performance in PLS-SEM and NCA: introducing the combined importance-performance map analysis (cIPMA)”, Journal of Retailing and Consumer Services, Vol. 78, p. 103723.

Henseler, J., Dijkstra, T.K., Sarstedt, M., Ringle, C.M., Diamantopoulos, A., Straub, D.W., Ketchen, D.J., Hair, J.F., Hult, G.T.M. and Calantone, R.J. (2014), “Common beliefs and reality about partial least squares: comments on Rönkkö & Evermann (2013)”, Organizational Research Methods, Vol. 17 No. 2, pp. 182-209.

Henseler, J., Ringle, C.M. and Sarstedt, M. (2015), “A new criterion for assessing discriminant validity in variance-based structural equation modeling”, Journal of the Academy of Marketing Science, Vol. 43 No. 1, pp. 115-135.

Holbrook, M.B. and Moore, W.L. (1982), “Using canonical correlation to construct product spaces for objects with known feature structures”, Journal of Marketing Research, Vol. 19 No. 1, pp. 87-98.

Hult, G.T.M., Hair, J.F., Proksch, D., Sarstedt, M., Pinkwart, A. and Ringle, C.M. (2018), “Addressing endogeneity in international marketing applications of partial least squares structural equation modeling”, Journal of International Marketing, Vol. 26 No. 3, pp. 1-21.

Hwang, H. and Cho, G. (2020), “Global least squares path modeling: a full-information alternative to partial least squares path modeling”, Psychometrika, Vol. 85 No. 4, pp. 947-972.

Hwang, H. and Takane, Y. (2004), “Generalized structured component analysis”, Psychometrika, Vol. 69 No. 1, pp. 81-99.

Hwang, H., Sarstedt, M., Cheah, J.H. and Ringle, C.M. (2020), “A concept analysis of methodological research on composite-based structural equation modeling: bridging PLSPM and GSCA”, Behaviormetrika, Vol. 47 No. 1, pp. 219-241.

Hwang, H., Sarstedt, M., Cho, G., Choo, H. and Ringle, C.M. (2023), “A primer on integrated generalized structured component analysis”, European Business Review, Vol. 35 No. 3, pp. 261-284.

Hwang, H., Takane, Y. and Malhotra, N. (2007), “Multilevel generalized structured component analysis”, Behaviormetrika, Vol. 34 No. 2, pp. 95-109.

Jedidi, K., Schmitt, B.H., Sliman, M.B. and Li, Y. (2021), “R2M index 1.0: assessing the practical relevance of academic marketing articles”, Journal of Marketing, Vol. 85 No. 5, pp. 22-41.

Jöreskog, K.G. (1973), “A general method for estimating a linear structural equation system”, in Goldberger, A.S. and Duncan, O.D. (Eds), Structural Equation Models in the Social Sciences, Seminar Press, New York, NY, pp. 85-112.

Jöreskog, K.G. and Wold, H. (1982), “The ML and PLS techniques for modeling with latent variables: historical and comparative aspects”, in Jöreskog, K.G. and Wold, H. (Eds), Systems under Indirect Observation, Part I, North-Holland, Amsterdam, pp. 263-270.

Jung, K., Takane, Y., Hwang, H. and Woodward, T.S. (2012), “Dynamic GSCA (generalized structured component analysis) with applications to the analysis of effective connectivity in functional neuroimaging data”, Psychometrika, Vol. 77 No. 4, pp. 827-848.

Jung, K., Takane, Y., Hwang, H. and Woodward, T.S. (2015), “Multilevel dynamic generalized structured component analysis for brain connectivity analysis in functional neuroimaging data”, Psychometrika, Vol. 81 No. 2, pp. 1-17.

Khan, G.F., Sarstedt, M., Shiau, W.-L., Hair, J.F., Ringle, C.M. and Fritze, M. (2019), “Methodological research on partial least squares structural equation modeling (PLS-SEM): an analysis based on social network approaches”, Internet Research, Vol. 29 No. 3, pp. 407-429.

Kock, N. and Hadaya, P. (2018), “Minimum sample size estimation in PLS-SEM: the inverse square root and gamma-exponential methods”, Information Systems Journal, Vol. 28 No. 1, pp. 227-261.

Liengaard, B.D., Sharma, P.N., Hult, G.T.M., Jensen, M.B., Sarstedt, M., Hair, J.F. and Ringle, C.M. (2021), “Prediction: coveted, yet forsaken? Introducing a cross-validated predictive ability test in partial least squares path modeling”, Decision Sciences, Vol. 52 No. 2, pp. 362-392.

Lohmöller, J.-B. (1989), Latent Variable Path Modeling with Partial Least Squares, Physica, Heidelberg.

Marcoulides, G.A. and Drezner, Z. (2001), “Specification searches in structural equation modeling with a genetic algorithm”, in Marcoulides, G.A. and Schumacker, R.E. (Eds), Advanced Structural Equation Modeling: New Developments and Techniques, Lawrence Erlbaum, Mahwah, NJ, pp. 247-268.

Marcoulides, G.A. and Drezner, Z. (2003), “Model specification searches using ant colony optimization algorithms”, Structural Equation Modeling, Vol. 10 No. 1, pp. 154-164.

Marcoulides, G.A. and Saunders, C. (2006), “PLS: a silver bullet?”, MIS Quarterly, Vol. 30 No. 2, pp. iii-ix.

Marcoulides, G.A., Chin, W.W. and Saunders, C. (2012), “When imprecise statistical statements become problematic: a response to Goodhue, Lewis, and Thompson”, MIS Quarterly, Vol. 36 No. 3, pp. 717-728.

Marcoulides, G.A., Drezner, Z. and Schumacker, R.E. (1998), “Model specification searches in structural equation modeling using tabu search”, Structural Equation Modeling: A Multidisciplinary Journal, Vol. 5 No. 4, pp. 365-376.

Morgeson, F.V., Hult, G.T.M., Sharma, U. and Fornell, C. (2023), “The American customer satisfaction index (ACSI): a sample dataset and description”, Data in Brief, Vol. 48, p. 109123.

Nitzl, C. and Chin, W.W. (2017), “The case of partial least squares (PLS) path modeling in managerial accounting”, Journal of Management Control, Vol. 28 No. 2, pp. 137-156.

Paxton, P., Curran, P.J., Bollen, K.A., Kirby, J. and Chen, F. (2001), “Monte Carlo experiments: design and implementation”, Structural Equation Modeling, Vol. 8 No. 2, pp. 287-312.

Petter, S. (2018), “‘Haters gonna hate’: PLS and information systems research”, ACM SIGMIS Database: The DATABASE for Advances in Information Systems, Vol. 49 No. 2, pp. 10-13.

Petter, S. and Hadavi, Y. (2021), “With great power comes great responsibility: the use of partial least squares in information systems research”, ACM SIGMIS Database: The DATABASE for Advances in Information Systems, Vol. 52 No. SI, pp. 10-23.

Petter, S. and Hadavi, Y. (2023), “Use of partial least squares path modeling within and across business disciplines”, in Latan, H., Hair, J.F. and Noonan, R. (Eds), Partial Least Squares Path Modeling: Basic Concepts, Methodological Issues and Applications, Springer, Cham, pp. 55-79.

Petter, S., Rai, A. and Straub, D. (2012), “The critical importance of construct measurement specification: a response to Aguirre-Urreta and Marakas”, MIS Quarterly, Vol. 36 No. 1, pp. 147-156.

Rademaker, M.E. and Schuberth, F. (2022), “R package cSEM: composite-based structural equation modeling (version 0.5.0)”, available at: https://cran.r-project.org/web/packages/cSEM/

Ramos, R., Rita, P. and Vong, C. (2023), “Mapping research in marketing: trends, influential papers and agenda for future research”, Spanish Journal of Marketing - ESIC, Vol. 28 No. 2, pp. 187-206.

Rasoolimanesh, S.M., Ringle, C.M., Sarstedt, M. and Olya, H. (2021), “The combined use of symmetric and asymmetric approaches: partial least squares-structural equation modeling and fuzzy-set qualitative comparative analysis”, International Journal of Contemporary Hospitality Management, Vol. 33 No. 5, pp. 1571-1592.

Revelle, W. (1979), “Hierarchical clustering and the internal structure of tests”, Multivariate Behavioral Research, Vol. 14 No. 1, pp. 57-74.

Revelle, W. (2024), “R package psych: Procedures for psychological, psychometric, and personality research (version 2.4.1)”.

Rhemtulla, M., van Bork, R. and Borsboom, D. (2020), “Worse than measurement error: consequences of inappropriate latent variable measurement models”, Psychological Methods, Vol. 25 No. 1, pp. 30-45.

Richter, N.F. and Tudoran, A.A. (2024), “Elevating theoretical insight and predictive accuracy in business research: combining PLS-SEM and selected machine learning algorithms”, Journal of Business Research, Vol. 173, p. 114453.

Richter, N.F., Hauff, S., Ringle, C.M. and Gudergan, S.P. (2022), “The use of partial least squares structural equation modeling and complementary methods in international management research”, Management International Review, Vol. 62 No. 4, pp. 449-470.

Richter, N.F., Schubring, S., Hauff, S., Ringle, C.M. and Sarstedt, M. (2020), “When predictors of outcomes are necessary: guidelines for the combined use of PLS-SEM and NCA”, Industrial Management and Data Systems, Vol. 120 No. 12, pp. 2243-2267.

Rigdon, E.E. (2012), “Rethinking partial least squares path modeling: in praise of simple methods”, Long Range Planning, Vol. 45 Nos 5/6, pp. 341-358.

Rigdon, E.E. (2016), “Choosing PLS path modeling as analytical method in European management research: a realist perspective”, European Management Journal, Vol. 34 No. 6, pp. 598-605.

Rigdon, E.E. and Sarstedt, M. (2022), “Accounting for uncertainty in the measurement of unobservable marketing phenomena”, in Baumgartner, H. and Weijters, B. (Eds), Review of Marketing Research: Measurement in Marketing, Emerald, Bingley, pp. 53-73.

Rigdon, E.E., Becker, J.-M. and Sarstedt, M. (2019), “Factor indeterminacy as metrological uncertainty: implications for advancing psychological measurement”, Multivariate Behavioral Research, Vol. 54 No. 3, pp. 429-443.

Rigdon, E.E., Sarstedt, M. and Becker, J.-M. (2020), “Quantify uncertainty in behavioral research”, Nature Human Behaviour, Vol. 4 No. 4, pp. 329-331.

Rigdon, E.E., Sarstedt, M. and Ringle, C.M. (2017), “On comparing results from CB-SEM and PLS-SEM: five perspectives and five recommendations”, Marketing ZFP, Vol. 39 No. 3, pp. 4-16.

Ringle, C.M., Sarstedt, M., Sinkovics, N. and Sinkovics, R.R. (2023), “A perspective on using partial least squares structural equation modelling in data articles”, Data in Brief, Vol. 48, p. 109074.

Ringle, C.M., Wende, S. and Becker, J.-M. (2024), “SmartPLS 4”, SmartPLS, Bönningstedt.

Roberts, N. and Thatcher, J. (2009), “Conceptualizing and testing formative constructs: tutorial and annotated example”, SIGMIS Database, Vol. 40 No. 3, pp. 9-39.

Roemer, E. (2016), “A tutorial on the use of PLS path modeling in longitudinal studies”, Industrial Management and Data Systems, Vol. 116 No. 9, pp. 1901-1921.

Roemer, E., Schuberth, F. and Henseler, J. (2021), “HTMT2–an improved criterion for assessing discriminant validity in structural equation modeling”, Industrial Management and Data Systems, Vol. 121 No. 12, pp. 2637-2650.

Rönkkö, M. and Cho, E. (2022), “An updated guideline for assessing discriminant validity”, Organizational Research Methods, Vol. 25 No. 1, pp. 6-14.

Rönkkö, M. and Evermann, J. (2013), “A critical examination of common beliefs about partial least squares path modeling”, Organizational Research Methods, Vol. 16 No. 3, pp. 425-448.

Rönkkö, M., Antonakis, J., McIntosh, C.N. and Edwards, J.R. (2016a), “Partial least squares path modeling: time for some serious second thoughts”, Journal of Operations Management, Vols 47/48 No. 1, pp. 9-27.

Rönkkö, M., McIntosh, C.N. and Aguirre-Urreta, M.I. (2016b), “Improvements to PLSc: remaining problems and simple solutions”, Aalto University, Aalto.

Rönkkö, M., Lee, N., Evermann, J., McIntosh, C.N. and Antonakis, J. (2023), “Marketing or methodology? Exposing fallacies of PLS with simple demonstrations”, European Journal of Marketing, Vol. 57 No. 6, pp. 1597-1617.

Rossiter, J.R. (2011), “Marketing measurement revolution: the C-OAR-SE method and why it must replace psychometrics”, European Journal of Marketing, Vol. 45 Nos 11/12, pp. 1561-1588.

Russo, D. and Stol, K.-J. (2021), “PLS-SEM for software engineering research: an introduction and survey”, ACM Computing Surveys, Vol. 54 No. 4, pp. 1-38.

Saari, U.A., Damberg, S., Frömbling, L. and Ringle, C.M. (2021), “Sustainable consumption behavior of Europeans: the influence of environmental knowledge and risk perception on environmental concern and behavioral intention”, Ecological Economics, Vol. 189, p. 107155.

Saari, U.A., Damberg, S., Schneider, M., Aarikka-Stenroos, L., Herstatt, C., Lanz, M. and Ringle, C.M. (2024), “Capabilities for circular economy innovation: factors leading to product/service innovations in the construction and manufacturing industries”, Journal of Cleaner Production, Vol. 434, p. 140295.

Sarstedt, M., Adler, S.J., Ringle, C.M., Cho, G., Diamantopoulos, A., Hwang, H. and Liengaard, B.D. (2024), “Same model, same data, but different outcomes: evaluating the impact of method choices in structural equation modeling”, Journal of Product Innovation Management, forthcoming.

Sarstedt, M., Hair, J.F., Cheah, J.-H., Becker, J.-M. and Ringle, C.M. (2019), “How to specify, estimate, and validate higher-order constructs in PLS-SEM”, Australasian Marketing Journal, Vol. 27 No. 3, pp. 197-211.

Sarstedt, M., Hair, J.F., Nitzl, C., Ringle, C.M. and Howard, M.C. (2020), “Beyond a tandem analysis of SEM and PROCESS: use of PLS-SEM for mediation analyses!”, International Journal of Market Research, Vol. 62 No. 3, pp. 288-299.

Sarstedt, M., Hair, J.F., Pick, M., Liengaard, B.D., Radomir, L. and Ringle, C.M. (2022a), “Progress in partial least squares structural equation modeling use in marketing research in the last decade”, Psychology and Marketing, Vol. 39 No. 5, pp. 1035-1064.

Sarstedt, M., Hair, J.F. and Ringle, C.M. (2023), “‘PLS-SEM: indeed a silver bullet’ – retrospective observations and recent advances”, Journal of Marketing Theory and Practice, Vol. 31 No. 3, pp. 261-275.

Sarstedt, M., Hair, J.F., Ringle, C.M., Thiele, K.O. and Gudergan, S.P. (2016), “Estimation issues with PLS and CBSEM: where the bias lies!”, Journal of Business Research, Vol. 69 No. 10, pp. 3998-4010.

Sarstedt, M., Radomir, L., Moisescu, O.I. and Ringle, C.M. (2022b), “Latent class analysis in PLS-SEM: a review and recommendations for future applications”, Journal of Business Research, Vol. 138, pp. 398-407.

Sarstedt, M., Ringle, C.M. and Hair, J.F. (2017), “Partial least squares structural equation modeling”, in Homburg, C., Klarmann, M. and Vomberg, A. (Eds), Handbook of Market Research, Springer, Cham, pp. 1-40.

Sarstedt, M., Ringle, C.M. and Hair, J.F. (2021), “Partial least squares structural equation modeling”, in Homburg, C., Klarmann, M. and Vomberg, A.E. (Eds), Handbook of Market Research, Springer, Cham, pp. 1-47.

Schauerte, N., Becker, M., Imschloss, M., Wichmann, J.R. and Reinartz, W.J. (2023), “The managerial relevance of marketing science: properties and genesis”, International Journal of Research in Marketing, Vol. 40 No. 4, pp. 801-822.

Schuberth, F., Henseler, J. and Dijkstra, T.K. (2018), “Confirmatory composite analysis”, Frontiers in Psychology, Vol. 9, p. 2541.

Schuberth, F., Rademaker, M.E. and Henseler, J. (2023), “Assessing the overall fit of composite models estimated by partial least squares path modeling”, European Journal of Marketing, Vol. 57 No. 6, pp. 1678-1702.

Schwaiger, M. (2004), “Components and parameters of corporate reputation: an empirical study”, Schmalenbach Business Review, Vol. 56 No. 1, pp. 46-71.

Sharma, P.N., Liengaard, B.D., Hair, J.F., Sarstedt, M. and Ringle, C.M. (2023a), “Predictive model assessment and selection in composite-based modeling using PLS-SEM: extensions and guidelines for using CVPAT”, European Journal of Marketing, Vol. 57 No. 6, pp. 1662-1677.

Sharma, P.N., Liengaard, B.D., Sarstedt, M., Hair, J.F. and Ringle, C.M. (2023b), “Extraordinary claims require extraordinary evidence: a comment on ‘the recent developments in PLS’”, Communications of the Association for Information Systems, Vol. 52 No. 1, pp. 739-742.

Sharma, P.N., Sarstedt, M., Shmueli, G., Kim, K.H. and Thiele, K.O. (2019), “PLS-Based model selection: the role of alternative explanations in information systems research”, Journal of the Association for Information Systems, Vol. 20 No. 4, pp. 346-397.

Sharma, P.N., Shmueli, G., Sarstedt, M., Danks, N. and Ray, S. (2021), “Prediction-oriented model selection in partial least squares path modeling”, Decision Sciences, Vol. 52 No. 3, pp. 567-607.

Shmueli, G., Ray, S., Velasquez Estrada, J.M. and Chatla, S.B. (2016), “The elephant in the room: evaluating the predictive performance of PLS models”, Journal of Business Research, Vol. 69 No. 10, pp. 4552-4564.

Sukhov, A., Olsson, L.E. and Friman, M. (2022), “Necessary and sufficient conditions for attractive public transport: combined use of PLS-SEM and NCA”, Transportation Research Part A: Policy and Practice, Vol. 158, pp. 239-250.

Tenenhaus, M. and Esposito Vinzi, V. (2005), “PLS regression, PLS path modeling and generalized procrustean analysis: a combined approach for multiblock analysis”, Journal of Chemometrics, Vol. 19 No. 3, pp. 145-153.

Tenenhaus, M., Esposito Vinzi, V., Chatelin, Y.-M. and Lauro, C. (2005), “PLS path modeling”, Computational Statistics and Data Analysis, Vol. 48 No. 1, pp. 159-205.

Thompson, B. (1984), Canonical Correlation Analysis: Uses and Interpretation, Sage, Thousand Oaks, CA.

Thorndike, R.M. (2000), “Canonical correlation analysis”, in Tinsley, H.E.A. and Brown, S.D. (Eds), Handbook of Applied Multivariate Statistics and Mathematical Modeling, Academic Press, San Diego, CA, pp. 237-263.

Venkatesh, V., Morris, M.G., Davis, G.B. and Davis, F.D. (2003), “User acceptance of information technology: toward a unified view”, MIS Quarterly, Vol. 27 No. 3, pp. 425-478.

Wold, H. (1982), “Soft modeling: the basic design and some extensions”, in Jöreskog, K.G. and Wold, H. (Eds), Systems under Indirect Observations: Part II, North-Holland, Amsterdam, pp. 1-54.

Wong, K.K.-K. (2019), Mastering Partial Least Squares Structural Equation Modeling (PLS-SEM) with SmartPLS in 38 Hours, iUniverse, Bloomington, IN.

Yang, Y. and Green, S.B. (2010), “A note on structural equation modeling estimates of reliability”, Structural Equation Modeling: A Multidisciplinary Journal, Vol. 17 No. 1, pp. 66-81.

Zeithaml, V.A., Berry, L.L. and Parasuraman, A. (1996), “The behavioral consequences of service quality”, Journal of Marketing, Vol. 60 No. 2, pp. 31-46.

Zeng, N., Liu, Y., Gong, P., Hertogh, M. and König, M. (2021), “Do right PLS and do PLS right: a critical review of the application of PLS-SEM in construction management research”, Frontiers of Engineering Management, Vol. 8 No. 3, pp. 356-369.

Acknowledgements

The authors thank Jan-Michael Becker (BI Norwegian Business School, Norway) for his helpful comments on an earlier version of this manuscript. In the process of writing this manuscript, the authors have used DeepL and ChatGPT to enhance its readability and language quality. The authors thoroughly examined the linguistic modifications made by these tools. In addition, a professional proofreader helped to correct linguistic problems. Nevertheless, the authors bear full responsibility for the content of the publication. Some analyses in this article use the SmartPLS statistical software (www.smartpls.com/). Christian M. Ringle acknowledges that he has a financial interest in SmartPLS.

Corresponding author

Marko Sarstedt is the corresponding author and can be contacted at: sarstedt@lmu.de

About the authors

Joe F. Hair is Director of the PhD Program and Cleverdon Chair of Business, Mitchell College of Business, University of South Alabama. In 2018, 2019 and 2020, Joe was recognized by Clarivate Analytics for being in the top 1% globally of all Business and Economics professors based on his citations and scholarly accomplishments. He has authored over 75 book editions and has published numerous articles in scholarly journals such as the Journal of Marketing Research, Journal of Academy of Marketing Science, European Journal of Marketing, Organizational Research Methods, Journal of Family Business Studies, Journal of Retailing, and others.

Marko Sarstedt is a chaired professor of marketing at the Ludwig Maximilians University Munich, Germany, and an adjunct research professor at Babeș-Bolyai University, Romania. His main research interest is the advancement of research methods to further the understanding of consumer behavior. His research has been published in Nature Human Behaviour, Journal of Marketing Research, Journal of the Academy of Marketing Science, Multivariate Behavioral Research, Organizational Research Methods, MIS Quarterly, Decision Sciences, and Psychometrika, among others. Marko has been named a member of Clarivate Analytics’ Highly Cited Researchers list, which includes the “world's most impactful scientific researchers.”

Christian M. Ringle is a chaired professor of management at the Hamburg University of Technology, Germany, and an adjunct research professor at the James Cook University, Australia. His research focuses on management and marketing topics, method development, business analytics, machine learning, and the application of business research methods to decision making. His articles have been published in journals such as Decision Sciences, the European Journal of Marketing, Information Systems Research, International Journal of Research in Marketing, Journal of the Academy of Marketing Science, Organizational Research Methods, and MIS Quarterly. Since 2018, Christian has been included in Clarivate Analytics’ Highly Cited Researchers list. He is a co-developer and co-founder of the statistical software SmartPLS (www.smartpls.com). More information about Christian M. Ringle can be found at: www.tuhh.de/mds/team/prof-dr-c-m-ringle.html

Pratyush N. Sharma is an associate professor in the department of Information Systems, Statistics, and Management Science in the University of Alabama’s Culverhouse College of Business. His research interests include online collaboration communities, open-source software development, technology use and adoption, and research methods used in information systems, particularly partial least squares path modeling. His research has been published in highly acclaimed journals such as the Journal of the Association for Information Systems, Journal of Retailing, Decision Sciences, Journal of Information Systems, Journal of Business Research, and Journal of International Marketing.

Benjamin D. Liengaard is an associate professor in the Department of Economics and Business Economics, Aarhus University. His main research interest is in partial least squares path modeling and quantitative analysis in the field of business analytics. His research has been published in journals such as Journal of Applied Econometrics, Psychology and Marketing, European Journal of Marketing, and Decision Sciences.
