One-tailed or two-tailed P values in PLS-SEM?
Ned Kock
Full reference:
Kock, N. (2015). One-tailed or two-tailed P values in PLS-SEM? International Journal of e-
Collaboration, 11(2), 1-7.
Abstract
Should P values associated with path coefficients, as well as with other coefficients such as
weights and loadings, be one-tailed or two-tailed? This question is answered in the context of
structural equation modeling employing the partial least squares method (PLS-SEM), based on
an illustrative model of the effect of e-collaboration technology use on job performance. A one-
tailed test is recommended if the coefficient is assumed to have a sign (positive or negative),
which should be reflected in the hypothesis that refers to the corresponding association. If no
assumptions are made about coefficient sign, a two-tailed test is recommended. These
recommendations apply to many other statistical methods that employ P values, including path
analyses in general, with or without latent variables, plus univariate and multivariate regression
analyses.
Keywords: E-collaboration; partial least squares; structural equation modeling; latent variable;
indicator; one-tailed test; two-tailed test; Monte Carlo simulation.
Introduction
A common question arises in discussions about structural equation
modeling (SEM) employing the partial least squares (PLS) method, referred to here as PLS-SEM
(Kock, 2013b; 2014; Kock & Lynn, 2012), among researchers in the field of e-collaboration
(Kock, 2005; Kock & Nosek, 2005) as well as many other fields. Should P values associated
with path coefficients be one-tailed or two-tailed?
This is an important question because normally one-tailed tests yield lower P values than two-
tailed tests. In fact, this is always the case when symmetrical distributions of path coefficients are
assumed, such as Student’s t-distributions. Therefore, the decision as to whether to use one-tailed
or two-tailed tests can influence whether one or more hypotheses are accepted or rejected. This
decision also influences the statistical power of a PLS-SEM analysis, with the power being
higher with tests employing one-tailed P values.
We try to provide an answer to this question, which requires brief ancillary discussions of
related topics – e.g., PLS-SEM’s treatment of measurement error. While our discussion
addresses path coefficients, it also applies to other coefficients such as weights and loadings.
Even though the focus is on PLS-SEM, much of what is said here applies to many other
statistical analysis techniques. Among these are path analyses in general, with or without
latent variables, as well as univariate and multivariate regression analyses.
Illustrative model
The discussion presented in this study is based on the illustrative model shown in Figure 1.
This model contains two latent variables, e-collaboration technology use (L) and job
performance (J). Each latent variable is measured indirectly through three indicators.
Figure 1: Illustrative model
Let us assume that $L$, $J$, $x_{Li}$ and $x_{Ji}$ ($i = 1 \ldots 3$) are scaled to have a mean of zero and a standard deviation of one (i.e., these variables are standardized). Our illustrative model can then be described by equations (1), (2), and (3).

$x_{Li} = \lambda_{Li} L + \varepsilon_{Li}, \quad i = 1 \ldots 3.$ (1)

$x_{Ji} = \lambda_{Ji} J + \varepsilon_{Ji}, \quad i = 1 \ldots 3.$ (2)

$J = \beta L + \varepsilon.$ (3)

The path coefficient $\beta$ and loadings $\lambda_{Li}$ and $\lambda_{Ji}$ ($i = 1 \ldots 3$) are assumed to describe the model at the population level, as true values. The population is made of teams of individuals who use an
integrated e-collaboration technology including e-mail and voice conferencing to different
degrees. That is, the unit of analysis is the team, not the individual.
The e-collaboration technology facilitates the work of the teams. Different values of job
performance by the teams, where performance is evaluated by managers, are associated with
different degrees of use of the e-collaboration technology.
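For readers who want to experiment with this setup, the sketch below generates one sample from a model of this form, following equations (1)-(3). The true path value of .3 matches the population value used in the simulation reported later in this article; the loading values, sample size, and use of NumPy are illustrative assumptions, not specifications taken from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_sample(n_teams=100, beta=0.3, loadings=(0.7, 0.8, 0.75)):
    """Generate one sample following equations (1)-(3).

    beta = .3 matches the true population value used in the article's
    simulation; the loading values are illustrative assumptions.
    """
    # True latent scores, standardized; equation (3) with error scaled
    # so that J also has unit variance
    L = rng.standard_normal(n_teams)
    J = beta * L + np.sqrt(1 - beta**2) * rng.standard_normal(n_teams)

    # Indicators per equations (1) and (2): x = loading * latent + error,
    # with error variance chosen so each indicator has unit variance
    def make_indicators(latent):
        cols = []
        for lam in loadings:
            err = np.sqrt(1 - lam**2) * rng.standard_normal(n_teams)
            cols.append(lam * latent + err)
        return np.column_stack(cols)

    return L, J, make_indicators(L), make_indicators(J)

L, J, xL, xJ = generate_sample()
```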
PLS-SEM and measurement error
PLS-SEM algorithms estimate latent variable scores as exact linear combinations of their
indicators (i.e., as “composites”). As such, they do not properly account for measurement error.
This can be illustrated through (4) and (5), where latent variable scores are calculated properly
accounting for, and not properly accounting for, the measurement error $\varepsilon$. Both equations denote
the number of indicators as $n$.

$L = \sum_{i=1}^{n} w_i x_i + \varepsilon.$ (4)

$\hat{L} = \sum_{i=1}^{n} w_i x_i.$ (5)
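A minimal sketch of the composite computation in equation (5) follows. Equal indicator weights are used only as a placeholder, since the actual PLS-SEM weights are estimated iteratively by the algorithm; `xL` and `xJ` refer to the indicator matrices from the earlier data-generation sketch.

```python
import numpy as np

def composite_score(indicators, weights=None):
    """Estimate latent variable scores per equation (5): an exact
    weighted sum of indicators, with no error term.

    PLS-SEM derives the weights iteratively; equal weights are used
    here only as a stand-in for illustration.
    """
    indicators = np.asarray(indicators, dtype=float)
    if weights is None:
        weights = np.ones(indicators.shape[1]) / indicators.shape[1]
    scores = indicators @ weights
    # Standardize the composite, as PLS-SEM implementations typically do
    return (scores - scores.mean()) / scores.std(ddof=0)

L_hat = composite_score(xL)  # xL, xJ from the earlier data-generation sketch
J_hat = composite_score(xJ)
```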
A full discussion of the effects of PLS-SEM not properly accounting for measurement error is
outside the scope of this study. Nevertheless, one effect that will be noticed in the next section is
that the path coefficient is attenuated, due to the correlation attenuation property (Nunnally &
Bernstein, 1994) expressed in (6).

$\rho_{\hat{X},\hat{Y}} = \rho_{X,Y} \sqrt{\alpha_X \alpha_Y}.$ (6)

In this correlation attenuation equation, $\alpha_X$ and $\alpha_Y$ denote the true reliabilities of the true latent
variables $X$ and $Y$, which are estimated via PLS-SEM as $\hat{X}$ and $\hat{Y}$. These true reliabilities can be
estimated through the Cronbach’s alpha coefficients for the latent variables.

Equation (7) expresses this general correlation attenuation equation in the more specific
context of our illustrative model. In it, $\hat{L}$ and $\hat{J}$ are the PLS-SEM estimates of the true latent
variables $L$ and $J$.

$\rho_{\hat{L},\hat{J}} = \rho(L,J) \sqrt{\alpha_L \alpha_J}.$ (7)
In our model the standardized path coefficient $\beta$ is in fact equal to the true correlation $\rho(L,J)$,
since the endogenous latent variable $J$ has only one predictor ($L$). Even when this is not the case
in more complex models, path coefficients tend to be attenuated in concert with their
corresponding correlations.
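A short numerical illustration of equation (6) follows, using assumed reliabilities of .8 for both latent variables; these values are not taken from the article.

```python
import numpy as np

def attenuated_correlation(rho_true, alpha_x, alpha_y):
    """Correlation attenuation per equation (6): the correlation between
    estimated (composite) scores equals the true correlation scaled by
    the square root of the product of the reliabilities."""
    return rho_true * np.sqrt(alpha_x * alpha_y)

# Illustrative reliabilities (assumed values): a true correlation of .3
# is attenuated toward zero
print(attenuated_correlation(0.3, 0.8, 0.8))  # -> 0.24
```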
Distribution of estimated coefficients across multiple samples
Figure 2 shows the distribution of values of the estimated path coefficient $\hat{\beta}$ across 500
samples of size 100. The samples were generated through a Monte Carlo simulation (Robert &
Casella, 2005) based on the illustrative model. The data was created to follow normal
distributions. Each sample was analyzed with the software WarpPLS, version 4.0 (Kock, 2013).
The analyses were conducted using the PLS Regression algorithm, which has been increasingly
used in PLS-SEM (Guo et al., 2011; Kock, 2010).
Figure 2: Distribution of path coefficient estimates
Notes: N=100; PLS Regression algorithm used; values obtained through a Monte Carlo simulation with 500 samples (replications); shift to the left (from .3) in the distribution mean due to correlation attenuation.
As we can see, this distribution of values of the estimated path coefficient $\hat{\beta}$ across many
samples does not appear to have a mean of .3, which is the true population mean. There appears
to be a shift to the left. The reason for this is the correlation attenuation property discussed in the
previous section, a consequence of PLS-SEM algorithms in general, including PLS Regression, not properly
accounting for measurement error.
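The replication loop below mimics this simulation under the assumptions of the earlier sketches. It uses the correlation between equally weighted composites as a simplified stand-in for the WarpPLS PLS Regression path estimate, so only the direction of the attenuation effect, not the exact numbers, should be expected to match the article's results.

```python
import numpy as np

# Simplified stand-in for the article's simulation: 500 replications,
# n = 100, with the path estimated as the correlation between the two
# composites (for a single-predictor model the standardized path equals
# this correlation).
estimates = []
for _ in range(500):
    L, J, xL, xJ = generate_sample(n_teams=100)        # earlier sketch
    L_hat, J_hat = composite_score(xL), composite_score(xJ)
    estimates.append(np.corrcoef(L_hat, J_hat)[0, 1])

estimates = np.array(estimates)
print(estimates.mean())        # noticeably below the true value of .3
print(estimates.std(ddof=1))   # the "standard error" S discussed next
```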
The standard deviation of this distribution of values of the estimated path coefficient $\hat{\beta}$ across
many samples is what is often referred to as the “standard error” associated with the estimate,
denoted here as $S$. With the standard error and the mean estimated path coefficient $\hat{\beta}$ one can
obtain the $T$ ratio via (8).

$T = \hat{\beta} / S.$ (8)

The $T$ ratio can then be used as a basis for the estimation of the one-tailed P value $P_1$ for $\hat{\beta}$ via
integration through (9). In this equation $|T|$ is the absolute value of $T$ and $t(x)$ is a function that
refers to a Student’s t-distribution.

$P_1 = \int_{|T|}^{\infty} t(x)\,dx.$ (9)

Student’s t-distributions are symmetrical about the mean. Therefore, the two-tailed P value $P_2$ for
$\hat{\beta}$ can be obtained by multiplication of $P_1$ by 2, as indicated in (10).

$P_2 = 2 P_1.$ (10)
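A small sketch of equations (8)-(10) using SciPy's Student's t-distribution follows. The estimate, standard error, and degrees of freedom (here n − 2) are illustrative assumptions; degrees-of-freedom conventions vary across PLS-SEM software.

```python
from scipy import stats

def pls_p_values(beta_hat, standard_error, df):
    """One- and two-tailed P values per equations (8)-(10), using a
    Student's t-distribution with df degrees of freedom."""
    T = beta_hat / standard_error      # equation (8)
    p_one = stats.t.sf(abs(T), df)     # equation (9): upper-tail area
    p_two = 2 * p_one                  # equation (10)
    return p_one, p_two

# Illustrative numbers (assumed): estimate .3, standard error .1, n = 100
print(pls_p_values(0.3, 0.1, df=98))
```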
Researchers employing PLS-SEM do not know the true population values of path coefficients
and loadings prior to their analyses, and thus do not conduct Monte Carlo simulations to obtain
estimates. They instead obtain estimates via resampling techniques, of which bootstrapping is the
most widely used.
Resampling techniques can in fact be seen as part of a special class of Monte Carlo simulation
techniques. They yield values for $S$ that approximate the true values, usually slightly
underestimating them.
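A bootstrap sketch along these lines is shown below, again using the simplified composite-correlation estimate from the earlier sketches rather than a full PLS-SEM re-estimation.

```python
import numpy as np

def bootstrap_standard_error(xL, xJ, n_resamples=500, seed=1):
    """Bootstrap estimate of the standard error S of the path coefficient.

    Each resample draws rows (teams) with replacement, recomputes the
    composites, and re-estimates the path; S is the standard deviation
    of the resampled estimates.
    """
    rng = np.random.default_rng(seed)
    n = xL.shape[0]
    boot_estimates = []
    for _ in range(n_resamples):
        idx = rng.integers(0, n, size=n)
        L_hat = composite_score(xL[idx])
        J_hat = composite_score(xJ[idx])
        boot_estimates.append(np.corrcoef(L_hat, J_hat)[0, 1])
    return np.std(boot_estimates, ddof=1)

print(bootstrap_standard_error(xL, xJ))
```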
Also, in practice the value of $P_1$ is obtained via approaches other than integration, such as:
specialized multivariate statistics and PLS-SEM software such as WarpPLS (which perform the
integration themselves); general-purpose numeric calculation software such as R, MATLAB, and
Excel; and published tables in statistics books and websites.
Using one-tailed and two-tailed P value estimations
Let us assume that we obtained the estimate $\hat{\beta} = .3$ for the path coefficient $\beta$ in our model. Do
we use a one-tailed or two-tailed P value to estimate its significance? To answer this question we
need to consider the hypothesis to which the estimate refers. The hypothesis is stated beforehand
and incorporates the event whose complement’s probability we are trying to ascertain via the
test. Let us say that our hypothesis is as follows.

H1: An increase in e-collaboration technology use (L) by a team is associated with an increase
in job performance (J).
To test the significance of the estimate $\hat{\beta} = .3$ in the context of this hypothesis, via the
calculation of a P value, is essentially to calculate the probability that the estimate $\hat{\beta} = .3$ is due
to chance (the complement of what is stated in the hypothesis) given a set of pre-specified
conditions. In this case, the set contains only one condition, namely that the path coefficient is
positive, as stated in the hypothesis.
If the effect is “real”, and therefore not due to chance, the probability that it comes from a
distribution that refers to no effect should be small. That is, this probability should be lower than
a certain threshold, usually .05 (hence the oft-used P < .05 significance level). In PLS-SEM
typically a distribution that refers to no effect is defined as a Student’s t-distribution with a
standard deviation that equals the standard error and that has a mean of zero.
The graph on the left in Figure 3 shows what this distribution would look like. The P value is
calculated via integration. It is equal to the area indicated under the curve at the far right. Clearly
the resulting probability refers to the one-tailed P value $P_1$ discussed in the previous section. This
is the probability that the path coefficient estimate would be equal to or greater than .3.
When would a two-tailed test be used? The answer, again, builds on the prior knowledge
incorporated into the hypothesis being tested. Without the prior knowledge that the association
between e-collaboration technology use (L) and job performance (J) is positive (i.e., that an
increase in L leads to an increase in J), our hypothesis would likely be different. For example, it
could be along the following lines:
H2: There is an association between e-collaboration technology use (L) and job performance
(J).
Here any value of $\hat{\beta}$ significantly lower or greater than zero would support the hypothesis. To
test this hypothesis for a given estimate $\hat{\beta}$ obtained through a PLS-SEM analysis, we would
again assume that a “no effect” path estimate would come from a Student’s t-distribution with a
mean of zero and with a standard deviation that equals the standard error $S$. But now we do not
assume that the path coefficient is positive. Therefore we calculate the probability that we would
obtain a path coefficient estimate that would be: equal to or greater than $\hat{\beta}$, or equal to or lower
than $-\hat{\beta}$.
The graph on the right in Figure 3 shows how this probability would be calculated via
integration as the sum of the areas indicated under the curve to the far left and far right. Clearly
the resulting probability refers to the two-tailed P value $P_2$ discussed earlier.
Figure 3: One-tailed and two-tailed P value estimations
Notes: schematic representations; axes scales adjusted for illustration purposes.
Both graphs in Figure 3 are schematic representations, with the axes scales adjusted for
illustration purposes. In the graphs of the actual distributions, the areas used for P value
estimation are often too small to be shown effectively under the probability distribution curves.
It is noteworthy that, in the discussion above, the hypothesized direction of causality of the
effect (L → J or J → L) is not as important in defining whether the test is one-tailed or two-tailed
as the hypothesized sign of the effect. The hypothesized direction of causality could, under
certain conditions, be important in defining the method of estimation of the path coefficient. This
is particularly true if we assume that the relationship between the latent variables is nonlinear. In
this case, the hypothesized direction of causality of the effect would become much more
important.
Should the relationship be assumed to be nonlinear, thus leading to a nonlinear analysis (Kock,
2010; 2013), we would obtain different estimates for the nonlinear path going in one direction
($\hat{\beta}_{L \rightarrow J}$) and the other ($\hat{\beta}_{J \rightarrow L}$). This is an interesting property of nonlinear analyses that may
have many useful applications. Among these applications is possibly that of causality assessment
(Kock, 2013).
The meaning of the nonlinear path estimate would be different from that of the linear path
estimate, since it would no longer refer to a fixed gradient, as a linear path estimate does. In the
nonlinear case the gradients $\hat{\beta}_{L \rightarrow J}$ and $\hat{\beta}_{J \rightarrow L}$ would change for different values of the latent
variables. This would have implications that arguably go beyond the scope of this study.
Generally speaking, the sign of the nonlinear path estimate refers to the overall sign of the
nonlinear relationship, or the sign of the “linear equivalent” of the nonlinear relationship.
Discussion and concluding remarks
The path attenuation phenomenon discussed earlier, stemming from PLS-SEM algorithms in
general not properly accounting for measurement error, has an interesting influence on P value
estimation using the approach discussed. It makes it more conservative. The reason is that the
path coefficients estimated via PLS-SEM are closer to zero than the true path coefficients, which
makes the area under the curve that refers to the P value normally greater than it would have
been had an unbiased method been used. This leads to higher P values, other things being equal
(e.g., the same resampling technique is used).
It may seem peculiar that prior knowledge incorporated into a hypothesis influences the test of
the hypothesis. Nevertheless, this is consistent with the notion that, in frequentist inference, the
conditional probability of any event is calculated based on a smaller set of possible events than
the corresponding unconditional probability. This applies to events specified in hypotheses.
This leads to an interesting question. If our hypothesis incorporates the prior knowledge that
$\beta > 0$ and our estimate turns out to violate this prior knowledge (e.g., $\hat{\beta} = -.3$), would a one-
tailed test applied to $\hat{\beta} = -.3$ be acceptable? The answer is “no”, because if the prior knowledge
incorporated into the hypothesis is not supported by the evidence (i.e., the negative path
coefficient estimate), then the hypothesis is falsified outright. If a hypothesis incorporates the
belief that $\beta > 0$ and we obtain an estimate $\hat{\beta} = -.3$ then the hypothesis is in fact falsified
without the need for the calculation of a P value.
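The decision logic discussed above can be summarized in a short sketch; the function name, degrees-of-freedom value, and numeric inputs are illustrative assumptions rather than prescriptions from the article.

```python
from scipy import stats

def test_path(beta_hat, standard_error, df, hypothesized_sign=None):
    """Sketch of the decision logic discussed above.

    hypothesized_sign: +1 or -1 if the hypothesis states a sign
    (one-tailed test), None if it does not (two-tailed test).
    """
    T = beta_hat / standard_error
    if hypothesized_sign is None:
        return {"p": 2 * stats.t.sf(abs(T), df), "tails": 2}
    if beta_hat * hypothesized_sign < 0:
        # Estimate contradicts the hypothesized sign: the hypothesis is
        # falsified outright; no P value is needed.
        return {"p": None, "falsified": True}
    return {"p": stats.t.sf(abs(T), df), "tails": 1}

print(test_path(0.3, 0.1, df=98, hypothesized_sign=+1))   # one-tailed
print(test_path(-0.3, 0.1, df=98, hypothesized_sign=+1))  # falsified outright
```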
This highlights the fact that prior knowledge is important in the theorizing process that often
precedes empirical research. Prior knowledge comes from thorough reviews of pertinent theories
and past empirical research. The more prior knowledge is brought into empirical research, the
more the research moves toward the confirmatory end of the exploratory-confirmatory spectrum.
Generally speaking, bringing credible prior knowledge into empirical research is a “good thing”,
and allows one to lower the threshold of evidence needed to ascertain the likelihood of an event
that builds on that prior knowledge. Nevertheless, prior knowledge comes with an “inferential
cost”, as discussed above.
Some researchers have suggested that P value estimation should be carried out directly from
bootstrapping distributions. However, it should be clear that if we had used the distribution of
path estimates obtained via bootstrapping in our tests instead of a Student’s t-distribution, a one-
tailed estimation of P values would likely yield distorted results. This would have happened
because bootstrapping distributions are usually asymmetrical, as our Monte Carlo-generated
distribution was, with the degree of asymmetry varying depending on both data distributions and
model characteristics.
We hope that the discussion presented here will help e-collaboration researchers who employ
PLS-SEM, as well as researchers in other fields who use this multivariate analysis method, to
decide whether to use one-tailed or two-tailed P values under different circumstances. Even
though our discussion addresses primarily path coefficients, it also applies to other coefficients
such as weights and loadings. While the discussion focuses on PLS-SEM, it applies to many
other statistical analysis techniques. Among these are path analyses in general, with or without
latent variables, as well as univariate and multivariate regression analyses.
Acknowledgments
The author is the developer of the software WarpPLS, which has over 7,000 users in more
than 33 different countries at the time of this writing, and moderator of the PLS-SEM e-mail
distribution list. He is grateful to those users, and to the members of the PLS-SEM e-mail
distribution list, for questions, comments, and discussions on topics related to the use of
WarpPLS.
References
Guo, K.H., Yuan, Y., Archer, N.P., & Connelly, C.E. (2011). Understanding nonmalicious
security violations in the workplace: A composite behavior model. Journal of Management
Information Systems, 28(2), 203-236.
Kock, N. (2005). What is e-collaboration? International Journal of e-Collaboration, 1(1), 1-7.
Kock, N. (2010). Using WarpPLS in e-collaboration studies: An overview of five main analysis
steps. International Journal of e-Collaboration, 6(4), 1-11.
Kock, N. (2013). WarpPLS 4.0 User Manual. Laredo, TX: ScriptWarp Systems.
Kock, N. (2013b). Using WarpPLS in e-collaboration studies: What if I have only one group and
one condition? International Journal of e-Collaboration, 9(3), 1-12.
Kock, N. (2014). Advanced mediating effects tests, multi-group analyses, and measurement
model assessments in PLS-based SEM. International Journal of e-Collaboration, 10(3), 1-
13.
Kock, N., & Lynn, G.S. (2012). Lateral collinearity and misleading results in variance-based
SEM: An illustration and recommendations. Journal of the Association for Information
Systems, 13(7), 546-580.
Kock, N., & Nosek, J. (2005). Expanding the boundaries of e-collaboration. IEEE Transactions
on Professional Communication, 48(1), 1-9.
Nunnally, J.C., & Bernstein, I.H. (1994). Psychometric theory. New York, NY: McGraw-Hill.
Robert, C.P., & Casella, G. (2005). Monte Carlo statistical methods. New York, NY: Springer.