Group Potency and Its Implications for Team Effectiveness

Over time, the members of a group come to assess its potential more realistically. This text demonstrates that group potency changes over time. As you read, be attentive to the literature review and background of the study. Also, pay attention to the discussion of the findings, which, perhaps surprisingly, show that group potency decreases over time. You may want to take note of the limitations of the research.

Materials and Methods

Measures

Conscientiousness

Conscientiousness was measured with ten items from the International Personality Item Pool (IPIP; Goldberg et al.; α = 0.81). The IPIP items correlate highly with Costa and McCrae's NEO-PI-R. There were five positively worded and five negatively worded items. A sample item is "I am always prepared". Participants responded to these items on a five-point Likert-type agreement scale (1, strongly disagree; 5, strongly agree).
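The scoring workflow implied above, reverse-coding the five negatively worded items and then computing coefficient alpha, can be sketched as follows. The data here are simulated for illustration only; they are not the study's data, and the 0.81 alpha reported above comes from the actual sample.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Coefficient alpha: (k/(k-1)) * (1 - sum of item variances / variance of total)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def reverse_code(x: np.ndarray, low: int = 1, high: int = 5) -> np.ndarray:
    """Reverse a Likert response: on a 1-5 scale, 1 <-> 5, 2 <-> 4, etc."""
    return (low + high) - x

# Hypothetical data: 200 respondents x 10 items driven by one latent trait
rng = np.random.default_rng(0)
latent = rng.normal(3.0, 0.7, size=(200, 1))
resp = np.clip(np.rint(latent + rng.normal(0, 0.8, size=(200, 10))), 1, 5)
resp[:, 5:] = reverse_code(resp[:, 5:])   # items 6-10 stored as negatively worded

# Reverse-code the negatively worded items before scoring, then compute alpha
resp[:, 5:] = reverse_code(resp[:, 5:])
alpha = cronbach_alpha(resp)
```

The same scoring logic applies to the extraversion scale below, which also mixes five positively and five negatively worded items.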


Extraversion

Extraversion was also measured with ten items from the IPIP (Goldberg et al.; α = 0.86) that correlate highly with the NEO-PI-R. There were five positively worded and five negatively worded items. A sample item is "I feel comfortable around people". Participants responded to these items on a five-point Likert-type agreement scale (1, strongly disagree; 5, strongly agree).


Group Potency

Group potency was measured with seven items from Guzzo et al., which measure a team's confidence in its general ability to be effective. A sample item is "No task is too tough for this team". Participants responded to these items on a five-point Likert-type agreement scale (1, strongly disagree; 5, strongly agree). Sosik et al. found that these group potency items have strong internal consistency, with a Cronbach's α ranging from 0.87 to 0.98 across three time points.


Team Effectiveness

Associated with the large design project, teams submitted a comprehensive written report that was typically about 100 pages in length. The report contained a variety of detailed information pertaining to the project, including design sketches, mathematical models, and implications for practice. Team reports were rated on their overall quality by experienced course instructors, who were blind to this study's objectives, and grades were assigned to the team as a whole (i.e., no unique grades were assigned to individual members). Each rater rated a unique subset of the reports.


Analytical Procedure

Using Mplus 7.4 throughout for our focal analyses, we implemented a sequential model-testing procedure comprising (1) longitudinal measurement invariance analyses, (2) latent growth modeling, and (3) consensus emergence modeling. The full model assessed is illustrated in Figure 1.

Examinations of change over time require measurement invariance to ensure that a measure functions, and means, the same thing over time, thereby facilitating meaningful longitudinal inferences. Longitudinal measurement invariance assesses the stability of a scale's measurement model over time; without this support, misleading interpretations may result, akin to comparing apples to oranges over time. Demonstrating invariance requires several analytical steps: (a) configural invariance, (b) metric invariance, (c) scalar invariance, and (d) strict invariance. Ployhart and Vandenberg (2010) noted that configural, metric, and scalar invariance are sufficient for longitudinal invariance, yet we also investigated strict invariance because it can provide additional insight into the structure and function of a scale.

The configural invariance model assesses whether the same pattern of factor loadings holds over time. In determining configural invariance, we assumed support in part because all seven potency items, which measure a single factor, were assessed at each time point. In addition, we considered indicators of model-data fit rendered by the comparative fit index (CFI) and root mean square error of approximation (RMSEA); CFI values > 0.95 and RMSEA values < 0.08 can be taken as evidence of acceptable model fit. Building on the configural invariance model, metric invariance constrains respective factor loadings to equality, scalar invariance places additional equality constraints on respective intercepts, and strict invariance places equality constraints on respective item residuals.
To assess the plausibility of each of these sets of invariance constraints, the Δχ2 test can be used because each set of constraints imposed represents a nested model. However, as the Δχ2 test may be overly sensitive to sample size, changes in the CFI of less than 0.010 and/or changes in the RMSEA of less than 0.015 can support invariance at each step. In each longitudinal invariance analysis, autocorrelated residuals were specified between respective items.
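The fit-change heuristic above can be expressed as a small decision rule. The function name and fit values below are illustrative, not output from the study, and we apply both cutoffs jointly as one reasonable reading of the "and/or" criterion:

```python
def supports_invariance(cfi_prev: float, cfi_curr: float,
                        rmsea_prev: float, rmsea_curr: float,
                        d_cfi: float = 0.010, d_rmsea: float = 0.015) -> bool:
    """Retain the more-constrained model if fit does not deteriorate by more
    than the conventional cutoffs (ΔCFI < .010 and ΔRMSEA < .015)."""
    cfi_drop = cfi_prev - cfi_curr        # deterioration = CFI going down
    rmsea_rise = rmsea_curr - rmsea_prev  # deterioration = RMSEA going up
    return cfi_drop < d_cfi and rmsea_rise < d_rmsea

# e.g., metric vs. configural model (hypothetical fit values)
supports_invariance(0.962, 0.958, 0.051, 0.055)  # → True (small deterioration)
supports_invariance(0.962, 0.945, 0.051, 0.055)  # → False (ΔCFI = .017)
```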

Figure 1. Focal analytical model. Numeric factor loadings for the LGM are presented. Direct effects (c′ paths) are shown in dashed lines. Indirect effects, comprising the respective a and b paths and the associated aj × bj effects, are shown in solid lines.

Our invariance analyses used individual-level data in order to achieve a balance between sample size and model complexity. However, to account for the nested nature of our data (i.e., individuals within teams), we used robust maximum likelihood estimation, implemented as Mplus' MLR estimator, in conjunction with the TYPE = COMPLEX specification to furnish model fit indices and standard errors that were robust to non-independence. Given the use of the MLR estimator, Δχ2 nested model comparisons were facilitated through Satorra and Bentler's scaled Δχ2 statistic.
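The Satorra–Bentler scaled difference test referenced above follows a standard two-step calculation from each model's robust chi-square, degrees of freedom, and scaling correction factor. A minimal sketch, with hypothetical fit values (Mplus reports these quantities, but the function itself is ours):

```python
def sb_scaled_chisq_diff(t0: float, df0: int, c0: float,
                         t1: float, df1: int, c1: float):
    """Satorra-Bentler scaled chi-square difference test.
    Model 0 = nested (more constrained) model; model 1 = comparison model.
    t = robust (MLR) chi-square, df = degrees of freedom, c = scaling factor."""
    cd = (df0 * c0 - df1 * c1) / (df0 - df1)  # scaling factor for the difference
    trd = (t0 * c0 - t1 * c1) / cd            # scaled difference statistic
    return trd, df0 - df1                     # refer trd to chi-square on df0-df1

# Hypothetical comparison: metric (nested) vs. configural model
trd, ddf = sb_scaled_chisq_diff(90.0, 50, 1.2, 70.0, 45, 1.1)  # ≈ 14.76 on 5 df
```

Note that with c0 = c1 = 1 the statistic reduces to the ordinary Δχ2, and in small samples the difference scaling factor cd can occasionally be negative, in which case the test is undefined.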

An additional wrinkle in estimating the longitudinal invariance models concerns the correct specification of the longitudinal null model, which is used in the derivation of the CFI. If the null model is incorrect, the CFIs used to judge invariance may also be biased and may result in erroneous inferences. As discussed by Widaman and Thompson, the correct longitudinal null model should specify zero covariances between any indicators (as in the typical null model), but equal variances and equal means for respective indicators across time points. As such, our use of the CFI was based on the corrected longitudinal null model.
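The dependence of the CFI on the null model can be made concrete with its standard definition. The fit values below are hypothetical; the point is only that a misspecified (better-fitting) null model mechanically lowers the CFI of the same substantive model:

```python
def cfi(chisq_model: float, df_model: int,
        chisq_null: float, df_null: int) -> float:
    """Comparative fit index of a model relative to a given null model:
    CFI = 1 - max(T_M - df_M, 0) / max(T_M - df_M, T_0 - df_0, 0)."""
    num = max(chisq_model - df_model, 0.0)
    den = max(chisq_model - df_model, chisq_null - df_null, 0.0)
    return 1.0 if den == 0 else 1.0 - num / den

# Same substantive model, two candidate null models (hypothetical values):
cfi(120.0, 80, 2000.0, 105)  # correct longitudinal null -> higher CFI
cfi(120.0, 80, 900.0, 90)    # overly lenient null -> lower CFI
```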

Then, using latent growth modeling and the aggregated potency scores, we examined the dynamics of group potency. First, we fit an unconditional model to estimate the mean of, and variability around, the latent intercept and slope of group potency. The latent growth model was specified in the typical fashion: the factor loadings for the latent intercept were all fixed at 1.00, and the factor loadings for the latent slope were fixed at 0, 1.00, and 2.00 for the respective measurements (i.e., Times 2, 3, and 4; see above). This parameterization of the slope follows from the equal time spacing between Times 2 and 3 and Times 3 and 4, as both reflected 3-month lags. We then incorporated team effectiveness as a simultaneous outcome of both the latent intercept and slope, along with the personality predictors, to assess the indirect effects. Using bias-corrected bootstrapping with 10,000 samples, indirect effects were deemed significant if their 95% confidence intervals (CIs) excluded zero. Notably, the personality predictors were formed by mean-aggregating each member's scores; because mean-aggregated personality is not a shared-unit property of a team, justifying aggregation (e.g., via ICCs) is not required.
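The bias-corrected bootstrap logic for an indirect effect can be illustrated with a simple observed-variable mediation (a × b via OLS paths) rather than the full latent growth model estimated in Mplus; the function, variable names, and simulated data below are ours, not the study's:

```python
import numpy as np
from statistics import NormalDist

def bc_bootstrap_ci(x, m, y, n_boot=10_000, level=0.95, seed=1):
    """Bias-corrected bootstrap CI for the indirect effect a*b in the simple
    mediation x -> m -> y (a = slope of m on x; b = slope of y on m given x)."""
    nd = NormalDist()
    rng = np.random.default_rng(seed)
    n = len(x)

    def ab(idx):
        xs, ms, ys = x[idx], m[idx], y[idx]
        a = np.polyfit(xs, ms, 1)[0]                   # a path
        X = np.column_stack([np.ones(len(idx)), xs, ms])
        b = np.linalg.lstsq(X, ys, rcond=None)[0][2]   # b path, controlling x
        return a * b

    est = ab(np.arange(n))
    boots = np.array([ab(rng.integers(0, n, n)) for _ in range(n_boot)])
    # Bias correction: shift percentiles by z0, the normal quantile of the
    # proportion of bootstrap estimates below the point estimate
    p = np.clip((boots < est).mean(), 1e-6, 1 - 1e-6)
    z0, zc = nd.inv_cdf(p), nd.inv_cdf((1 + level) / 2)
    lo, hi = nd.cdf(2 * z0 - zc), nd.cdf(2 * z0 + zc)
    return est, np.quantile(boots, [lo, hi])

# Simulated example with a true indirect effect of 0.5 * 0.5 = 0.25
rng = np.random.default_rng(0)
x = rng.normal(size=200)
m_scores = 0.5 * x + rng.normal(scale=0.5, size=200)
y = 0.5 * m_scores + rng.normal(scale=0.5, size=200)
est, ci = bc_bootstrap_ci(x, m_scores, y, n_boot=2000)  # CI excluding 0 -> significant
```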

Finally, we used Lang et al.'s multilevel procedure to examine the consensus emergence of group potency. This allowed us to assess the emergence of the group-level potency construct from the sharedness, or, more specifically, the increasing degree of sharedness, of individual members' ratings over time.
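The descriptive signature that consensus emergence modeling formalizes, within-team dispersion in ratings shrinking across measurement occasions, can be illustrated with simulated data. This is a crude descriptive analogue, not Lang et al.'s mixed-effects procedure (which models the level-1 residual variance as a function of time), and all quantities below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n_teams, n_members = 30, 5
team_means = rng.normal(3.5, 0.4, size=n_teams)     # team-level potency levels
within_sd = {2: 0.8, 3: 0.6, 4: 0.4}                # shrinking disagreement, Times 2-4

# Mean within-team SD of members' ratings at each time point: a decline
# over time is the descriptive footprint of consensus emergence
mean_sd_by_time = {}
for t, sd in within_sd.items():
    ratings = team_means[:, None] + rng.normal(0, sd, size=(n_teams, n_members))
    mean_sd_by_time[t] = ratings.std(axis=1, ddof=1).mean()
```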