Mathematically, factor analysis is somewhat similar to multiple regression analysis, in that each variable is expressed as a linear combination of underlying factors. The amount of variance a variable shares with all the other variables included in the analysis is referred to as communality. The covariation among the variables is described in terms of a small number of common factors plus a unique factor for each variable. These factors are not directly observed. If the variables are standardized, the factor model may be represented as:

Xi = Ai1F1 + Ai2F2 + Ai3F3 + . . . + AimFm + ViUi

where

Xi = ith standardized variable

Aij = standardized multiple regression coefficient of variable i on common factor j

Fj = common factor j

Vi = standardized regression coefficient of variable i on unique factor i

Ui = the unique factor for variable i

m = number of common factors

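As a concrete illustration (not from the text), the factor model above can be simulated. The loadings Aij, the number of variables, and the sample size below are all hypothetical choices; the unique-factor coefficients Vi are set so that each Xi has unit variance, as required of standardized variables:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 1000, 2                      # observations, common factors

# Hypothetical loadings Aij (4 variables x m factors); Vi is chosen
# so that sum_j Aij^2 + Vi^2 = 1, giving each Xi unit variance.
A = np.array([[0.8, 0.1],
              [0.7, 0.2],
              [0.1, 0.9],
              [0.2, 0.8]])
V = np.sqrt(1.0 - (A ** 2).sum(axis=1))

F = rng.standard_normal((n, m))     # common factors, mutually uncorrelated
U = rng.standard_normal((n, 4))     # one unique factor per variable

# Xi = Ai1 F1 + Ai2 F2 + ... + Aim Fm + Vi Ui
X = F @ A.T + U * V

print(X.var(axis=0, ddof=1))        # each close to 1 (standardized variables)
print((A ** 2).sum(axis=1))         # communalities: variance shared via common factors
```

The second printed vector shows the communality of each variable, the portion of its unit variance accounted for by the common factors (sum of squared loadings); the remainder, Vi squared, is the unique variance.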
The unique factors are uncorrelated with each other and with the common factors. The common factors themselves can be expressed as linear combinations of the observed variables.

Fi = Wi1X1 + Wi2X2 + Wi3X3 + . . . + WikXk

where

Fi = estimate of ith factor

Wi = weight or factor score coefficient

k = number of variables

It is possible to select weights or factor score coefficients so that the first factor explains the largest portion of the total variance. Then a second set of weights can be selected, so that the second factor accounts for most of the residual variance, subject to being uncorrelated with the first factor. This same principle can be applied in selecting additional weights for the additional factors. Thus, the factors can be estimated so that their factor scores, unlike the values of the original variables, are not correlated. Furthermore, the first factor accounts for the highest variance in the data, the second factor the second highest, and so on. A simplified graphical illustration of factor analysis in the case of two variables is presented in Figure 19.2. Several statistics are associated with factor analysis.
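The sequential choice of weights described above is what a principal-component extraction does: each set of weights is an eigenvector of the correlation matrix, taken in decreasing order of eigenvalue. A minimal sketch, using simulated data (the four variables and all numbers below are assumptions for illustration, not from the text):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Simulated correlated data for four variables, then standardized
z1, z2 = rng.standard_normal(n), rng.standard_normal(n)
X = np.column_stack([z1,
                     z1 + 0.5 * rng.standard_normal(n),
                     z2,
                     z2 + 0.5 * rng.standard_normal(n)])
X = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

R = np.corrcoef(X, rowvar=False)           # correlation matrix
eigvals, eigvecs = np.linalg.eigh(R)       # eigh returns ascending order
order = np.argsort(eigvals)[::-1]          # reorder by variance explained
eigvals, W = eigvals[order], eigvecs[:, order]

scores = X @ W                             # Fi = Wi1 X1 + Wi2 X2 + ... + Wik Xk
print(scores.var(axis=0, ddof=1))          # decreasing; equal to the eigenvalues
print(np.round(np.corrcoef(scores, rowvar=False), 2))  # identity: scores uncorrelated
```

The two printed results illustrate the two properties claimed in the text: the variance of each factor's scores equals its eigenvalue, so the first factor accounts for the most variance and each subsequent factor less, and the factor scores are mutually uncorrelated.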