## Chi-Square Test

The chi-square (χ²) test was developed by Karl Pearson in 1900, an event often regarded as one of the most important breakthroughs in the history of statistics. The test and the statistical distribution on which it is based have a wide variety of applications in psychological research. Its two principal uses are to test the independence of two variables and to assess how well a theoretical model or set of a priori probabilities fits a set of data. In both cases the chi-square test is typically thought of as a nonparametric procedure involving observed (O) and expected (E) frequencies. The expected frequencies may be determined either theoretically or empirically. The basic formula for calculating the chi-square statistic is χ² = Σ (O − E)²/E, where the sum is taken over all categories.
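As a sketch, the formula χ² = Σ (O − E)²/E translates directly into code (pure Python; the function name and data are illustrative, not from the source):

```python
def chi_square(observed, expected):
    """Chi-square statistic: sum of (O - E)^2 / E over all categories."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Illustrative goodness-of-fit data: a six-sided die rolled 60 times,
# with a theoretical expectation of 10 observations per face.
observed = [8, 12, 9, 11, 14, 6]
expected = [10] * 6
print(chi_square(observed, expected))  # 4.2
```

The resulting statistic is compared against the χ² distribution with (number of categories − 1) degrees of freedom.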

The χ² test is commonly applied to a wide variety of designs, including k × 1 groups, 2 × k groups, 2 × 2 contingency tables, and R × C contingency tables. It is most appropriately used with nominal-level (categorical) data but is frequently used with ordinal-level data as well. The χ² statistic is related to several measures of association, including the phi coefficient (φ), the contingency coefficient (C), and Cramér's phi (φ′ or φC). φ² = χ²/N is frequently used as a measure of practical significance or effect size for 2 × 2 tables.
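A minimal sketch of the independence test for a 2 × 2 table, with expected frequencies computed from the margins (E = row total × column total / N) and φ² = χ²/N as the effect size; the function name and data are hypothetical:

```python
def chi2_independence(table):
    """Chi-square test of independence for a 2 x 2 table [[a, b], [c, d]].

    Expected frequencies come from the margins: E_ij = row_i * col_j / N.
    Returns (chi2, phi_squared), where phi^2 = chi2 / N.
    """
    (a, b), (c, d) = table
    n = a + b + c + d
    rows = (a + b, c + d)
    cols = (a + c, b + d)
    chi2 = 0.0
    for i, o in enumerate((a, b, c, d)):
        e = rows[i // 2] * cols[i % 2] / n
        chi2 += (o - e) ** 2 / e
    return chi2, chi2 / n

# Hypothetical data: treatment outcome (improved / not) by group.
chi2, phi2 = chi2_independence([[30, 10], [20, 40]])
print(round(chi2, 2), round(phi2, 3))  # 16.67 0.167
```

With 1 degree of freedom ((R − 1)(C − 1) for an R × C table), a χ² of 16.67 far exceeds the .05 critical value of 3.84, and φ² ≈ .17 gives a scale-free index of the strength of the association.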

Historically, there has been concern over the use of the chi-square test when any E was small (e.g., E < 5 to 10), because the underlying χ² distribution is continuous whereas the distribution of observations is discrete. For 2 × 2 tables this led to the development of the widely used and recommended Yates' correction for continuity. However, more recent evidence suggests that Yates' correction is unnecessary even with very small E.
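Yates' correction subtracts 0.5 from each |O − E| before squaring, which shrinks the statistic. A sketch for a 2 × 2 table with small expected frequencies (function name and data are hypothetical):

```python
def chi2_2x2(table, correction=False):
    """Chi-square for a 2 x 2 table [[a, b], [c, d]], optionally applying
    Yates' continuity correction (subtract 0.5 from |O - E| before squaring)."""
    (a, b), (c, d) = table
    n = a + b + c + d
    rows = (a + b, c + d)
    cols = (a + c, b + d)
    chi2 = 0.0
    for i, o in enumerate((a, b, c, d)):
        e = rows[i // 2] * cols[i % 2] / n
        diff = abs(o - e)
        if correction:
            diff = max(diff - 0.5, 0.0)
        chi2 += diff ** 2 / e
    return chi2

table = [[7, 3], [2, 8]]  # expected frequencies are 4.5 and 5.5 per cell
uncorrected = chi2_2x2(table)
corrected = chi2_2x2(table, correction=True)
print(corrected < uncorrected)  # True: the correction is conservative
```

The corrected statistic is always less than or equal to the uncorrected one, so the correction makes the test more conservative, which is the behavior critics of the correction point to.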

The χ² distribution is related to the normal distribution, such that the square of a standard normal deviate (z²) is distributed as χ² with one degree of freedom. The chi-square distribution also describes the sampling distribution of the variance, s², such that χ² = (N − 1)s²/σ² with N − 1 degrees of freedom. These relationships form the basis for many tests of statistical significance. For example, the analysis of variance F statistic may be thought of as the ratio of two χ² statistics. The χ² statistic is also used in many multivariate statistical tests and in calculating multinomial probabilities, especially for log-linear models. Multivariate statistics that use both generalized least squares and maximum likelihood procedures also rely on the χ² statistic. For example, in structural equation modeling, the χ² statistic forms the basis for many goodness-of-fit tests. In the 1930s, Fisher developed a procedure using the χ² test to combine the results of several independent tests of the same hypothesis, an early version of meta-analysis.
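Fisher's combining procedure sums −2 ln p over k independent p-values; under the joint null hypothesis this sum follows a χ² distribution with 2k degrees of freedom. A sketch in pure Python, using the closed-form χ² survival function available when df is even (the function name and p-values are illustrative):

```python
import math

def fisher_combine(p_values):
    """Fisher's method: chi2 = -2 * sum(ln p_i), df = 2k.

    Returns (chi2, combined_p). For even df = 2k the chi-square survival
    function has the closed form:
        P(X > x) = exp(-x/2) * sum_{i=0}^{k-1} (x/2)^i / i!
    """
    chi2 = -2.0 * sum(math.log(p) for p in p_values)
    k = len(p_values)
    half = chi2 / 2.0
    term, total = 1.0, 0.0
    for i in range(k):
        total += term
        term *= half / (i + 1)
    return chi2, math.exp(-half) * total

# Three independent tests of the same hypothesis, none significant alone.
chi2, combined_p = fisher_combine([0.08, 0.12, 0.20])
```

A useful sanity check on the closed form: combining a single p-value returns that p-value unchanged, since P(−2 ln p' > −2 ln p) = p for a χ² with 2 degrees of freedom.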

Joseph S. Rossi

University of Rhode Island