A form of regression analysis that is specifically tailored to the situation in which the dependent variable is dichotomous (or binary). For example, among a sample of people under investigation, a researcher might be interested in what factors are associated with the likelihood of someone being employed rather than unemployed, receiving university education or not, or voting Republican rather than Democrat. However, so-called multinomial logistic regression is increasingly common. In such analyses, the possible causal effects of independent variables on a categorical dependent variable having three (rarely more than three) outcome categories are assessed via comparison of a series of dichotomous outcomes: for example, the probability of someone identifying themselves as lower class rather than middle class or upper class (taken together), as compared with the probability of someone claiming an upper-class rather than a middle-class or lower-class identification (again with these last two taken together).
The results of logistic regression models can be expressed in the form of odds ratios, telling us by what factor the odds of being unemployed, receiving university education, voting Republican (or whatever) are multiplied, given a unit change in any other given variable, but holding all other variables in the analysis constant. More simply, the results (as measured by the changed odds on being found in a particular category) tell us how much a hypothesized cause has affected this outcome, taking the role of all other hypothesized causes into account.
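The arithmetic of this interpretation can be sketched in a few lines. The coefficient value below is purely illustrative, not drawn from any real study: a fitted beta is exponentiated to give the odds ratio, and a baseline probability is converted to odds, multiplied, and converted back to show that the effect on the probability itself is not constant.

```python
import math

# Hypothetical fitted coefficient (beta) for years of education in a model
# predicting employment (1) versus unemployment (0); the value is illustrative.
beta_education = 0.25

# The odds ratio is exp(beta): each one-unit increase in the predictor
# multiplies the odds of the outcome by this factor, other variables held constant.
odds_ratio = math.exp(beta_education)

# Converting a baseline probability to odds, applying the ratio, and
# converting back shows the effect on the probability itself.
p_baseline = 0.50
odds_baseline = p_baseline / (1 - p_baseline)
odds_new = odds_baseline * odds_ratio
p_new = odds_new / (1 + odds_new)

print(round(odds_ratio, 3))  # 1.284
print(round(p_new, 3))       # 0.562
```

Note that the same odds ratio implies a different change in probability at different baselines, which is why results are usually reported on the odds scale.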
Most published accounts of research using this particular technique report three statistics for the models. The first of these is the beta (the parameter estimate, or regression coefficient), which is, crudely speaking, a measure of the size of the effect that an independent variable (let us say social class) has on a dependent variable (for example the probability of being found in employment rather than among the unemployed), after the effects of another variable (such as educational attainment) have been taken into account. The standard error provides us with a means of judging the accuracy of our predictions about the effect in question. One rule of thumb is that the beta should be at least twice the size of the standard error. Finally, many investigators include the odds ratios themselves, since these tend to make the relative probabilities being described in the model intuitively easier to grasp.
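The three reported statistics, and the rule of thumb, can be illustrated with hypothetical numbers (not taken from any real study). The rule that the beta be at least twice its standard error is equivalent to requiring a Wald z-statistic of roughly 2 or more, close to the conventional 5% significance threshold for a two-tailed test.

```python
import math

# Illustrative reported statistics for one independent variable (say, social
# class) in a logistic regression; the numbers are hypothetical.
beta = 0.60            # parameter estimate
standard_error = 0.20  # standard error of the estimate

# Rule of thumb: the beta should be at least twice its standard error,
# i.e. the Wald z-statistic (beta / SE) should be at least about 2.
z = beta / standard_error
passes_rule_of_thumb = abs(beta) >= 2 * standard_error

# The odds ratio, often reported alongside beta and its standard error.
odds_ratio = math.exp(beta)

print(round(z, 2), passes_rule_of_thumb, round(odds_ratio, 3))
# 3.0 True 1.822
```

Here the effect would pass the rule of thumb, and the odds ratio of about 1.8 says a one-unit increase in the predictor multiplies the odds of the outcome by roughly 1.8, other variables held constant.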
For a short introduction to the technique see Anthony Walsh, Statistics for the Social Sciences (1990). A more advanced discussion will be found in J. Aldrich and F. Nelson, Linear Probability, Logit, and Probit Models (1984).