Every social science researcher needs to be able to use and understand numbers in research
By Stephen Gorard
As I write this blog, the UK lockdown remains in place, with vaccinations proceeding at different rates across the world. The news is awash with discussions about the effectiveness and reported side effects of specific vaccines, and about ongoing patterns of infections, hospitalisations and fatalities. We routinely use numbers, from catching a train to buying a house, and we read news reports on elections and polls, currency conversion rates and shares, sports scores and weather forecasts. Numbers are needed for all research, because all researchers have to read and judge the trustworthiness of the research of others, including research that involves numbers.
Dealing with numbers in research is easier than commentators usually portray
The most technical part of understanding numbers in social science research is based on significance tests, confidence intervals, and power calculations. Many people find such statistics incomprehensible. All of these calculations are based on the cases being completely randomised – meaning cases are selected purely by chance, and with no non-response or missing values. This never happens in real-life large-scale research. You cannot adjust a sample to make it random, by definition. This means that significance tests are irrelevant in any research. Where they are used, you can generally ignore them, and look at the underlying results instead. This simplifies reading research for most people.
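To see what a significance test actually assumes, here is a minimal sketch of a permutation-style test in Python (the scores are invented for illustration, not data from any real study). The p-value it produces answers one narrow question: how often would pure chance produce a difference this large, if cases really had been allocated at random? If the cases were never randomised, that number answers no meaningful question.

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

# Invented test scores for two groups (purely illustrative)
group_a = [72, 75, 68, 80, 77, 74, 71, 79]
group_b = [70, 69, 73, 66, 72, 68, 71, 67]

def mean(xs):
    return sum(xs) / len(xs)

# The observed difference in mean scores between the groups
observed = mean(group_a) - mean(group_b)

# Simulate the "pure chance" world the test assumes: repeatedly
# re-shuffle all the scores into two random groups, and count how
# often chance alone produces a difference at least as large.
pooled = group_a + group_b
trials = 10_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    a, b = pooled[:len(group_a)], pooled[len(group_a):]
    if abs(mean(a) - mean(b)) >= abs(observed):
        extreme += 1

p_value = extreme / trials
print(round(p_value, 3))
```

Note that every step of the simulation rests on the shuffle, i.e. on genuinely random allocation with no missing cases; nothing in the arithmetic can repair a sample that was never random in the first place.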
True statistical analysis is more fascinating than commentators usually portray
What makes working with numbers so fascinating is that the figures themselves, the graphs and tables, are not the real research findings. True analysis is really about judging their meaning. An early step is to judge how much we can trust the findings, through consideration of the study design and how well it fits the objectives of our research.
For example, a simple descriptive study could address a question about how many cases had a certain characteristic but could not say whether that characteristic was a risk factor for a specific outcome. A longitudinal study, collecting data from the same cases repeatedly, can answer questions about risk factors for a specific outcome, but cannot address causal questions directly. A randomised experiment, where cases were allocated to one of two treatments, and only one group had the risk factor administered, could address a causal question about its impact.
Judging the trustworthiness of a study also involves looking at its scale (larger is better, all other things being equal), the number of missing values, the quality of any measurements, and a range of other factors. None of these judgements is technical, although they may involve simple calculations such as how much the data would have to vary for the substantive findings to alter. All of these issues and more are discussed in my new book How to Make Sense of Statistics.
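As an illustration of the kind of simple calculation meant here, the sketch below asks roughly how many cases' results would have to be different (counterfactual) before a headline difference between two groups disappeared. This is only a sketch in the spirit of that idea, using invented scores; the book's own worked approach may differ in detail.

```python
# Rough sensitivity check: how fragile is a difference between two groups?

def mean(xs):
    return sum(xs) / len(xs)

def pooled_sd(a, b):
    # standard deviation of all the scores pooled together
    scores = a + b
    m = mean(scores)
    return (sum((x - m) ** 2 for x in scores) / len(scores)) ** 0.5

def counterfactual_cases(a, b):
    # Express the gap between the groups in standard-deviation units,
    # then scale by the smaller group's size to get a rough count of
    # cases whose results would have to change to erase the gap.
    effect_size = abs(mean(a) - mean(b)) / pooled_sd(a, b)
    return effect_size * min(len(a), len(b))

group_a = [72, 75, 68, 80, 77, 74, 71, 79]  # invented scores
group_b = [70, 69, 73, 66, 72, 68, 71, 67]
print(round(counterfactual_cases(group_a, group_b), 1))
```

A small answer means the finding is fragile (a handful of unusual cases or missing values could overturn it); a large answer means the substantive conclusion is harder to disturb. No significance test is involved, only arithmetic and judgement.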
What does it mean?
Only once we are sufficiently confident in the trustworthiness of a piece of research should we consider whether its findings could be generalised to other cases not involved in the study. Conflating the issues of trustworthiness and generality, and emphasising the latter, is a widespread error. Again, the judgement about generality is not technical, and it cannot come just from the data in the study.
A later step is to offer implications – what do the findings mean? Here the emphasis should be on the argument leading from the tables and graphs to the conclusions – a process omitted in much research. Even if the findings are trustworthy, the conclusions can be wrong if the assumptions in the logical argument leading to them are wrong. This step is more like simple philosophy than mathematics. For example, imagine a trustworthy comparative study showing that Group A scored higher in a test than Group B. We might conclude that resources should therefore be spent on an intervention for Group B, to improve their test scores. But this argument assumes that test scores matter, and that we should want both groups to score the same. There is a step missing linking the findings to the conclusion. Always consider:
If this conclusion were not true, how else might we explain the research findings?
Working with numbers is a skilled task, but it is not technically difficult.
Book details
How to Make Sense of Statistics
Stephen Gorard
February 2021
ISBN: 9781526413826
About the author