Reporting of Results
The way results are reported also affects how a study is interpreted, so it is important to recognize how study findings are presented and what they mean in relation to the “big picture.”
Many studies report relative risk, which indicates how likely the outcome is in one group compared with another.3 Relative risk does not indicate how likely an outcome is to occur in absolute terms, or how much higher or lower the actual risk is between the groups, yet it is often misinterpreted as absolute risk. Relative risk is often reported as a hazard ratio or incidence rate ratio, and a ratio greater than 1.0 indicates a higher risk in the experimental group than in the control group. Although relative risk is a valuable way to report the findings of a study, focusing solely on relative risk estimates may overemphasize the impact of a trial’s results.
Absolute risk, reported as the attributable risk or risk difference, is the difference between the risk in the exposed group and the risk in the unexposed group.3 This number provides a better sense of how the exposure may affect individuals overall.
For example, there were reports that infants in Japan exposed to radiation during the Fukushima Daiichi nuclear disaster had a 70% increase in the risk of thyroid cancer.3 The 70% value is a relative risk estimate, based on data showing that 1.25% of exposed infants developed thyroid cancer compared with the natural rate of 0.75%. The absolute risk difference, however, is 0.50%, meaning that an exposed individual is only 0.5 percentage points more likely to develop thyroid cancer than an unexposed individual. The absolute risk takes into consideration the baseline rate of thyroid cancer, which is low.
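The arithmetic behind this example can be sketched in a few lines; the rates are those given in the text, and the variable names are illustrative:

```python
# Relative vs absolute risk for the Fukushima example described above.
exposed_rate = 0.0125    # 1.25% of exposed infants developed thyroid cancer
baseline_rate = 0.0075   # 0.75% natural (unexposed) rate

# Relative risk: ratio of the two rates (~1.67, reported as a "70% increase")
relative_risk = exposed_rate / baseline_rate
percent_increase = (relative_risk - 1) * 100

# Absolute risk difference: how much the exposure changes an individual's risk
absolute_risk_difference = exposed_rate - baseline_rate  # 0.005, i.e. 0.50%

print(f"Relative risk: {relative_risk:.2f}")
print(f"Relative increase: {percent_increase:.0f}%")
print(f"Absolute risk difference: {absolute_risk_difference:.2%}")
```

The same pair of rates thus yields a dramatic-sounding relative figure and a modest absolute one, which is why reporting both is recommended.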
Several organizations now recommend that studies report both relative and absolute risk.3
Statistics help to summarize and interpret study results by indicating whether an outcome was likely due to chance.4 Studies are frequently misinterpreted, even by the researchers who conducted them, and the literature has an established high rate of false-positive conclusions, regardless of study design.
One reason for this is the misuse and misinterpretation of the P value.4 The P value is calculated under the assumption that the null hypothesis (that the exposure is not associated with the outcome) is true. The P value therefore represents the probability of obtaining the observed result, or a more extreme one, if the null hypothesis were true. This contrasts with what most people think the P value represents: that a P value of .05 or less means the observed results are unlikely to be due to chance. Using the P value in this manner overestimates positive results.
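A short simulation can make this concrete. The scenario and numbers below are illustrative assumptions (not from the cited paper): two groups drawn from the same distribution, so the null hypothesis is true by construction, yet roughly 5% of comparisons still come out “significant” at P < .05 by chance alone.

```python
import math
import random

def two_proportion_p_value(x1, n1, x2, n2):
    """Two-sided P value for a two-proportion z-test (normal approximation)."""
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    if se == 0:
        return 1.0
    z = (x1 / n1 - x2 / n2) / se
    # two-sided tail probability from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(0)
n, true_rate, trials = 500, 0.10, 2000
false_positives = 0
for _ in range(trials):
    # both groups share the SAME event rate: any "effect" is pure chance
    x1 = sum(random.random() < true_rate for _ in range(n))
    x2 = sum(random.random() < true_rate for _ in range(n))
    if two_proportion_p_value(x1, n, x2, n) < 0.05:
        false_positives += 1

print(f"False-positive rate under the null: {false_positives / trials:.1%}")  # about 5%
```

In other words, a P value below .05 does not by itself show that a result is real; it is a statement about how often such data would arise if there were no association at all.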
Confidence intervals can improve the interpretation of the results and should be reported in addition to the P value.4 The confidence interval provides the plausible or likely range of the effect size, while also indicating how precise the estimate is. If a reported value, such as a relative risk or hazard ratio, has a confidence interval that crosses 1.0, the association is not statistically significant; for example, consumption of food X is likely not associated with cancer.
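As a sketch of how such an interval is obtained and checked against 1.0, the function below computes a standard 95% Wald confidence interval for a relative risk on the log scale; the study counts are hypothetical:

```python
import math

def relative_risk_ci(x1, n1, x2, n2, z=1.96):
    """Point estimate and 95% Wald CI for a relative risk (log scale)."""
    rr = (x1 / n1) / (x2 / n2)
    se_log = math.sqrt(1 / x1 - 1 / n1 + 1 / x2 - 1 / n2)
    lower = math.exp(math.log(rr) - z * se_log)
    upper = math.exp(math.log(rr) + z * se_log)
    return rr, lower, upper

# hypothetical study: 30/1000 cases among the exposed, 20/1000 among the unexposed
rr, lower, upper = relative_risk_ci(30, 1000, 20, 1000)
print(f"RR = {rr:.2f}, 95% CI {lower:.2f}-{upper:.2f}")
if lower <= 1.0 <= upper:
    print("CI crosses 1.0: no statistically significant association")
```

Here the point estimate (RR = 1.50) sounds notable, but the interval spans 1.0, so these hypothetical data would not support a real association; the width of the interval also conveys how imprecise the estimate is.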
When reading reports of associations between dietary intakes or patterns and cancer risk, it is important to consider factors such as study design characteristics, how the results are reported, and which statistics are provided. All of these factors help in interpreting the true meaning of a study’s results and their clinical significance. For patients, absolute risk is likely a more relevant representation of their true risk related to an exposure, such as a particular food or dietary pattern. Noting the confidence interval of any reported risk can help determine whether a finding is statistically significant.
1. Tarasuk VS, Brooker A-S. Interpreting epidemiologic studies of diet-disease relationships. J Nutr. 1997;127(9):1847-1852.
2. Faber J, Fonseca LM. How sample size influences research outcomes. Dental Press J Orthod. 2014;19(4):27-29.
3. Noordzij M, van Diepen M, Caskey FC, Jager KJ. Relative risk versus absolute risk: one cannot be interpreted without the other. Nephrol Dial Transplant. 2017;32(suppl_2):ii13-ii18.
4. Mellis C. Lies, damned lies and statistics: clinical importance versus statistical significance in research. Paediatr Respir Rev. 2018;25:88-93.