Peer review involves independent ‘peers’ critically evaluating a piece of research to decide whether it is suitable for publication. This process helps to protect the quality of research publications and prevents falsified or unreliable work from being published.
Randomisation means allocating study participants to different groups by chance. It is used to prevent selection bias. Computer-generated randomisation is one acceptable randomisation method.
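Computer-generated randomisation can be sketched in a few lines. The function and participant IDs below are illustrative only, not a production allocation system (real trials typically use dedicated, audited randomisation software).

```python
import random

def randomise(participants, seed=None):
    """Randomly allocate participants to two equal-sized groups.

    A hypothetical sketch: shuffles the list with a seeded RNG so the
    allocation is reproducible for this example.
    """
    rng = random.Random(seed)
    shuffled = participants[:]   # copy so the original order is untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"intervention": shuffled[:half], "control": shuffled[half:]}

# Hypothetical participant IDs
groups = randomise(["P01", "P02", "P03", "P04", "P05", "P06"], seed=42)
```

Because the allocation depends only on the random shuffle, no characteristic of a participant can influence which group they end up in, which is what prevents selection bias.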
The control group is the standard against which the intervention group is compared. The intervention being tested should be the only difference between the way the control and intervention groups are treated and analysed. The control group should be concurrent: it should be observed and analysed at the same time as the intervention group. Historical controls are not acceptable.
Bias is a systematic deviation from the ‘true’ result due to the way in which the study was conducted, leading to the research finding(s) being over- or under-estimated. The two most common types of bias are sampling bias (the collected sample is systematically different from the target population) and measurement bias (the measurement process systematically over- or under-estimates the true result).
Power is the probability that a study will detect a difference if one truly exists. A power calculation determines the minimum number of participants needed for the study to show whether an intervention makes a significant difference or not. For intervention studies, the power should be at least 0.80 or 80 per cent. This means that the study is likely to detect a difference, if a difference really exists, eight times out of ten.
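A common normal-approximation formula for the sample size per group in a two-group comparison of means is n = 2((z₁₋α/₂ + z₁₋β)/d)², where d is the standardised effect size. The sketch below assumes this particular formula; other designs and outcomes use different calculations.

```python
import math
from statistics import NormalDist

def sample_size_per_group(effect_size, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for comparing two means.

    effect_size is the standardised difference (difference / SD).
    A sketch of one common formula, not a substitute for a statistician.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)           # desired power
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A medium standardised effect of 0.5 at 80% power and alpha = 0.05
n = sample_size_per_group(effect_size=0.5)  # → 63 per group
```

Note how the required sample size grows rapidly as the effect size shrinks: halving the effect size quadruples the number of participants needed.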
Confounding occurs when the measure of the outcome is distorted because the exposure is associated with other factor(s) that also influence the study outcome. For example, in a study of how activity level contributes to weight gain, age would be a confounder that needs to be considered in the analysis, because activity level is likely to vary with age and age itself influences weight gain.
This is the likelihood that the study results have not occurred by chance. Statistical significance gives you confidence in the results of the study. P-values are a common way to demonstrate statistical significance. In health studies, a p-value of less than 0.05 is conventionally used as the threshold for significance, which means the likelihood of the result being due to chance is less than 5 per cent.
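For a test statistic that follows a standard normal distribution under the null hypothesis, the two-sided p-value can be computed directly. The z-value below is a made-up example, assuming a simple z-test.

```python
from statistics import NormalDist

def two_sided_p(z):
    """Two-sided p-value for a standard-normal test statistic z:
    the probability of a result at least this extreme by chance alone."""
    return 2 * (1 - NormalDist().cdf(abs(z)))

p = two_sided_p(2.1)        # hypothetical test statistic → p ≈ 0.036
significant = p < 0.05      # below the conventional 0.05 threshold
```

A result just over the threshold (say p = 0.06) is not meaningfully different from one just under it; the 0.05 cut-off is a convention, not a law of nature.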
Precision refers to how close repeated measurements are to each other. Results are typically reported with a 95 per cent confidence interval, which means that the reader can be 95 per cent sure that the true value lies between the upper and lower boundaries. The smaller the interval between the upper and lower boundary, the more precise the result.
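A 95 per cent confidence interval for a mean can be sketched with the normal approximation. The data below are invented for illustration; for small samples a t-distribution would be more appropriate than the normal used here.

```python
import math
from statistics import NormalDist, mean, stdev

def confidence_interval(data, level=0.95):
    """Normal-approximation confidence interval for the mean.

    A sketch: mean ± z × standard error, where z is the normal
    quantile for the chosen confidence level.
    """
    z = NormalDist().inv_cdf(0.5 + level / 2)      # 1.96 for 95%
    m = mean(data)
    se = stdev(data) / math.sqrt(len(data))         # standard error
    return (m - z * se, m + z * se)

# Hypothetical repeated measurements
lo, hi = confidence_interval([5.1, 4.9, 5.0, 5.2, 4.8, 5.0])
```

Collecting more measurements shrinks the standard error, which narrows the interval, making the estimate more precise.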
A systematic review uses an a priori research question and a standardised method to select and review all relevant studies on a particular topic with a view to eliminating bias in critical appraisal and synthesis.
A research question that is generated before a research study takes place.
A statistical technique to pool results from more than one independent clinical trial to achieve more stable and precise conclusions. To use this method, each study needs to have used the same intervention and measured the same outcome in a study population with similar characteristics.
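The simplest pooling method is fixed-effect inverse-variance weighting: each study's estimate is weighted by the inverse of its variance, so more precise studies count for more. The study estimates below are invented for illustration, and real meta-analyses would also assess heterogeneity between studies.

```python
import math

def fixed_effect_pool(estimates, variances):
    """Fixed-effect inverse-variance meta-analysis.

    Weights each study estimate by 1/variance and returns the pooled
    estimate and its standard error. A sketch of the basic method only.
    """
    weights = [1 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    se = math.sqrt(1 / sum(weights))    # pooled SE is smaller than any single study's
    return pooled, se

# Hypothetical effect estimates and variances from three trials
pooled, se = fixed_effect_pool([0.4, 0.6, 0.5], [0.04, 0.09, 0.01])
```

The pooled standard error is smaller than that of any single study, which is why meta-analysis yields more stable and precise conclusions than the individual trials alone.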
An experimental design used to study the effect of a particular intervention: study participants are randomised to receive the intervention of interest or not.
Groups of people are selected on the basis of an exposure and followed up for specified outcomes.
Cases (individuals with the condition) are matched with controls who do not have the condition, and a retrospective analysis is conducted to identify differences between the groups.