
Factors for assessing if evidence is acceptable

To avoid misleading the public, claims made in advertising about regulated health services must be supported by acceptable evidence. The Australian Health Practitioner Regulation Agency (Ahpra) and National Boards’ approach to assessing evidence to support claims is consistent with the wider scientific and academic community.

In general, evidence is assessed as ‘acceptable’ where a body of evidence rates highly against the six factors highlighted below. These factors should be considered when assessing advertising claims about the benefits or effectiveness of a health service.

Is the evidence from a reliable source?

  • Your evidence must be publicly available in an English language source.
  • Peer-reviewed journal publications and evidence-based clinical practice guidelines can be a reliable source of evidence to support your advertising claim. Any findings that have not undergone a peer-review process are unlikely to be acceptable.
  • Are there any conflicts of interest that may invalidate the research findings? For example: Who funded the study? Have any of the authors declared a conflict of interest?

Note: Peer-review is a method used to screen the quality and reliability of research evidence. If your evidence is not peer-reviewed, it is unlikely that you have acceptable evidence to support your claim.

Is the research question stated in the study directly relevant to your advertising claim?

If the research question is not clearly stated, the evidence is at risk of being poor quality and unacceptable. Also, if the research question is not directly applicable to your advertising claim, you cannot reasonably use the study as evidence.

Can the findings be applied to the patient population targeted by your advertising claim?

  • Animal studies cannot be used to support advertising claims for treatment of humans.
  • Look at the inclusion and exclusion criteria for the study. Is the population you want to target with your advertisement similar to, or different from, the participants included in the study or studies? Also consider the differences between the study setting and your setting and whether these could influence the outcome. The evidence is likely to be unacceptable if it is not specific to the population and/or location of your advertising claim.
  • Are the study findings relevant to current practices? Is your evidence up to date?

Note: It is important that the evidence can be directly translated to the patient group you are targeting in your advertising claim. If the participants in the study are different to your patient population, it may not be possible to generalise the research findings to your advertising claim.

Have the relevant sources of evidence been identified and considered equally? Is it possible that any important evidence was missed?

If you only select evidence that supports your advertising claim, your claim is at risk of reporting bias. This is not acceptable evidence selection.

Did more than one study show the same thing?

  • Is your evidence from independent studies? You cannot use evidence from the same study twice. For example, the individual papers used within a systematic review cannot be included separately as additional evidence for your advertising claim.
  • Does all the evidence come from one research group? The most reliable evidence comes from several different and independent research groups with the same conclusions.
  • Have the results been tested over a wide range of conditions/scenarios? This is to ensure the results are reliable and robust.
  • Can any variations in the results between studies be explained?

Note: If many well-conducted and independent studies support your claim, you can be more confident in the evidence. However, if there are some studies that contradict your claim, you need to acknowledge and carefully consider these before you can make your claim.
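As a purely illustrative sketch (the effect estimates below are invented, not drawn from this guidance), reviewers often check whether independent studies 'show the same thing' by pooling their results and quantifying how much they vary beyond chance, for example with Cochran's Q statistic and I².

    # Hypothetical effect estimates (e.g. mean differences) and standard
    # errors from four independent studies; all values are invented.
    effects = [0.42, 0.35, 0.50, 0.12]
    std_errors = [0.10, 0.12, 0.15, 0.11]

    # Fixed-effect (inverse-variance) pooled estimate.
    weights = [1 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)

    # Cochran's Q and I^2: do the studies disagree more than chance allows?
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    i_squared = max(0.0, (q - df) / q) * 100

    print(f"Pooled effect: {pooled:.2f}")
    print(f"Q = {q:.1f} on {df} df, I^2 = {i_squared:.0f}%")

A high I² indicates that the studies vary more than chance alone would explain, so the variation needs to be understood before the body of evidence can be relied on.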

What study design did the researchers use?

  • Systematic reviews of all relevant randomised controlled trials (RCTs), RCTs and pseudo-randomised controlled trials (e.g. alternate allocation of cases and controls¹) represent the highest-level evidence for analysing the efficacy of an intervention. Internationally/nationally accepted evidence-based guidelines may also be cited as acceptable evidence.
  • When highest-level studies are not available, it may be acceptable to use a comparative study with a concurrent control group. However, these study designs are generally unable to determine the effectiveness of an intervention. Other study designs (e.g. qualitative research) are unable to provide more than an indicative finding and should, therefore, not be used to support an advertising claim.

Is the study design appropriate to answer the research question?

Consider aspects of the study design such as the methods used to analyse the outcome (do they measure what they should be measuring?), the timeline of the study, the testing protocol, which guidelines were followed and whether they are suitable.

Note: Higher-level study designs use methods that reduce the risk of bias, chance and confounding factors from influencing the results. By using high-level evidence, you can have more confidence in the results of the study.


¹ Or another method to allocate participants to cases and controls that avoids biasing the results.

Were the selection criteria used appropriate?

Is there anything about how study participants were selected that could influence the results or how the results can be generalised to a larger population? Selection criteria that favour one population type over another can cause sampling bias.

Was the study conducted reliably?

Were the study methods described clearly and do they reflect common protocols/guidelines? Features such as randomisation, the use of control groups, blinding, and adherence to guidelines are important to consider and are specific to the chosen study design.

Is the sample size sufficient to support the research findings with confidence?

  • Does the study provide a power or sample size calculation to justify the sample size? There are different ways to calculate a sample size, but if the study does not explain how its sample size was chosen, the quality of the evidence is likely to be low and may not be acceptable (a worked sketch of a typical calculation follows this list).
  • Avoid relying on studies with very low numbers of participants as these are unlikely to meet the requirements for acceptable evidence because it is difficult to rule out that the results were due to chance.
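As an illustration only (the effect size and thresholds below are hypothetical assumptions, not figures from this guidance), the sketch shows a conventional sample size calculation for a two-arm trial comparing means, using the statsmodels Python library.

    # Hypothetical example: participants needed per group to detect a
    # standardised effect size (Cohen's d) of 0.5 with a two-sided
    # alpha of 0.05 and 80% power.
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
    print(f"Required participants per group: {n_per_group:.0f}")  # about 64

By the same arithmetic, a study with only a handful of participants per group would have very low power for a moderate effect, which is one reason very small studies are difficult to rely on.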

Were all sources of potential bias and confounding factors discussed?

  • Do you think the researchers considered all the factors that could have contributed to the final results? It is difficult to completely eliminate all sources of bias and confounding factors, but high-quality studies will consider all the potential sources and list them.
  • Do the chosen measurement methods actually measure what they should be measuring? The measurement methods should be carefully chosen to limit any measurement bias.

Note: When considering the quality of your evidence it is important to consider all aspects of the research methodology that could contribute to the end result. For example, a randomised controlled trial, while considered a high-level study design, would be considered as unacceptable evidence if it used a very small sample size (e.g. 10 participants).

Was all the data reported on and discussed?

  • Did the researchers consider all the data? If the researchers ignore some data to reach a conclusion this is known as ‘cherry-picking’ and is not acceptable.
  • Is there any missing information that could influence the results?

How confident are you in the results?

  • Was the data analysis suitably rigorous? Could the results have occurred by chance? Consider the statistics presented, such as measures of statistical significance.
  • How sure are you about the results? Do the results make sense biologically/physiologically? Consider the statistics presented, such as confidence intervals, odds ratios and likelihood ratios (see the sketch after this list).
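As a hedged illustration (the 2×2 counts below are invented for the example), the sketch computes an odds ratio and its 95% confidence interval, one common way of judging whether an observed association could plausibly be due to chance.

    # Hypothetical 2x2 table: outcome vs no outcome in treated and
    # control groups. All counts are invented for illustration.
    import math

    a, b = 30, 70   # treated: outcome / no outcome
    c, d = 15, 85   # control: outcome / no outcome

    odds_ratio = (a * d) / (b * c)
    # Standard error of log(OR), using the usual large-sample approximation.
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
    log_or = math.log(odds_ratio)
    ci_low = math.exp(log_or - 1.96 * se_log_or)
    ci_high = math.exp(log_or + 1.96 * se_log_or)

    print(f"Odds ratio: {odds_ratio:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")

A 95% confidence interval that excludes 1.0 suggests the association is unlikely to be due to chance alone, while a very wide interval signals an imprecise estimate.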

Is the evidence clinically significant?

Statistical significance is not the same as clinical significance. You need to assess whether the intervention is meaningful in your clinical setting.

Note: The research that you use as evidence should have statistical outcomes and clinical significance that are relevant to your claim. This is to ensure the study findings are meaningful in a real-world setting.
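To illustrate the distinction (all numbers are hypothetical), a very large trial can produce a highly 'significant' p-value for an effect that is too small to matter in practice:

    # Hypothetical example: with 10,000 participants per arm, a mean blood
    # pressure reduction of just 0.5 mmHg (SD 10) is statistically
    # significant but almost certainly not clinically meaningful.
    from scipy.stats import ttest_ind_from_stats

    result = ttest_ind_from_stats(
        mean1=120.0, std1=10.0, nobs1=10_000,   # control arm
        mean2=119.5, std2=10.0, nobs2=10_000,   # treatment arm
    )
    print(f"p-value: {result.pvalue:.4f}")  # roughly 0.0004

A p-value this small says only that the difference is unlikely to be due to chance; it says nothing about whether a 0.5 mmHg reduction is worth claiming as a clinical benefit.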

 
 
 
Page reviewed 14/12/2020