Not all scientific studies are created equal

Published on June 16th, 2014 | by Rayne

In the long, drawn-out war against anti-vaxxers, it seems the other side has started to learn.

Sort of.

In an effort to produce evidence that the government is keeping us sick by way of vaccines, peddlers of pseudoscience and quackery have realised evidence means more than just anecdotes told over and over by your magnet-wearing uncle, who proudly proclaims magnets realigned his chi by working with the iron in his blood. The anti-vaxxer crowd have created their own “forensic lab” to test vaccines and “report” on their safety.

This sounds plausible but falls down for a number of reasons.

1. Forensic labs are used to test materials and substances only as part of criminal investigations, and 2. the anti-vaxxers who test the vaccines never report any of their findings in study form. They may report their results, but I have yet to see a study that outlines the method used to test the vaccines, how bias in the testing was reduced, and what instrumentation was used to test said vaccines.

In trying to show they understand how research studies work, the anti-vaxxer crowd have (attempted to) critique many of the studies that show no supporting evidence for a link between vaccines and autism. Not surprisingly, they’ve also been spreading low-quality studies as “proof” there is a link between vaccines and autism. I thought I would give them a lesson in the difference between low-quality and high-quality studies.

We’ll use the following (statistical) meta-analysis as an example:

The latest meta-analysis, by Taylor, L., Swerdfeger, A., and Eslick, G. (2014), “Vaccines are not associated with autism: An evidence-based meta-analysis of case-control and cohort studies”, concludes: “This meta-analysis of five case-control and five cohort studies has found no evidence for the link between vaccination and the subsequent risk of developing autism or autistic spectrum disorder. Subgroup analyses looking specifically at MMR vaccinations, cumulative mercury dosage, and thimerosal exposure individually were similarly negative, as were subgroup analyses looking specifically at development of autistic disorder versus other autistic spectrum disorder.” It has anti-vaxxers pitching a fit, with comments on anti-vaxxer Facebook pages citing bias in the selection of the studies that were examined. The researchers, they state, only picked the studies that concluded “no link”.

Let’s explore that, shall we?

Scientific papers, as you may or may not know from high school science class, have a number of different sections, which house the different components of the study or experiment being conducted. There are two different types of research: quantitative and qualitative. Quantitative research studies phenomena by collecting data in the form of numbers and statistics, while qualitative research studies lived experiences and human behaviour, collecting data via observational surveys and interviews. The meta-analysis used as our example is a quantitative statistical study.

Firstly, within a paper you have an abstract. This is a concise summary of the paper: a brief overview of the hypothesis, the methods, the design of the study, the results, and the conclusions based on those results. A hypothesis is a proposed explanation for a phenomenon (say, my missing underwear). In the case of my missing underwear, my first hypothesis could be “My missing underwear is due to underwear gnomes stealing them”. If I were to conduct a quantitative study using statistics, I would also have a null hypothesis: a statement that asserts there is no relationship between the things being tested, such as “My missing underwear is not due to underwear gnomes stealing them”. There is no relationship (or link) between my missing underwear and underwear gnomes.

In statistics, the job of the researcher is to try to show the null hypothesis is wrong. Rather than trying to prove my first hypothesis right (that underwear gnomes stole my underwear), I need to show that my null hypothesis is likely to be wrong: that the idea my missing underwear was not stolen by underwear gnomes doesn’t hold up. A null hypothesis needs to be falsifiable, that is, it must be possible to prove it false. The null hypothesis “my missing underwear is not due to underwear gnomes stealing them” can be proven false by doing experiments. Making unfalsifiable claims is not within the realm of rational discussion, since unfalsifiable claims are often faith-based and not founded on evidence and reason.
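Here is a minimal sketch of what that testing looks like in practice, in Python and with made-up numbers (the gnome study is, of course, hypothetical): we calculate the probability of seeing data at least as extreme as ours if the null hypothesis were true (the p-value), and reject the null hypothesis when that probability is very small.

```python
from math import comb

def binomial_p_value(successes, trials, p_null=0.5):
    """One-sided p-value: the probability of observing at least
    `successes` successes in `trials` trials if the null hypothesis
    (true rate = p_null) were correct."""
    return sum(
        comb(trials, k) * p_null**k * (1 - p_null)**(trials - k)
        for k in range(successes, trials + 1)
    )

# Hypothetical data: underwear went missing on 16 of the 20 nights a
# "gnome trap" was left unbaited. Under the null hypothesis (gnomes
# are irrelevant), we'd expect missing underwear on about half the nights.
p = binomial_p_value(16, 20)
print(f"p-value: {p:.4f}")  # ~0.006: small, so we reject the null hypothesis
```

A conventional threshold for “very small” is p < 0.05, but that threshold is a choice made by researchers, not a law of nature.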

I’ll let Dr Paul Offit explain this one in terms of vaccines:

For example: in a quantitative study dealing with statistics, determining whether the MMR vaccine causes autism, the null hypothesis would be “MMR does not cause autism”. Studies have two possible outcomes: (1) Investigators might generate data that rejects the null hypothesis (simply put: the results show evidence the statement “MMR does not cause autism” is wrong; the data suggests there is a relationship between vaccines and autism).

(2) Investigators might generate data that does not reject the null hypothesis (the results don’t show evidence the statement “MMR does not cause autism” is wrong; the data suggests there is no relationship).

But there is one thing those who use the scientific method cannot do: they cannot accept the null hypothesis, they can only reject it or fail to reject it. I cannot accept with 100% certainty the statement “my missing underwear is not due to underwear gnomes stealing them”. This means scientists can’t prove MMR doesn’t cause autism in absolute terms (100% does not), because the scientific method only allows them to say so at a certain level of statistical confidence; the statistical method doesn’t allow 100% acceptance of a statement. I can say the data supports there being no relationship between my missing underwear and underwear gnomes, but I cannot say outright that underwear gnomes did not steal my underwear.

Back to Dr Offit: an example of the problem with not being able to accept the null hypothesis can be found in an experiment some children might have tried after watching the television show Superman. Suppose a little boy believed that if he stood in his backyard and held his arms in front of him, he could fly. He could try once or twice or a thousand times. But at no time would he ever be able to prove with absolute certainty that he couldn’t fly. The more times he tried and failed, the more unlikely it would be that he would ever fly. But even if he tried to fly a billion times, he wouldn’t have disproved his contention; he would only have made it all the more unlikely.

The hypothesis for the little boy could be “I can fly”. The null hypothesis for the little boy would be “I can’t fly”, which he would attempt to disprove by experimentation. Since we can’t accept the null hypothesis that he can’t fly with 100% certainty, with the data presented to us (he attempted to fly a billion times and didn’t) we can only say “all of the results to date fail to support the hypothesis that the little boy can fly”.
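There is even a back-of-the-envelope way to put numbers on that, assuming the attempts are independent: the statistical “rule of three” says that after n failures and zero successes, an approximate upper 95% confidence bound on the true success rate is 3/n. The bound shrinks as the failures pile up, but it never reaches zero.

```python
# "Rule of three": with 0 successes in n independent attempts, an
# approximate upper 95% confidence bound on the true success rate is 3/n.
# It shrinks as n grows but never hits zero, so "I can't fly" is never
# proven outright, only made ever more likely.
for n in (10, 1_000, 1_000_000, 1_000_000_000):
    print(f"{n:>13,} failed flights -> success rate below {3/n:.1e} (95% confidence)")
```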

In the case of the meta-analysis above, with the data presented to date, no evidence has been found linking autism to vaccines. This isn’t a 100% “autism isn’t caused by vaccines”, but it also isn’t an open invitation to claim there is a relationship between vaccines and autism. The burden of proof is on anti-vaxxers to find evidence supporting a relationship between vaccines and autism, using high-quality studies. No anecdotes. No testimonials. High-quality science.

Studies also have other sections that give the reader a complete look at how the study was conducted and the results it discovered.

The introduction to the study outlines why the study was done, and it might include a brief history of other studies (called a literature review, mostly found in qualitative studies). The study will include the method section, which is pretty self-explanatory: it explains how the study was undertaken. It will cover the design of the study, the inclusion and exclusion criteria and the reasons behind them, and the ways in which bias and extraneous variables that could cloud or distort results were taken into account and reduced (reducing bias is vital in research; the data needs to speak for itself, free from influence), as well as how the data was collected and the type of study. The study will also have a results section, which outlines the data collected. It will also feature a discussion section, to discuss the results, and a conclusion section, in which the conclusions reached from examining the data are summarised.

As stated before, the most important thing in research is to let the data speak for itself, free from bias and influence. Researchers do this in a number of ways.

HIGH-QUALITY STUDIES VS LOW-QUALITY STUDIES
A high-quality paper will include the following:

Randomisation of participants: Randomisation of participants is a method to make sure each participant has an equal chance of being assigned either to the control group for the study (the group that receives the placebo, the sugar pill) or to the intervention group (the group that receives the drug or treatment being tested). It is a good way to decrease biased results by assigning each participant to a group impartially. Proper randomisation needs to make sure no one involved in the study has knowledge of who has been assigned to which group.

The Jadad scale discussed below considers the method of randomisation in a trial appropriate “if it allows each study participant to have the same chance of receiving each intervention and the investigators could not predict which treatment was next. Methods of allocation using date of birth, date of admission, hospital number or alternation should not be regarded as appropriate”. Methods used to randomise participants into trial groups can be found here; a sketch of one of them follows.
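For illustration only (this isn’t the method of any particular trial), here’s a minimal Python sketch of permuted-block randomisation, one commonly used appropriate method: every participant has an equal chance of either group, group sizes stay balanced, and the next assignment can’t be read off the previous ones.

```python
import random

def block_randomise(n_participants, block_size=4, seed=None):
    """Permuted-block randomisation: each block contains equal numbers
    of control ('C') and intervention ('I') slots, shuffled so
    assignments stay balanced but remain unpredictable."""
    rng = random.Random(seed)
    assignments = []
    while len(assignments) < n_participants:
        block = ["C", "I"] * (block_size // 2)
        rng.shuffle(block)
        assignments.extend(block)
    return assignments[:n_participants]

# Prints a balanced but shuffled allocation list, e.g. ['I', 'C', 'C', 'I', ...]
print(block_randomise(10, seed=42))
```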

Double blinding: Double blinding is simply where neither the researchers nor the participants know who is in which group. No one knows who is in the placebo group and who is in the group receiving the treatment: the pills may be identical in appearance, the injections will look identical, but no one knows who is in which group.

The Jadad scale discussed below considers the method of double blinding in a trial appropriate if: “a study must be regarded as double blind if the word “double blind” is used. The method will be regarded as appropriate if it is stated that neither the person doing the assessments nor the study participant could identify the intervention being assessed, or if in the absence of such a statement the use of active placebos, identical placebos or dummies is mentioned”.
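In practical terms, blinding is usually kept by working with opaque allocation codes. Here’s a hypothetical sketch (the function name and code format are mine, not from any trial protocol): the code-to-group key stays sealed with a third party, and everyone else only ever sees the codes.

```python
import random

def blinded_allocation(participant_ids, seed=None):
    """Allocate participants to identical-looking kits identified only
    by an opaque code. The code-to-group key stays sealed with a third
    party, so neither researchers nor participants can tell groups apart."""
    rng = random.Random(seed)
    sealed_key = {}   # kit code -> group; opened only at unblinding
    labels = {}       # participant -> kit code; all anyone else sees
    for pid in participant_ids:
        code = f"KIT-{rng.randrange(10**6):06d}"
        # In a real trial the group sequence would come from proper
        # randomisation (see the block_randomise sketch above).
        sealed_key[code] = rng.choice(["placebo", "treatment"])
        labels[pid] = code
    return labels, sealed_key

labels, sealed_key = blinded_allocation(["p1", "p2", "p3"], seed=1)
print(labels)  # outcomes get recorded against codes, not group names
```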

Withdrawals and drop-outs: When a person withdraws consent from being included in the study, drops out of the study, or dies. Any withdrawals or drop-outs from the study must be accounted for, or the results will be affected. “A true treatment effect might be disguised if control subjects whose condition worsened over the period of the study left the study to seek treatment, as this would make the control group’s average outcome look better than it actually was. Conversely, if treatment caused some subjects’ condition to worsen and those subjects left the study, the treatment would look more effective than it actually was” (source).

The Jadad scale discussed below considers the reporting of withdrawals and drop-outs in a trial appropriate if: “Participants who were included in the study but did not complete the observation period or who were not included in the analysis must be described. The number and the reasons for withdrawal in each group must be stated. If there were no withdrawals, it should be stated in the article. If there is no statement on withdrawals, this item must be given no points. A flow diagram as recommended by the CONSORT statement is helpful in this regard” (CONSORT will be described below).
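A toy simulation of the first problem in that quote, with invented numbers: if control subjects who worsen tend to leave, then averaging only the completers flatters the control group.

```python
import random

def dropout_bias_demo(n=10_000, p_leave_if_worse=0.5, seed=0):
    """Control subjects whose condition worsened (outcome < 0) drop out
    with some probability; the completers-only average then looks
    better than the true average."""
    rng = random.Random(seed)
    outcomes = [rng.gauss(0, 1) for _ in range(n)]  # 0 = no change
    completers = [x for x in outcomes
                  if x >= 0 or rng.random() > p_leave_if_worse]
    print(f"true mean outcome:    {sum(outcomes) / len(outcomes):+.3f}")
    print(f"completers-only mean: {sum(completers) / len(completers):+.3f}")

dropout_bias_demo()  # the completers-only mean comes out noticeably higher
```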

The study should also be replicable: another group of scientists should be able to conduct it in exactly the same way, but with a different group of participants, and still get the same results. This is an ideal way to determine whether the results of the study can be generalised to more people than just those within the study.

The quality of a study can be evaluated using a number of tools.

The PRISMA checklist, used by the researchers of “Vaccines are not associated with autism: An evidence-based meta-analysis of case-control and cohort studies”, is one method for evaluating the quality of a study (or, in this case, a meta-analysis). The researchers used the checklist as a guideline for reporting the results of the meta-analysis. “The aim of the PRISMA Statement is to help authors report a wide array of systematic reviews to assess the benefits and harms of a health care intervention. PRISMA focuses on ways in which authors can ensure the transparent and complete reporting of systematic reviews and meta-analyses.”

Another way to evaluate research is to use the Jadad scale. The Jadad scale (or Oxford quality scoring system) is a procedure to independently assess the methodological quality of a clinical trial. Methodological errors, such as poor or no blinding, poor or no randomisation of participants, no control group, and no reporting of participant drop-outs, allow factors such as the placebo effect or selection bias to affect the results of a trial.

The Jadad scale is made up of three questions (attached as an appendix here): 1 point for yes, 0 points for no.

  1. Was the study described as randomised?
  2. Was the study described as double blind?
  3. Was there a description of withdrawals and dropouts?

Additional points are given if:

  • The method of randomisation was described in the paper, and that method was appropriate.
  • The method of blinding was described, and it was appropriate.

Points are, however, deducted if:

  • The method of randomisation was described, but was inappropriate.
  • The method of blinding was described, but was inappropriate.

A paper can receive a score between zero and five (zero being poor quality and five being high quality). The system is useful in many ways:

  1. To evaluate the quality of research.
  2. To set a minimum standard for a paper’s results to be included in a meta-analysis. A researcher conducting a systematic review, for example, might choose to exclude all papers on the topic with a Jadad score of 3 or less, since poor-quality methods mean an increased chance of biased results. (The scoring itself is simple enough to sketch in code; see below.)
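Here’s what that scoring looks like as a minimal Python sketch (the function and argument names are mine; the point values come from the scale described above):

```python
def jadad_score(randomised, double_blind, withdrawals_described,
                randomisation_appropriate=None, blinding_appropriate=None):
    """Jadad (Oxford quality scoring) score, 0-5.

    The last two arguments are None when the method wasn't described,
    True when described and appropriate (+1 point each), and False when
    described but inappropriate (-1 point each)."""
    score = int(randomised) + int(double_blind) + int(withdrawals_described)
    for described_ok in (randomisation_appropriate, blinding_appropriate):
        if described_ok is True:
            score += 1
        elif described_ok is False:
            score -= 1
    return max(score, 0)

# Randomised and double blind, both methods described appropriately,
# drop-outs accounted for: the full 5.
print(jadad_score(True, True, True, True, True))     # 5
# Described as randomised, but by date of birth (inappropriate),
# no blinding, no statement on withdrawals:
print(jadad_score(True, False, False, False, None))  # 0
```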

CONSORT (Consolidated Standards of Reporting Trials) is another guideline for evaluating the quality of randomised controlled trials (RCTs); it is an evidence-based, minimum set of recommendations for reporting randomised trials. From the website: “The CONSORT Statement comprises a 25-item checklist and a flow diagram. The checklist items focus on reporting how the trial was designed, analysed, and interpreted”.

WHAT DOES THAT MEAN FOR OUR EXAMPLE META-ANALYSIS?

Our example for this post, “Vaccines are not associated with autism: An evidence-based meta-analysis of case-control and cohort studies”, features the following eligibility criteria for studies to make it into the meta-analysis:

This review included retrospective and prospective cohort studies and case-control studies published in any language looking at the relationship between vaccination and disorders on the autistic spectrum. No limits were placed on publication date, publication status, or participant characteristics. Studies were included that looked at either MMR vaccination, cumulative mercury (Hg) or cumulative thimerosal dosage from vaccinations to ensure all proposed causes of ASD or regression were investigated. Outcome measures included development of any condition on the autistic spectrum as well as those specifically looking at regressive phenotype. Papers that recruited their cohort of participants solely from the Vaccine Adverse Event Reporting System (VAERS) in the United States were not included due to its many limitations and high risk of bias including unverified reports, under-reporting, inconsistent data quality, absence of an unvaccinated control group and many reports being filed in connection with litigation [5] and [6]. We excluded studies that did not meet the inclusion criteria.

Source: “A prospective study watches for outcomes, such as the development of a disease, during the study period and relates this to other factors such as suspected risk or protection factor(s). The study usually involves taking a cohort of subjects and watching them over a long period. The outcome of interest should be common; otherwise, the number of outcomes observed will be too small to be statistically meaningful (indistinguishable from those that may have arisen by chance). All efforts should be made to avoid sources of bias such as the loss of individuals to follow up during the study. Prospective studies usually have fewer potential sources of bias and confounding than retrospective studies“.

Source: “A retrospective study looks backwards and examines exposures to suspected risk or protection factors in relation to an outcome that is established at the start of the study. Most sources of error due to confounding and bias are more common in retrospective studies than in prospective studies. For this reason, retrospective investigations are often criticised. If the outcome of interest is uncommon, however, the size of prospective investigation required to estimate relative risk is often too large to be feasible. In retrospective studies the odds ratio provides an estimate of relative risk. You should take special care to avoid sources of bias and confounding in retrospective studies“.

Source: “A case-control study is an analytical study which compares individuals who have a specific disease (“cases”) with a group of individuals without the disease (“controls”). The proportion of each group having a history of a particular exposure or characteristic of interest is then compared“.
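Since the last two quotes mention the odds ratio as an estimate of relative risk, here’s the arithmetic as a quick Python sketch, with invented counts:

```python
def odds_ratio(exposed_cases, unexposed_cases,
               exposed_controls, unexposed_controls):
    """Odds ratio from a 2x2 case-control table: the odds of exposure
    among cases divided by the odds of exposure among controls. In a
    case-control study this estimates the relative risk."""
    return ((exposed_cases / unexposed_cases) /
            (exposed_controls / unexposed_controls))

# Hypothetical counts: 40 of 100 cases were exposed vs 25 of 100 controls.
print(odds_ratio(40, 60, 25, 75))  # 2.0: exposure odds twice as high in cases
```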

Papers that recruited their cohort of participants solely from the Vaccine Adverse Event Reporting System (VAERS) in the United States were not included due to its many limitations and high risk of bias including unverified reports, underreporting, inconsistent data quality, absence of an unvaccinated control group and many reports being filed in connection with litigation [5] and [6].

Good.

For those of you who don’t know, the Vaccine Adverse Event Reporting System is an American national reporting program designed so adverse events following vaccine administration can be reported, with one massive problem: it is incredibly unreliable. Anyone can report an event, and those events aren’t verified for accuracy or followed up in any way. James Laidler once tested it by submitting a report that the influenza vaccine had turned him into The Incredible Hulk, and it was accepted. People without training who are predisposed to thinking vaccines cause autism can report any event without any verification. It’s safe to say that excluding the reports from the VAERS database is an excellent way to decrease the chances of biased and inaccurate results.

Overall, the meta-analysis was a quality analysis. It followed the PRISMA checklist for reporting, the studies used within the analysis were quality studies, and above all, it accounted for bias and took precautions to reduce it.

Of course, the anti-vaxxer crowd weren’t too happy with the results of the analysis; it didn’t conform to what they believe. Of course it didn’t, because quality studies that reduce bias and let the data speak for itself, free from influence, are more accurate than the subjective beliefs of a person who cannot adapt to new information.

Tim Minchin once said, “Science adjusts its views based on what’s observed. Faith is the denial of observation so that belief can be preserved.” To be a good scientist one must be adaptable to new information, and if high-quality studies started supporting a link between vaccinations and autism, we would adjust our views and react accordingly.

Check out:
10 Scientific Ideas That Scientists Wish You Would Stop Misusing

If you like some of the things I say, I now have a Facebook page! Feel free to like my page by clicking here.
