Barbara Roberts, Homeopath

Analysing Studies

Analysing studies is not easy; you cannot tell at a glance whether a study is good. It takes time: reading the entire study, looking at the method, comparing the study population with the general population and with other studies, and examining the outcomes. Junk science is a real problem (like the study that claimed chocolate can help with weight loss (1)), and in pharmaceutical research and evidence-based medicine there is publication bias, meaning that studies showing a positive effect from a drug are much more likely to be published.

Because it takes time to assess study validity, our very busy health professionals, doctors and nurses, rely on summaries and expert opinion. Patients and parents rely on experts to tell them what is safe and what is not. Not all experts agree, so sometimes you are left wondering what to believe.

In the field of homeopathy there is a lot of mixed research. There is very credible research showing striking results, such as the treatment of diarrhoea in children in Nicaragua, which showed a statistically significant reduction in the duration of diarrhoea (2), as well as well-documented research such as the elective hand surgery and arnica study, which showed no improvement in pain or bruising (3). There are also conflicting government reports, such as the 2015 Australian report claiming that meta-analyses showed homeopathy didn't work (further discussion here (4)), compared with the Swiss report (translated into English in 2011), which found that 20 out of 22 systematic reviews favoured homeopathy (5). Just as with drug trials, this makes it hard for the average person to know what the evidence says.

The problem with researching homeopathy is that it treats people as individuals. So for conditions like anxiety and depression you may get 100 people with 100 different remedies or potencies. For acute conditions it is easier: for example, there are three remedies most people consider first for fevers. However, there are hundreds of remedies that could treat fever, so a failure of a remedy to act is usually because we haven't found the right remedy.

Homeopathy is an art as well as a science: sometimes it takes a little bit of magic to find the right remedy, and everyone reacts differently (we are all individuals, after all), so there is no one-size-fits-all for constitutional or chronic treatment.

Optimal research for chronic conditions would involve homeopaths who have a consultation with their clients and prescribe a remedy. The remedy is then dispensed at a pharmacy either as prescribed or as a placebo, selected randomly. Neither the patients nor the homeopaths know who received the remedy and who received the placebo, and the pharmacy does not know who the patients and homeopaths are. A certain number of follow-ups, with or without further dispensing, would also be required.

We are fortunate to have the Homeopathy Research Institute, which focuses on research and evidence. On their website you can see a selection of their research, including studies such as a longitudinal study of children with asthma and eczema that showed dramatic improvement or complete remission after 5-10 years (6).

So, how can you tell whether a study, homeopathic or conventional, is worthwhile? The method I use (learnt at university in an epidemiology paper) is the PICOT diagram (more info here (7)).

First, look at the Population in the study. Is there a reasonable number of people? A study with only 15 people is not as useful as one with 1,000 because of the small sample size. Is the study population similar in characteristics to the general population? If the study selected only people with a BMI between 18 and 25, it cannot tell you whether the treatment is also safe and effective for someone with a BMI of 32. If the study includes only men, it tells you nothing about how the treatment would affect women.
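The point about sample size can be illustrated with a short calculation. This is only a sketch, not from the original article: the function name and the assumption that half of participants improve are illustrative, using the standard normal-approximation margin of error for a proportion.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p
    observed in a study of n people (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

# Illustrative assumption: half the participants improve (p = 0.5).
small = margin_of_error(0.5, 15)    # study of 15 people
large = margin_of_error(0.5, 1000)  # study of 1,000 people

print(f"n=15:   +/- {small:.0%}")   # roughly +/- 25 percentage points
print(f"n=1000: +/- {large:.0%}")   # roughly +/- 3 percentage points
```

With 15 people, the result could easily be 25 percentage points away from the truth in either direction; with 1,000 people that uncertainty shrinks to about 3 points, which is why larger studies carry more weight.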

What is the Intervention? This is what the study is actually examining, whether a drug, remedy, or other therapy.

What is the Comparator? Double-blinded, placebo-controlled studies are the gold standard in conventional medicine. But this is not always what happens: Phase II trials of the HPV vaccine compared it to an aluminium-containing "placebo" (8), and aluminium can and does have a physical effect on some people, which will skew the results. What are the variables and how are they controlled? A study looking at exercise that has one group on a strict diet and the other group not on a diet may well show differences in weight at the end, and it will be impossible to tell whether the diet or the exercise was responsible. Whether the method appropriately reflects how homeopathy works would also be considered here.

What is the Outcome? This is the main result. What are they trying to measure, and is it an appropriate choice of measurement?

T stands for Time. What is the endpoint of the study? If you are looking at side effects after an injection, to what point are they measured? Drugs, particularly injected ones, can have effects long after administration, so follow-up should be long as well, especially if there is any chance of the drug affecting the immune system, the results of which may not show for months. In a trial looking at safety, you would therefore expect long-term follow-up.

We also cannot consider a study without looking at conflicts of interest. A researcher who receives payment from a drug company may be biased towards producing a result that makes that drug look favourable. (For an interesting blog on bias in medicine, see this (9).)

So the next time someone shares a study, or you see health claims in the media (whether about a drug, vaccine, diet, supplement, or complementary medicine therapy), I hope you can use this information to critique the study and make an informed decision.



2- Pediatrics. 1994 May;93(5):719-25.

3- J R Soc Med. 2003 Feb;96(2):60-5.





8- N Engl J Med 2007;356:1928-43.

