Comment: Prof Ken Harvey’s resignation from La Trobe University

The following is an amalgamation of a couple of comments written on the Paging Dr forum board, in relation to the high-profile and controversial resignation of Prof Ken Harvey from La Trobe University following the announcement of the $15 million deal between Swisse and the university.

It is one thing to fund research into CAM. For this to be scientific, there must be a scientifically sound a priori reason for doing the human research in the first place.

It is quite another thing for a single manufacturer to fund a research institute that is effectively set up to investigate products the manufacturer already markets to the general public.

The comparison between Swisse and big pharma-funded research in general is not a good analogy. The truly analogous situation would be, say, Pfizer broadly claiming that one of its products (say, Lipitor) reversed male-pattern baldness, and then paying a university to set up a research institute to test that product in order to demonstrate efficacy for hair loss.

Part of the problem isn’t that we don’t know enough about the vitamins, potions, crystals, and other wonders our patients turn up with. We know plenty about these things. We do not live in a world devoid of biological and chemical knowledge. For the most part, CAM therapies are ineffective for most indications because they lack biological plausibility.

And to be blunt, this shouldn’t even really influence clinical practice. Nobody has ever done a study on atorvastatin for male-pattern baldness. That doesn’t mean that we shrug our shoulders, claim “well, maybe it works”, and think that it is perfectly okay for someone to recommend it. When there is no good evidence for the utility of a treatment, especially in the absence of a biologically plausible mechanism, the reasonable stance that protects patients from harm is not to assume that it works.

From a probabilistic perspective, a random intervention is extraordinarily unlikely to be effective for a given indication. Here’s a thought experiment: assume for now that we have no idea what any modern medications do. Choose an indication, say, male-pattern baldness. Randomly select a therapeutic agent and dose from MIMS. What do you think are the odds that this treatment will, on balance, be more beneficial than harmful? There are many more ways for things to go wrong than to go right in therapeutics. If we assume that a treatment is NOT therapeutically inert (i.e., it has a real, clinically meaningful effect), then it stands to reason that this treatment can also cause harm. In the absence of evidence, the precautionary approach is to accept that harms will probably outweigh benefits.
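To make that asymmetry concrete, here is a toy Monte Carlo sketch in Python. The benefit and harm probabilities are entirely made-up assumptions for illustration, not estimates from MIMS or any real data:

```python
# Toy simulation of randomly pairing an active agent with an indication.
# P_BENEFIT and P_HARM are invented numbers purely for illustration.
import random

random.seed(0)
TRIALS = 100_000
P_BENEFIT = 0.001  # assumed: a random agent rarely helps a random indication
P_HARM = 0.05      # assumed: any active agent carries some chance of harm

net_beneficial = sum(
    1
    for _ in range(TRIALS)
    if random.random() < P_BENEFIT and random.random() >= P_HARM
)
print(f"net beneficial pairings: {net_beneficial / TRIALS:.3%}")
# With these assumptions, roughly 0.1% of random pairings do more good
# than harm; the rest are inert or harmful.
```

The exact figures don’t matter; the point is structural. Any agent active enough to help is also active enough to harm, so random pairings skew towards net harm.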

In any case, here is a longer article about this issue by John Dwyer in MJA Insight: http://www.mja.com.au/insight/2014/5/john-dwyer-complementary-storm

Another forum participant wrote, “I’m curious about this concept of biological plausibility. How does that framework fit in for drugs like lithium, which was not only discovered fortuitously but whose mechanism of action we still don’t know?” I responded:

You are perfectly right that a drug like lithium came out of an entirely serendipitous situation. When you get lucky, of course you accept it. The research goal is then understanding why it works. Observation is part of the scientific process as well, not just hypothesis testing.

However, you don’t bet on serendipity when conducting human research. You need a good reason to be doing it in the first place. Starting from the position “I like vitamins” and then testing them against every condition known to man is poor-quality science and unethical. There needs to be a rational and empirically justifiable hypothesis to test before doing the research.

This isn’t a problem isolated to CAM research. Much of medical research is problematic, but it is particularly so with CAM. See the essay by Ioannidis, who has been beating this drum for a while: http://www.plosmedicine.org/article/info%3Adoi%2F10.1371%2Fjournal.pmed.0020124

Much of the problem with CAM research (even when the study is well designed) is that it suffers from naive “frequentist” interpretations. A P-value of 0.05 does not mean that there is only a 1 in 20 chance of the result being wrong! The likelihood of a positive study result being “true” depends on the pre-test probability of the hypothesis being true. The problem with highly improbable hypotheses is that a positive result in a study will almost always just be a false positive (i.e., wrong). The xkcd comic “Significant” (https://xkcd.com/882/) makes this point memorably.
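To put rough numbers on that, here is a minimal Python sketch of the standard positive-predictive-value calculation. The significance threshold (0.05) and power (0.80) are conventional values; the pre-test probabilities are illustrative assumptions, not estimates for any particular field:

```python
# Minimal sketch: how often a "statistically significant" result reflects
# a true effect, as a function of the pre-test probability that the
# hypothesis is true. Alpha and power are conventional illustrative values.

def ppv(prior, alpha=0.05, power=0.80):
    """Positive predictive value of a significant result."""
    true_positives = power * prior          # real effects correctly detected
    false_positives = alpha * (1 - prior)   # null hypotheses wrongly "confirmed"
    return true_positives / (true_positives + false_positives)

for prior in (0.50, 0.10, 0.01):
    print(f"pre-test probability {prior:.0%}: "
          f"chance a positive result is real = {ppv(prior):.0%}")

# Output:
# pre-test probability 50%: chance a positive result is real = 94%
# pre-test probability 10%: chance a positive result is real = 64%
# pre-test probability 1%: chance a positive result is real = 14%
```

Under these assumptions, a “significant” result for a biologically implausible hypothesis (pre-test probability of 1%) is a false positive roughly six times out of seven, which is exactly the trap described above.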
