Profiling

As a self-identified mostly liberal, I get intellectual cognitive dissonance when I think of a population as having a shared characteristic. Oddly, it does not bother me when I am using information based on epidemiology and disease risk, but it does bother me when the same thinking is applied to other endeavors, even when the data would support the association.

For example, consider studies out of China. Can the results be trusted? Perhaps. But the data would suggest that when the topic is traditional Chinese pseudo-medicine (TCPM), the reader should be a bit more critical about accepting results, especially positive results, at face value.

Biases are subtle, can occasionally involve large populations, and may show up in clinical trials. In the homelands of acupuncture, there are almost no negative acupuncture trials:

In the study of acupuncture trials, 252 of 1085 abstracts met the inclusion criteria. Research conducted in certain countries was uniformly favorable to acupuncture; all trials originating in China, Japan, Hong Kong, and Taiwan were positive, as were 10 out of 11 of those published in Russia/USSR. In studies that examined interventions other than acupuncture, 405 of 1100 abstracts met the inclusion criteria. Of trials published in England, 75% gave the test treatment as superior to control. The results for China, Japan, Russia/USSR, and Taiwan were 99%, 89%, 97%, and 95%, respectively. No trial published in China or Russia/USSR found a test treatment to be ineffective.

But even more interesting, the positive results occurred for trials of real medicine as well.

Particularly high rates of positive results were seen in China (99%) and Russia/USSR (97%). These two countries published no trials in which the test treatment was not reported effective.

Why this occurs is unknown and open to speculation. Perhaps it is as simple as publication bias: in these countries negative studies are not published. Perhaps there are subtle societal influences that turn what would have been a negative study in England into a positive study in China. But it makes one wonder when reading positive studies from those countries: can the results be trusted?
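To get a feel for how much publication bias alone can distort the picture, here is a minimal simulation sketch (Python, with made-up trial sizes and counts, not data from any of the reviews quoted here). An intervention with zero real effect, tested at the usual p < 0.05, comes up falsely positive in roughly 5% of trials; if only the positive trials make it into print, the published record looks unanimously favorable.

```python
import random
import statistics

random.seed(1)

def run_trial(n=50):
    """Simulate one two-arm trial of a treatment with zero true effect.
    Returns True if the trial is (falsely) positive at roughly p < 0.05."""
    control = [random.gauss(0, 1) for _ in range(n)]
    treated = [random.gauss(0, 1) for _ in range(n)]  # same distribution: no real effect
    diff = statistics.mean(treated) - statistics.mean(control)
    se = (statistics.variance(treated) / n + statistics.variance(control) / n) ** 0.5
    return abs(diff / se) > 1.96  # crude z-test cutoff

trials = [run_trial() for _ in range(2000)]
published = [t for t in trials if t]  # publication bias: negative trials never appear

print(f"Positive among all trials run:   {sum(trials) / len(trials):.1%}")
print(f"Positive among published trials: {sum(published) / len(published):.1%}")
```

Whether that is actually what happens in China or Russia is, again, speculation, but the arithmetic shows how little it takes to produce a literature with no negative trials.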

And it may be as simple as poorly done clinical trials. The worse the methodology, the more likely it is that an implausible intervention will appear to be effective. The usual rule in the world of pseudo-medicines is that as the quality of the studies improves, the efficacy diminishes, until well-done studies show no effect.

And when it comes to TCPM, practitioners are churning out a lot of poor-quality studies:

A total of 4133 trials in 2005-2009 and 2861 trials in 2011-2012 were identified respectively. There was a significant increase in proportion of reports that included details of background (24.71% vs 35.20%, P < 0.001), participants (49.79% vs 65.26%, P < 0.001), the methods of random sequence generation (13.77% vs 19.85%, P < 0.001), statistical methods (63.00% vs 72.77%, P < 0.001) and recruitment date (70.14% vs 80.36%, P < 0.001) in 2011-2012 compared to 2005-2009. However, the percentage of reports with trial design decreased from 4.45% to 3.25% (P = 0.011). Few reports described the blinding methods, and there was a decreasing tendency (4.77% vs 2.48%, P < 0.001). There was a similar decreasing tendency on the reporting of funding (6.53% vs 5.00%, P = 0.007). There were no significant differences in the other CONSORT items. In terms of Jadad Score, the proportion of reports with a score of 2 was markedly increased (15.15% vs 19.71%, P < 0.001).

The Jadad score is a measure of blinding, randomization, and dropout reporting on a scale of 0 (lousy) to 5 (rigorous), study characteristics key for avoiding bias in trials of implausible therapies with subjective outcomes. For TCPM it is a huge fail.
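For readers who have not met it, the Jadad scale is essentially a short checklist. A minimal Python sketch of the scoring logic (my own paraphrase of the published criteria, not code from any of the studies discussed here) might look like this:

```python
def jadad_score(randomized, randomization_appropriate, double_blind,
                blinding_appropriate, dropouts_described):
    """Sketch of the Jadad scale (0 to 5). The *_appropriate arguments are
    True, False, or None, where None means the paper does not describe the method."""
    score = 0
    if randomized:
        score += 1
        if randomization_appropriate is True:
            score += 1
        elif randomization_appropriate is False:
            score -= 1  # "randomized" by something like date of admission
    if double_blind:
        score += 1
        if blinding_appropriate is True:
            score += 1
        elif blinding_appropriate is False:
            score -= 1  # blinding claimed but clearly inadequate
    if dropouts_described:
        score += 1
    return score

# A report that says "randomized" but describes neither the randomization method,
# the blinding, nor the withdrawals scores a 1.
print(jadad_score(True, None, False, None, False))  # -> 1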

When it comes to the literature on TCPM, it is likely that a given paper will have a positive result and be poorly done. Reader beware.
