The randomized clinical trial (RCT) is considered by some (including a few chiropractors) to be the “gold standard” for clinical research. (1) This methodology, however, is being subjected to scrutiny, and coming up short.
Jadad and Rennie (2) note that “RCTs can be vulnerable to multiple types of bias at all stages of their life spans,” and “It has also been shown that most reports of RCTs, even those published in prominent journals, are incomplete and do not reflect the empirical methodological evidence available.”
A recent paper in the Journal of the American Medical Association (JAMA) (3) revealed that it is common practice to employ “run-in periods” in randomized clinical trials. Run-in periods are used prior to randomization to exclude noncompliant subjects, placebo responders, or subjects who could not tolerate or did not respond to active drugs.
Such techniques, in my opinion, represent “curve fitting” at best, and come dangerously close to blatant fraud. Unless there is full disclosure of the run-in shenanigans used to get the desired results, the conclusions of any randomized clinical trial must be considered suspect.
Experimental design is not merely an intellectual exercise. Evidence from RCTs is used to approve drugs. Furthermore, health policy decisions may rely heavily on the results of such trials.
Even without dubious techniques such as run-in periods, there are significant problems inherent in the RCT. And for chiropractic, which does not treat specific diseases and emphasizes the individual needs of each patient, RCTs are an expensive exercise in futility.
The randomized clinical trial was first proposed by the British statistician Austin Bradford Hill in the 1930s. (4) Since then, the RCT has received a plethora of praise and a paucity of criticism. The Office of Technology Assessment noted, “objections are rarely if ever raised to the principles of controlled experimentation on which RCTs are based.” (5)
Despite such widespread enthusiasm, A.B. Hill recognized that clinical research must answer the following question: “Can we identify the individual patient for whom one or the other of the treatments is the right answer? Clearly this is what we want to do…There are very few signs that they (investigators) are doing so.” (6) Herein lies the fatal flaw of the RCT.
As Coulter (4) observed, “We consider the controlled clinical trial to be a wrongheaded attempt by man to subjugate nature. Its advocates hope to overcome the innate and ineluctable heterogeneity of the human species in both sickness and health merely by applying a rigid procedure.” The RCT’s inability to deal with patient heterogeneity makes it impossible to use RCT results to determine whether a given intervention will achieve a specified result in an individual patient.
Friedman stated, “The patient must not be viewed as merely one subject in a population but rather as a unique individual who may or may not benefit from such treatment.” (7)
Coulter (4) succinctly summarizes the problem with RCTs: “The clinical trial is an experiment performed on an unreal, unknown, mysterious entity — an assembly of sick people who have some features in common. Its results cannot be extrapolated to any larger population, and the information cannot be reliably duplicated. What is worse, the results of the trial cannot even be extrapolated to the individual patient, who (not some faceless member of a ‘homogeneous group’) is still the object of medical ministration.”
The chiropractic profession should direct its limited research resources to cost-effective investigations that utilize appropriate research designs. Such studies represent a rational alternative to performing RCTs on the effects of chiropractic care on every disease listed in the Merck Manual. A discussion of such designs will be the subject of future columns.
1. Sackett DL, Richardson WS, Rosenberg W, Haynes RB: “Evidence-based Medicine.” Churchill Livingstone, New York, 1997.
2. Jadad AR, Rennie D: “The randomized controlled trial gets a middle-aged checkup.” JAMA 1998;279(4):319.
3. Pablos-Mendez A, Barr RG, Shea S: “Run-in periods in randomized clinical trials.” JAMA 1998;279(3):222.
4. Coulter HL: “The Controlled Clinical Trial: An Analysis.” Center for Empirical Medicine, Washington, DC, 1991.
5. US Congress, Office of Technology Assessment, 1983, page 7. Quoted in Coulter (4).
6. Hill AB: “Reflections on the controlled clinical trial.” Annals of the Rheumatic Diseases 1966;25:107.
7. Friedman HS: “Randomized clinical trials and common sense.” Am J Med 1986;81:1047.