Background: The results from phase 1 clinical trials can allow new treatments to progress further in drug development or halt that process altogether.
The safety of every patient participant is at the forefront of phase 1 clinical trials. This is particularly true when testing new oncologic treatments, in which patients accept potentially toxic therapies in the hope of slowing the progression of, or even curing, their disease.
Methods: We explore the benefits and risks that patients experience when participating in phase 1 clinical trials.
Results: Rules and regulations have been put into place to protect the safety and interests of patients while undergoing clinical trials. Selecting patients with cancer who will survive long enough to accrue data for these trials continues to be challenging.
New prognostic models have been validated to help health care professionals select those patients who will likely benefit from participation in phase 1 trials. There also are long-lasting positive and negative impacts on those patients who choose to participate in phase 1 clinical trials.
Conclusions: Modern phase 1 clinical trials represent a therapeutic option for many patients who progress through frontline therapy for their malignancies. Recent phase 1 clinical trials testing targeted therapies have increased responses in many diseases in which other lines of therapy have failed.
Patients enrolled in a phase 1 trial must still weigh substantial risks against potential benefits, but the likelihood of treatment response in the era of rational, targeted therapy is increased compared with the era of cytotoxic therapy.
Results from clinical trials help to answer questions and provide guidance for practicing health care professionals. The regimented clinical trial design was not standardized until the twentieth century1; however, physicians have been employing concepts of modern clinical trials for centuries.
An ancient medical text, The Canon of Medicine, established guidelines for the proper conduct of medical experimentation.2 In this text, the principles for testing the efficacy of a new medication were laid out, including that the drug must be free from any extraneous accidental quality and that the experimentation must be performed with the human body.2
The essence of these guidelines became the scientific method for testing of medications, and, for the most part, the medical field regulated itself when it came to new medications, elixirs, “cure-alls,” panaceas, and the like.
The turning point in medication development that resulted in the rigorous, regimented development of clinical trials in the United States occurred in 1937 when pharmaceutical manufacturer S.E. Massengill Company (Bristol, Tennessee) released the first elixir formulation of sulfanilamide, an antibiotic that, at the time, had been shown to have activity against streptococcal throat infections.3 The elixir was available to consumers without undergoing animal or human testing of any kind prior to its release.
However, the antibiotic was suspended in diethylene glycol, also known colloquially as antifreeze. The product was so extensively disseminated into US stores that the US Food and Drug Administration (FDA) and S.E. Massengill could not fully recall the product, which had caused the deaths of at least 100 people.1 Even then, the FDA was empowered to recall the drug only because the label was misleading (ie, it was labeled as an “elixir” and, therefore, had to contain alcohol, but this “elixir” did not have any).
Due in part to this series of deaths, the FDA was granted new powers in 1938 under the Federal Food, Drug, and Cosmetic Act, which required drug sponsors to submit safety data to the FDA for evaluation prior to marketing of the drug, thus planting the seed for the modern clinical trial structure.4 That structure was later modernized by Hill in 1948.1 Hill, a British statistician, performed one of the first randomized controlled studies, which showed that streptomycin could cure tuberculosis.5
However, in 1962, thalidomide, a drug popular as a hypnotic in Europe and suspected to cause birth defects, was supplied to US physicians who subsequently gave the drug to expectant mothers as a remedy for morning sickness.6
This resulted in nearly a dozen US infants being born with birth defects, far fewer than the approximately 10,000 infants worldwide born with thalidomide-related defects. The smaller impact of thalidomide in the United States was due in part to the efforts of the FDA, which denied the thalidomide application on the grounds that more evidence of safety was required.1
The amendments in 1962 that followed on the heels of the thalidomide incident further strengthened the control of the FDA over new investigational drugs, thus requiring pharmaceutical companies to demonstrate that their investigational drug could be safely given to patients in the preclinical setting, thereby setting the stage for the formation of phase 1 clinical trials (Table 1).1,7