What are clinical trials – and why should I care?
Clinical trials test medical treatments before they reach the general public. This testing is important, as it attempts, as far as possible, to determine both the safety and the effectiveness of the new product in treating the targeted disease. Medications are not permitted for use in humans without a licence, which is awarded only after certain safety and efficacy requirements have been met through clinical trials.
Each country or region in the world has a different set of protocols to ensure patient safety. While they vary slightly, all clinical trial protocols are based on the same safety and ethical principles. In the UK, clinical trials take place in line with the Medicines Act 1968, a law enacted after a medicine called thalidomide caused foetal abnormalities when it was taken by pregnant women. We’ll discuss this later in the article.
So what do clinical trials do, exactly?
One of the functions of a clinical trial is to assess whether a treatment is effective. This means most clinical trials aim to answer a specific question (normally called an “a priori” question) about the effectiveness of a given medical treatment. Questions scientists might ask include: “does a specific medical treatment reduce the average duration of a specific illness?”, or “does this treatment increase the average longevity / lifespan of sufferers of a specific illness?”. Clinical trials follow a specific protocol that controls the required demographic make-up of the participants, the severity of disease that must be present in the subjects, and the size of effect that would allow the treatment to be considered effective.
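To make the idea of a trial protocol concrete, here is a minimal Python sketch of how eligibility criteria might be screened. Every criterion and field name in it is hypothetical – real protocols are far more detailed – but it illustrates how a protocol turns demographic and disease requirements into enrolment decisions.

```python
# Hypothetical eligibility screen for a trial protocol (illustrative only).
# Every criterion and field name below is invented for demonstration.

def is_eligible(participant: dict) -> bool:
    """Return True if the participant meets the protocol's inclusion criteria."""
    return (
        18 <= participant["age"] <= 65                                # demographic range
        and participant["diagnosis_confirmed"]                        # targeted disease present
        and participant["disease_severity"] in ("mild", "moderate")   # degree of disease
        and not participant["pregnant"]                               # common exclusion criterion
    )

candidates = [
    {"age": 42, "diagnosis_confirmed": True, "disease_severity": "mild", "pregnant": False},
    {"age": 71, "diagnosis_confirmed": True, "disease_severity": "severe", "pregnant": False},
]
enrolled = [c for c in candidates if is_eligible(c)]
print(f"{len(enrolled)} of {len(candidates)} candidates meet the protocol criteria")
```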
The a priori question will often concern the alleviation of unpleasant symptoms of an infectious disease or a condition. It’s a common misconception that medications cure an underlying condition – in many cases they don’t; instead, they may alter or reduce one or more of the symptoms associated with it. When you take paracetamol for a fever, for example, the paracetamol doesn’t cure the underlying condition that has caused the fever, such as a flu virus. It lowers the body temperature directly, which may reduce the suffering caused by the infection.
A priori questions are normally answered by comparing two groups of trial participants: one is given the treatment being trialled, and the other – called the “control group” – is not given the active treatment. A control group is usually administered placebo pills that look identical to the treatment being tested, as this helps mitigate any psychological effects that could arise if subjects knew whether or not they had received the trialled medication. Outcomes for the two groups are then recorded to ascertain whether the treatment has had any impact. In some instances, it is not possible to have a reliable control group because patients can consciously notice a physical change as a result of taking the trialled treatment. A notable example of this was the trials of beta-blocking drugs, which lower blood pressure – among trial participants, these caused a perceivable fall in heart rate.
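As an illustration of how such a comparison might look, the sketch below uses a two-sample t-test – one common statistical tool for this kind of question – on entirely made-up recovery times. Nothing here reflects real trial data, and real analyses are pre-specified in the protocol and considerably more sophisticated.

```python
# Illustrative comparison of a treatment group against a placebo control.
# The illness durations below are invented numbers, not real trial data.
from scipy import stats

treatment_days = [5, 6, 4, 7, 5, 6, 5, 4, 6, 5]  # days ill, treatment group
placebo_days = [7, 8, 6, 9, 7, 8, 7, 6, 8, 7]    # days ill, placebo group

# A two-sample t-test asks: is the difference in mean duration between
# the groups larger than we would expect from chance alone?
t_stat, p_value = stats.ttest_ind(treatment_days, placebo_days)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value is evidence the treatment changed the average duration.
```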
There are some exceptional examples of clinical trials where control groups may not be used – such as trials of treatments for terminal cancer. When there is an urgent life-or-death situation, some of the usual protocols can be relaxed, in part for ethical reasons. It would be deemed unethical to withhold medical treatment from a terminally ill patient – which a control group would require – so in some rare instances control groups are not used. In some circumstances, a trialled drug will instead be compared in effectiveness with another product that is already on the market, but this usually takes place long after safety tests have been carried out.
The “gold standard” in modern clinical trials is the “randomised double-blind trial”. “Randomised” refers to a trial where the test subjects have been assigned to a control (placebo) group or a treatment group at random, and anonymously. “Double-blind” refers to a situation where the staff dispensing the medication do not know whether they are administering the placebo (a dud pill that has no active effect) or the active medication. This is because the behaviour of physicians or other staff is thought to change if they know which group they are treating, which could bias the results of the trial. Double-blind trials try to prevent this, but as with the beta-blocking drugs mentioned above, effective blinding is not always possible.
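To show what randomisation and blinding mean mechanically, here is a small Python sketch of one possible allocation scheme. The participant IDs and group sizes are invented; real trials use dedicated randomisation systems, often with stratification and sealed allocation keys.

```python
# Sketch of randomised, blinded allocation (illustrative only).
import random

participants = [f"P{i:03d}" for i in range(1, 9)]  # anonymised participant IDs

# Randomise: shuffle, then split evenly between the two arms.
random.shuffle(participants)
half = len(participants) // 2
allocation_key = {pid: "treatment" for pid in participants[:half]}
allocation_key.update({pid: "placebo" for pid in participants[half:]})

# Blinding: dispensing staff see only participant IDs and identical-looking
# pills; the allocation key stays sealed until the trial is unblinded.
dispensing_list = sorted(allocation_key)
print("Staff see:", dispensing_list)
print("Sealed key:", allocation_key)  # held by the unblinded statistician
```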
In addition to collecting efficacy information, clinical trials also aim to collect important information about the way the medical treatment operates in the body, and any negative side effects or toxicity that may occur once a person has received the treatment. In short, scientists need to work out whether the treatment is safe. In the jargon of clinical trials, the way a drug acts on the body is referred to as “pharmacodynamics”, and is distinct from “pharmacokinetics”, which describes how the body handles the drug – its absorption, distribution, metabolism and excretion.
This toxicity information is (under usual circumstances) collected first by testing medication on a range of animals, and then by trialling the medication on healthy adult human subjects. A great deal of data are usually collected in these stages of drug development, long before any medication is ever trialled in those who are sick.
It is a strict legal requirement to have completed defined tests in animals before human trials begin. Animal testing of medical treatments is naturally controversial, but we’ll tackle that topic another time.
A Little History
The earliest medical testing can be dated as far back as the early 1500s, when the Royal College of Physicians was founded in London, but medical testing was not subject to regulation or legislation in the UK until Parliament passed the Medical Act in 1858 and the Pharmacy Act in 1863.
Medical ethics as we know it today arose in response to crimes committed mainly during the Second World War. The principles of informed consent and harm prevention were created in the wake of horrific and unethical experiments conducted by the Nazis and by the Japanese authorities in the early 1940s. During the Nuremberg trials, in which prominent Nazis were tried for war crimes and their role in the Holocaust, the Nuremberg Code (1947) was drawn up in an attempt to prevent future unethical experiments. The Code can be read here.
The Allied nations who defeated Nazism were similarly complicit in unethical experimentation in the same period, and even after the creation of the Nuremberg Code, though this is rarely covered in history textbooks. In the United States, beginning in 1932, 600 impoverished African American men in Alabama were told they were receiving free healthcare from the US Federal Government, when in reality researchers – in a study run by the US Public Health Service in association with the Tuskegee Institute – were observing the natural course of untreated syphilis in those participants who already had the disease. The men were never told their diagnosis and were denied treatment with penicillin, despite it being an effective and available cure from the 1940s onwards. The study continued until 1972. The victims of the experiment received no official apology until 1997.
While medical ethics as we currently understand it dates back to the 1940s, a major overhaul in testing protocols took place in the 1960s, in response to the Thalidomide Disaster. Thalidomide is a drug that was marketed in the late 1950s and early 1960s as an anti-anxiety medication and as a treatment for morning sickness. Its use in pregnant women led to the birth of over 10,000 children with severe deformities across 46 countries, and to many miscarriages.
The Thalidomide Disaster did not unfold in the United States because an FDA medical reviewer named Frances Oldham Kelsey withheld a licence for the medication in that country. Her decision, now broadly regarded as heroic, is commonly referenced in pharmacology and medical ethics university courses for this reason.
The thalidomide crisis led to a change in the way medications are prescribed. In 1968 the UK Parliament passed the Medicines Act, which restricted which medications could be prescribed by doctors for specific ailments.
The Thalidomide Disaster also sparked a major change in the way drugs are tested: drugs must now be shown to be safe for fertile women and during pregnancy. One change was that new medications must undergo testing in pregnant rabbits, one of the few animals besides humans in which thalidomide is known to produce foetal abnormalities in offspring when administered during pregnancy.
In 1964, the Nuremberg Code was superseded by The Declaration of Helsinki. The Declaration of Helsinki is now the modern standard for medical ethics, though it has undergone various revisions since its inception. The Declaration can be read here.
Different stages of a trial
There are three major phases of clinical trials.
Phase I is designed to explore the pharmacology, pharmacokinetics and pharmacodynamics of the treatment in healthy adult humans (not patients). This phase tries to establish how humans respond to the medication. These trials normally involve a small number of participants – usually under 100, and sometimes far fewer.
Phase II begins to explore the efficacy of the treatment in humans who are suffering from the targeted ailment. This phase involves a larger number of people than Phase I – usually between 100 and 500 – and includes an early safety evaluation and an attempt to work out the most effective dose.
Phase III involves a large number of people, normally more than 1,000. Phase III involves a further assessment of efficacy and safety, and is the major determinant for granting a product licence.
Some research papers may refer to “Phase 0” (which refers to pre-human experimentation, both in vitro and in non-human animals), or “Phase IV”, which refers to further safety assessment that is ongoing after a treatment has been granted a licence and is being used in the wider population.
How does this relate to COVID-19? And how are vaccines tested?
Two types of clinical trial are being conducted in response to COVID-19. The first type comprises trials aiming to find a treatment for the disease. These trials are unusual in that they involve medications that have already been granted a licence for some other condition, so the usual safety tests in animals and healthy humans do not need to be repeated. These trials are being overseen by the World Health Organisation: more information can be found here. (The extent of animal testing and of findings in healthy volunteers varies according to the practices that were current when the original product licence was awarded.)
The second type of clinical trial tests proposed vaccines against the virus that causes COVID-19. Vaccine testing follows the same basic principles as testing for other medical treatments, but with a few differences.
Safety testing in animals takes on a whole new emphasis when it comes to vaccine development. For vaccines, testing in non-human primates is usually of paramount importance, as their immune systems most closely resemble our own.
As with any clinical trials, the first tests in humans are concerned with patient safety more than the effectiveness of the treatment. But while these safety tests are conducted, researchers will also look for signs that the vaccine has led to the production of antibodies against the virus in those who have received it. To become immune to a virus, the body must produce neutralising antibodies consistently.
These antibodies can take considerable time to develop, and in some cases the body will begin to produce antibodies but then stop after a period of time. Testing of vaccines therefore has a dimension other medical treatments do not: how can the proposed vaccine best establish long-term immunity? In some cases, multiple doses of the vaccine must be administered at specific intervals to ensure lasting immunity; in others, a single dose is sufficient. These questions about timing and dose are what researchers in the UK and abroad will be trying to answer while these early clinical trials take place.
These steps all take a long time, which is why any future COVID-19 vaccine may not be with us for another year at least. It is not unusual for vaccine development to take 15 years or more. The costs involved in finding effective vaccines are immense: they can range from hundreds of millions of dollars, to several billion. Some viruses, such as HIV, have never led to a viable vaccine despite a widespread international effort to create one. Indeed, the vast majority of vaccines or medications do not make it onto the market as they fail at some stage prior to the award of a licence.
But there’s hope ahead. Phase I clinical trials for a COVID-19 vaccine began in the UK last week, and are also being conducted in several other countries. Testing these vaccines will take time. In the interim, the best thing you can do to protect yourself and your loved ones is to follow public health advice and stay at home.