Clinical Trials Cannot Catch All Side Effects of Drugs; Should We Worry?

This week, an article in JAMA reported that about a third of all new drugs approved by the FDA in the ten-year period from 2001 to 2010 had "safety events" after approval, which led to about 1% of the drugs being withdrawn from the market. This number may be concerning to some, but a careful review of the context shows that this information is clinically less significant than it appears. On the other hand, it points to a remarkable fact: more than two-thirds of the drugs did not show any new safety issue, and 99% of the drugs were allowed by the FDA to stay on the market despite the safety events. That points to the incredible robustness of the clinical trial system. The findings in this report are identical to those of a previous survey of approvals in Europe during the same time period.

The safety of a drug should not be looked at as an independent parameter but in the context of the benefits of treatment with that drug. In the years 2001-2010, under the laws in effect at the time, each new product had to demonstrate safety and efficacy in multiple clinical trials before it was approved. It is well accepted that clinical trials can detect only the major safety issues of the drug being tested, with additional safety concerns to be found post-approval. Clinical trials represent a closed system for evaluating drugs: patients in a trial are carefully selected to increase the odds of finding higher overall effectiveness and fewer safety concerns; the investigators interact vigorously with the trial participants to ensure that almost all safety concerns are addressed well before they become an issue; and mountains of data covering most conceivable aspects of the drug are presented to the regulators for detailed review. Once approved, the drugs are available to anyone, and hence the clinical experience with the product is much more valuable for making treatment decisions than the data used to get the product approved in the first place.
However, after approval, there is little incentive for companies to collect robust post-marketing data unless required by the regulators; hence, post-approval, only major safety events are captured, and it is likely that the number of new adverse events is much higher than reported. New adverse events are generally not found until about four years post-approval, with the earliest events appearing around 2.5 years post-approval. This makes sense given the usual life of a new product in the market: it generally takes at least two years before a product reaches its full market potential, i.e., gets prescribed to a reasonably large patient population, leading to increased amounts of user experience.

Since 2010, newer regulatory pathways such as breakthrough therapy designation, and a more aggressive FDA willing to approve products with fewer clinical trials, have led to products being approved with less robust clinical trial data. However, such aggressive approvals are limited to unmet medical needs and life-threatening conditions, where the efficacy of the product trumps its potential safety risks: the patients have no other options, and the lack of any treatment is worse than a treatment with a few unknown side effects.

A more useful survey would evaluate how often a product's efficacy in its post-approval prescription life matches or exceeds what was seen in the clinical trials used to approve it. That would be a much harder survey, as efficacy signals are not captured the same way safety signals are. That FDA-approved products have unknown safety issues is a well-accepted and unsurprising fact; whether new products benefit patients over previously approved treatments is the more interesting, and probably more usable, information.
