
The Changing Practice of Pharmacovigilance

PV is on the road to becoming a leader, not a follower, in leveraging drug lifecycle information to support clinical trials and inform clinical care

As a science, pharmacovigilance has leaned more passive than proactive. Regulators wait for a patient taking a prescribed medication to 1) experience and then 2) report an unwanted event before anything can be corrected. But because an event that occurs is not necessarily reported, the approach has always had a problem of scale.

The push to make pharmacovigilance proactive began in the mid-2000s. Researchers at Inserm in Paris, for example, used machine learning to develop data mining tools for automated signal generation.1 Then, around 2010, the Observational Medical Outcomes Partnership (OMOP) went live; this open-source common data model helped standardize the structure and content of observational health data. In the early 2010s, an FDA scientific committee wondered aloud whether queries made on social media about drug safety events would correlate with information in the FDA's adverse event reporting database.2 Soon after, researchers at Oracle began wondering the same thing.

Medical Device & Technology spoke with Bruce Palsulich, vice president, safety product strategy, Oracle Health Sciences, and Michael Fronstin, global head, clinical, Cerner Enviza, now part of Oracle.

MDT: Please discuss industry’s changing attitude towards the purpose of pharmacovigilance.

Bruce Palsulich: Decades ago, monitoring drug safety was just a regulatory obligation. Pharma focused on adverse events reported during the trial; companies didn't want people mining for things that weren't necessarily reported. There was no real effort to go beyond the minimum required to be compliant.

Bruce Palsulich, vice president, safety product strategy, Oracle Health Sciences

MDT: And what is industry’s perception now of pharmacovigilance?

BP: It is considered a highly curated, if expensive, source of accessible information. We want to learn from all sources of information and gather new insights, such as risks to patients. We also want to help refine our development strategies. Most companies are looking for ways to accelerate the pharmacovigilance process.

MDT: Please discuss Oracle’s early research into pharmacovigilance.

BP: In the mid-2010s we loaned one of our team members to Microsoft. They used search histories and logs to identify early signals of adverse drug reactions.3 Another study looked at preclinical profiles of 2,134 approved drugs. We wanted to learn about potential adverse event risks if preclinical assets were further developed, and how that could change priorities and targets.4 In that study, the team identified 221 statistical associations between targeted proteins and unwanted events. There is also work being done to look at certain populations and see whether they should be included in or excluded from trials based on historical evidence coming out of post-market or real-world experience. The interest here is leveraging pharmacovigilance datasets across the clinical trial lifecycle.
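
For a concrete sense of how such target-to-event associations can be screened, the minimal sketch below runs Fisher's exact test over 2x2 tables of drugs (hits a target vs. not, event reported vs. not). The drugs, targets, and events are invented, and a plain contingency-table test stands in for the machine learning approach described in the cited study; this is not Oracle's actual pipeline.

```python
# Minimal sketch: screen (protein target, adverse event) pairs for association
# across a set of approved drugs using Fisher's exact test on 2x2 counts.
# All data below are invented for illustration.
from itertools import product
from scipy.stats import fisher_exact

# Which drugs hit which targets, and which events have been reported for them.
drug_targets = {"drugA": {"hERG"}, "drugB": {"hERG"}, "drugC": set(), "drugD": set()}
drug_events = {"drugA": {"QT prolongation"}, "drugB": {"QT prolongation"},
               "drugC": {"nausea"}, "drugD": set()}

targets = {t for ts in drug_targets.values() for t in ts}
events = {e for es in drug_events.values() for e in es}

for target, event in product(targets, events):
    both = sum(target in drug_targets[d] and event in drug_events[d] for d in drug_targets)
    target_only = sum(target in drug_targets[d] and event not in drug_events[d] for d in drug_targets)
    event_only = sum(target not in drug_targets[d] and event in drug_events[d] for d in drug_targets)
    neither = sum(target not in drug_targets[d] and event not in drug_events[d] for d in drug_targets)
    odds_ratio, p_value = fisher_exact([[both, target_only], [event_only, neither]])
    print(f"{target} ~ {event}: p = {p_value:.3f}")
```

With real data, the resulting p-values would also need correction for the many thousands of target-event pairs tested.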

MDT: That requires a lot of data.

BP: We now have footprints from after the point of care, for both inpatient and outpatient settings. Using these footprints, we might be able to feed information back to improve care, or to intervene before a patient is harmed rather than after the fact.

Michael Fronstin, global head, clinical, Cerner Enviza

MDT: Please explain the expanded footprint.

Michael Fronstin: Two and a half years ago, Cerner began the Learning Health Network. We recruited health systems of all sizes, from big academic systems to 80-bed community health centers. There are now 105 members with 100 million patients. It is free to join. Members contribute de-identified EHR data, which can be shared across the network. They can do their own research in exchange for giving us data rights to do research on these datasets. We can use this network to help customers with their trials. We are also creating a network in Europe. Once we go outside the US, it will be interesting for the industry.

MDT: What is the goal here?

MF: To capture data from many sources, whether claims data, survey data, pharmacy data, whatever it is. The more sources there are, the more varied and complex the information becomes, and the more insights you can draw. Close to 40% of the Cerner data are linked at the patient level.
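
At its core, the patient-level linkage described here is a join on a shared de-identified patient key. The pandas sketch below uses invented column names and records; it does not reflect the Learning Health Network's actual schema or linkage method.

```python
# Minimal sketch: link EHR, claims, and pharmacy records on a de-identified
# patient key. All identifiers and values are invented for illustration.
import pandas as pd

ehr = pd.DataFrame({"patient_key": ["p1", "p2"], "diagnosis": ["migraine", "hypertension"]})
claims = pd.DataFrame({"patient_key": ["p1", "p3"], "claim_code": ["G43.9", "I10"]})
pharmacy = pd.DataFrame({"patient_key": ["p1", "p2"], "dispensed_drug": ["sumatriptan", "lisinopril"]})

# Outer joins keep patients seen in only some sources; the fully linked rows
# (present in all three) are what enable the richer cross-source insights.
linked = (ehr.merge(claims, on="patient_key", how="outer")
             .merge(pharmacy, on="patient_key", how="outer"))
print(linked)
```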

MDT: Pharmacovigilance has its own dictionary, correct? How are you standardizing the various terminologies in all these databases?

BP: MedDRA (the Medical Dictionary for Regulatory Activities) has roughly 30,000 unique terms that roll up to system organ classes, such as central nervous system, cardiovascular, or neurologic disorders, but healthcare and claims terms are different. So we figure out how the terms relate (the EHR might say headache, while the prescription data says sumatriptan) and then we standardize the codes.
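
As an illustration of that harmonization step, the sketch below maps an EHR complaint and a dispensed drug onto a shared MedDRA preferred term using tiny, invented lookup tables; real mappings draw on full vocabularies (MedDRA, RxNorm, and the like) rather than hand-written dictionaries.

```python
# Minimal sketch: normalize records from different sources to a common
# MedDRA preferred term. The lookup tables are invented stand-ins for
# real terminology mappings.
from typing import Optional

EHR_TERM_TO_MEDDRA = {"headache": "Headache", "migraine": "Migraine"}  # EHR complaint -> MedDRA preferred term
DRUG_TO_MEDDRA = {"sumatriptan": "Migraine"}                           # dispensed drug -> implied MedDRA preferred term

def to_meddra(record: dict) -> Optional[str]:
    """Map an EHR or pharmacy record to a MedDRA preferred term, if the lookup knows it."""
    if record["source"] == "ehr":
        return EHR_TERM_TO_MEDDRA.get(record["term"].lower())
    if record["source"] == "pharmacy":
        return DRUG_TO_MEDDRA.get(record["drug"].lower())
    return None

print(to_meddra({"source": "ehr", "term": "Headache"}))          # Headache
print(to_meddra({"source": "pharmacy", "drug": "sumatriptan"}))  # Migraine
```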

MDT: What other information sources are you mining for adverse events?

BP: An estimated 60% of the data considered valuable sit in unstructured clinical notes. We have a project looking at what additional value we can garner regarding events if we add in clinical notes. There are certain things that aren't visible in structured data but become visible once you mine the clinical notes further. That is a big emphasis right now.
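
To show why notes surface events that structured fields miss, here is a deliberately naive dictionary-matching sketch; production clinical NLP handles negation, context, and coding to MedDRA, none of which this toy example attempts.

```python
# Minimal sketch: flag candidate adverse-event mentions in a clinical note by
# matching against a tiny, invented lexicon. No negation or context handling.
import re

AE_LEXICON = {"rash", "dizziness", "chest pain", "nausea"}

def find_candidate_events(note: str) -> set:
    """Return lexicon terms mentioned anywhere in the note."""
    text = note.lower()
    return {term for term in AE_LEXICON if re.search(rf"\b{re.escape(term)}\b", text)}

note = "Patient reports new rash and mild dizziness after starting therapy; denies chest pain."
print(find_candidate_events(note))
# {'rash', 'dizziness', 'chest pain'} - naive matching also flags the negated 'chest pain'
```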

MDT: Federal regulations that are enacted to protect consumers and patients are often disaster-generated—thalidomide and Tuskegee come to mind.5 Is it possible that the Vioxx and Bextra disasters in the early 2000s helped prompt a more active view of pharmacovigilance?

MF: I think that is a fair assessment. Suppose that Merck and Pfizer (makers of Vioxx and Bextra, respectively) had performed real-world monitoring of these painkillers post-approval. Merck could have learned about the dosing problem blamed for the heart attacks,6 and Pfizer about the rare skin reaction problem. They could have gone to the FDA, reported the contraindications for these subpopulations, and maybe gotten a black box warning. Vioxx was helping so many people; Merck was making a billion dollars two years after approval. Everybody lost in the process.

References

  1. Henegar C, Bousquet C, Lillo-Le Louët A, Degoulet P, Jaulent MC. A knowledge based approach for automated signal generation in pharmacovigilance. Stud Health Technol Inform. 2004;107(Pt 1):626-30. PMID: 15360888.
  2. Laporte JR. Pharmacoepidemiol Drug Saf. 2016;25:725-732. Published online January 22, 2016, in Wiley Online Library (wileyonlinelibrary.com).
  3. White RW, Wang S, Pant A, et al. Early identification of adverse drug reactions from search log data. J Biomed Inform. 2016;59:42-48.
  4. Ietswaart R, Arat S, Chen AX, et al. Machine learning guided association of adverse drug reactions with in vitro target-based pharmacology. EBioMedicine. 2020;57:102837.
  5. Milestones in U.S. Food and Drug Law. FDA. https://www.fda.gov/about-fda/fda-history/milestones-us-food-and-drug-law
  6. Garner SE, Fidan DD, Frankish RR, Judd MG, Towheed TE, Wells G, Tugwell P. Rofecoxib for rheumatoid arthritis. Cochrane Database Syst Rev. 2005 Jan 25;2005(1):CD003685. doi: 10.1002/14651858.CD003685.pub2. PMID: 15674912; PMCID: PMC8725608.

Interview conducted by Christine Bahls, a freelance journalist covering medical, clinical trials, and pharma information.

© 2024 MJH Life Sciences

All rights reserved.