Crowdsourced science refines AI prediction of clinical trial outcomes

Think of the R&D dollars that could be saved if artificial intelligence (AI) modelling could tell you at an early stage whether a drug was likely to succeed in clinical trials, and ultimately reach the market.

In 2019, a team at Massachusetts Institute of Technology (MIT) in the US came up with just such a model, and found that it was able to perform “better than random” at predicting the outcome of a clinical trial or development programme.

Professor Andrew Lo, director of the MIT Laboratory for Financial Engineering, worked on the initial machine learning (ML) model, which used drug development and clinical trial data harvested from the Informa Pharma Intelligence database, including drug compound characteristics, clinical trial design, previous trial outcomes, and the sponsor's track record.

The results were interesting, but the team wanted to do better. Shortly after the work was published, the researchers teamed up with Novartis to challenge others to improve the predictive power of the model in a form of scientific crowdsourcing.

The hope was that, by using a modified algorithm or layering in more data sources, for example, the model could be made more effective at spotting programmes that should be dropped, allowing R&D investment to be redirected to more promising projects.

Now the results of the Data Science and Artificial Intelligence (DSAI) challenge are in, and two teams have been announced as the winners out of a competitive field spanning 50 groups, 300 individuals, and some 3,000 models.

“Our goal in collaborating with Novartis was to validate key features previously found to be associated with regulatory approval, and to learn from industry experts about new features that can improve on our forecasts,” according to Lo.

The winning team relied on “handcrafted” features that incorporated their own insights into drug development timelines and which data entries should be discarded, according to MIT.

They found that one of the strongest predictors of approval was patient accrual in phase 2 trials relative to the average for that disease, and that prior approval for any indication, past approvals of other drugs for similar indications, and well-established mechanisms of action all improved the odds of getting a drug to market.

Strong indicators of failure included targeting therapeutic areas that have historically been tough to crack – like cancer or Alzheimer’s disease – as well as trial termination, poor patient enrolment, and the absence of an international non-proprietary name for a drug.
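To make the idea concrete, here is a minimal sketch of how handcrafted features like those described above might feed a simple approval classifier. The feature names, toy data, and logistic-regression setup are all illustrative assumptions for this example, not the DSAI challenge data or the winning team's actual model.

```python
import math

# Hypothetical handcrafted features, loosely inspired by the predictors
# reported above (phase 2 accrual vs. disease average, prior approvals,
# mechanism-of-action familiarity). These names are invented for illustration.
FEATURES = [
    "phase2_accrual_vs_disease_avg",   # ratio, e.g. 1.2 = 20% above average
    "prior_approval_any_indication",   # 0 or 1
    "similar_indication_approvals",    # count of approved similar drugs
    "established_mechanism",           # 0 or 1
]

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def predict(weights: list, bias: float, x: list) -> float:
    """Estimated probability of regulatory approval for one programme."""
    z = bias + sum(w * xi for w, xi in zip(weights, x))
    return sigmoid(z)

def train(data: list, labels: list, lr: float = 0.1, epochs: int = 500):
    """Plain stochastic-gradient-descent logistic regression."""
    w = [0.0] * len(data[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):
            err = predict(w, b, x) - y   # gradient of log-loss w.r.t. z
            b -= lr * err
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w, b
```

Trained on toy examples where strong phase 2 accrual and prior approvals mark approved programmes, the model ranks a well-accrued candidate above a poorly accrued one; real models of this kind, of course, depend on large curated trial databases rather than handcrafted toy rows.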

“The DSAI challenge highlights the promise of crowdsourcing in developing new predictive models, as well as the opportunity to develop more accurate models with additional data and a broader pool of challenge participants,” said Lo.

A report issued recently by Deloitte found that the average cost of bringing a new medicine to market has risen for the seventh year in a row, to $2.44 billion, driven by the growing complexity of development and longer overall R&D timelines.
