Automatically Labeling $200B Life-Saving Datasets: A Large Clinical Trial Outcome Benchmark (2024)

Chufan Gao
Department of Computer Science
University of Illinois Urbana-Champaign
chufan2@illinois.edu
&Jathurshan Pradeepkumar
Department of Computer Science
University of Illinois Urbana-Champaign
jp65@illinois.edu
&Trisha Das
Department of Computer Science
University of Illinois Urbana-Champaign
trishad2@illinois.edu
&Shivashankar Thati
University of Illinois Urbana-Champaign
ssthati@outlook.com
&Jimeng Sun
Department of Computer Science
Carle Illinois College of Medicine
University of Illinois Urbana-Champaign
jimeng@illinois.edu
Equal contribution

Abstract

The global cost of drug discovery and development exceeds $200 billion annually. The main results of drug discovery and development are the outcomes of clinical trials, which directly influence the regulatory approval of new drug candidates and ultimately affect patient outcomes. Despite their significance, large-scale, high-quality clinical trial outcome data are not readily available to the public. Given such a dataset, machine learning researchers could develop accurate prediction models using past trials and outcome labels, which could help prioritize and optimize therapeutic programs, ultimately benefiting patients. This paper introduces the Clinical Trial Outcome (CTO) dataset, the largest trial outcome dataset with around 479K clinical trials, aggregating outcomes from multiple sources of weakly supervised labels, minimizing the noise from individual sources, and eliminating the need for human annotation. These sources include large language model (LLM) decisions on trial-related documents, news headline sentiments, stock prices of trial sponsors, trial linkages across phases, and other signals such as patient dropout rates and adverse events. CTO’s labels show unprecedented agreement with supervised clinical trial outcome labels from the test split of the supervised TOP dataset [11], with a 91 F1 score.

1 Introduction

A clinical trial is an indispensable step toward developing a new drug, involving human participants to test the drug’s efficacy and safety for treating target diseases. In 2022, drug discovery and development spending reached 244 billion dollars globally [38]. Within this, the clinical trial market reached $44.3 billion in 2020 and is expected to grow to $69.3 billion by 2028 [33]. Low efficacy, safety issues, and poor trial protocol design can lead to trial failures [10, 37, 25]. Eroom’s Law (the reverse of “Moore’s Law”) shows that the number of new FDA-approved drugs per billion US dollars of R&D spending has halved approximately every nine years since 1950, even after adjusting for inflation [35]. Given these challenges, predicting trial outcomes in silico—using computational methods—could significantly enhance drug discovery efficiency.

Surprising challenge: Despite the significant effort and resources invested in clinical trials, it is surprising that trial outcomes are not readily available for all trials. While some trial results have been published, many do not have publications available, and connecting the trials across different phases is also challenging. This lack of high-quality trial outcome labels presents a major obstacle in creating predictive models related to trial outcomes, which could potentially optimize the drug development process and improve patient outcomes. Furthermore, the FDA does not release the clinical trial ID (NCTID) in documents of approved drug applications.

Public data sources, such as the ClinicalTrials.gov database with more than 400,000 historical trials [34, 46, 47], provide vital information for identifying trial outcome labels. Other valuable resources include the Food and Drug Administration (FDA) National Drug Code (NDC) directory [9], which offers a comprehensive set of drug approvals and their codes, and the DrugBank [44, 23] database, which contains biochemical descriptions of many drugs and their indications. However, these resources are not connected, and the lack of direct links between clinical trials, drug application processes, and different phases of drug interventions makes it difficult to obtain clear trial outcome labels. The absence of a centralized, easily accessible database that consolidates clinical trial outcomes, drug approvals, and intervention phases poses a significant challenge for researchers and drug developers. This fragmented landscape of information hinders the development of accurate predictive models and can slow down the drug discovery and development process.

Trial Outcome Definitions Clinical trial outcomes are multifaceted and have diverse implications. These outcomes can involve meeting the primary endpoint as defined in the study, advancing to the next phase of the trial, obtaining regulatory approval, impacting the financial outcome for the sponsor (either positively or negatively), and influencing patient outcomes such as adverse effects and trial dropouts. Our paper follows the previous conventions [11, 41, 10, 24, 1] and defines the trial outcome as a binary indicator, showing whether the trial achieves its primary endpoints and can progress to the next stage of drug development. For Phase 1 and 2 trials, success may mean moving to the next phase, such as from Phase 1 to Phase 2, and from Phase 2 to Phase 3. In Phase 3, success is measured by regulatory approval.

Recently, initial efforts have been made to forecast specific aspects of clinical trials to enhance their outcomes. These efforts include employing electroencephalographic (EEG) measures to predict the effects of antidepressant treatments [30], optimizing drug toxicity predictions based on drug and target properties [16], and using phase II trial results to anticipate phase III trial outcomes [29]. Additionally, there is a growing interest in developing comprehensive methods for predicting trial outcomes. Examples include predicting drug approvals for 15 disease groups by analyzing drug and clinical trial features using classical machine learning techniques [24], using multimodal drug structure and text information to predict outcomes based on a supervised set of data [11], an algorithm that computes the probability of technical success by asking experts a standardized questionnaire [43], and multimodal trial outcome prediction via omics, text, clinical trial design, and small molecule properties [1]. Despite these efforts, several limitations still impede the utility of existing trial outcome prediction models, chief among them the lack of transparency in the clinical trial labeling process. Fu et al. [11] is one example of a publicly available expert-curated clinical trial dataset. However, it is quite limited because it only contains 17,538 human-labeled interventional small-molecule drug trials, out of around 400,000 total trials. To date, we are unaware of any other large-scale, publicly available, open-source (fully reproducible) effort to compute trial outcome labels.

We state our contributions as follows:

  • We propose CTO, the first large-scale, publicly available, open-sourced, and fully reproducible dataset of clinical trial outcomes derived from multiple sources of weakly supervised labels, including trial phase linkages, LLM interpretations of trial-related publications, news headline sentiments, stock prices of trial sponsors, and other trial metrics.

  • CTO demonstrates significant agreement with published trial outcome results, achieving a 94 F1 score on Phase 3 trials and a 91 F1 score across all phases when compared to human-annotated clinical trial outcomes.

  • We provide all code (https://github.com/chufangao/CTOD/) and data in a reproducible and easily extendable format, allowing for calculating predicted labels for new trials. Additionally, we aggregate trial-related data, such as ICD coding, drug mapping, and publications, facilitating secondary applications and benchmarking current outcome prediction models.

[Figure 1]

1.1 Related Work

Table 1: Summary of recent clinical trial outcome datasets.

| Dataset | Data Sponsor | Subset | # Trials | Labeling Method | Publicly Available |
|---|---|---|---|---|---|
| Lo et al. [24] | TrialTrove, Pharmaprojects | Phase II, III | 19,136 | Human Expert | No |
| Feijoo et al. [7] | Biomedtracker | Industry Phase II, III | 6,417 | Manual linking | No |
| Aliper et al. [1] | Insilico Medicine | Phase II | 55,653 | Biomedical KG, Trial stats | No |
| Willigers et al. [43] | AstraZeneca | AstraZeneca Phase III | 57 | Human Expert | No |
| TOP [11] | IQVIA | Small Molecule Drugs | 17,538 | Human Expert | Yes |
| CTO (ours) | Publicly Collected | All | 479,761 | Publications, News, Trial linking, Stocks, Trial stats, etc. | Yes |

Predicting clinical trial outcomes is often led by industries with the capacity for extensive data curation. Informa’s TrialTrove [39] is widely used, containing around 20,000 trials [24]. AstraZeneca has developed structured feedback forms to improve Phase 3 trial success annotations [43]. Feijoo et al. [7] demonstrated that using Random Forest on Biomedtracker, a proprietary dataset aggregating company reports, results in strong outcome prediction performance.

Previous studies have tackled clinical trial outcome prediction using various methods. Early work employed statistical analysis [26] and ML models on limited private data sets (<500 samples) [6]. In drug toxicity prediction, Gayvert et al. [12] used Random Forest to predict outcomes based on chemical properties, while Artemov et al. [3] used Multilayer Perceptrons for Phase I/II trials. Lo et al. [24] applied KNN imputation and Random Forest on features from Pharmaprojects and TrialTrove. Additionally, Phase 2 to Phase 3 prediction was explored by Qi et al. [29] through clinical trials and by Aliper et al. [1] using experts, GPT-3.5, and a biomedical knowledge graph.

Fu et al. [11] released the first publicly available clinical trial outcome dataset based on manual curation. Train and test splits were defined as all trials completed before and after 2014, respectively. After preprocessing and cleaning, the number of trials across train, validation, and test splits was reduced from 17,538 to 12,465, which was only around 4% of all available trials at the time of publication (and around 3% as of June 2024). Table 1 summarizes recent trial outcome work.

2 CTO Overview

Our main methodology is outlined in Figure 1. We overview our primary sources of outcome predictors: LLM predictions on PubMed abstracts, trial linkage, news headlines, stock prices, and finally, trial metrics computed from clinicaltrials.gov. These signals are computed independently and aggregated via weakly supervised label aggregation.

2.1 LLM Predictions on PubMed Abstracts

PubMed abstracts have been automatically linked to trials by the Clinical Trials Transformation Initiative (CTTI) [5, 40, 2] as well as other efforts [17]. We first extracted all PubMed abstracts for each trial through the NCBI API (https://ncbi.nlm.nih.gov/); statistics of the extracted abstracts are given in Supplementary E.3. These abstracts can be categorized into 1) Background, 2) Derived, and 3) Results. Since we are interested in clinical trial outcomes, we only utilized abstracts in the Derived and Results categories. As many trials had multiple abstracts, we selected the top 2 abstracts based on their title similarity to the trial’s official title to provide the most relevant information.

Given these abstracts, we prompted the ‘gpt-3.5-turbo’ model to summarize important trial-related statistical tests and predict the outcome. Additionally, we prompted the LLM to generate QA pairs about the trial, which are provided in Supplementary C.3 as a supplement to our benchmark. The prompts are provided in Supplementary G.

2.2 Trial Linkage

The journey of a drug from discovery to FDA approval involves several stages, beginning with Phase 1 trials to assess safety and dosage. Subsequent Phase 2 and 3 trials evaluate efficacy and compare the new drug to existing therapies. Upon completing Phase 3, a drug may be submitted for FDA approval. A key limitation of the CTTI dataset is the lack of connectivity between trial phases, which could significantly enhance the ability to analyze trial progression and outcomes based on advancement to subsequent phases. Moreover, linking trials across phases is not straightforward due to challenges including unstructured data, inconsistent reporting standards, missing information, data noise, and discrepancies in intervention details across phases. Despite these challenges, linking trials can be invaluable, particularly as a source of weak labels in clinical trial outcome prediction tasks. This section presents our novel trial-linking algorithm, which, to our knowledge, is the first attempt to systematically connect different phases of clinical trials. The trial linkage extraction process consists of two primary steps: 1) linking trials across different phases, as illustrated in Figure 2B, and 2) matching Phase 3 trials with FDA approvals, as shown in Figure 2C.

Linking of Clinical Trials Across Phases

The progression of clinical trials through phases is not always strictly sequential from Phase 1 to Phase 3, as some studies may combine multiple phases. Many trials were categorized as ‘Not Applicable’ or were missing phase information entirely; these were excluded from our analysis. We created a phase connection map, illustrated in Figure 2A, that covers all phase categories present in the dataset. Our linking algorithm begins with the later phases and traces back to earlier phases. This approach is based on the assumption that a trial in a subsequent phase implies the success and existence of a corresponding trial in a preceding phase. Phase 3 trials are considered a success if a Phase 4 trial is found or if they are linked to an FDA new drug application.

The trial linkage process consists of three main steps: 1) selection of the search space, 2) retrieval of the top K most similar past trials, and 3) prediction of linkage. In Figure 2B, we illustrate an example of linking a Phase 4 trial to its preceding phases. For a given trial $x$, the objective is to identify its predecessor among trials in its directly linked earlier phases. For instance, the directly linked earlier phases of Phase 4 are Phase 3 and Phase 2&3.

[Figure 2]

1. Search space selection: We filter the trials based on their completion dates relative to the start date of $x$, ensuring that any linkage candidate must have concluded before $x$ began. Furthermore, we also consider intervention types such as ‘Drug’, ‘Biological’, ‘Device’, etc.

2. Retrieve top-K: From the filtered search space $\mathbf{Z}$, we retrieve the top-32 most similar past trials to $x$. We extract key features and encode them into dense embeddings using PubMedBERT [14] to represent both $x$ and the trials in the search space ($z^i \in \mathbf{Z}$) as follows: $x = \{x_I, x_C, x_T, x_S, x_E\}$ and $z^i = \{z^i_I, z^i_C, z^i_T, z^i_S, z^i_E\}$, where the subscript $I$ denotes intervention or drug, $C$ denotes condition or targeted disease, $T$ denotes official trial title, $S$ denotes trial summary, and $E$ denotes eligibility criteria.
We calculate similarity as $\text{similarity}(x, z^i) = \sum_{j \in \mathbf{F}} \frac{x_j \cdot z^i_j}{\|x_j\| \, \|z^i_j\|}$, where $\mathbf{F} = \{I, C, T, S, E\}$.

We excluded the lead sponsor as a feature since the sponsor often changes depending on funding, and it performs worse empirically (Appendix F).
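The retrieval step above can be sketched as follows. This is a minimal NumPy sketch assuming per-feature embeddings (e.g., from PubMedBERT) have already been computed; the function names are illustrative, not from the CTO codebase.

```python
import numpy as np

def feature_similarity(x_embs, z_embs):
    """Summed per-feature cosine similarity between a query trial x and a
    candidate trial z, each represented by one embedding per feature
    (intervention, condition, title, summary, eligibility criteria).

    x_embs, z_embs: arrays of shape (n_features, dim)."""
    dots = np.sum(x_embs * z_embs, axis=1)                      # per-feature dot products
    norms = np.linalg.norm(x_embs, axis=1) * np.linalg.norm(z_embs, axis=1)
    return float(np.sum(dots / norms))

def retrieve_top_k(x_embs, candidate_embs, k=32):
    """Rank all candidates in the search space by summed cosine similarity
    and return the indices of the top k."""
    scores = np.array([feature_similarity(x_embs, z) for z in candidate_embs])
    return np.argsort(scores)[::-1][:k]
```

With 32 as K, this yields the shortlist that the cross-encoder then re-ranks.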

3. Predict linkage: Given the large search space, we employ a re-ranking strategy using a cross-encoder pre-trained on MS MARCO [4]. We provide feature pairs as input to the cross-encoder as follows: $\text{score}(x, z^i) = \sum_{j \in \mathbf{F}} g_\theta(x_j, z^i_j)$. Based on the cross-encoder scores, we predict the linkage by considering the trials with the highest positive cross-encoder scores as the most probable previous-phase trials of $x$.
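The re-ranking step can be sketched like this, with the MS-MARCO cross-encoder stubbed out as an arbitrary pairwise scorer (`score_pair` is a hypothetical name; in practice it would wrap a cross-encoder's prediction on a text pair):

```python
def predict_linkage(x_feats, candidates, score_pair):
    """Re-rank retrieved candidates by summing pairwise relevance scores over
    the feature set F = {intervention, condition, title, summary, eligibility}.

    x_feats:    dict mapping feature name -> text for the query trial x.
    candidates: dict mapping candidate trial id -> feature dict.
    score_pair: stand-in for the cross-encoder g_theta; any
                (text_a, text_b) -> float scorer.

    A candidate is linked only if its total score is positive; otherwise
    no predecessor is predicted (returns None)."""
    best_id, best_score = None, 0.0
    for cand_id, z_feats in candidates.items():
        total = sum(score_pair(x_feats[j], z_feats[j]) for j in x_feats)
        if total > best_score:
            best_id, best_score = cand_id, total
    return best_id
```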

We apply this process for all trials in Phase 4, Phase 3, Phase 2 & 3, and Phase 2 to extract the trial linkages. To extract the outcome labels, we start with trials in the earlier phases and label them based on the existence of linked trials in the subsequent phases. However, trials in Phase 3 and Phase 4 have some exceptions to this process. For Phase 4 trials, there are no following trial phases, so we exclude them from the extracted weak labels. As for Phase 3 trials, they can be successful even without the existence of a subsequent Phase 4 trial. This highlights the importance of matching Phase 3 trials with their corresponding FDA approvals if they exist.

Matching Phase 3 trials with FDA approvals

After establishing connections across different phases of clinical trials, we focus on matching the Phase 3 trials to drug approvals to obtain their outcome labels. We utilize the FDA Orange Book (https://fda.gov/drugs/drug-approvals-and-databases/orange-book-data-files), version as of April 2024. Specifically, we use the approval date and drug name provided in the ‘product.txt’ file, as the other files do not contain the relevant information required for the matching process. In this process, we only consider drug-related trials in Phase 3 and Phase 2&3, since the Orange Book solely comprises FDA-approved drugs. For a given FDA-approved drug, we first filter Phase 3 and Phase 2&3 trials based on the approval date and intervention generic name, retaining trials completed between 2 years and 2 months prior to the approval date to align with the FDA approval process timeline. Similar to trial linkage prediction, we provide the drug’s generic name and the trial’s intervention generic name as input pairs to a cross-encoder, predicting their similarity. We select the top 5 trials based on cross-encoder scores and match the FDA approval to the trial with the completion date closest to the approval date. We then update the previously extracted outcomes for Phase 3 and Phase 2&3 trials, labeling them as successful if matched to an FDA approval or if they have a linked Phase 4 trial.
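The approval-matching window and tie-breaking rule might look like the following sketch (the 730- and 60-day constants are simplifying day-count assumptions for "2 years" and "2 months"; `candidate_trials` and `match_to_approval` are illustrative names):

```python
from datetime import date, timedelta

def candidate_trials(approval_date, trials):
    """Keep trials completed between roughly two years and two months
    before the FDA approval date.

    trials: list of dicts with at least a 'completion_date' (datetime.date)."""
    lo = approval_date - timedelta(days=730)   # ~2 years before approval
    hi = approval_date - timedelta(days=60)    # ~2 months before approval
    return [t for t in trials if lo <= t["completion_date"] <= hi]

def match_to_approval(approval_date, shortlisted):
    """Among the cross-encoder-shortlisted trials, pick the one whose
    completion date is closest to the approval date."""
    return min(shortlisted,
               key=lambda t: abs((approval_date - t["completion_date"]).days))
```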

2.3 News Headlines

News headlines were obtained via the following steps:

1. Web Scraping: We sent requests to Google News for headlines regarding the top 1,000 industry sponsors, which accounted for around 80% of the industry-sponsored trials (27,720 trials). Due to rate limitations, we limited our requests to around 1 query every 3-5 seconds. To obtain the widest range of news, we searched each sponsor’s name and obtained up to 100 articles for every month, starting from the sponsor’s earliest clinical trial to the current day. We retrieved a total of 1,115,017 news articles.

2. News Sentiment Classification: We utilize FinBERT [45] to obtain financial news sentiment (‘Positive’ or ‘Negative’ with a confidence score between 0 and 1) for every headline. We drop the ‘Neutral’ sentiment as it is irrelevant to our task.

3. News/Trial Matching: Similar to trial linkage, we filter with a top-K retriever and rerank with a cross-encoder. Trials are encoded as $x = \{x_I, x_C, x_T, x_S\} \rightarrow z^i$, and headlines are encoded as $h^i$. We use PubMedBERT to encode both trials and headlines and follow the steps in trial linkage to obtain relevancy scores. We consider all headline sentiments with relevance scores larger than the mean score and take the mode of the sentiment predictions.
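Step 3's aggregation rule (keep headlines above the mean relevance, then take the sentiment mode) can be sketched as follows; encoding sentiment as 1/0 and abstaining with None are illustrative assumptions:

```python
from statistics import mean, mode

def trial_sentiment(matches):
    """Aggregate FinBERT headline sentiments for one trial.

    matches: list of (relevance_score, sentiment) pairs, with sentiment
    1 ('Positive') or 0 ('Negative').  Keep only headlines whose relevance
    exceeds the mean relevance, then take the mode; return None (abstain)
    when no headline qualifies."""
    if not matches:
        return None
    threshold = mean(score for score, _ in matches)
    kept = [label for score, label in matches if score > threshold]
    return mode(kept) if kept else None
```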

2.4 Stock price

The stock price of a pharmaceutical or biotech company often reflects market expectations. If investors expect positive results, the stock may rise in anticipation of the trial’s completion. Conversely, if expectations are low or if previous trials have been unsuccessful, the stock may not perform as well. We utilized Yahoo Finance (https://pypi.org/project/yfinance/) to collect historical stock market data for companies with publicly available tickers for completed trials. By averaging the stock prices over a specified time frame, the moving average reduces the noise caused by random, short-term price movements, making it easier to identify the underlying trend [19, 36]. A 5-day simple moving average (SMA) of a stock’s price is calculated by taking the average of the closing prices for the last 5 trading days. As shorter periods reveal shorter-term trends, we selected a 5-day SMA to capture the immediate short-term impact of the completion of clinical trials. A positive slope indicates an uptrend, while a negative slope indicates a downtrend in the SMA line [36]. The absolute value of the slope represents the steepness of the trend. We calculated the slope over a 7-day window starting at a clinical trial’s ‘completion date’.
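A minimal sketch of the SMA and slope computation, assuming a list of daily closing prices around the completion date (function names are illustrative):

```python
import numpy as np

def sma(prices, window=5):
    """Simple moving average of closing prices over `window` trading days."""
    prices = np.asarray(prices, dtype=float)
    return np.convolve(prices, np.ones(window) / window, mode="valid")

def trend_slope(prices, window=5, horizon=7):
    """Least-squares slope of the `window`-day SMA over a `horizon`-day span.
    Positive slope = uptrend; its magnitude reflects the trend's steepness."""
    smoothed = sma(prices, window)[:horizon]
    days = np.arange(len(smoothed))
    slope, _ = np.polyfit(days, smoothed, 1)   # fit a line, keep the slope
    return float(slope)
```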

2.5 Clinical Trial Metrics

We utilize the CTTI dataset to obtain preprocessed tables of trial details (e.g., eligibility criteria, statistics, linked references; https://aact.ctti-clinicaltrials.org) [5, 40, 2]. Specifically, we utilize the following information: (1) whether results were reported, (2) the number of sponsors, (3) the number of patients, (4) the patient dropout rate, (5) the number of sites or locations for the trial, (6) whether the p-value < 0.05, (7) the date at which the trial was last updated vs. its completion date, (8) the number of deaths, (9) the number of serious adverse events, (10) the number of any adverse events, (11) the status of the trial, e.g., terminated/withdrawn/completed, and finally, (12) the number of amendments made to the trial page. Please see Appendix D.2 for a discussion of how trial outcomes are linked to these metrics.

Each of these metrics is treated as a weakly supervised Labeling Function (LF). For most of these metrics, we consider the "good" outcome as falling on the favorable side of the median of that metric. For example, in the Serious Adverse Events LF, we predict "1" if a trial’s number of serious adverse events is less than the overall median number of serious adverse events; otherwise, we predict "0". While a carefully tuned threshold could outperform the median, we note that LFs in the data programming framework (described in the next section) do not have to be perfect, only better than random, for data programming to work well.
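A median-threshold labeling function of this kind can be sketched as follows, with -1 used as the abstain label for trials missing the metric (an assumption for illustration):

```python
import numpy as np

def median_threshold_lf(values, success_if_below=True):
    """Turn a per-trial metric into a weak labeling function: label 1
    ('success') when the value falls on the favorable side of the dataset
    median, else 0.  Trials with a missing metric (NaN) abstain (-1).

    values: per-trial metric values, e.g. serious adverse event counts."""
    values = np.asarray(values, dtype=float)
    med = np.nanmedian(values)                 # median over observed trials
    labels = np.full(values.shape, -1)         # default: abstain
    ok = ~np.isnan(values)
    if success_if_below:
        labels[ok] = (values[ok] < med).astype(int)
    else:
        labels[ok] = (values[ok] > med).astype(int)
    return labels
```

For the Serious Adverse Events LF, `success_if_below=True` encodes "fewer serious adverse events than the median is a good sign".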

2.6 Weakly Supervised Label Aggregation

We integrate multiple sources of weak supervision to produce high-confidence outcome labels from the weak label sources described above. Data programming is a framework designed to create denoised pseudo-labels from various weak supervision sources via labeling functions and matrix completion [32, 31].

A labeling function (LF) is a noisy heuristic that either assigns labels to unlabeled data or abstains from making a prediction. For example, f(text): return SPAM if "HTTP" in text else ABSTAIN is a labeling function used for spam detection. The main idea is that LFs that agree more with other LFs should be weighted higher and given more weight in the final label prediction. Further details are given in Appendix D.1. To demonstrate the advantage of this approach, we also compare against a Majority Vote baseline, which simply takes the mode of the non-abstaining label predictions on any given data point. Finally, since we have train/test splits on TOP data, we can utilize these ground-truth labels to enable a classification-like approach to weakly supervised label aggregation, i.e., training a classifier to predict true labels from the weakly supervised outputs. We use a Random Forest classifier for this task and ensure that no data leakage occurs by training only on the training set of TOP.
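The Majority Vote baseline over abstaining LFs can be sketched as follows (abstaining on ties is an assumption for illustration):

```python
def majority_vote(lf_outputs):
    """Majority vote over one trial's LF outputs.

    lf_outputs: labels in {1, 0, -1}, where -1 means the LF abstained.
    Returns the majority of non-abstaining votes, or -1 (abstain) when
    no LF votes or the vote is tied."""
    votes = [v for v in lf_outputs if v != -1]
    if not votes:
        return -1
    pos, neg = votes.count(1), votes.count(0)
    if pos == neg:
        return -1                # tie: abstain rather than guess
    return 1 if pos > neg else 0
```

Data programming replaces this uniform weighting with per-LF weights learned from LF agreement, which is why it outperforms Majority Vote in Table 2.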

In total, we have more than 450K trials with automatically labeled outcomes. However, to enable comparison with the supervised labels in TOP, we analyze our trial labels on the interventional small-molecule drug trials (we nevertheless release the full set of predicted labels).

3 Results

3.1 Agreement with Human Annotations

Table 2: Agreement of CTO label variants (Majority Vote, Data Programming, Random Forest) with the human-annotated TOP test set.

| Method | Phase | F1 | PRAUC | ROCAUC | κ |
|---|---|---|---|---|---|
| CTO_MV | I | 0.726 | 0.741 | 0.751 | 0.490 |
| CTO_MV | II | 0.689 | 0.688 | 0.717 | 0.430 |
| CTO_MV | III | 0.904 | 0.891 | 0.805 | 0.606 |
| CTO_MV | All | 0.793 | 0.793 | 0.775 | 0.529 |
| CTO_DP | I | 0.870 | 0.819 | 0.848 | 0.700 |
| CTO_DP | II | 0.856 | 0.755 | 0.802 | 0.623 |
| CTO_DP | III | 0.921 | 0.858 | 0.743 | 0.582 |
| CTO_DP | All | 0.884 | 0.807 | 0.804 | 0.646 |
| CTO_RF | I | 0.913 | 0.856 | 0.889 | 0.790 |
| CTO_RF | II | 0.878 | 0.792 | 0.838 | 0.693 |
| CTO_RF | III | 0.941 | 0.894 | 0.815 | 0.710 |
| CTO_RF | All | 0.909 | 0.843 | 0.847 | 0.729 |

We compare our aggregated labels, sourced from various weak labeling methods, against the TOP test set, which is annotated by human experts. The primary goal is to assess the agreement between our labels and the human-annotated labels. To ensure the validity of our analysis, we exclude any trials with a mismatched status between our current dataset and the TOP dataset, as the TOP dataset was annotated using trial data as of 2022. For instance, some trials were labeled as ‘Unknown status’ during that period, whereas in our updated data, they have been marked as ‘completed.’ This also highlights that trial data changes over time, and annotating it manually each time is not feasible.

Table 2 shows that Data Programming substantially outperforms Majority Vote in agreement with the TOP dataset. Random Forest obtains the highest scores, with a Cohen’s kappa $\kappa$ of 0.729 across all phases, indicating substantial agreement [27]. Additionally, the F1 score of 0.909 is much higher than any previous SOTA trial outcome prediction model [11, 41]. Figure 3 also shows that models trained on CTO labels perform just as well as those trained on ground-truth human annotations.

[Figure 3]

3.2 Which Labeling Method to Use?

[Figure 4]

Determining the better labeling method is subjective and depends on various factors. Figure 4 illustrates the distribution of labels generated by the Random Forest and Data Programming aggregators, showing a significant difference between the two (Random Forest predicts almost all trials to be successful). This discrepancy can be attributed to the nature of the Random Forest aggregator, which relies on a small-scale, human-annotated set that might not capture the overall trial distribution. However, it also leads to better agreement with the TOP test set than Data Programming achieves.

On the other hand, data programming is an unsupervised approach that generates labels by considering the agreement and relationships between all weak labeling functions across the entire dataset. This allows data programming to more accurately learn the trial outcome label distribution. Given this trade-off, we provide both the Random Forest and the Data Programming labels in our dataset release. Additionally, Figures 4A and 4B illustrate the distribution of labels across phases and trial statuses in our proposed CTO dataset, highlighting the sheer size of the dataset compared to previous efforts.

4 Discussion

We present the first attempt to utilize weak signals to automatically label a large-scale, reproducible dataset specifically designed for clinical trial outcome prediction. Our dataset and labels facilitate the development of prediction models not only for drug interventions but also for biologics and medical devices, broadening the scope of clinical trial outcome research. Our models consistently perform comparably to baseline models across all phases and metrics when benchmarked against TOP, underscoring the effectiveness of weak supervision and the reliability of our approach. We recognize that automatically created labels will never be a substitute for human ones. However, for data-hungry new ML methods in clinical trial optimization, we assert that, given our high agreement, this could be a good first step before obtaining human labels. In addition, our open-source nature means that any customization to specific tasks can be made quickly and reproducibly. Our dataset will be made available at https://github.com/chufangao/CTOD.

Acknowledgments and Disclosure of Funding

This work was supported by NSF awards SCH-2205289, SCH-2014438, and IIS-2034479.

References

  • [1] Alex Aliper, Roman Kudrin, Daniil Polykovskiy, Petrina Kamya, Elena Tutubalina, Shan Chen, Feng Ren, and Alex Zhavoronkov. Prediction of clinical trials outcomes based on target choice and clinical trial design with multi-modal artificial intelligence. Clinical Pharmacology & Therapeutics, 2023.
  • [2] Monique L. Anderson, Karen Chiswell, Eric D. Peterson, Asba Tasneem, James Topping, and Robert M. Califf. Compliance with results reporting at clinicaltrials.gov. New England Journal of Medicine, 372(11):1031–1039, 2015.
  • [3] Artem V. Artemov, Evgeny Putin, Quentin Vanhaelen, Alexander Aliper, Ivan V. Ozerov, and Alex Zhavoronkov. Integrated deep learned transcriptomic and structure-based predictor of clinical trials outcomes. BioRxiv, page 095653, 2016.
  • [4] Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, et al. MS MARCO: A human generated machine reading comprehension dataset. arXiv preprint arXiv:1611.09268, 2016.
  • [5] Robert M. Califf, Deborah A. Zarin, Judith M. Kramer, Rachel E. Sherman, Laura H. Aberle, and Asba Tasneem. Characteristics of clinical trials registered in clinicaltrials.gov, 2007-2010. JAMA, 307(17):1838–1847, 2012.
  • [6] J. A. DiMasi, J. C. Hermann, K. Twyman, R. K. Kondru, S. Stergiopoulos, K. A. Getz, and W. Rackoff. A tool for predicting regulatory approval after phase II testing of new oncology compounds. Clinical Pharmacology & Therapeutics, 98(5):506–513, 2015.
  • [7] Felipe Feijoo, Michele Palopoli, Jen Bernstein, Sauleh Siddiqui, and Tenley E. Albright. Key indicators of phase transition for clinical trials through machine learning. Drug Discovery Today, 25(2):414–421, 2020.
  • [8] Fidelity Investments. Basic concepts: Trend, 2023. Accessed: 2024-05-31.
  • [9] Food and Drug Administration, et al. National Drug Code Directory. Consumer Protection and Environmental Health Service, Public Health Service…, 1976.
  • [10] Lawrence M. Friedman, Curt D. Furberg, David L. DeMets, David M. Reboussin, and Christopher B. Granger. Fundamentals of Clinical Trials. Springer, 2015.
  • [11] Tianfan Fu, Kexin Huang, Cao Xiao, Lucas M. Glass, and Jimeng Sun. HINT: Hierarchical interaction network for clinical-trial-outcome predictions. Patterns, 3(4), 2022.
  • [12] Kaitlyn M. Gayvert, Neel S. Madhukar, and Olivier Elemento. A data-driven approach to predicting successes and failures of clinical trials. Cell Chemical Biology, 23(10):1294–1301, 2016.
  • [13] Jianping Gou, Baosheng Yu, Stephen J. Maybank, and Dacheng Tao. Knowledge distillation: A survey. International Journal of Computer Vision, 129(6):1789–1819, 2021.
  • [14] Yu Gu, Robert Tinn, Hao Cheng, Michael Lucas, Naoto Usuyama, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, and Hoifung Poon. Domain-specific language model pretraining for biomedical natural language processing. ACM Transactions on Computing for Healthcare (HEALTH), 3(1):1–23, 2021.
  • [15] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.
  • [16] Zhen-Yu Hong, Jooyong Shim, Woo Chan Son, and Changha Hwang. Predicting successes and failures of clinical trials with an ensemble LS-SVR. medRxiv, pages 2020–02, 2020.
  • [17] Vojtech Huser and James J. Cimino. Linking clinicaltrials.gov and PubMed to track results of interventional human clinical trials. PLoS ONE, 8(7):e68409, 2013.
  • [18] International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use. Integrated addendum to ICH E6(R1): Guideline for good clinical practice E6(R2), 2016.
  • [19] Investopedia. Moving average (MA), n.d. Accessed: 2024-05-29.
  • [20] Sunghwan Kim, Paul A. Thiessen, and Evan E. Bolton. Programmatic Retrieval of Small Molecule Information from PubChem Using PUG-REST, pages 1–24. Humana Press, Totowa, NJ, 2019.
  • [21] Sunghwan Kim, Paul A. Thiessen, Evan E. Bolton, and Stephen H. Bryant. PUG-SOAP and PUG-REST: web services for programmatic access to chemical information in PubChem. Nucleic Acids Research, 43(W1):W605–W611, 2015.
  • [22] Sunghwan Kim, Paul A. Thiessen, Tiejun Cheng, Bo Yu, and Evan E. Bolton. An update on PUG-REST: RESTful interface for programmatic access to PubChem. Nucleic Acids Research, 46(W1):W563–W570, 2018.
  • [23] Craig Knox, Mike Wilson, Christen M. Klinger, Mark Franklin, Eponine Oler, Alex Wilson, Allison Pon, Jordan Cox, NaEun Chin, Seth A. Strawbridge, et al. DrugBank 6.0: the DrugBank knowledgebase for 2024. Nucleic Acids Research, 52(D1):D1265–D1275, 2024.
  • [24] Andrew W. Lo, Kien Wei Siah, and Chi Heem Wong. Machine learning with statistical imputation for predicting drug approvals, volume 60. SSRN, 2019.
  • [25] Yingzhou Lu, Minjie Shen, Huazheng Wang, Xiao Wang, Capucine van Rechem, and Wenqi Wei. Machine learning for synthetic data generation: a review. arXiv preprint arXiv:2302.04062, 2023.
  • [26] Laeeq Malik, Alex Mejia, Helen Parsons, Benjamin Ehler, Devalingam Mahalingam, Andrew Brenner, John Sarantopoulos, and Steven Weitman. Predicting success in regulatory approval from phase I results. Cancer Chemotherapy and Pharmacology, 74:1099–1103, 2014.
  • [27] Mary L. McHugh. Interrater reliability: the kappa statistic. Biochemia Medica, 22(3):276–282, 2012.
  • [28] Medicines and Healthcare products Regulatory Agency. Good Clinical Practice Guide. TSO (The Stationery Office), United Kingdom, 2012.
  • [29] Youran Qi and Qi Tang. Predicting phase 3 clinical trial results by modeling phase 2 clinical trial subject level data using deep learning. In Machine Learning for Healthcare Conference, pages 288–303. PMLR, 2019.
  • [30] Pranav Rajpurkar, Jingbo Yang, Nathan Dass, Vinjai Vale, Arielle S. Keller, Jeremy Irvin, Zachary Taylor, Sanjay Basu, Andrew Ng, and Leanne M. Williams. Evaluation of a machine learning model based on pretreatment symptoms and electroencephalographic features to predict outcomes of antidepressant treatment in adults with depression: a prespecified secondary analysis of a randomized clinical trial. JAMA Network Open, 3(6):e206653, 2020.
  • [31] Alexander Ratner, Braden Hancock, Jared Dunnmon, Frederic Sala, Shreyash Pandey, and Christopher Ré. Training complex models with multi-task weak supervision. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 4763–4771, 2019.
  • [32] Alexander J. Ratner, Christopher M. De Sa, Sen Wu, Daniel Selsam, and Christopher Ré. Data programming: Creating large training sets, quickly. Advances in Neural Information Processing Systems, 29, 2016.
  • [33] Grand View Research. Clinical trials market size, share & trends analysis report by phase (phase I, phase II, phase III, phase IV), by study design, by indication (pain management, oncology, CNS condition, diabetes, obesity), by region, and segment forecasts, 2022-2030, Apr 2022.
  • [34] Joseph S. Ross, Gregory K. Mulvey, Elizabeth M. Hines, Steven E. Nissen, and Harlan M. Krumholz. Trial publication after registration in clinicaltrials.gov: a cross-sectional analysis. PLoS Medicine, 6(9):e1000144, 2009.
  • [35] Jack W. Scannell, Alex Blanckley, Helen Boldon, and Brian Warrington. Diagnosing the decline in pharmaceutical R&D efficiency. Nature Reviews Drug Discovery, 11(3):191–200, 2012.
  • [36] Charles Schwab. Create a momentum indicator with moving averages, n.d. Accessed: 2024-05-29.
  • [37] Krishnendu Sinha, Nabanita Ghosh, and Parames C. Sil. A review on the recent applications of deep learning in predictive drug toxicological studies. Chemical Research in Toxicology, 36(8):1174–1205, 2023.
  • [38] Global R&D expenditure for pharmaceuticals.
  • [39] Stella Stergiopoulos, Kenneth A. Getz, and Christine Blazynski. Evaluating the completeness of clinicaltrials.gov. Therapeutic Innovation & Regulatory Science, 53(3):307–317, 2019.
  • [40] Asba Tasneem, Laura Aberle, Hari Ananth, Swati Chakraborty, Karen Chiswell, Brian J. McCourt, and Ricardo Pietrobon. The database for aggregate analysis of clinicaltrials.gov (AACT) and subsequent regrouping by clinical specialty. PLoS ONE, 7(3):e33677, 2012.
  • [41] Zifeng Wang, Cao Xiao, and Jimeng Sun. SPOT: sequential predictive modeling of clinical trial outcome with meta-learning. In Proceedings of the 14th ACM International Conference on Bioinformatics, Computational Biology, and Health Informatics, pages 1–11, 2023.
  • [42] David Weininger. SMILES, a chemical language and information system. 1. Introduction to methodology and encoding rules. Journal of Chemical Information and Computer Sciences, 28(1):31–36, 1988.
  • [43] Bart J. A. Willigers, Sridevi Nagarajan, Serban Ghiorghui, Patrick Darken, and Simon Lennard. Algorithmic benchmark modulation: A novel method to develop success rates for clinical studies. Clinical Trials, page 17407745231207858, 2023.
  • [44] David S. Wishart, Craig Knox, An Chi Guo, Dean Cheng, Savita Shrivastava, Dan Tzur, Bijaya Gautam, and Murtaza Hassanali. DrugBank: a knowledgebase for drugs, drug actions and drug targets. Nucleic Acids Research, 36(suppl_1):D901–D906, 2008.
  • [45] Yi Yang, Mark Christopher Siy Uy, and Allen Huang. FinBERT: A pretrained language model for financial communications. arXiv preprint arXiv:2006.08097, 2020.
  • [46] Deborah A. Zarin, Tony Tse, Rebecca J. Williams, Robert M. Califf, and Nicholas C. Ide. The clinicaltrials.gov results database—update and key issues. New England Journal of Medicine, 364(9):852–860, 2011.
  • [47] Deborah A. Zarin, Tony Tse, Rebecca J. Williams, and Sarah Carr. Trial reporting in clinicaltrials.gov—the final rule. New England Journal of Medicine, 375(20):1998–2004, 2016.

Checklist

  1. For all authors…

    (a) Do the main claims made in the abstract and introduction accurately reflect the paper’s contributions and scope? [Yes]
    (b) Did you describe the limitations of your work? [Yes] See Conclusion.
    (c) Did you discuss any potential negative societal impacts of your work? [Yes]
    (d) Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes]

  2. If you are including theoretical results…

    (a) Did you state the full set of assumptions of all theoretical results? [N/A]
    (b) Did you include complete proofs of all theoretical results? [N/A]

  3. If you ran experiments (e.g. for benchmarks)…

    (a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes]
    (b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes]
    (c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [Yes]
    (d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes]

  4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets…

    (a) If your work uses existing assets, did you cite the creators? [Yes]
    (b) Did you mention the license of the assets? [Yes]
    (c) Did you include any new assets either in the supplemental material or as a URL? [Yes]
    (d) Did you discuss whether and how consent was obtained from people whose data you’re using/curating? [N/A] We are using public data.
    (e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A]

  5. If you used crowdsourcing or conducted research with human subjects…

    (a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A]
    (b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A]
    (c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A]

Appendix A Ethics and Broader Impacts

Using a weakly supervised dataset for clinical trial outcome prediction with large language models (LLMs) can potentially decrease the reliability of a model’s predictions if it is not instantiated with proper labeling functions. Weak supervision may result in incomplete or imprecise labeling, leading the model to learn incorrect associations and miss crucial factors, which can introduce or exacerbate biases. This lack of precise guidance can also cause the model to overfit to noise or incorrect patterns in the training data, reducing its ability to generalize to new, unseen data. In diverse clinical scenarios, one must take care to independently verify model predictions to avoid jeopardizing patient safety with inaccurate predictions. To mitigate these issues, it is essential to improve the quality of the training data through better labeling techniques, supplementary high-quality data, or advanced methods such as semi-supervised or active learning.

Furthermore, we use publicly available data, so the risk of identification is minimized. However, we recognize that LLMs on public data inherently still pose some privacy risks.

Reproducibility

We used an AMD EPYC 7513 32-core processor with 100 GB of RAM to run our experiments. Running data programming on less or more powerful systems may affect runtime. In our case, obtaining the labels took around 25 hours, including experimentation and prototyping. Additionally, we used ChatGPT for writing assistance as well as for obtaining trial outcome predictions on the PubMed abstracts; the total cost was around $200 USD.

Appendix B Datasheet for Datasets

A.1    Motivation

  • For what purpose was the dataset created?

    We created CTO to democratize clinical trial outcome prediction, which was previously only available to industry-sponsored researchers. Furthermore, we attempt to expand the previous labeling efforts (which were primarily focused on small drug interventions) and predict labels on all trials.

  • Who created the dataset (e.g., which team, research group) and on behalf of which entity (e.g., company, institution, organization)?

    The authors of this paper.

  • Who funded the creation of the dataset? If there is an associated grant, please provide the name of the grantor and the grant name and number.

This work was supported by NSF awards SCH-2205289, SCH-2014438, and IIS-2034479.

A.2    Composition

  • What do the instances that comprise the dataset represent (e.g., documents, photos, people, countries)?

CTO contains molecule SMILES strings, eligibility criteria, ICD codes, drug names, diseases, study statuses, phase information, and our automatically created labels. Furthermore, we provide additional QA pairs extracted by GPT-3.5 Turbo (0125), news articles mined for the top 1,000 industry sponsors, and stock prices.

  • How many instances are there in total (of each type, if appropriate)?

There are 479,761 trials in total, each with multiple types of predicted labels (via random forest and data programming) as well as phase-optimized decision thresholds. For small-molecule drug interventions, we also provide SMILES strings and ICD codes. We release all of these labels so as not to limit downstream use. Furthermore, there are a total of 1,115,017 extracted news articles and 105,570 trials with corresponding QA pairs.

  • Does the dataset contain all possible instances or is it a sample (not necessarily random) of instances from a larger set?

    We provide all possible trial labels up to the beginning of May 2024.

  • What data does each instance consist of?

CTO contains eligibility criteria, drug names, diseases, study status, phase information, and our automatically created labels.

  • Is there a label or target associated with each instance?

Yes, an automatically predicted outcome label is provided for each trial.

  • Is any information missing from individual instances? If so, please provide a description, explaining why this information is missing (e.g., because it was unavailable). This does not include intentionally removed information, but might include, e.g., redacted text.

    No.

  • Are relationships between individual instances made explicit (e.g., users’ movie ratings, social network links)?

    No.

  • Are there recommended data splits (e.g., training, development/validation, testing)?

    See Table 4 and Figure 4.

  • Are there any errors, sources of noise, or redundancies in the dataset?

Automatically created labels inherently come with an element of noise. However, our high agreement with TOP’s human labels (up to 0.91 F1) implies that our labels are of high quality.

  • Is the dataset self-contained, or does it link to or otherwise rely on external resources (e.g., websites, tweets, other datasets)?

CTO depends on multiple open-source datasets.

  • Does the dataset contain data that might be considered confidential (e.g., data that is protected by legal privilege or by doctor-patient confidentiality, data that includes the content of individuals’ non-public communications)?

    No. We obtained all data sources via open-source methods.

  • Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety?

    No.

  • Does the dataset relate to people?

    Yes.

  • Does the dataset identify any subpopulations (e.g., by age, gender)?

    Yes, but only in the eligibility criteria of the trials, which are public.

  • Does the dataset contain data that might be considered sensitive in any way (e.g., data that reveals race or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history)?

    No. There is no reference to individuals.

A.3    Collection process

  • How was the data associated with each instance acquired?

We automatically mined each LLM prediction, trial linkage, news headline, and stock price, as outlined in Section 2.

  • What mechanisms or procedures were used to collect the data (e.g., hardware apparatuses or sensors, manual human curation, software programs, software APIs)?

We mainly used Google Sheets and Python to collect, process, and label the data. In addition, we used OpenAI’s ChatGPT (GPT-3.5 Turbo 0125) to generate QA pairs and GPT outcome predictions.

  • If the dataset is a sample from a larger set, what was the sampling strategy (e.g., deterministic, probabilistic with specific sampling probabilities)?

Some data splitting was done according to previous data splits from TOP (https://github.com/futianfan/clinical-trial-outcome-prediction). Additionally, we split the data according to the following date boundaries: $(-\infty, 2018, 2020, \infty)$.

  • Who was involved in the data collection process (e.g., students, crowd workers, contractors) and how were they compensated (e.g., how much were crowd workers paid)?

    The authors of the paper collected and processed the data.

  • Over what timeframe was the data collected?

    We collected the data between December 2023 and April 2024.

  • Were any ethical review processes conducted (e.g., by an institutional review board)?

    N/A.

  • Does the dataset relate to people?

    Yes.

  • Did you collect the data from the individuals in question directly, or obtain it via third parties or other sources (e.g., websites)?

    N/A.

  • Were the individuals in question notified about the data collection?

    N/A.

  • Did the individuals in question consent to the collection and use of their data?

    N/A.

  • If consent was obtained, were the consenting individuals provided with a mechanism to revoke their consent in the future or for certain uses?

    N/A.

  • Has an analysis of the potential impact of the dataset and its use on data subjects (e.g., a data protection impact analysis) been conducted?

    The dataset does not have individual-specific information.

A.4    Preprocessing/cleaning/labeling

  • Was any preprocessing/cleaning/labeling of the data done (e.g., discretization or bucketing, tokenization, part-of-speech tagging, SIFT feature extraction, removal of instances, processing of missing values)?

    N/A.

  • Was the “raw” data saved in addition to the preprocess/cleaned/labeled data (e.g., to support unanticipated future uses)?

    N/A.

  • Is the software that was used to preprocess/clean/label the data available?

    Preprocessing, cleaning, and labeling are done via Google Sheets and Python.

A.5    Uses

  • Has the dataset been used for any tasks already?

    No.

  • Is there a repository that links to any or all papers or systems that use the dataset?

    No.

  • What (other) tasks could the dataset be used for?

Our dataset is designed to promote research primarily in clinical trial outcome prediction. It can also be used for other tasks such as stock price/trend prediction and question answering (see Appendix C.3).

  • Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses?

    N/A.

  • Are there tasks for which the dataset should not be used?

    N/A.

A.6    Distribution

  • Will the dataset be distributed to third parties outside of the entity (e.g., company, institution, organization) on behalf of which the dataset was created?

    No.

  • How will the dataset be distributed?

    Since clinical trial data is frequently updated, we provide the code for generating our CTO dataset at https://github.com/chufangao/CTOD. The current version of the dataset can be accessed at https://zenodo.org/doi/10.5281/zenodo.11535960.

  • Will the dataset be distributed under a copyright or other intellectual property (IP) license, and/or under applicable terms of use (ToU)?

    The dataset is released under MIT License.

  • Have any third parties imposed IP-based or other restrictions on the data associated with the instances?

    No.

  • Do any export controls or other regulatory restrictions apply to the dataset or to individual instances?

    No.

A.7    Maintenance

  • Who will be supporting/hosting/maintaining the dataset?

    The authors of this paper.

  • How can the owner/curator/manager of the dataset be contacted(e.g., email address)?

Contact the corresponding authors (chufan2@illinois.edu, jp65@illinois.edu, trishad2@illinois.edu, or jimeng@illinois.edu).

  • Is there an erratum?

    No.

  • Will the dataset be updated (e.g., to correct labeling errors, add new instances, delete instances)?

    If any corrections are needed, we plan to upload an updated version of the dataset along with detailed explanations of the changes.

  • If the dataset relates to people, are there applicable limits on the retention of the data associated with the instances (e.g., were the individuals in question told that their data would be retained for a fixed period of time and then deleted)?

    N/A

  • Will older versions of the dataset continue to be supported/hosted/maintained?

Primarily, we aim to keep only the latest version of the dataset. However, in specific cases, such as major updates to the dataset or the need to validate previous research against older versions, we will retain past versions of the dataset for up to one year.

  • If others want to extend/augment/build on/contribute to the dataset, is there a mechanism for them to do so?

Contact the authors of this paper or raise a GitHub issue.

Appendix C Additional Results and Contributions

C.1 Results on TOP test data

We run standard ML baselines [11], namely Support Vector Machine (SVM), XGBoost, Multilayer Perceptron (MLP), Random Forest (RF), and Logistic Regression (LR). For these models, we frame trial outcome prediction as a natural language classification task for maximum flexibility, without requiring molecular structure. TF-IDF is used to obtain features from the concatenated trial phase, disease indication, ICD codes, drugs, and eligibility criteria $\{x_{\text{P}}, x_{\text{I}}, x_{\text{C}}, x_{\text{T}}, x_{\text{S}}, x_{\text{E}}\}$. Essentially, we use the trial-linkage features with the addition of the phase $x_{\text{P}}$. We additionally use PubMedBERT and BioBERT as baselines by adding an MLP classification head to the encoder output of the concatenated text. The sizes of our train, validation, and test splits are 47,080, 901, and 3,165 respectively, with training and validation mined from CTO (trials completed before 2014, as per TOP [11]). We tested on the TOP test data and report F1, PR-AUC, and ROC-AUC scores for all baseline models in Table 3.
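
A minimal sketch of such a text-classification baseline (toy strings and labels, assuming scikit-learn is available; only the LR variant is shown, and the actual features and hyperparameters follow the setup above):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each trial is represented as one string concatenating phase, indication,
# ICD codes, drugs, status, and eligibility criteria (x_P, ..., x_E).
train_texts = [
    "phase 1 | oncology | C50 | drug_a | completed | adults 18-65 ...",
    "phase 1 | diabetes | E11 | drug_b | terminated | adults 40-80 ...",
]
train_labels = [1, 0]  # 1 = trial success, 0 = failure

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(train_texts, train_labels)
print(clf.predict(train_texts))
```

The same TF-IDF features can be fed to any of the other classical baselines (SVM, XGBoost, MLP, RF) by swapping the final pipeline step.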

We observe that models trained on CTO’s labels perform similarly to, and occasionally outperform, those trained on the supervised TOP labels, demonstrating the effectiveness and reliability of our approach to predicting clinical trial outcomes. We hypothesize that this is due to a couple of factors. First, because we combine highly diverse sources of LFs, the final predicted labels may encode additional insights beyond what human annotators could glean from fewer sources. Second, data programming enforces a level of self-regularization across the LFs, potentially smoothing the labels and thereby improving learning, much like the teacher pseudo-labels in knowledge distillation [13, 15].
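
As a hedged sketch of this soft-label idea (illustrative names and data, not the paper's exact pipeline), a downstream classifier can be trained on a label model's probabilistic outputs by replicating each example for both classes and weighting by the label model's confidence:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical label-model outputs: P(success) for four trials.
probs = np.array([0.9, 0.2, 0.7, 0.4])
X = np.array([[1.0, 0.0], [0.0, 1.0], [0.8, 0.1], [0.2, 0.9]])  # toy features

# Soft-label training via class-weighted duplication, akin to
# distillation on teacher pseudo-labels.
X_soft = np.vstack([X, X])
y_soft = np.concatenate([np.ones(len(X)), np.zeros(len(X))])
w_soft = np.concatenate([probs, 1.0 - probs])

clf = LogisticRegression().fit(X_soft, y_soft, sample_weight=w_soft)
print(clf.predict(X))
```

Unlike hard thresholding, this keeps the label model's uncertainty in the training signal, which is one plausible mechanism for the smoothing effect described above.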

Table 3 (Phase I results on the TOP test set, mean ± std):

| Model | TOP F1 | TOP PR-AUC | TOP ROC-AUC | CTO RF F1 | CTO RF PR-AUC | CTO RF ROC-AUC | CTO DP F1 | CTO DP PR-AUC | CTO DP ROC-AUC |
|---|---|---|---|---|---|---|---|---|---|
| SVM | 0.632±0.019 | 0.645±0.028 | 0.592±0.023 | 0.713±0.015 | 0.661±0.027 | 0.626±0.021 | 0.720±0.010 | 0.648±0.017 | 0.614±0.013 |
| XGBoost | 0.630±0.020 | 0.637±0.025 | 0.596±0.021 | 0.706±0.017 | 0.675±0.030 | 0.645±0.021 | 0.719±0.016 | 0.656±0.027 | 0.606±0.023 |
| MLP | 0.585±0.023 | 0.636±0.028 | 0.575±0.022 | 0.661±0.019 | 0.645±0.026 | 0.600±0.018 | 0.655±0.020 | 0.626±0.030 | 0.581±0.024 |
| RF | 0.641±0.024 | 0.682±0.028 | 0.623±0.024 | 0.715±0.015 | 0.676±0.028 | 0.624±0.021 | 0.716±0.015 | 0.682±0.026 | 0.631±0.022 |
| LR | 0.656±0.018 | 0.669±0.028 | 0.631±0.021 | 0.715±0.016 | 0.690±0.029 | 0.656±0.023 | 0.726±0.016 | 0.693±0.029 | 0.660±0.021 |
| BioBERT | 0.627±0.022 | 0.665±0.029 | 0.612±0.023 | 0.713±0.015 | 0.676±0.019 | 0.641±0.020 | 0.716±0.016 | 0.677±0.019 | 0.649±0.015 |
| PubMedBERT | 0.646±0.014 | 0.602±0.017 | 0.588±0.015 | 0.715±0.014 | 0.595±0.018 | 0.579±0.013 | 0.719±0.015 | 0.591±0.020 | 0.571±0.012 |
| HINT | 0.621±0.022 | 0.633±0.029 | 0.590±0.025 | 0.611±0.020 | 0.559±0.022 | 0.520±0.031 | 0.607±0.021 | 0.490±0.035 | 0.545±0.026 |
| SPOT | 0.652±0.025 | 0.679±0.029 | 0.624±0.028 | 0.600±0.016 | 0.670±0.022 | 0.635±0.016 | 0.625±0.029 | 0.693±0.026 | 0.646±0.027 |
IISVM0.672±0.011subscript0.672plus-or-minus0.0110.672_{\pm 0.011}0.672 start_POSTSUBSCRIPT ± 0.011 end_POSTSUBSCRIPT0.664±0.017subscript0.664plus-or-minus0.0170.664_{\pm 0.017}0.664 start_POSTSUBSCRIPT ± 0.017 end_POSTSUBSCRIPT0.626±0.013subscript0.626plus-or-minus0.0130.626_{\pm 0.013}0.626 start_POSTSUBSCRIPT ± 0.013 end_POSTSUBSCRIPT0.715±0.010subscript0.715plus-or-minus0.0100.715_{\pm 0.010}0.715 start_POSTSUBSCRIPT ± 0.010 end_POSTSUBSCRIPT0.653±0.017subscript0.653plus-or-minus0.0170.653_{\pm 0.017}0.653 start_POSTSUBSCRIPT ± 0.017 end_POSTSUBSCRIPT0.617±0.012subscript0.617plus-or-minus0.0120.617_{\pm 0.012}0.617 start_POSTSUBSCRIPT ± 0.012 end_POSTSUBSCRIPT0.720±0.010subscript0.720plus-or-minus0.0100.720_{\pm 0.010}0.720 start_POSTSUBSCRIPT ± 0.010 end_POSTSUBSCRIPT0.648±0.017subscript0.648plus-or-minus0.0170.648_{\pm 0.017}0.648 start_POSTSUBSCRIPT ± 0.017 end_POSTSUBSCRIPT0.614±0.013subscript0.614plus-or-minus0.0130.614_{\pm 0.013}0.614 start_POSTSUBSCRIPT ± 0.013 end_POSTSUBSCRIPT
XGBoost0.659±0.013subscript0.659plus-or-minus0.0130.659_{\pm 0.013}0.659 start_POSTSUBSCRIPT ± 0.013 end_POSTSUBSCRIPT0.675±0.016subscript0.675plus-or-minus0.0160.675_{\pm 0.016}0.675 start_POSTSUBSCRIPT ± 0.016 end_POSTSUBSCRIPT0.627±0.014subscript0.627plus-or-minus0.0140.627_{\pm 0.014}0.627 start_POSTSUBSCRIPT ± 0.014 end_POSTSUBSCRIPT0.706±0.011subscript0.706plus-or-minus0.0110.706_{\pm 0.011}0.706 start_POSTSUBSCRIPT ± 0.011 end_POSTSUBSCRIPT0.650±0.018subscript0.650plus-or-minus0.0180.650_{\pm 0.018}0.650 start_POSTSUBSCRIPT ± 0.018 end_POSTSUBSCRIPT0.609±0.014subscript0.609plus-or-minus0.0140.609_{\pm 0.014}0.609 start_POSTSUBSCRIPT ± 0.014 end_POSTSUBSCRIPT0.710±0.012subscript0.710plus-or-minus0.0120.710_{\pm 0.012}0.710 start_POSTSUBSCRIPT ± 0.012 end_POSTSUBSCRIPT0.644±0.018subscript0.644plus-or-minus0.0180.644_{\pm 0.018}0.644 start_POSTSUBSCRIPT ± 0.018 end_POSTSUBSCRIPT0.607±0.014subscript0.607plus-or-minus0.0140.607_{\pm 0.014}0.607 start_POSTSUBSCRIPT ± 0.014 end_POSTSUBSCRIPT
MLP0.615±0.014subscript0.615plus-or-minus0.0140.615_{\pm 0.014}0.615 start_POSTSUBSCRIPT ± 0.014 end_POSTSUBSCRIPT0.640±0.017subscript0.640plus-or-minus0.0170.640_{\pm 0.017}0.640 start_POSTSUBSCRIPT ± 0.017 end_POSTSUBSCRIPT0.599±0.014subscript0.599plus-or-minus0.0140.599_{\pm 0.014}0.599 start_POSTSUBSCRIPT ± 0.014 end_POSTSUBSCRIPT0.671±0.011subscript0.671plus-or-minus0.0110.671_{\pm 0.011}0.671 start_POSTSUBSCRIPT ± 0.011 end_POSTSUBSCRIPT0.638±0.017subscript0.638plus-or-minus0.0170.638_{\pm 0.017}0.638 start_POSTSUBSCRIPT ± 0.017 end_POSTSUBSCRIPT0.601±0.015subscript0.601plus-or-minus0.0150.601_{\pm 0.015}0.601 start_POSTSUBSCRIPT ± 0.015 end_POSTSUBSCRIPT0.673±0.012subscript0.673plus-or-minus0.0120.673_{\pm 0.012}0.673 start_POSTSUBSCRIPT ± 0.012 end_POSTSUBSCRIPT0.647±0.017subscript0.647plus-or-minus0.0170.647_{\pm 0.017}0.647 start_POSTSUBSCRIPT ± 0.017 end_POSTSUBSCRIPT0.599±0.013subscript0.599plus-or-minus0.0130.599_{\pm 0.013}0.599 start_POSTSUBSCRIPT ± 0.013 end_POSTSUBSCRIPT
RF0.675±0.011subscript0.675plus-or-minus0.0110.675_{\pm 0.011}0.675 start_POSTSUBSCRIPT ± 0.011 end_POSTSUBSCRIPT0.690±0.016subscript0.690plus-or-minus0.0160.690_{\pm 0.016}0.690 start_POSTSUBSCRIPT ± 0.016 end_POSTSUBSCRIPT0.643±0.014subscript0.643plus-or-minus0.0140.643_{\pm 0.014}0.643 start_POSTSUBSCRIPT ± 0.014 end_POSTSUBSCRIPT0.715±0.010subscript0.715plus-or-minus0.0100.715_{\pm 0.010}0.715 start_POSTSUBSCRIPT ± 0.010 end_POSTSUBSCRIPT0.671±0.017subscript0.671plus-or-minus0.0170.671_{\pm 0.017}0.671 start_POSTSUBSCRIPT ± 0.017 end_POSTSUBSCRIPT0.628±0.013subscript0.628plus-or-minus0.0130.628_{\pm 0.013}0.628 start_POSTSUBSCRIPT ± 0.013 end_POSTSUBSCRIPT0.712±0.011subscript0.712plus-or-minus0.0110.712_{\pm 0.011}0.712 start_POSTSUBSCRIPT ± 0.011 end_POSTSUBSCRIPT0.657±0.018subscript0.657plus-or-minus0.0180.657_{\pm 0.018}0.657 start_POSTSUBSCRIPT ± 0.018 end_POSTSUBSCRIPT0.623±0.014subscript0.623plus-or-minus0.0140.623_{\pm 0.014}0.623 start_POSTSUBSCRIPT ± 0.014 end_POSTSUBSCRIPT
LR0.674±0.013subscript0.674plus-or-minus0.0130.674_{\pm 0.013}0.674 start_POSTSUBSCRIPT ± 0.013 end_POSTSUBSCRIPT0.697±0.017subscript0.697plus-or-minus0.0170.697_{\pm 0.017}0.697 start_POSTSUBSCRIPT ± 0.017 end_POSTSUBSCRIPT0.648±0.012subscript0.648plus-or-minus0.0120.648_{\pm 0.012}0.648 start_POSTSUBSCRIPT ± 0.012 end_POSTSUBSCRIPT0.714±0.010subscript0.714plus-or-minus0.0100.714_{\pm 0.010}0.714 start_POSTSUBSCRIPT ± 0.010 end_POSTSUBSCRIPT0.686±0.017subscript0.686plus-or-minus0.0170.686_{\pm 0.017}0.686 start_POSTSUBSCRIPT ± 0.017 end_POSTSUBSCRIPT0.646±0.015subscript0.646plus-or-minus0.0150.646_{\pm 0.015}0.646 start_POSTSUBSCRIPT ± 0.015 end_POSTSUBSCRIPT0.718±0.010subscript0.718plus-or-minus0.0100.718_{\pm 0.010}0.718 start_POSTSUBSCRIPT ± 0.010 end_POSTSUBSCRIPT0.679±0.016subscript0.679plus-or-minus0.0160.679_{\pm 0.016}0.679 start_POSTSUBSCRIPT ± 0.016 end_POSTSUBSCRIPT0.642±0.014subscript0.642plus-or-minus0.0140.642_{\pm 0.014}0.642 start_POSTSUBSCRIPT ± 0.014 end_POSTSUBSCRIPT
BioBERT0.672±0.016subscript0.672plus-or-minus0.0160.672_{\pm 0.016}0.672 start_POSTSUBSCRIPT ± 0.016 end_POSTSUBSCRIPT0.696±0.025subscript0.696plus-or-minus0.0250.696_{\pm 0.025}0.696 start_POSTSUBSCRIPT ± 0.025 end_POSTSUBSCRIPT0.648±0.015subscript0.648plus-or-minus0.0150.648_{\pm 0.015}0.648 start_POSTSUBSCRIPT ± 0.015 end_POSTSUBSCRIPT0.715±0.007subscript0.715plus-or-minus0.0070.715_{\pm 0.007}0.715 start_POSTSUBSCRIPT ± 0.007 end_POSTSUBSCRIPT0.674±0.014subscript0.674plus-or-minus0.0140.674_{\pm 0.014}0.674 start_POSTSUBSCRIPT ± 0.014 end_POSTSUBSCRIPT0.642±0.014subscript0.642plus-or-minus0.0140.642_{\pm 0.014}0.642 start_POSTSUBSCRIPT ± 0.014 end_POSTSUBSCRIPT0.718±0.007subscript0.718plus-or-minus0.0070.718_{\pm 0.007}0.718 start_POSTSUBSCRIPT ± 0.007 end_POSTSUBSCRIPT0.657±0.014subscript0.657plus-or-minus0.0140.657_{\pm 0.014}0.657 start_POSTSUBSCRIPT ± 0.014 end_POSTSUBSCRIPT0.627±0.014subscript0.627plus-or-minus0.0140.627_{\pm 0.014}0.627 start_POSTSUBSCRIPT ± 0.014 end_POSTSUBSCRIPT
PubMedBERT0.682±0.010subscript0.682plus-or-minus0.0100.682_{\pm 0.010}0.682 start_POSTSUBSCRIPT ± 0.010 end_POSTSUBSCRIPT0.612±0.010subscript0.612plus-or-minus0.0100.612_{\pm 0.010}0.612 start_POSTSUBSCRIPT ± 0.010 end_POSTSUBSCRIPT0.601±0.009subscript0.601plus-or-minus0.0090.601_{\pm 0.009}0.601 start_POSTSUBSCRIPT ± 0.009 end_POSTSUBSCRIPT0.705±0.009subscript0.705plus-or-minus0.0090.705_{\pm 0.009}0.705 start_POSTSUBSCRIPT ± 0.009 end_POSTSUBSCRIPT0.578±0.010subscript0.578plus-or-minus0.0100.578_{\pm 0.010}0.578 start_POSTSUBSCRIPT ± 0.010 end_POSTSUBSCRIPT0.545±0.010subscript0.545plus-or-minus0.0100.545_{\pm 0.010}0.545 start_POSTSUBSCRIPT ± 0.010 end_POSTSUBSCRIPT0.713±0.009subscript0.713plus-or-minus0.0090.713_{\pm 0.009}0.713 start_POSTSUBSCRIPT ± 0.009 end_POSTSUBSCRIPT0.575±0.010subscript0.575plus-or-minus0.0100.575_{\pm 0.010}0.575 start_POSTSUBSCRIPT ± 0.010 end_POSTSUBSCRIPT0.540±0.006subscript0.540plus-or-minus0.0060.540_{\pm 0.006}0.540 start_POSTSUBSCRIPT ± 0.006 end_POSTSUBSCRIPT
HINT0.654±0.013subscript0.654plus-or-minus0.0130.654_{\pm 0.013}0.654 start_POSTSUBSCRIPT ± 0.013 end_POSTSUBSCRIPT0.680±0.016subscript0.680plus-or-minus0.0160.680_{\pm 0.016}0.680 start_POSTSUBSCRIPT ± 0.016 end_POSTSUBSCRIPT0.632±0.013subscript0.632plus-or-minus0.0130.632_{\pm 0.013}0.632 start_POSTSUBSCRIPT ± 0.013 end_POSTSUBSCRIPT0.708±0.010subscript0.708plus-or-minus0.0100.708_{\pm 0.010}0.708 start_POSTSUBSCRIPT ± 0.010 end_POSTSUBSCRIPT0.647±0.018subscript0.647plus-or-minus0.0180.647_{\pm 0.018}0.647 start_POSTSUBSCRIPT ± 0.018 end_POSTSUBSCRIPT0.627±0.014subscript0.627plus-or-minus0.0140.627_{\pm 0.014}0.627 start_POSTSUBSCRIPT ± 0.014 end_POSTSUBSCRIPT0.713±0.007subscript0.713plus-or-minus0.0070.713_{\pm 0.007}0.713 start_POSTSUBSCRIPT ± 0.007 end_POSTSUBSCRIPT0.670±0.013subscript0.670plus-or-minus0.0130.670_{\pm 0.013}0.670 start_POSTSUBSCRIPT ± 0.013 end_POSTSUBSCRIPT0.635±0.016subscript0.635plus-or-minus0.0160.635_{\pm 0.016}0.635 start_POSTSUBSCRIPT ± 0.016 end_POSTSUBSCRIPT
SPOT0.681±0.009subscript0.681plus-or-minus0.0090.681_{\pm 0.009}0.681 start_POSTSUBSCRIPT ± 0.009 end_POSTSUBSCRIPT0.660±0.012subscript0.660plus-or-minus0.0120.660_{\pm 0.012}0.660 start_POSTSUBSCRIPT ± 0.012 end_POSTSUBSCRIPT0.617±0.010subscript0.617plus-or-minus0.0100.617_{\pm 0.010}0.617 start_POSTSUBSCRIPT ± 0.010 end_POSTSUBSCRIPT0.601±0.014subscript0.601plus-or-minus0.0140.601_{\pm 0.014}0.601 start_POSTSUBSCRIPT ± 0.014 end_POSTSUBSCRIPT0.666±0.020subscript0.666plus-or-minus0.0200.666_{\pm 0.020}0.666 start_POSTSUBSCRIPT ± 0.020 end_POSTSUBSCRIPT0.625±0.014subscript0.625plus-or-minus0.0140.625_{\pm 0.014}0.625 start_POSTSUBSCRIPT ± 0.014 end_POSTSUBSCRIPT0.608±0.015subscript0.608plus-or-minus0.0150.608_{\pm 0.015}0.608 start_POSTSUBSCRIPT ± 0.015 end_POSTSUBSCRIPT0.677±0.017subscript0.677plus-or-minus0.0170.677_{\pm 0.017}0.677 start_POSTSUBSCRIPT ± 0.017 end_POSTSUBSCRIPT0.622±0.014subscript0.622plus-or-minus0.0140.622_{\pm 0.014}0.622 start_POSTSUBSCRIPT ± 0.014 end_POSTSUBSCRIPT
IIISVM0.813±0.010subscript0.813plus-or-minus0.0100.813_{\pm 0.010}0.813 start_POSTSUBSCRIPT ± 0.010 end_POSTSUBSCRIPT0.876±0.012subscript0.876plus-or-minus0.0120.876_{\pm 0.012}0.876 start_POSTSUBSCRIPT ± 0.012 end_POSTSUBSCRIPT0.700±0.018subscript0.700plus-or-minus0.0180.700_{\pm 0.018}0.700 start_POSTSUBSCRIPT ± 0.018 end_POSTSUBSCRIPT0.853±0.009subscript0.853plus-or-minus0.0090.853_{\pm 0.009}0.853 start_POSTSUBSCRIPT ± 0.009 end_POSTSUBSCRIPT0.859±0.013subscript0.859plus-or-minus0.0130.859_{\pm 0.013}0.859 start_POSTSUBSCRIPT ± 0.013 end_POSTSUBSCRIPT0.674±0.017subscript0.674plus-or-minus0.0170.674_{\pm 0.017}0.674 start_POSTSUBSCRIPT ± 0.017 end_POSTSUBSCRIPT0.854±0.007subscript0.854plus-or-minus0.0070.854_{\pm 0.007}0.854 start_POSTSUBSCRIPT ± 0.007 end_POSTSUBSCRIPT0.840±0.014subscript0.840plus-or-minus0.0140.840_{\pm 0.014}0.840 start_POSTSUBSCRIPT ± 0.014 end_POSTSUBSCRIPT0.653±0.018subscript0.653plus-or-minus0.0180.653_{\pm 0.018}0.653 start_POSTSUBSCRIPT ± 0.018 end_POSTSUBSCRIPT
XGBoost0.815±0.010subscript0.815plus-or-minus0.0100.815_{\pm 0.010}0.815 start_POSTSUBSCRIPT ± 0.010 end_POSTSUBSCRIPT0.849±0.015subscript0.849plus-or-minus0.0150.849_{\pm 0.015}0.849 start_POSTSUBSCRIPT ± 0.015 end_POSTSUBSCRIPT0.674±0.017subscript0.674plus-or-minus0.0170.674_{\pm 0.017}0.674 start_POSTSUBSCRIPT ± 0.017 end_POSTSUBSCRIPT0.852±0.009subscript0.852plus-or-minus0.0090.852_{\pm 0.009}0.852 start_POSTSUBSCRIPT ± 0.009 end_POSTSUBSCRIPT0.848±0.014subscript0.848plus-or-minus0.0140.848_{\pm 0.014}0.848 start_POSTSUBSCRIPT ± 0.014 end_POSTSUBSCRIPT0.666±0.018subscript0.666plus-or-minus0.0180.666_{\pm 0.018}0.666 start_POSTSUBSCRIPT ± 0.018 end_POSTSUBSCRIPT0.855±0.008subscript0.855plus-or-minus0.0080.855_{\pm 0.008}0.855 start_POSTSUBSCRIPT ± 0.008 end_POSTSUBSCRIPT0.846±0.013subscript0.846plus-or-minus0.0130.846_{\pm 0.013}0.846 start_POSTSUBSCRIPT ± 0.013 end_POSTSUBSCRIPT0.659±0.018subscript0.659plus-or-minus0.0180.659_{\pm 0.018}0.659 start_POSTSUBSCRIPT ± 0.018 end_POSTSUBSCRIPT
MLP0.762±0.011subscript0.762plus-or-minus0.0110.762_{\pm 0.011}0.762 start_POSTSUBSCRIPT ± 0.011 end_POSTSUBSCRIPT0.848±0.012subscript0.848plus-or-minus0.0120.848_{\pm 0.012}0.848 start_POSTSUBSCRIPT ± 0.012 end_POSTSUBSCRIPT0.650±0.017subscript0.650plus-or-minus0.0170.650_{\pm 0.017}0.650 start_POSTSUBSCRIPT ± 0.017 end_POSTSUBSCRIPT0.824±0.011subscript0.824plus-or-minus0.0110.824_{\pm 0.011}0.824 start_POSTSUBSCRIPT ± 0.011 end_POSTSUBSCRIPT0.856±0.011subscript0.856plus-or-minus0.0110.856_{\pm 0.011}0.856 start_POSTSUBSCRIPT ± 0.011 end_POSTSUBSCRIPT0.666±0.016subscript0.666plus-or-minus0.0160.666_{\pm 0.016}0.666 start_POSTSUBSCRIPT ± 0.016 end_POSTSUBSCRIPT0.833±0.009subscript0.833plus-or-minus0.0090.833_{\pm 0.009}0.833 start_POSTSUBSCRIPT ± 0.009 end_POSTSUBSCRIPT0.839±0.013subscript0.839plus-or-minus0.0130.839_{\pm 0.013}0.839 start_POSTSUBSCRIPT ± 0.013 end_POSTSUBSCRIPT0.642±0.021subscript0.642plus-or-minus0.0210.642_{\pm 0.021}0.642 start_POSTSUBSCRIPT ± 0.021 end_POSTSUBSCRIPT
RF0.830±0.009subscript0.830plus-or-minus0.0090.830_{\pm 0.009}0.830 start_POSTSUBSCRIPT ± 0.009 end_POSTSUBSCRIPT0.858±0.013subscript0.858plus-or-minus0.0130.858_{\pm 0.013}0.858 start_POSTSUBSCRIPT ± 0.013 end_POSTSUBSCRIPT0.675±0.015subscript0.675plus-or-minus0.0150.675_{\pm 0.015}0.675 start_POSTSUBSCRIPT ± 0.015 end_POSTSUBSCRIPT0.857±0.008subscript0.857plus-or-minus0.0080.857_{\pm 0.008}0.857 start_POSTSUBSCRIPT ± 0.008 end_POSTSUBSCRIPT0.844±0.014subscript0.844plus-or-minus0.0140.844_{\pm 0.014}0.844 start_POSTSUBSCRIPT ± 0.014 end_POSTSUBSCRIPT0.656±0.018subscript0.656plus-or-minus0.0180.656_{\pm 0.018}0.656 start_POSTSUBSCRIPT ± 0.018 end_POSTSUBSCRIPT0.857±0.009subscript0.857plus-or-minus0.0090.857_{\pm 0.009}0.857 start_POSTSUBSCRIPT ± 0.009 end_POSTSUBSCRIPT0.831±0.014subscript0.831plus-or-minus0.0140.831_{\pm 0.014}0.831 start_POSTSUBSCRIPT ± 0.014 end_POSTSUBSCRIPT0.639±0.017subscript0.639plus-or-minus0.0170.639_{\pm 0.017}0.639 start_POSTSUBSCRIPT ± 0.017 end_POSTSUBSCRIPT
LR0.828±0.008subscript0.828plus-or-minus0.0080.828_{\pm 0.008}0.828 start_POSTSUBSCRIPT ± 0.008 end_POSTSUBSCRIPT0.887±0.010subscript0.887plus-or-minus0.0100.887_{\pm 0.010}0.887 start_POSTSUBSCRIPT ± 0.010 end_POSTSUBSCRIPT0.725±0.014subscript0.725plus-or-minus0.0140.725_{\pm 0.014}0.725 start_POSTSUBSCRIPT ± 0.014 end_POSTSUBSCRIPT0.855±0.008subscript0.855plus-or-minus0.0080.855_{\pm 0.008}0.855 start_POSTSUBSCRIPT ± 0.008 end_POSTSUBSCRIPT0.870±0.013subscript0.870plus-or-minus0.0130.870_{\pm 0.013}0.870 start_POSTSUBSCRIPT ± 0.013 end_POSTSUBSCRIPT0.705±0.017subscript0.705plus-or-minus0.0170.705_{\pm 0.017}0.705 start_POSTSUBSCRIPT ± 0.017 end_POSTSUBSCRIPT0.857±0.008subscript0.857plus-or-minus0.0080.857_{\pm 0.008}0.857 start_POSTSUBSCRIPT ± 0.008 end_POSTSUBSCRIPT0.847±0.013subscript0.847plus-or-minus0.0130.847_{\pm 0.013}0.847 start_POSTSUBSCRIPT ± 0.013 end_POSTSUBSCRIPT0.677±0.019subscript0.677plus-or-minus0.0190.677_{\pm 0.019}0.677 start_POSTSUBSCRIPT ± 0.019 end_POSTSUBSCRIPT
BioBERT0.838±0.009subscript0.838plus-or-minus0.0090.838_{\pm 0.009}0.838 start_POSTSUBSCRIPT ± 0.009 end_POSTSUBSCRIPT0.859±0.012subscript0.859plus-or-minus0.0120.859_{\pm 0.012}0.859 start_POSTSUBSCRIPT ± 0.012 end_POSTSUBSCRIPT0.696±0.016subscript0.696plus-or-minus0.0160.696_{\pm 0.016}0.696 start_POSTSUBSCRIPT ± 0.016 end_POSTSUBSCRIPT0.854±0.010subscript0.854plus-or-minus0.0100.854_{\pm 0.010}0.854 start_POSTSUBSCRIPT ± 0.010 end_POSTSUBSCRIPT0.681±0.015subscript0.681plus-or-minus0.0150.681_{\pm 0.015}0.681 start_POSTSUBSCRIPT ± 0.015 end_POSTSUBSCRIPT0.848±0.014subscript0.848plus-or-minus0.0140.848_{\pm 0.014}0.848 start_POSTSUBSCRIPT ± 0.014 end_POSTSUBSCRIPT0.855±0.010subscript0.855plus-or-minus0.0100.855_{\pm 0.010}0.855 start_POSTSUBSCRIPT ± 0.010 end_POSTSUBSCRIPT0.833±0.013subscript0.833plus-or-minus0.0130.833_{\pm 0.013}0.833 start_POSTSUBSCRIPT ± 0.013 end_POSTSUBSCRIPT0.650±0.015subscript0.650plus-or-minus0.0150.650_{\pm 0.015}0.650 start_POSTSUBSCRIPT ± 0.015 end_POSTSUBSCRIPT
PubMedBERT0.850±0.008subscript0.850plus-or-minus0.0080.850_{\pm 0.008}0.850 start_POSTSUBSCRIPT ± 0.008 end_POSTSUBSCRIPT0.809±0.014subscript0.809plus-or-minus0.0140.809_{\pm 0.014}0.809 start_POSTSUBSCRIPT ± 0.014 end_POSTSUBSCRIPT0.650±0.013subscript0.650plus-or-minus0.0130.650_{\pm 0.013}0.650 start_POSTSUBSCRIPT ± 0.013 end_POSTSUBSCRIPT0.852±0.010subscript0.852plus-or-minus0.0100.852_{\pm 0.010}0.852 start_POSTSUBSCRIPT ± 0.010 end_POSTSUBSCRIPT0.759±0.015subscript0.759plus-or-minus0.0150.759_{\pm 0.015}0.759 start_POSTSUBSCRIPT ± 0.015 end_POSTSUBSCRIPT0.531±0.008subscript0.531plus-or-minus0.0080.531_{\pm 0.008}0.531 start_POSTSUBSCRIPT ± 0.008 end_POSTSUBSCRIPT0.855±0.010subscript0.855plus-or-minus0.0100.855_{\pm 0.010}0.855 start_POSTSUBSCRIPT ± 0.010 end_POSTSUBSCRIPT0.756±0.016subscript0.756plus-or-minus0.0160.756_{\pm 0.016}0.756 start_POSTSUBSCRIPT ± 0.016 end_POSTSUBSCRIPT0.522±0.004subscript0.522plus-or-minus0.0040.522_{\pm 0.004}0.522 start_POSTSUBSCRIPT ± 0.004 end_POSTSUBSCRIPT
HINT0.825±0.009subscript0.825plus-or-minus0.0090.825_{\pm 0.009}0.825 start_POSTSUBSCRIPT ± 0.009 end_POSTSUBSCRIPT0.850±0.016subscript0.850plus-or-minus0.0160.850_{\pm 0.016}0.850 start_POSTSUBSCRIPT ± 0.016 end_POSTSUBSCRIPT0.696±0.017subscript0.696plus-or-minus0.0170.696_{\pm 0.017}0.696 start_POSTSUBSCRIPT ± 0.017 end_POSTSUBSCRIPT0.872±0.006subscript0.872plus-or-minus0.0060.872_{\pm 0.006}0.872 start_POSTSUBSCRIPT ± 0.006 end_POSTSUBSCRIPT0.830±0.017subscript0.830plus-or-minus0.0170.830_{\pm 0.017}0.830 start_POSTSUBSCRIPT ± 0.017 end_POSTSUBSCRIPT0.592±0.020subscript0.592plus-or-minus0.0200.592_{\pm 0.020}0.592 start_POSTSUBSCRIPT ± 0.020 end_POSTSUBSCRIPT0.871±0.006subscript0.871plus-or-minus0.0060.871_{\pm 0.006}0.871 start_POSTSUBSCRIPT ± 0.006 end_POSTSUBSCRIPT0.828±0.012subscript0.828plus-or-minus0.0120.828_{\pm 0.012}0.828 start_POSTSUBSCRIPT ± 0.012 end_POSTSUBSCRIPT0.606±0.014subscript0.606plus-or-minus0.0140.606_{\pm 0.014}0.606 start_POSTSUBSCRIPT ± 0.014 end_POSTSUBSCRIPT
SPOT0.832±0.008subscript0.832plus-or-minus0.0080.832_{\pm 0.008}0.832 start_POSTSUBSCRIPT ± 0.008 end_POSTSUBSCRIPT0.862±0.013subscript0.862plus-or-minus0.0130.862_{\pm 0.013}0.862 start_POSTSUBSCRIPT ± 0.013 end_POSTSUBSCRIPT0.676±0.020subscript0.676plus-or-minus0.0200.676_{\pm 0.020}0.676 start_POSTSUBSCRIPT ± 0.020 end_POSTSUBSCRIPT0.676±0.015subscript0.676plus-or-minus0.0150.676_{\pm 0.015}0.676 start_POSTSUBSCRIPT ± 0.015 end_POSTSUBSCRIPT0.834±0.016subscript0.834plus-or-minus0.0160.834_{\pm 0.016}0.834 start_POSTSUBSCRIPT ± 0.016 end_POSTSUBSCRIPT0.665±0.018subscript0.665plus-or-minus0.0180.665_{\pm 0.018}0.665 start_POSTSUBSCRIPT ± 0.018 end_POSTSUBSCRIPT0.653±0.013subscript0.653plus-or-minus0.0130.653_{\pm 0.013}0.653 start_POSTSUBSCRIPT ± 0.013 end_POSTSUBSCRIPT0.834±0.015subscript0.834plus-or-minus0.0150.834_{\pm 0.015}0.834 start_POSTSUBSCRIPT ± 0.015 end_POSTSUBSCRIPT0.638±0.016subscript0.638plus-or-minus0.0160.638_{\pm 0.016}0.638 start_POSTSUBSCRIPT ± 0.016 end_POSTSUBSCRIPT

C.2 Results on Pre-2020 Split vs Post-2020 Split

Because the COVID-19 pandemic disrupted the clinical trial landscape, the trial distributions before and after 2020 differ substantially (see empirical performance in Table 4). Note that we choose CTO DP here: despite its lower performance on TOP, we believe it generalizes better to all trials, as shown by its proportions of predicted positive and negative outcome labels in Figure 4.

In the Pre-2020 split, training trials have completion dates before 2018, and test trials have completion dates from 2018 up to 2020. This split has 27,245, 6,828, and 6,151 samples in the training, validation, and test sets, respectively. In the Post-2020 split, training trials have completion dates before 2020, and test trials have completion dates from 2020 up to 2024. This split has 32,200, 8,024, and 12,525 samples in the training, validation, and test sets, respectively.

Note that for the Post-2020 split, the training set includes all trials completed before 2020 and therefore subsumes the entire Pre-2020 dataset.
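The temporal splits above can be sketched as follows. This is a minimal illustration with pandas: the toy trial table, the column names (`completion_date`, `label`), and the `temporal_split` helper are assumptions for exposition, not the paper's actual pipeline (which additionally carves a validation set out of the training period).

```python
import pandas as pd

# Hypothetical trial table: one row per trial, with a completion date
# and a binary outcome label (column names are illustrative).
trials = pd.DataFrame({
    "nct_id": ["NCT001", "NCT002", "NCT003", "NCT004"],
    "completion_date": pd.to_datetime(
        ["2016-05-01", "2019-03-15", "2021-07-30", "2023-01-10"]
    ),
    "label": [1, 0, 1, 0],
})

def temporal_split(df, train_before, test_until):
    """Train on trials completed before `train_before`; test on trials
    completed in the interval [train_before, test_until)."""
    train = df[df["completion_date"] < train_before]
    test = df[
        (df["completion_date"] >= train_before)
        & (df["completion_date"] < test_until)
    ]
    return train, test

# Pre-2020 split: train on completions < 2018, test on [2018, 2020)
pre_train, pre_test = temporal_split(trials, "2018-01-01", "2020-01-01")
# Post-2020 split: train on completions < 2020, test on [2020, 2024)
post_train, post_test = temporal_split(trials, "2020-01-01", "2024-01-01")
```

Because both splits cut on completion date, the Post-2020 training set necessarily contains every trial in the Pre-2020 training and test sets, matching the note above.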

We additionally show results from both the pre- and post-2020 splits in Table 4. The pre-2020 split shows generally higher performance across all metrics and phases than the post-2020 split, indicating that the pre-2020 data are easier to predict. The decline in post-2020 performance can be attributed to the complexities and disruptions introduced by the COVID-19 pandemic, which affected clinical trial operations and outcomes. Overall, these observations suggest that pre-2020 data offer a more stable and predictable environment for clinical trial outcome prediction, and that post-2020 data warrant additional consideration.
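The three metrics reported in Table 4 (F1, PRAUC, ROCAUC) can be computed with scikit-learn as sketched below; the toy labels, predicted probabilities, and the 0.5 decision threshold are illustrative assumptions (the paper's tables additionally report standard deviations over repeated evaluations).

```python
import numpy as np
from sklearn.metrics import f1_score, average_precision_score, roc_auc_score

# Toy ground-truth outcomes and model probabilities (illustrative only).
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_prob = np.array([0.9, 0.2, 0.7, 0.6, 0.4, 0.8, 0.3, 0.55])

# F1 needs hard predictions; threshold the probabilities at 0.5.
y_pred = (y_prob >= 0.5).astype(int)

f1 = f1_score(y_true, y_pred)
prauc = average_precision_score(y_true, y_prob)  # PRAUC (average precision)
rocauc = roc_auc_score(y_true, y_prob)           # ROCAUC
```

`average_precision_score` and `roc_auc_score` are ranking metrics and take the raw probabilities, while `f1_score` takes thresholded labels; mixing these up is a common source of inflated or deflated numbers.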

(Values are mean±std; "Pre" and "Post" denote the Pre-2020 and Post-2020 splits.)

| Phase | Model | Pre F1 | Pre PRAUC | Pre ROCAUC | Post F1 | Post PRAUC | Post ROCAUC |
|---|---|---|---|---|---|---|---|
| I | SVM | 0.744±0.008 | 0.632±0.015 | 0.556±0.012 | 0.664±0.007 | 0.516±0.012 | 0.528±0.010 |
| I | XGBoost | 0.731±0.008 | 0.656±0.013 | 0.561±0.012 | 0.646±0.007 | 0.532±0.012 | 0.545±0.009 |
| I | MLP | 0.673±0.010 | 0.614±0.016 | 0.545±0.015 | 0.583±0.009 | 0.518±0.011 | 0.532±0.009 |
| I | RF | 0.744±0.008 | 0.591±0.014 | 0.502±0.011 | 0.662±0.006 | 0.468±0.009 | 0.458±0.009 |
| I | LR | 0.740±0.008 | 0.628±0.015 | 0.548±0.012 | 0.651±0.007 | 0.505±0.011 | 0.519±0.009 |
| I | PubMedBERT | 0.743±0.005 | 0.623±0.017 | 0.538±0.016 | 0.664±0.006 | 0.511±0.008 | 0.527±0.005 |
| I | BioBERT | 0.743±0.005 | 0.613±0.014 | 0.522±0.014 | 0.664±0.006 | 0.476±0.009 | 0.490±0.008 |
| II | SVM | 0.862±0.005 | 0.795±0.011 | 0.574±0.012 | 0.849±0.003 | 0.781±0.007 | 0.569±0.009 |
| II | XGBoost | 0.843±0.005 | 0.810±0.009 | 0.582±0.012 | 0.820±0.004 | 0.775±0.008 | 0.559±0.008 |
| II | MLP | 0.784±0.006 | 0.790±0.010 | 0.569±0.012 | 0.750±0.005 | 0.765±0.007 | 0.545±0.008 |
| II | RF | 0.863±0.005 | 0.799±0.009 | 0.574±0.011 | 0.850±0.004 | 0.762±0.008 | 0.532±0.009 |
| II | LR | 0.856±0.006 | 0.797±0.009 | 0.579±0.010 | 0.835±0.004 | 0.779±0.007 | 0.568±0.008 |
| II | PubMedBERT | 0.865±0.004 | 0.812±0.009 | 0.583±0.012 | 0.849±0.004 | 0.741±0.005 | 0.508±0.005 |
| II | BioBERT | 0.865±0.004 | 0.829±0.010 | 0.603±0.012 | 0.849±0.004 | 0.779±0.007 | 0.563±0.008 |
| III | SVM | 0.910±0.005 | 0.863±0.011 | 0.560±0.020 | 0.912±0.003 | 0.864±0.007 | 0.563±0.011 |
| III | XGBoost | 0.901±0.006 | 0.867±0.010 | 0.574±0.020 | 0.894±0.004 | 0.858±0.009 | 0.552±0.015 |
| III | MLP | 0.841±0.008 | 0.851±0.012 | 0.551±0.017 | 0.825±0.005 | 0.846±0.010 | 0.523±0.014 |
| III | RF | 0.910±0.005 | 0.856±0.011 | 0.536±0.019 | 0.912±0.003 | 0.854±0.007 | 0.529±0.014 |
| III | LR | 0.906±0.006 | 0.870±0.013 | 0.576±0.017 | 0.904±0.004 | 0.860±0.008 | 0.561±0.013 |
PubMedBERT0.911±0.004subscript0.911plus-or-minus0.0040.911_{\pm 0.004}0.911 start_POSTSUBSCRIPT ± 0.004 end_POSTSUBSCRIPT0.878±0.009subscript0.878plus-or-minus0.0090.878_{\pm 0.009}0.878 start_POSTSUBSCRIPT ± 0.009 end_POSTSUBSCRIPT0.598±0.016subscript0.598plus-or-minus0.0160.598_{\pm 0.016}0.598 start_POSTSUBSCRIPT ± 0.016 end_POSTSUBSCRIPT0.911±0.003subscript0.911plus-or-minus0.0030.911_{\pm 0.003}0.911 start_POSTSUBSCRIPT ± 0.003 end_POSTSUBSCRIPT0.836±0.004subscript0.836plus-or-minus0.0040.836_{\pm 0.004}0.836 start_POSTSUBSCRIPT ± 0.004 end_POSTSUBSCRIPT0.497±0.006subscript0.497plus-or-minus0.0060.497_{\pm 0.006}0.497 start_POSTSUBSCRIPT ± 0.006 end_POSTSUBSCRIPT
BioBERT0.911±0.004subscript0.911plus-or-minus0.0040.911_{\pm 0.004}0.911 start_POSTSUBSCRIPT ± 0.004 end_POSTSUBSCRIPT0.892±0.011subscript0.892plus-or-minus0.0110.892_{\pm 0.011}0.892 start_POSTSUBSCRIPT ± 0.011 end_POSTSUBSCRIPT0.631±0.019subscript0.631plus-or-minus0.0190.631_{\pm 0.019}0.631 start_POSTSUBSCRIPT ± 0.019 end_POSTSUBSCRIPT0.911±0.003subscript0.911plus-or-minus0.0030.911_{\pm 0.003}0.911 start_POSTSUBSCRIPT ± 0.003 end_POSTSUBSCRIPT0.870±0.005subscript0.870plus-or-minus0.0050.870_{\pm 0.005}0.870 start_POSTSUBSCRIPT ± 0.005 end_POSTSUBSCRIPT0.585±0.012subscript0.585plus-or-minus0.0120.585_{\pm 0.012}0.585 start_POSTSUBSCRIPT ± 0.012 end_POSTSUBSCRIPT

C.3 Additional Contributions

Additional Feature Collection

To run the baselines, we needed to collect ICD codes for diseases and SMILES strings [42] for drugs. We collected ICD-10 codes for the diseases from the NIH Clinical Table Search Service (https://clinicaltables.nlm.nih.gov/) [11]. We additionally collected SMILES (Simplified Molecular Input Line Entry System) strings for drugs from DrugBank. We also used NIH PubChem (via the PubChemPy API, https://pubchempy.readthedocs.io/en/latest/) [22, 21, 20] to collect the SMILES strings we could not find in DrugBank.

Stock Trend Prediction

The trend of a stock price is the overall direction in which the price is moving over a specified period, derived from the historical price data [8]. As discussed in Section 2.4, we calculated the slope of stock prices. We define the trend as the direction of the slope: a positive slope indicates a positive trend, and a negative slope a negative trend. We predicted the trend of the stock price in the 7-day window starting from a clinical trial's completion date. Table 5 shows the scores of different ML models for stock trend prediction. We used phase, eligibility criteria, diseases, and drugs as features, embedded them with BioBERT, and fed the embeddings to each baseline method in Table 5.

Model | Accuracy | ROCAUC | F1
LR | 0.5121 | 0.5314 | 0.5531
MLP | 0.5418 | 0.5202 | 0.6620
SVM | 0.5441 | 0.4878 | 0.6860
RF | 0.5040 | 0.5103 | 0.5377
XGBoost | 0.5013 | 0.5077 | 0.5340
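The trend labeling described above can be sketched as follows. This is a minimal illustration, assuming daily closing prices for the 7-day window starting at the trial's completion date; the least-squares slope stands in for whatever slope estimate the pipeline actually uses:

```python
import numpy as np

def stock_trend(prices: np.ndarray) -> int:
    """Label the trend of a price series: 1 for a positive slope, 0 otherwise.

    `prices` holds daily closing prices for the 7-day window that starts
    at the trial's completion date.
    """
    days = np.arange(len(prices))
    slope = np.polyfit(days, prices, deg=1)[0]  # slope of the least-squares line
    return int(slope > 0)

# Rising prices over the window -> positive trend (label 1)
print(stock_trend(np.array([10.0, 10.2, 10.1, 10.5, 10.6, 10.8, 11.0])))  # 1
# Falling prices -> negative trend (label 0)
print(stock_trend(np.array([11.0, 10.7, 10.8, 10.4, 10.2, 10.1, 9.9])))   # 0
```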

QA Dataset

In addition to the weak trial outcome labels, we provide a QA dataset on trial publications as an additional contribution. While obtaining LLM predictions on PubMed abstracts, we prompted the model to generate question-answer pairs from the given abstracts. The prompts for generating the QA pairs are provided in the supplementary material (Figure 10). The answers are provided in both short-answer and multiple-choice formats; examples can be found in the supplementary material (Figures 11 and 12). This QA dataset offers valuable information about the trials, complementing the weak outcome labels, and can be used for various downstream tasks such as question answering, information extraction, and knowledge base construction related to clinical trials.

Appendix D Label Creation Continued

D.1 Data Programming

The full data programming framework is detailed by Ratner et al. [31]; we introduce a small aspect of it below. At a high level, the aggregation of weakly supervised labeling functions (LFs) is framed as a dependency graph $G_{source}$ in which each LF $\lambda_i$ is conditioned on the true label $Y$. In our case, we assume conditional independence of all $\lambda_i \mid Y$. The dependency graph then has observable cliques $\bm{O} = \{\lambda_i : i \in n_{lf}\} \subset C$, where $n_{lf}$ is the number of labeling functions.

From here, the covariance matrix of an observable subset of the cliques in $G_{source}$ is analyzed, leading to a matrix completion approach for recovering the estimated accuracies $\mu$ (used in the final label model to predict $P(\bm{Y} \mid \bm{\lambda})$).

Let $\mu = \mathbb{E}[\psi(C)]$, where $\psi(C)$ is a vector of indicator random variables for all combinations of all but one of the labels emitted by each variable in clique $C$.

The covariance of the observed LF cliques $O$ and the separator-set cliques $S$, $\mathrm{Cov}(\psi(O) \cup \psi(S))$, can be used to recover $\mu$:

$$\mathrm{Cov}(\psi(O) \cup \psi(S)) = \Sigma = \begin{bmatrix} \Sigma_O & \Sigma_{OS} \\ \Sigma_{OS}^T & \Sigma_S \end{bmatrix} \tag{1}$$

Its inverse is:

$$K = \Sigma^{-1} = \begin{bmatrix} K_O & K_{OS} \\ K_{OS}^T & K_S \end{bmatrix} \tag{2}$$

Applying block matrix inversion, we get:

$$K_O = \Sigma_O^{-1} + c\,\Sigma_O^{-1}\Sigma_{OS}\Sigma_{OS}^T\Sigma_O^{-1}, \qquad c = \left(\Sigma_S - \Sigma_{OS}^T\Sigma_O^{-1}\Sigma_{OS}\right)^{-1}$$

Let $z = \sqrt{c}\,\Sigma_O^{-1}\Sigma_{OS}$; then

$$K_O = \Sigma_O^{-1} + zz^T$$

Solving for $z$ directly recovers the estimated accuracies $\mu$ via Algorithm 1 in Ratner et al. [31].
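As a quick numerical sanity check on the block-inversion identity above, the snippet below builds a random, well-conditioned covariance matrix with a one-dimensional separator set (so $c$ is a positive scalar) and confirms $K_O = \Sigma_O^{-1} + zz^T$. The matrix is synthetic, not derived from real labeling functions:

```python
import numpy as np

# Random well-conditioned covariance matrix, partitioned into observed (O)
# and separator (S) blocks; n_S = 1 makes the Schur complement c a scalar.
rng = np.random.default_rng(0)
n_O, n_S = 4, 1
A = rng.normal(size=(n_O + n_S, n_O + n_S))
Sigma = A @ A.T + (n_O + n_S) * np.eye(n_O + n_S)

Sigma_O = Sigma[:n_O, :n_O]
Sigma_OS = Sigma[:n_O, n_O:]
Sigma_S = Sigma[n_O:, n_O:]

K_O = np.linalg.inv(Sigma)[:n_O, :n_O]        # top-left block of K = Sigma^{-1}

inv_Sigma_O = np.linalg.inv(Sigma_O)
c = np.linalg.inv(Sigma_S - Sigma_OS.T @ inv_Sigma_O @ Sigma_OS)  # 1x1, positive
z = np.sqrt(c) * (inv_Sigma_O @ Sigma_OS)     # z = sqrt(c) * Sigma_O^{-1} Sigma_OS

print(np.allclose(K_O, inv_Sigma_O + z @ z.T))  # True
```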

D.2 Weakly Supervised Labeling Functions

In this section, we report all of our LFs and their implementations.

  • results_reported: 1 if results were reported for a trial. Otherwise, it predicts 0.

  • num_sponsors: The number of sponsors for a trial. Can be thresholded. It is important to study the impact of single sponsors, multiple sponsors, collaborative partnerships, and public funding. The number and type of sponsors significantly affect the clinical trial process and the overall path of bringing a new drug to market, from funding and resources to regulatory guidance, global reach, operational support, supply chain management, market access and distribution, and risk management.

  • num_patients: The number of patients in a trial. Can be thresholded. Because clinical trials are designed with sufficient statistical power to detect the true effect of the drug and to minimize the risk of a Type II error (failing to detect a treatment effect that is present, i.e., a false negative), the number of participants in a trial is a key aspect of trial success.

  • patient_drop: The number of patients who drop out during the trial. Can be thresholded. Patients drop out of a clinical trial due to lack of efficacy, unintended adverse events, or other reasons that can result in unanticipated trial outcomes.

  • sites: The number of total sites during the trial. Can be thresholded. This also indirectly measures the funding capabilities of the sponsors, much like num_sponsors.

  • pvalues: The total number of reported occurrences where the p-value < 0.05. Can be thresholded on the number of occurrences (the 0.05 significance level itself stays fixed). A p-value < 0.05 suggests the observed effect is statistically significant and supports rejecting the null hypothesis.

  • update_more_recent: The difference between the date at which the trial was last updated and its completion date. Can be thresholded. This time gap can provide critical insight into a trial's post-completion process and transparency. Identifying delays is helpful for predicting trial success, as they may stem from data analysis and validation, regulatory review, or the publication process. A heavily amended trial can also indicate success due to a large number of publications.

  • death_ae, serious_ae, all_ae: Represents the number of deaths, serious adverse events, and total adverse events, respectively. Can be thresholded. The total number of adverse events (AEs), serious adverse events (SAEs), and deaths in a clinical trial can provide important safety information about the investigational treatment. However, the significance of these numbers depends on various factors, including the size and duration of the trial, the nature of the treatment, and the characteristics of the study population.

  • status: The status of the trial. We say that a trial is not successful if its status is 'Terminated', 'Withdrawn', 'Suspended', 'Withheld', 'No longer available', or 'Temporarily not available'. If it is 'Approved for marketing', we say it is successful. Otherwise, we abstain from predicting either label. Incorporating this information enhances the transparency, regulatory compliance, and ethical grounding of the labels. For example, 'Terminated: The study has stopped early and will not start again. Participants are no longer being examined or treated.' usually occurs when the trial causes significant negative side effects in several patients.

  • amendments: Represents the number of trial amendments. Can be thresholded. Clinical trials must follow approved protocols [18, 28], but amendments may be required after regulatory approval to adjust protocols to new requirements or insights. We scraped record histories for each trial from https://clinicaltrials.gov/ and calculated the total number of times each clinical trial has been amended. The number of amendments provides some insight into a trial's progress, but it is not a direct indication of success or failure, so we treat it as a weak label. Amendments are sometimes crucial for adapting to emerging data from ongoing trials, addressing safety concerns, or optimizing study design to yield better outcomes.

  • stock_price: Is positive if the 5-day moving average of the sponsor's stock price has a positive slope, and negative if the slope is negative. See Section 2.4.

  • linkage: Is positive if a trial was found to have any later-stage trials linked to it. See Section 2.2.

  • news_headlines: Is positive or negative depending on the sentiment of any news headline related to the trial. See Section 2.3.

  • gpt: Represents GPT-3.5-turbo-0125's decisions on PubMed abstracts. See Section 2.1.
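In code, each LF is simply a function from a trial record to {1, 0, −1}, where −1 means abstain, the convention used by data-programming frameworks such as Snorkel. A minimal sketch of two of the LFs above; the dict field names and the `min_sig` threshold are illustrative, not the repository's exact schema:

```python
ABSTAIN, FAIL, SUCCESS = -1, 0, 1

def lf_status(trial: dict) -> int:
    """Map overall status to an outcome label, abstaining on other statuses."""
    failed = {"Terminated", "Withdrawn", "Suspended", "Withheld",
              "No longer available", "Temporarily not available"}
    if trial.get("status") in failed:
        return FAIL
    if trial.get("status") == "Approved for marketing":
        return SUCCESS
    return ABSTAIN

def lf_pvalues(trial: dict, min_sig: int = 1) -> int:
    """Positive if at least `min_sig` reported p-values fall below 0.05."""
    pvals = trial.get("pvalues", [])
    if not pvals:
        return ABSTAIN
    return SUCCESS if sum(p < 0.05 for p in pvals) >= min_sig else FAIL

print(lf_status({"status": "Terminated"}))   # 0
print(lf_pvalues({"pvalues": [0.01, 0.2]}))  # 1
```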

D.3 Phase-Specific Thresholding

For Phases 1, 2, and 3, we select phase-specific quantile thresholds from (0.1, 0.2, …, 0.9) for all LFs with tunable thresholds, tuned per phase on the TOP training dataset.
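A minimal sketch of the per-phase threshold search, assuming numeric LF scores and binary TOP training labels as inputs; the F1 computation is inlined to keep the example dependency-free, and the synthetic scores in the usage example are illustrative:

```python
import numpy as np

def f1(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Binary F1 score."""
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

def best_quantile(scores, labels, quantiles=np.arange(0.1, 1.0, 0.1)):
    """Return the quantile threshold on `scores` that maximizes F1 on `labels`."""
    best_q, best_f1 = None, -1.0
    for q in quantiles:
        thresh = np.quantile(scores, q)
        pred = (scores >= thresh).astype(int)
        score = f1(labels, pred)
        if score > best_f1:
            best_q, best_f1 = q, score
    return best_q, best_f1

# Synthetic example: scores perfectly separate labels at the 0.7 quantile
scores = np.arange(100) / 100
labels = (scores >= 0.7).astype(int)
q, best = best_quantile(scores, labels)
print(round(q, 1), best)  # 0.7 1.0
```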

To reiterate our final labeling process, we use both an unsupervised aggregation approach (data programming) and a supervised random forest to obtain our estimated labels, grounding our weakly supervised signals in the human-annotated TOP training data. For data programming, we add the TOP training labels to all of our other weakly supervised LFs, duplicating the TOP labels three times so that they obtain high agreement, and therefore high weight, in the matrix completion step.

For our supervised approach, we train a random forest model on all other weakly supervised LF outputs to predict the ground truth. Both approaches automatically create predictions for all of the trials (more than 400K). For the final prediction, trials are first segmented by phase, and each trial is assigned the prediction from the corresponding phase-threshold-tuned model.
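A minimal sketch of the supervised aggregation step, assuming scikit-learn is available; the synthetic `L` matrix (LF votes in {1, 0, −1 = abstain}), the 200-trial size, and the hyperparameters are illustrative stand-ins for the real LF outputs and TOP labels:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in: each LF either votes the true label or abstains (-1).
rng = np.random.default_rng(0)
y_top = rng.integers(0, 2, size=200)                      # TOP training labels
L = np.where(rng.random((200, 5)) < 0.7, y_top[:, None], -1)  # LF vote matrix

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(L, y_top)                        # learn to map LF votes to the outcome
soft_labels = clf.predict_proba(L)[:, 1] # soft labels for all trials
print(soft_labels.shape)                 # (200,)
```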

Appendix E CTO Statistics

E.1 Labeling Function Statistics

Tables 6 and 7 show statistics of all static and tunable labeling functions for the TOP training and validation data splits. Most LFs cover more than 50% of the data, although this is not true of news headlines and status. The individual accuracy and kappa scores are generally not stellar on their own, the highest kappa belonging to status (which suffers from low coverage). Among the tunable LFs, p-values stand out with the highest kappa, and among the static LFs, GPT decisions do; this makes sense, as studies with more significant results were likely a primary signal for the original annotators and are also what publications report. The highest accuracies belong to status and GPT. Since status predicts all negatives, it is clear that terminated, withdrawn, and otherwise incomplete trials are generally considered unsuccessful.

Phase | Labeling Function | Abstain | Predict 0 | Predict 1 | Pos. Prop. | Coverage | Acc | κ
1 | status | 814 | 347 | 0 | 0.000 | 0.217 | 0.954 | 0.000
1 | gpt | 179 | 41 | 198 | 0.828 | 0.150 | 0.890 | 0.678
1 | linkage | 0 | 496 | 573 | 0.536 | 0.670 | 0.516 | 0.035
1 | stock_price | 0 | 95 | 130 | 0.578 | 0.141 | 0.540 | -0.021
1 | results_reported | 0 | 800 | 361 | 0.311 | 0.727 | 0.455 | -0.059
2 | status | 3296 | 1155 | 0 | 0.000 | 0.228 | 0.968 | 0.000
2 | gpt | 683 | 338 | 913 | 0.730 | 0.247 | 0.875 | 0.707
2 | linkage | 0 | 2125 | 2027 | 0.488 | 0.820 | 0.576 | 0.150
2 | stock_price | 0 | 457 | 447 | 0.494 | 0.178 | 0.504 | 0.008
2 | results_reported | 0 | 2058 | 2393 | 0.538 | 0.879 | 0.524 | 0.050
3 | status | 2999 | 439 | 0 | 0.000 | 0.121 | 0.954 | 0.000
3 | gpt | 605 | 276 | 1389 | 0.834 | 0.460 | 0.908 | 0.717
3 | linkage | 0 | 1284 | 1929 | 0.600 | 0.888 | 0.626 | 0.209
3 | stock_price | 0 | 438 | 528 | 0.547 | 0.267 | 0.550 | 0.057
3 | results_reported | 0 | 1399 | 2039 | 0.593 | 0.951 | 0.619 | 0.199
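The per-LF statistics reported in these tables (coverage, positive proportion, accuracy, and Cohen's κ against the TOP labels) can be computed as below; votes use the same {1, 0, −1 = abstain} convention, and the example vectors are illustrative:

```python
import numpy as np

def lf_stats(votes: np.ndarray, y: np.ndarray) -> dict:
    """Coverage, positive proportion, accuracy, and Cohen's kappa for one LF
    against gold labels `y`, computed over the trials where it does not abstain."""
    covered = votes != -1
    if not covered.any():
        return {"coverage": 0.0, "pos_prop": 0.0, "acc": 0.0, "kappa": 0.0}
    v, t = votes[covered], y[covered]
    acc = (v == t).mean()
    # Expected chance agreement from the two marginal label distributions
    p_e = sum((v == c).mean() * (t == c).mean() for c in (0, 1))
    kappa = (acc - p_e) / (1 - p_e) if p_e < 1 else 0.0
    return {"coverage": covered.mean(), "pos_prop": (v == 1).mean(),
            "acc": acc, "kappa": kappa}

stats = lf_stats(np.array([1, 1, 0, -1, 0, 1]), np.array([1, 0, 0, 1, 0, 1]))
print({k: round(float(val), 3) for k, val in stats.items()})
```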

Phase | Labeling Function | Best Quantile | Abstain | Predict 0 | Predict 1 | Pos. Prop. | Coverage | Acc | κ
I | update_more_recent | 0.8 | 88 | 665 | 408 | 0.380 | 0.672 | 0.564 | 0.153
I | sites | 0.1 | 0 | 588 | 491 | 0.455 | 0.676 | 0.543 | 0.092
I | serious_ae | 0.9 | 0 | 22 | 339 | 0.939 | 0.226 | 0.490 | -0.045
I | pvalues | 0.5 | 0 | 33 | 23 | 0.411 | 0.035 | 0.585 | 0.256
I | patient_drop | 0.9 | 0 | 4 | 220 | 0.982 | 0.140 | 0.530 | 0.013
I | num_sponsors | 0.7 | 0 | 1064 | 97 | 0.084 | 0.727 | 0.446 | -0.017
I | num_patients | 0.2 | 0 | 126 | 235 | 0.651 | 0.226 | 0.723 | 0.441
I | news_headlines | 0.7 | 637 | 16 | 48 | 0.750 | 0.040 | 0.610 | 0.009
I | death_ae | 0.1 | 0 | 37 | 324 | 0.898 | 0.226 | 0.553 | 0.086
I | amendments | 0.1 | 0 | 30 | 195 | 0.867 | 0.141 | 0.715 | -0.076
I | all_ae | 0.9 | 0 | 6 | 355 | 0.983 | 0.226 | 0.509 | -0.010
II | update_more_recent | 0.8 | 290 | 2897 | 1264 | 0.304 | 0.822 | 0.530 | 0.051
II | sites | 0.9 | 0 | 2723 | 1398 | 0.339 | 0.814 | 0.537 | 0.059
II | serious_ae | 0.9 | 0 | 211 | 2179 | 0.912 | 0.472 | 0.488 | -0.031
II | pvalues | 0.5 | 0 | 418 | 341 | 0.449 | 0.150 | 0.683 | 0.384
II | patient_drop | 0.9 | 0 | 100 | 1614 | 0.942 | 0.338 | 0.533 | 0.003
II | num_sponsors | 0.9 | 0 | 4328 | 123 | 0.028 | 0.879 | 0.515 | -0.007
II | num_patients | 0.3 | 0 | 869 | 1524 | 0.637 | 0.472 | 0.699 | 0.398
II | news_headlines | 0.9 | 2678 | 24 | 68 | 0.739 | 0.018 | 0.608 | -0.007
II | death_ae | 0.1 | 0 | 155 | 2235 | 0.935 | 0.472 | 0.515 | 0.023
II | amendments | 0.1 | 0 | 57 | 847 | 0.937 | 0.178 | 0.633 | -0.026
II | all_ae | 0.9 | 0 | 146 | 2244 | 0.939 | 0.472 | 0.483 | -0.041
III | update_more_recent | 0.9 | 217 | 1431 | 1790 | 0.556 | 0.891 | 0.513 | -0.012
III | sites | 0.8 | 0 | 1091 | 1902 | 0.635 | 0.827 | 0.527 | -0.017
III | serious_ae | 0.9 | 0 | 708 | 1322 | 0.651 | 0.561 | 0.597 | 0.060
III | pvalues | 0.5 | 0 | 449 | 896 | 0.666 | 0.372 | 0.768 | 0.450
III | patient_drop | 0.9 | 0 | 587 | 1210 | 0.673 | 0.497 | 0.609 | 0.053
III | num_sponsors | 0.1 | 0 | 2505 | 933 | 0.271 | 0.951 | 0.367 | -0.124
III | num_patients | 0.2 | 0 | 80 | 1959 | 0.961 | 0.564 | 0.759 | 0.182
III | news_headlines | 0.9 | 2734 | 26 | 72 | 0.735 | 0.027 | 0.696 | 0.077
III | death_ae | 0.9 | 0 | 61 | 1969 | 0.970 | 0.561 | 0.720 | 0.029
III | amendments | 0.1 | 0 | 55 | 911 | 0.943 | 0.267 | 0.717 | -0.001
III | all_ae | 0.9 | 0 | 876 | 1154 | 0.568 | 0.561 | 0.540 | 0.015

E.2 Statistics on Trial Linkage

[Figure 5: statistics of the weak outcome labels extracted from trial linkages]

We present statistics on the weak trial outcome labels extracted from the clinical trial linkages in Figure 5. Figure 5A shows the phase distribution of these weak labels, excluding trials with missing phase information or labeled 'Not Applicable', as these are omitted from our linking algorithm. Outcomes for Phase 4 are also not included, because there are no subsequent trials after Phase 4. The criteria for extracting labels from trial linkage are defined as follows:

$$\text{Trial outcome} = \begin{cases} \text{Success} & \text{if a next-phase trial exists} \\ \text{Not Sure} & \text{if there is a weakly connected next-phase trial} \\ \text{Failure} & \text{if no next-phase trial is found} \end{cases} \tag{3}$$

Here, a "weakly connected next phase trial" refers to a trial link found during the retrieval stage that received a negative cross-encoder score during the predict linkage stage. Figure5 B and C illustrate the distribution of weak labels across different phases and over time (years).

E.3 Statistics on LLM Predictions

In Figure 6, we present the statistics of LLM predictions on PubMed abstracts, including a histogram of the number of publications per trial. Figure 6A illustrates the phase distribution of the extracted weak labels, including rare categories such as terminated, recruiting, and completed. The LLM is prompted to predict the trial outcome from the provided abstracts as 'Success', 'Failure', or 'Not Sure'. The distribution of weak labels over time (years) and across phases is shown in Figures 6B and 6C, respectively. The number of 'Failure' labels is relatively low compared to the others, as trials with multiple publications are more likely to be successful. Additionally, Figure 6D presents the histogram of background, derived, and result-type publications for the trials.

[Figure 6: statistics of LLM predictions on PubMed abstracts]

E.4 Pair-wise Agreement Between Labeling Functions

To analyze agreement between the labeling functions and the final aggregated labels, we calculated pairwise agreement scores using Cohen's kappa on the TOP data splits. These scores are shown in Figures 7 and 8 for random forest and data programming label aggregation, respectively. Each cell in the heatmap represents the agreement score between a pair of labeling functions. Note that no labeling function covers all trials with weak labels; when calculating Cohen's kappa, we therefore considered only the trials common to both labeling functions. For instance, there are no common trials with weak labels from 'status' and 'amendments', so the corresponding cell is left blank.

In the context of random forest label aggregation, LLM predictions on PubMed abstracts, p-values, trial linkage, number of sites, and number of patients showed higher agreement with the final aggregated labels. Similar patterns were observed with labels aggregated using data programming. Notably, LLM predictions had high agreement with p-values, likely because the LLM considered the p-values provided in the abstracts to predict trial outcomes. Additionally, there was good agreement between LLM predictions on PubMed abstracts and other factors such as trial linkage and number of patients.
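The pairwise agreement computation, Cohen's κ restricted to trials covered by both LFs, can be sketched as follows (votes again use {1, 0, −1 = abstain}):

```python
import numpy as np
from typing import Optional

def pairwise_kappa(a: np.ndarray, b: np.ndarray) -> Optional[float]:
    """Cohen's kappa between two LF vote vectors over the trials where
    neither abstains (-1); None when they share no covered trials."""
    common = (a != -1) & (b != -1)
    if not common.any():
        return None                       # blank cell in the heatmap
    x, y = a[common], b[common]
    p_o = (x == y).mean()                 # observed agreement
    p_e = sum((x == c).mean() * (y == c).mean() for c in (0, 1))
    return float((p_o - p_e) / (1 - p_e)) if p_e < 1 else 1.0

print(pairwise_kappa(np.array([1, 0, -1, 1]), np.array([1, 1, -1, -1])))  # 0.0
print(pairwise_kappa(np.array([-1, 1]), np.array([1, -1])))               # None
```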

[Figure 7: pairwise agreement between labeling functions, random forest aggregation]
[Figure 8: pairwise agreement between labeling functions, data programming aggregation]

Appendix F Ablation on Features for Trial Linkage

This section presents the ablation study conducted on the trial features used by the trial-linking algorithm. Figure 9 compares the TOP labels against the outcome labels from trial linkages created using individual features. The features "intervention", "official title", "brief summary", and "eligibility criteria" consistently achieved better performance across all phases. In contrast, "lead sponsor" performed the worst, likely because trial sponsors often change based on funding capacity, even when a trial progresses to the next phase.

Additionally, Table 8 shows the performance of various combinations of trial features across phases. Combining all trial features except "lead sponsor" yielded the best performance overall; including "lead sponsor" generally reduced the accuracy of the extracted weak outcome labels. The exception is Phase 1, where adding "lead sponsor" improved performance, as it is rare for a trial to change sponsors between Phase 1 and Phase 2.

[Figure 9: ablation of individual trial features for trial linkage]

Phase | Feature combination | F1
Phase 1 | Official title + Intervention | 0.645
Phase 1 | Official title + Intervention + Brief Summary | 0.6426 ↓
Phase 1 | Official title + Intervention + Brief Summary + Eligibility | 0.6447 ↑
Phase 1 | Official title + Intervention + Brief Summary + Eligibility + Condition | 0.6532 ↑
Phase 1 | Official title + Intervention + Brief Summary + Eligibility + Condition + Lead Sponsor | 0.6658 ↑
Phase 2 | Official title + Intervention | 0.6027
Phase 2 | Official title + Intervention + Brief Summary | 0.6042 ↑
Phase 2 | Official title + Intervention + Brief Summary + Eligibility | 0.608 ↑
Phase 2 | Official title + Intervention + Brief Summary + Eligibility + Condition | 0.6196 ↑
Phase 2 | Official title + Intervention + Brief Summary + Eligibility + Condition + Lead Sponsor | 0.6189 ↓
Phase 3 | Official title + Intervention | 0.7323
Phase 3 | Official title + Intervention + Brief Summary | 0.7319 ↓
Phase 3 | Official title + Intervention + Brief Summary + Eligibility | 0.7379 ↑
Phase 3 | Official title + Intervention + Brief Summary + Eligibility + Condition | 0.7498 ↑
Phase 3 | Official title + Intervention + Brief Summary + Eligibility + Condition + Lead Sponsor | 0.7384 ↓
All | Official title + Intervention | 0.6686
All | Official title + Intervention + Brief Summary | 0.6687 ↑
All | Official title + Intervention + Brief Summary + Eligibility | 0.6737 ↑
All | Official title + Intervention + Brief Summary + Eligibility + Condition | 0.6842 ↑
All | Official title + Intervention + Brief Summary + Eligibility + Condition + Lead Sponsor | 0.6796 ↓

Appendix G Prompts for LLM Predictions on PubMed Abstracts

In this section, we describe the method used to obtain LLM predictions on PubMed abstracts, including the statistical test results and question-answer pairs. An example prompt is shown in Figure 10. Additionally, we provide two examples of input abstracts given to the LLM and their resulting outputs, shown in Figures 11 and 12.

[Figure 10: prompt for LLM predictions on PubMed abstracts]
[Figure 11: example LLM input abstract and output]
[Figure 12: example LLM input abstract and output]

Appendix H Case studies

We utilized our random forest labels for the following case studies.

H.1 Case study 1

We conducted a case study on clinical trial NCT01213160 (https://clinicaltrials.gov/study/NCT01213160), which was completed in 2013. Five different weak labels (GPT decision, trial linkage, stock price, sites, and amendments) suggest that the trial was successful. The PubMed article for this trial (https://ncbi.nlm.nih.gov/pmc/articles/PMC5502072/) also reports that "AZD4547 was well tolerated in Japanese patients, with the best response of stable disease ≥ 4 weeks." We therefore believe NCT01213160 was successful, as our CTO label suggests, and that the TOP label for this trial is incorrect.

H.2 Case study 2

Another clinical trial we examined in detail is NCT01111188 (https://clinicaltrials.gov/study/NCT01111188). This trial was terminated, so our label for it is Failure (label 0). Its TOP label, however, is Success (label 1), which is unlikely given that the trial was never completed. Additionally, the GPT decision for the trial was 0, meaning the model judged from the collected PubMed abstract that the trial failed: "…all patients required dose delays during cycle 2 due to cytopenias, and the study team decided to stop the trial…with the primary toxicity being myelosuppression".

Appendix I User Manual: Instructions to Generate and Use CTO

We provide documentation for using CTO as of June 2024, and we will attempt to update this document over time as our sources are updated. The most up-to-date information is available in our GitHub repository at https://github.com/chufangao/CTOD. The current version of the dataset can be accessed at https://zenodo.org/doi/10.5281/zenodo.11535960.

I.1 Instructions to Generate Clinical Trial Linkage

The code to reproduce clinical trial linkages in the CTO dataset from the CITI dataset is provided in the GitHub repository under the /clinical_trial_linkage folder. The step-by-step instructions to generate the clinical trials linkages are as follows:

Prerequisites:

1. Extract trial info and save trial embeddings:

First, we extract trial features from the CITI dataset. Provide the <data_path> for downloaded CITI data in the command below:

cd clinical_trial_linkage
python extract_trial_info.py --data_path <CITI dir>

Run the following command to extract and save the embeddings for the trial features using PubMedBERT. Make sure to provide the path to save the embeddings. Feel free to make changes to num_workers and gpu_ids as necessary.

python get_embedding_for_trial_linkage.py --root_folder <saved embeddings dir> --num_workers 2 --gpu_ids 0,1

2. Linking of clinical trials across phases:

Based on the extracted embeddings, we link trials across different phases, as shown in the above figure. Run the following commands to do so. Provide the root_folder path to save the extracted linkages and embedding_path pointing to the saved embeddings. Since we link from later phases back to earlier ones, pass the later (starting) phase as target_phase. Only the following phases need to be considered to create the trial linkage: ['Phase 2', 'Phase 2/Phase 3', 'Phase 3', 'Phase 4'].

# Phase 4
python create_trial_linkage.py --root_folder <created linkages dir> --target_phase 'Phase 4' --embedding_path <saved embeddings dir> --num_workers 2 --gpu_ids 0
# Phase 3
python create_trial_linkage.py --root_folder <created linkages dir> --target_phase 'Phase 3' --embedding_path <saved embeddings dir> --num_workers 2 --gpu_ids 0
# Phase 2/Phase 3
python create_trial_linkage.py --root_folder <created linkages dir> --target_phase 'Phase 2/Phase 3' --embedding_path <saved embeddings dir> --num_workers 2 --gpu_ids 0
# Phase 2
python create_trial_linkage.py --root_folder <created linkages dir> --target_phase 'Phase 2' --embedding_path <saved embeddings dir> --num_workers 2 --gpu_ids 0

3. Extract outcome labels:

Run the following command to extract clinical trial outcome weak labels from clinical trial linkages. Provide the path with saved trial linkages.

python extract_outcome_from_trial_linkage.py --trial_linkage_path <trial linkage dir>

4. FDA approval matching:

Run the following command to match the FDA approvals from the orange book to phase 3 trials and update the outcome labels for phase 3 trials.

python match_fda_approvals.py --trial_linkage_path <matched trials path>

The final outcome labels extracted from the clinical trial linkages and matching FDA approvals will be saved at:

<trial linkage dir>/outcome_labels/Merged_(ALL)_trial_linkage_outcome_df.csv

I.2 Instructions to Obtain LLM Predictions on PubMed Abstracts

The code to obtain LLM predictions on the PubMed abstracts linked to trials in the CTTI dataset, as used in the CTO dataset, is provided in the GitHub repository under the /llm_prediction_on_pubmed folder. The step-by-step instructions are as follows:

Prerequisites:

  • Download the trial dataset from CTTI. If it has already been downloaded, provide the path to the data in the scripts.

  • To extract all the PubMed abstracts linked to the clinical trials, we use the NCBI API. Follow the instructions on this page to create an NCBI account and obtain the API key.

1. Extract PubMed Abstracts:

Run the following commands for the extraction algorithm to retrieve all linked PubMed abstracts. Provide the NCBI API key, the path to the CTTI data, and the path to save the extracted abstracts.

cd llm_prediction_on_pubmed
python extract_pubmed_abstracts.py --data_path <CTTI dir> --NCBI_api_key <API key> --save_path <Path to save extracted abstracts>

2. Retrieve the Top 2 Abstracts:

To make the process efficient, we initially retrieve the top 2 most relevant abstracts (as shown in the figure above) and save them in a data frame.

python retrieve_top2_abstracts.py --data_path <Path to CTTI data> --pubmed_path <extracted pubmed dir>

The resultant data frame will be saved at <pubmed_path>/top_2_extracted_pubmed_articles.csv
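The "top-2" selection above amounts to ranking each trial's linked abstracts by a relevance score and keeping the two highest-scoring. The sketch below is illustrative only; the scoring function and field layout are assumptions, not the repository's code.

```python
# Illustrative sketch of keeping the top-2 most relevant abstracts per trial.
# Each abstract is a (pmid, relevance_score) pair; scores are assumed given.
def top2_abstracts(abstracts):
    """Return the 2 highest-scoring (pmid, score) pairs, best first."""
    return sorted(abstracts, key=lambda x: x[1], reverse=True)[:2]

hits = [("pmid1", 0.31), ("pmid2", 0.92), ("pmid3", 0.77)]
print(top2_abstracts(hits))  # [('pmid2', 0.92), ('pmid3', 0.77)]
```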

3. Get LLM Predictions:

To get the LLM predictions on the PubMed abstracts, provide the OpenAI API key to the get_llm_predictions.py script and run the following command. Also, provide the path to the above top_2_extracted_pubmed_articles.csv and the path to save the LLM predictions. Along with the LLM predictions, statistical features extracted from the abstracts and the QA pairs are saved in JSON files.

python get_llm_predictions.py --top_2_pubmed_path <top_2_extracted_pubmed_articles.csv path> --save_path <LLM predictions dir>
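To give a flavor of this step, the sketch below builds the kind of question-answering prompt that could be sent to the LLM for a trial's top-2 abstracts. The wording, fields, and answer options are assumptions for illustration; they are not the repository's actual prompt.

```python
# Hypothetical prompt builder for LLM outcome prediction from abstracts.
# All wording here is an illustrative assumption, not the repo's prompt.
def build_prompt(trial_title, abstracts):
    """Assemble a single QA prompt from a trial title and its top abstracts."""
    joined = "\n\n".join(abstracts)
    return (
        f"Clinical trial: {trial_title}\n"
        f"Abstracts:\n{joined}\n"
        "Question: Based on the abstracts, did this trial meet its primary "
        "endpoint? Answer Success, Failure, or Unknown."
    )

prompt = build_prompt("Trial A", ["Abstract one.", "Abstract two."])
print(prompt)
```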

Finally, run the following code to combine all the outcomes. Also, provide the path to the above top_2_extracted_pubmed_articles.csv and the path to saved LLM predictions.

python clean_and_extract_final_outcomes.py --gpt_decisions_path <LLM predictions dir> --top_2_pubmed_path <top_2_extracted_pubmed_articles.csv path>

The final outcome predictions from the LLM are saved at <top_2_pubmed_path>/pubmed_gpt_outcomes.csv

I.3 News Headlines

Prerequisites:

  • We use GNews to scrape Google News for the news headlines.

1: Scrape Google News

Run the following command to start scraping for the top 1000 industry sponsors (NOTE: this will take a long time, on the scale of multiple weeks). We share our scraped headlines in the Zenodo supplementary material.

python get_news.py --mode=get_news

2: Obtaining Sentiment Embeddings from News Headlines and Study Titles

Running this command also saves the news title embeddings and a dataframe of the news as news.csv.

python get_news.py --mode=process_news

3: Corresponding News and Trials: We encode the trial study title embeddings and obtain the top-K most similar headlines for each trial. Running this command also saves the study title embeddings.

python get_news.py --mode=correspond_news_and_studies
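Once headlines are matched to trials, their sentiments can be turned into a weak outcome label; one simple rule is a majority vote over the matched headlines' sentiments. The sketch below illustrates that idea under stated assumptions; the label set, the vote rule, and the function name are not taken from the repository.

```python
# Hedged sketch: derive a weak success/failure label from the sentiments of
# a trial's top-K matched headlines via majority vote (tie -> success).
from collections import Counter

def weak_label_from_sentiments(sentiments):
    """sentiments: list of 'positive'/'negative' strings; None if no headlines."""
    if not sentiments:
        return None
    counts = Counter(sentiments)
    return 1 if counts["positive"] >= counts["negative"] else 0

print(weak_label_from_sentiments(["positive", "positive", "negative"]))  # 1
print(weak_label_from_sentiments(["negative", "negative", "positive"]))  # 0
```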

I.4 Stock Price

Prerequisites:

  • Download the trial dataset from CTTI. If it has already been downloaded, provide the path to the data in the scripts.

  • Create a CSV file 'tickers.csv' that contains the names and corresponding tickers of sponsors in 'name' and 'ticker' columns, respectively.

1: Obtain Stock Prices: Run tickers_2_history.ipynb to get historical stock prices for the sponsors in tickers.csv if those are available publicly. The stock price data will be stored in stock_prices_historical.csv.

2: Calculate Slope: Use studies.txt and sponsors.txt from CTTI with stock_prices_historical.csv and tickers.csv to calculate the slope of the stock prices using slope_calculation.ipynb.
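The slope signal amounts to fitting an ordinary least-squares line to a sponsor's closing prices over a window of trading days and taking its slope as the trend. The sketch below is illustrative only; the window, inputs, and function name are assumptions rather than the notebook's code.

```python
# Hedged sketch of the stock-price slope signal: OLS slope of closing prices
# over consecutive trading days (day index 0, 1, 2, ... as the x-axis).
def price_slope(prices):
    """Return the least-squares slope of prices vs. day index."""
    n = len(prices)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(prices) / n
    num = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, prices))
    den = sum((x - x_mean) ** 2 for x in xs)
    return num / den

print(price_slope([10.0, 11.0, 12.0, 13.0]))  # 1.0 (rising trend)
print(price_slope([13.0, 12.0, 11.0, 10.0]))  # -1.0 (falling trend)
```

A positive slope around a trial's result announcement can then serve as weak evidence of a favorable outcome.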

I.5 Running Baselines

Here are the steps to run different baselines for clinical trial outcome prediction:

SPOT:

Update the train, test, and validation data paths in run_spot.py. Execute the Python file:

python run_spot.py

BioBERT:

Modify the train, test, and validation data paths in biobert_trial_outcome.py. Run the Python file:

python biobert_trial_outcome.py

PubMedBERT:

Change the train, test, and validation data paths in pubmedbert_trial_outcome.py. Execute the Python file:

python pubmedbert_trial_outcome.py

SVM, XGBoost, MLP, RF, or LR:

Ensure that the paths in baselines.py are correct. Run the Python file:

python baselines.py