Thomas Jaki
  • Medical and Pharmaceutical Statistics Research Unit
    Department of Mathematics and Statistics
    Lancaster University


In genetics it is often of interest to discover single nucleotide polymorphisms (SNPs) that are directly related to a disease, rather than just being associated with it. Few methods exist, however, addressing this so-called `true sparsity recovery' issue. In a thorough simulation study, we show that for moderate or low correlation between predictors, lasso-based methods perform well at true sparsity recovery, despite not being specifically designed for this purpose. For large correlations, however, more specialised methods are needed. Stability selection and direct effect testing perform well in all situations, including when the correlation is large.
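The aggregation idea behind stability selection can be sketched in a few lines: repeatedly refit a selector on random subsamples and keep only the predictors chosen in a large fraction of refits. This is an illustrative sketch, not the paper's implementation; the correlation-threshold selector stands in for the lasso, and all names and thresholds are arbitrary.

```python
import random

def stability_selection(rows, labels, base_selector, n_subsamples=100, threshold=0.6):
    """Count how often each predictor is chosen by `base_selector` across
    random half-subsamples; keep only the stably selected ones."""
    p = len(rows[0])
    counts = [0] * p
    for _ in range(n_subsamples):
        idx = random.sample(range(len(rows)), len(rows) // 2)
        chosen = base_selector([rows[i] for i in idx], [labels[i] for i in idx])
        for j in chosen:
            counts[j] += 1
    return [j for j in range(p) if counts[j] / n_subsamples >= threshold]

def corr_selector(rows, labels, cutoff=0.5):
    """Toy stand-in for the lasso: keep predictors whose absolute
    correlation with the outcome exceeds a cut-off."""
    n, p = len(rows), len(rows[0])
    chosen = []
    for j in range(p):
        x = [r[j] for r in rows]
        mx, my = sum(x) / n, sum(labels) / n
        sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, labels))
        sxx = sum((xi - mx) ** 2 for xi in x)
        syy = sum((yi - my) ** 2 for yi in labels)
        if sxx > 0 and syy > 0 and abs(sxy / (sxx * syy) ** 0.5) > cutoff:
            chosen.append(j)
    return chosen

random.seed(1)
# Predictor 0 drives the outcome; predictor 1 is pure noise.
rows = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(200)]
labels = [r[0] + random.gauss(0, 0.3) for r in rows]
selected = stability_selection(rows, labels, corr_selector)
print(selected)
```

The key design point is that the final panel depends on selection *frequencies*, not on a single fit, which is what makes the procedure robust to correlated predictors.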
Summary An important tool to evaluate the performance of a dose-finding design is the nonparametric optimal benchmark that provides an upper bound on the performance of a design under a given scenario. A fundamental assumption of the benchmark is that the investigator can arrange doses in a monotonically increasing toxicity order. While the benchmark can still be applied to combination studies in which not all dose combinations can be ordered, it does not account for the uncertainty in the ordering. In this article, we propose a generalization of the benchmark that accounts for this uncertainty and, as a result, provides a sharper upper bound on the performance. The benchmark assesses how probable the occurrence of each ordering is, given the complete information about each patient. The proposed approach can be applied to trials with an arbitrary number of endpoints with discrete or continuous distributions. We illustrate the utility of the benchmark using recently proposed dose-fin...
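The basic nonparametric benchmark (before the proposed generalisation) can be illustrated in a few lines: each simulated patient carries a latent tolerance, so with complete information the benchmark observes every patient's outcome at every dose and picks the dose whose empirical toxicity rate is closest to the target. The toxicity scenario and sample size below are made up for illustration.

```python
import random

def benchmark_selection(true_tox, n_patients, target=0.25, rng=random):
    """O'Quigley-type nonparametric benchmark: patient i has latent
    tolerance u_i ~ Uniform(0,1) and would experience a DLT at dose d
    exactly when u_i < true_tox[d].  With these complete profiles the
    benchmark estimates each dose's toxicity and selects the dose
    closest to the target rate."""
    u = [rng.random() for _ in range(n_patients)]
    est = [sum(ui < p for ui in u) / n_patients for p in true_tox]
    return min(range(len(true_tox)), key=lambda d: abs(est[d] - target))

random.seed(7)
true_tox = [0.05, 0.12, 0.25, 0.42, 0.60]   # dose 2 is the true MTD here
picks = [benchmark_selection(true_tox, 30) for _ in range(2000)]
print(picks.count(2) / len(picks))          # proportion of correct selections
```

Because the benchmark uses information no real design can observe, this proportion upper-bounds what any actual dose-finding design could achieve in the same scenario.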
Repurposing approved drugs may rapidly establish effective interventions during a public health crisis. This has yielded immunomodulatory treatments for severe COVID-19, but repurposed antivirals have not been successful to date because of redundancy of the target in vivo or suboptimal exposures at studied doses. Nitazoxanide is an FDA-approved antiparasitic medicine that physiologically-based pharmacokinetic (PBPK) modelling has indicated may provide antiviral concentrations across the dosing interval when repurposed at higher than approved doses. Within the AGILE trial platform (NCT04746183), an open-label, adaptive, phase 1 trial in healthy adult participants was undertaken with high-dose nitazoxanide. Participants received 1500 mg nitazoxanide orally twice daily with food for 7 days. Primary outcomes were safety, tolerability, optimum dose and schedule. Intensive pharmacokinetic sampling was undertaken on days 1 and 5, with Cmin sampling on days 3 and 7. Fourteen healthy participants ...
Background: Endpoint choice for randomized controlled trials of treatments for novel coronavirus-induced disease (COVID-19) is complex. Trials must start rapidly to identify treatments that can be used as part of the outbreak response, in the midst of considerable uncertainty and limited information. COVID-19 presentation is heterogeneous, ranging from mild disease that improves within days to critical disease that can last weeks to over a month and can end in death. While improvement in mortality would provide unquestionable evidence about the clinical significance of a treatment, sample sizes for a study evaluating mortality are large and may be impractical, particularly given a multitude of putative therapies to evaluate. Furthermore, patient states in between “cure” and “death” represent meaningful distinctions. Clinical severity scores have been proposed as an alternative. However, the appropriate summary measure for severity scores has been the subject of debate, particularly ...
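One summary measure often discussed for ordinal severity scores is the Mann-Whitney "win probability": the chance that a randomly chosen treated patient has a better (lower) score than a randomly chosen control, counting ties as one half. A small illustration with hypothetical 7-point scores (this is one candidate summary, not the paper's recommendation):

```python
def prob_better(treated, control):
    """P(random treated patient has a lower severity score than a random
    control patient), with ties counted as 1/2 -- the Mann-Whitney
    (win probability) summary for an ordinal scale."""
    wins = sum((t < c) + 0.5 * (t == c) for t in treated for c in control)
    return wins / (len(treated) * len(control))

# Hypothetical 7-point severity scores (1 = recovered, 7 = death)
treated = [1, 1, 2, 2, 3, 4]
control = [2, 3, 3, 4, 5, 7]
print(prob_better(treated, control))
```

A value above 0.5 favours treatment; unlike a difference in mean scores, this summary does not assume the gaps between adjacent severity states are equally important.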
Background: Tocilizumab is a monoclonal antibody that binds to the receptor for interleukin (IL)-6, reducing inflammation, and is commonly used to treat rheumatoid arthritis. We evaluated the safety and efficacy of tocilizumab in adult patients admitted to hospital with COVID-19 with evidence of both hypoxia and systemic inflammation. Methods: This randomised, controlled, open-label, platform trial (Randomised Evaluation of COVID-19 Therapy [RECOVERY]) is assessing several possible treatments in patients hospitalised with COVID-19 in the UK. Those trial participants with hypoxia (oxygen saturation <92% on air or requiring oxygen therapy) and evidence of systemic inflammation (C-reactive protein [CRP] ≥75 mg/L) were eligible for randomisation to usual standard of care alone versus usual standard of care plus tocilizumab at a dose of 400 mg to 800 mg (depending on weight) given intravenously. A second dose could be given 12 to 24 hours later if the patient’s condition had not im...
There is growing interest in Phase I dose-finding studies investigating several doses of more than one agent simultaneously. A number of combination dose-finding designs have recently been proposed to guide escalation/de-escalation decisions during the trials. The majority of these proposals are model-based: a parametric combination-toxicity relationship is fitted as data accumulate. Various parameter shapes have been considered, but the unifying theme for many of these is that typically between 4 and 6 parameters are to be estimated. While more parameters allow for more flexible modelling of the combination-toxicity relationship, this is a challenging estimation problem given the typically small sample size in Phase I trials of between 20 and 60 patients. These concerns gave rise to an ongoing debate as to whether including more parameters in the combination-toxicity model leads to more accurate combination selection. In this work, we extensively study two variants of a 4-parameter logistic model with r...
Quantitative methods have been proposed to assess and compare the benefit-risk balance of treatments. Among them, multicriteria decision analysis (MCDA) is a popular decision tool as it permits summarising the benefits and the risks of a drug in a single utility score, accounting for the preferences of the decision-makers. However, the utility score is often derived using a linear model, which might lead to counter-intuitive conclusions; for example, drugs with no benefit or extreme risk could be recommended. Moreover, it assumes that the relative importance of benefits against risks is constant for all levels of benefit or risk, which might not hold for all drugs. We propose the Scale Loss Score (SLoS) as a new tool for benefit-risk assessment, which offers the same advantages as the linear multicriteria decision analysis utility score but has, in addition, desirable properties that make it possible to avoid recommendations of non-effective or extremely unsafe treatments, and to tolerate larg...
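The counter-intuitive behaviour of a linear utility score is easy to reproduce: with a modest risk weight, a maximally unsafe drug can outscore a safe, moderately effective one. The non-linear alternative below is purely illustrative and is NOT the SLoS formula; it merely shows how a multiplicative form drives the score to zero for extreme risk.

```python
def linear_utility(benefit, risk, w=0.8):
    """Linear MCDA-style utility: weighted sum of a 0-1 scaled benefit
    and the complement of a 0-1 scaled risk."""
    return w * benefit + (1 - w) * (1 - risk)

def saturating_utility(benefit, risk):
    """Hypothetical non-linear score (not SLoS): multiplying by (1 - risk)
    forces the score towards zero whenever risk is extreme."""
    return benefit * (1 - risk)

extreme = dict(benefit=1.0, risk=1.0)   # maximally effective, maximally unsafe
modest  = dict(benefit=0.7, risk=0.1)   # moderately effective, quite safe

print(linear_utility(**extreme), linear_utility(**modest))      # linear prefers `extreme`
print(saturating_utility(**extreme), saturating_utility(**modest))
```

The linear score recommends the extremely unsafe drug; the saturating form does not, which is the qualitative property the abstract describes for SLoS.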
Population heterogeneity is frequently observed among patients' treatment responses in clinical trials because of various factors such as clinical background, environmental, and genetic factors. Different subpopulations defined by those baseline factors can lead to differences in the benefit or safety profile of a therapeutic intervention. Ignoring heterogeneity between subpopulations can substantially impact medical practice. One approach to addressing heterogeneity relies on designs and analyses of clinical trials with subpopulation selection. Several types of designs have been proposed for different circumstances. In this work, we discuss a class of designs that allow selection of a predefined subgroup. Using selection based on the maximum test statistic as the worst-case scenario, we then investigate the precision and accuracy of the maximum likelihood estimator at the end of the study via simulations. We find that the required sample size is chiefly determined by th...
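The selection bias under the worst-case rule (report the subgroup with the largest estimate) can be demonstrated with a short simulation: even when both subgroup effects are truly zero, the selected estimate is positive on average. Sample sizes and variances below are arbitrary illustrations.

```python
import random

def selected_estimate(true_means, sigma, n, rng):
    """Estimate each subgroup mean from n observations (normal
    approximation), then report the largest estimate -- the worst-case
    selection rule considered in the abstract."""
    ests = [rng.gauss(mu, sigma / n ** 0.5) for mu in true_means]
    return max(ests)

rng = random.Random(42)
true_means = [0.0, 0.0]              # both subgroups truly have no effect
sims = [selected_estimate(true_means, sigma=1.0, n=25, rng=rng)
        for _ in range(20000)]
bias = sum(sims) / len(sims)         # E[max of the two estimates] > 0
print(round(bias, 3))
```

This positive average is pure selection bias: each individual estimator is unbiased, but conditioning on being the maximum inflates it, which is exactly what makes post-selection inference delicate.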
The growing role of targeted medicine has led to an increased focus on the development of actionable biomarkers. Current penalized selection methods that are used to identify biomarker panels for classification in high-dimensional data, however, often result in highly complex panels that need careful pruning for practical use. In the framework of regularization methods, a penalty that is a weighted sum of the L0 and L1 norms has been proposed to account for the complexity of the resulting model. In practice, the limitation of this penalty is that the objective function is non-convex and non-smooth, the optimization is computationally intensive, and the application to high-dimensional settings is challenging. In this paper, we propose a stepwise forward variable selection method which combines the L0 with L1 or L2 norms. The penalized likelihood criterion that is used in the stepwise selection procedure results in more parsimonious models, keeping only the most relevant features. Simulation results...
The main purpose of dose-escalation trials is to identify the dose(s) that is/are safe and efficacious for further investigations in later studies. In this paper, we introduce dose-escalation designs that incorporate both dose-limiting toxicities (DLTs) and indicative responses of efficacy into the procedure. A flexible nonparametric model is used for modelling the continuous efficacy responses while a logistic model is used for the binary DLTs. Escalation decisions are based on the combination of the probabilities of DLTs and expected efficacy through a gain function. On the basis of this setup, we then introduce 2 types of Bayesian adaptive dose-escalation strategies. The first type of procedures, called "single objective," aims to identify and recommend a single dose, either the maximum tolerated dose, the highest dose that is considered as safe, or the optimal dose, a safe dose that gives optimum benefit risk. The second type, called "dual objective," aims to jointly estimate both the maximum tolerated dose and the optimal dose accurately. The recommended doses obtained under these dose-escalation procedures provide information about the safety and efficacy profile of the novel drug to facilitate later studies. We evaluate different strategies via simulations based on an example constructed from a real trial on patients with type 2 diabetes, and the use of stopping rules is assessed. We find that the nonparametric model estimates the efficacy responses well for different underlying true shapes. The dual-objective designs give better results in terms of identifying the 2 real target doses compared to the single-objective designs.
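A minimal sketch of the gain-function idea: combine an efficacy estimate and a DLT probability at each dose, restrict attention to doses deemed safe, and target the maximiser. The specific gain form, safety limit, and all numbers below are illustrative placeholders, not the paper's design.

```python
def select_dose(p_tox, eff, tox_limit=0.33, lam=1.0):
    """Pick the dose maximising a simple gain = efficacy - lam * P(DLT),
    restricted to doses whose estimated DLT probability is below the
    safety limit.  (Illustrative gain function; the paper's exact form
    and estimation procedure differ.)"""
    admissible = [d for d in range(len(p_tox)) if p_tox[d] < tox_limit]
    return max(admissible, key=lambda d: eff[d] - lam * p_tox[d])

# Hypothetical estimated DLT probabilities and efficacy responses per dose
p_tox = [0.05, 0.10, 0.20, 0.30, 0.50]
eff   = [0.10, 0.30, 0.45, 0.50, 0.52]
print(select_dose(p_tox, eff))
```

Note how the highest dose is never considered (it breaches the safety limit), and among safe doses the gain trades extra efficacy against extra toxicity rather than simply escalating to the largest tolerated dose.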
Subjects: Q Science > QA Mathematics. Departments: Faculty of Science and Technology > Mathematics and Statistics. ID Code: 10056. Deposited By: Dr Thomas Jaki. Deposited On: 07 Jul 2008 12:13. Refereed?: Yes. Published?: Published. Last Modified: 28 Jan 2011 01:29.
Multi-arm clinical trials that compare several active treatments to a common control have been proposed as an efficient means of making an informed decision about which of several treatments should be evaluated further in a confirmatory study. Additional efficiency is gained by incorporating interim analyses and, in particular, seamless Phase II/III designs have been the focus of recent research. Common to much of this work is the constraint that selection and formal testing should be based on a single efficacy endpoint, despite the fact that in practice, safety considerations will often play a central role in determining selection decisions. Here, we develop a multi-arm multi-stage design for a trial with an efficacy and safety endpoint. The safety endpoint is explicitly considered in the formulation of the problem, selection of experimental arm and hypothesis testing. The design extends group-sequential ideas and considers the scenario where a minimal safety requirement is to be f...
Adaptive designs can make clinical trials more flexible by utilising results accumulating in the trial to modify the trial's course in accordance with pre-specified rules. Trials with an adaptive design are often more efficient, informative and ethical than trials with a traditional fixed design since they often make better use of resources such as time and money, and might require fewer participants. Adaptive designs can be applied across all phases of clinical research, from early-phase dose escalation to confirmatory trials. The pace of the uptake of adaptive designs in clinical research, however, has remained well behind that of the statistical literature introducing new methods and highlighting their potential advantages. We speculate that one factor contributing to this is that the full range of adaptations available to trial designs, as well as their goals, advantages and limitations, remains unfamiliar to many parts of the clinical community. Additionally, the term adapt...
Pharmacokinetic studies investigate how a compound is absorbed, distributed, metabolised, and excreted. The concentration of the compound in the blood or plasma is measured at different time points after administration and pharmacokinetic parameters such as the area under the curve (AUC) or maximum concentration (Cmax) are derived from the resulting concentration time profile. In this paper, we want to compare different methods for collecting concentration measurements (traditional sampling versus microsampling) on the basis of these derived parameters. We adjust and evaluate an existing method for testing superiority of multiple derived parameters that accounts for model uncertainty. We subsequently extend the approach to allow testing for equivalence. We motivate the methods through an illustrative example and evaluate the performance using simulations. The extensions show promising results for application to the desired setting.
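The derived parameters in question are straightforward to compute from a concentration-time profile; for example, AUC by the linear trapezoidal rule and Cmax with its time of occurrence. The sampling times and concentrations below are made up for illustration.

```python
def auc_trapezoid(times, conc):
    """Area under the concentration-time curve by the linear trapezoidal
    rule, as used in non-compartmental pharmacokinetic analysis."""
    return sum((t2 - t1) * (c1 + c2) / 2
               for t1, t2, c1, c2 in zip(times, times[1:], conc, conc[1:]))

times = [0, 0.5, 1, 2, 4, 8, 12]           # hours post-dose (hypothetical)
conc  = [0, 4.1, 6.3, 5.2, 3.0, 1.1, 0.4]  # hypothetical mg/L

cmax, tmax = max(zip(conc, times))          # peak concentration and its time
print(auc_trapezoid(times, conc), cmax, tmax)
```

Comparisons between traditional sampling and microsampling, as in the abstract, are then made on these derived quantities rather than on the raw concentration measurements.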
Just over half of publicly funded trials recruit their target sample size within the planned study duration. When recruitment targets are missed, the funder of a trial is faced with the decision of either committing further resources to... more
Just over half of publicly funded trials recruit their target sample size within the planned study duration. When recruitment targets are missed, the funder of a trial is faced with the decision of either committing further resources to the study or risk that a worthwhile treatment effect may be missed by an underpowered final analysis. To avoid this challenging situation, when there is insufficient prior evidence to support predicted recruitment rates, funders now require feasibility assessments to be performed in the early stages of trials. Progression criteria are usually specified and agreed with the funder ahead of time. To date, however, the progression rules used are typically ad hoc. In addition, rules routinely permit adaptations to recruitment strategies but do not stipulate criteria for evaluating their effectiveness. In this paper, we develop a framework for planning and designing internal pilot studies which permit a trial to be stopped early if recruitment is disappointing or to continue to full recruitment if enrolment during the feasibility phase is adequate. This framework enables a progression rule to be pre-specified and agreed upon prior to starting a trial. The novel two-stage designs stipulate that if neither of these situations arises, adaptations to recruitment should be made and subsequently evaluated to establish whether they have been successful. We derive optimal progression rules for internal pilot studies which minimise the expected trial overrun and maintain a high probability of completing the study when the recruitment rate is adequate. The advantages of this procedure are illustrated using a real trial example.
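The structure of such a two-stage progression rule can be sketched as a simple three-way decision (stop / adapt and re-assess / continue); the thresholds below are illustrative placeholders, not the optimal values the paper derives by minimising expected overrun.

```python
def progression_decision(recruited, stop_below, go_at_least):
    """Internal-pilot progression rule: stop early if recruitment is
    clearly inadequate, proceed to full recruitment if it is adequate,
    and otherwise adapt the recruitment strategy and re-assess.
    Thresholds are hypothetical, not the paper's optimal rule."""
    if recruited < stop_below:
        return "stop"
    if recruited >= go_at_least:
        return "continue"
    return "adapt and re-assess"

# Hypothetical pilot phase: 40 patients targeted in the first 6 months
for n in (10, 25, 40):
    print(n, progression_decision(n, stop_below=20, go_at_least=40))
```

Pre-specifying the three regions before the trial starts is what turns an ad hoc feasibility judgement into an agreed, auditable progression rule.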
When developing new medicines for children, the potential to extrapolate from adult data to reduce the experimental burden in children is well recognised. However, significant assumptions about the similarity of adults and children are needed for extrapolations to be biologically plausible. We reviewed the literature to identify statistical methods that could be used to optimise extrapolations in paediatric drug development programmes. Web of Science was used to identify papers proposing methods relevant for using data from a 'source population' to support inferences for a 'target population'. Four key areas of methods development were targeted: paediatric clinical trials, trials extrapolating efficacy across ethnic groups or geographic regions, the use of historical data in contemporary clinical trials and using short-term endpoints to support inferences about long-term outcomes. Searches identified 626 papers of which 52 met our inclusion criteria. From these we identified 102 methods comprising 58 Bayesian and 44 frequentist approaches. Most Bayesian methods (n = 54) sought to use existing data in the source population to create an informative prior distribution for a future clinical trial. Of these, 46 allowed the source data to be down-weighted to account for potential differences between populations. Bayesian and frequentist versions of methods were found for assessing whether key parameters of source and target populations are commensurate (n = 34). Fourteen frequentist methods synthesised data from different populations using a joint model or a weighted test statistic. Several methods were identified as potentially applicable to paediatric drug development. Methods which can accommodate a heterogeneous target population and which allow data from a source population to be down-weighted are preferred. 
Methods assessing the commensurability of parameters may be used to determine whether it is appropriate to pool data across age groups to estimate treatment effects.
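A common down-weighting device identified in reviews of this kind is the power prior, in which the source-population likelihood is raised to a power a0 between 0 and 1. For a normal mean with known variance and a vague initial prior the posterior has a simple closed form; the numbers below are hypothetical.

```python
def power_prior_posterior(src_mean, n_src, tgt_mean, n_tgt, sigma2, a0):
    """Posterior mean and variance for a normal mean with known variance
    sigma2 under a power prior: the source likelihood is raised to
    a0 in [0, 1], so a0 = 0 ignores the source data entirely and
    a0 = 1 pools the two datasets fully."""
    w = a0 * n_src + n_tgt
    mean = (a0 * n_src * src_mean + n_tgt * tgt_mean) / w
    return mean, sigma2 / w

# Adult trial (source) versus a small paediatric trial (target), hypothetical
for a0 in (0.0, 0.5, 1.0):
    print(a0, power_prior_posterior(1.2, 400, 0.8, 40, sigma2=4.0, a0=a0))
```

The discount a0 interpolates between the target-only estimate and the pooled estimate, which is exactly the "down-weighting to account for potential differences between populations" that most of the reviewed Bayesian methods provide.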
When several treatments are available for evaluation in a clinical trial, different design options are available. We compare multi-arm multi-stage with factorial designs, and in particular, we will consider a 2 × 2 factorial design, where groups of patients will either take treatments A, B, both or neither. We investigate the performance and characteristics of both types of designs under different scenarios and compare them using both theory and simulations. For the factorial designs, we construct appropriate test statistics to test the hypothesis of no treatment effect against the control group with overall control of the type I error. We study the effect of the choice of the allocation ratios on the critical value and sample size requirements for a target power. We also study how the possibility of an interaction between the two treatments A and B affects type I and type II errors when testing for significance of each of the treatment effects. We present both simulation results and a case study on an osteoarthritis clinical trial. We discover that in an optimal factorial design in terms of minimising the associated critical value, the corresponding allocation ratios differ substantially from those of a balanced design. We also find evidence of potentially large losses in power in factorial designs for moderate deviations from the study design assumptions and little gain compared with multi-arm multi-stage designs when the assumptions hold. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
We consider estimation of treatment effects in two-stage adaptive multi-arm trials with a common control. The best treatment is selected at interim, and the primary endpoint is modeled via a Cox proportional hazards model. The maximum partial-likelihood estimator of the log hazard ratio of the selected treatment will overestimate the true treatment effect in this case. Several methods for reducing the selection bias have been proposed for normal endpoints, including an iterative method based on the estimated conditional selection biases and a shrinkage approach based on empirical Bayes theory. We adapt these methods to time-to-event data and compare the bias and mean squared error of all methods in an extensive simulation study and apply the proposed methods to reconstructed data from the FOCUS trial. We find that all methods tend to overcorrect the bias, and only the shrinkage methods can reduce the mean squared error. © 2017 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
Despite optimal therapy, many children with Crohn's disease (CD) experience growth retardation. The objectives of the study are to assess the feasibility of a randomised control trial (RCT) of injectable forms of growth-promoting therapy and to survey the attitudes of children with CD and their parents to it. A feasibility study was carried out to determine study arms, sample size and numbers of eligible patients. A face-to-face questionnaire surveyed willingness to consent to future participation in the RCT. Eligibility to the survey was any child under 18 (with their parent/guardian) with CD whose height standard deviation score (HtSDS) was ≤+1. Of 118 questionnaires, 94 (80%) were returned (48 by children and 46 by parents). The median age of the patients in the survey was 14.3 years (range 7.0 to 17.7), and 35 (73%) were male. Their median HtSDS was -1.2 (-3.01, 0.23), and it was lower than the median mid-parental HtSDS of -0.6 (-3.1, 1.4). We analysed the willingness of the...
Development of treatments for rare diseases is challenging due to the limited number of patients available for participation. Learning about treatment effectiveness with a view to treat patients in the larger outside population, as in the traditional fixed randomised design, may not be a plausible goal. An alternative goal is to treat the patients within the trial as effectively as possible. Using the framework of finite-horizon Markov decision processes and dynamic programming (DP), a novel randomised response-adaptive design is proposed which maximises the total number of patient successes in the trial and penalises if a minimum number of patients are not recruited to each treatment arm. Several performance measures of the proposed design are evaluated and compared to alternative designs through extensive simulation studies using a recently published trial as motivation. For simplicity, a two-armed trial with binary endpoints and immediate responses is considered. Simulation results for the proposed design show that: (i) the percentage of patients allocated to the superior arm is much higher than in the traditional fixed randomised design; (ii) relative to the optimal DP design, the power is largely improved upon and (iii) it exhibits only a very small bias and mean squared error of the treatment effect estimator. Furthermore, this design is fully randomised which is an advantage from a practical point of view because it protects the trial against various sources of bias. As such, the proposed design addresses some of the key issues that have been suggested as preventing so-called bandit models from being implemented in clinical practice.
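The dynamic-programming backbone of such bandit designs can be written down directly for a two-armed Bernoulli trial with uniform priors: work backwards over the space of (successes, failures) states, always valuing the arm that maximises expected remaining successes. The sketch below is the plain unconstrained, non-randomised DP that the proposed design builds on and modifies; the horizon is a toy value.

```python
from functools import lru_cache

def optimal_expected_successes(horizon):
    """Finite-horizon DP for a two-armed Bernoulli bandit with
    independent Beta(1,1) (uniform) priors on each arm's success
    probability: maximise the expected number of patient successes."""
    @lru_cache(maxsize=None)
    def value(s1, f1, s2, f2):
        t = s1 + f1 + s2 + f2
        if t == horizon:
            return 0.0
        best = 0.0
        for arm, (s, f) in enumerate(((s1, f1), (s2, f2))):
            p = (s + 1) / (s + f + 2)   # posterior mean success probability
            if arm == 0:
                v = (p * (1 + value(s1 + 1, f1, s2, f2))
                     + (1 - p) * value(s1, f1 + 1, s2, f2))
            else:
                v = (p * (1 + value(s1, f1, s2 + 1, f2))
                     + (1 - p) * value(s1, f1, s2, f2 + 1))
            best = max(best, v)
        return best
    return value(0, 0, 0, 0)

print(optimal_expected_successes(10))
```

The randomised, constrained design in the abstract adds two ingredients on top of this recursion: a penalty when an arm would receive fewer than the minimum number of patients, and randomisation of the arm choice to protect against bias.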
It is useful to incorporate biological knowledge on the role of genetic determinants in predicting an outcome. It is, however, not always feasible to fully elicit this information when the number of determinants is large. We present an approach to overcome this difficulty. First, using half of the available data, a shortlist of potentially interesting determinants is generated. Second, binary indications of biological importance are elicited for this much smaller number of determinants. Third, an analysis is carried out on this shortlist using the second half of the data. We show through simulations that, compared with adaptive lasso, this approach leads to models containing more biologically relevant variables, while the prediction mean squared error (PMSE) is comparable or even reduced. We also apply our approach to bone mineral density data, and again final models contain more biologically relevant variables and have reduced PMSEs. Our method leads to comparable or improved pred...
Data from clinical trials in adults, extrapolated to predict benefits in paediatric patients, could result in fewer or smaller trials being required to obtain a new drug licence for paediatrics. This article outlines the place of such extrapolation in the development of drugs for use in paediatric epilepsies. Based on consensus expert opinion, a proposal is presented for a new paradigm for the clinical development of drugs for focal epilepsies. Phase I data should continue to be collected in adults, and phase II and III trials should simultaneously recruit adults and paediatric patients aged above 2 years. Drugs would be provisionally licensed for children subject to phase IV collection of neurodevelopmental safety data in this age group. A single programme of trials would suffice to license the drug for use as either adjunctive therapy or monotherapy. Patients, clinicians and sponsors would all benefit from this new structure through cost reduction and earlier access to novel treat...
Mid-study design modifications are becoming increasingly accepted in confirmatory clinical trials, so long as appropriate methods are applied such that error rates are controlled. It is therefore unfortunate that the important case of time-to-event endpoints is not easily handled by the standard theory. We analyze current methods that allow design modifications to be based on the full interim data, i.e., not only the observed event times but also secondary endpoint and safety data from patients who are yet to have an event. We show that the final test statistic may ignore a substantial subset of the observed event times. An alternative test incorporating all event times is found, where a conservative assumption must be made in order to guarantee type I error control. We examine the power of this approach using the example of a clinical trial comparing two cancer therapies.
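Adaptive designs of this kind typically protect the type I error rate by combining stage-wise statistics with pre-fixed weights, for example via the weighted inverse-normal combination; a minimal sketch of that combination rule (the paper's actual test additionally handles the time-to-event subtleties discussed above):

```python
from math import sqrt

def inverse_normal_combination(z1, z2, w1, w2):
    """Weighted inverse-normal combination of independent stage-wise
    z-statistics.  With weights fixed before the interim analysis, the
    combined statistic is standard normal under the null hypothesis
    even after mid-study design modifications."""
    return (w1 * z1 + w2 * z2) / sqrt(w1 ** 2 + w2 ** 2)

# Equal pre-planned information weights for the two stages (hypothetical z's)
z = inverse_normal_combination(z1=1.5, z2=1.2, w1=1.0, w2=1.0)
print(z, z > 1.96)   # compare with the one-sided 2.5% critical value
```

The crucial design point is that the weights must be fixed in advance; re-weighting after seeing interim data is exactly what breaks type I error control.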
Well-designed clinical prediction models (CPMs) often out-perform clinicians at estimating probabilities of clinical outcomes, though their adoption by family physicians is variable. How family physicians interact with CPMs is poorly understood; a better understanding, framed within a context-sensitive theoretical framework, may therefore improve CPM development and implementation. The aim of this study was to investigate why family physicians do or do not use CPMs, interpreting these findings within a theoretical framework to provide recommendations for the development and implementation of future CPMs. Mixed methods study in North West England that comprised an online survey and focus groups. One hundred thirty eight respondents completed the survey, which found the main perceived advantages to using CPMs were that they guided appropriate treatment (weighted rank [r] = 299; maximum r = 414 throughout), justified treatment decisions (r = 217), and incorporated a large body of ev...
A clinical prediction model is a tool for predicting healthcare outcomes, usually within a specific population and context. A common approach is to develop a new clinical prediction model for each population and context; however, this wastes potentially useful historical information. A better approach is to update, or incorporate information from, existing clinical prediction models already developed for use in similar contexts or populations. In addition, clinical prediction models commonly become miscalibrated over time and need replacing or updating. In this article, we review a range of approaches for re-using and updating clinical prediction models; these fall into three main categories: simple coefficient updating, combining multiple previous clinical prediction models in a meta-model, and dynamic updating of models. We evaluated the performance (discrimination and calibration) of the different strategies using data on mortality following cardiac surgery in the United Kingdom. We found th...
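The simplest of these strategies, intercept-only coefficient updating (recalibration-in-the-large), can be sketched in a few lines. The function below is a hypothetical illustration, not the paper's implementation: it shifts an existing model's linear predictor by a constant so that the mean predicted risk matches the event rate observed in the new data.

```python
import math

def intercept_update(linear_predictors, outcomes, n_iter=50):
    """Hypothetical sketch of intercept-only recalibration: find the shift
    delta that makes the mean predicted risk equal the observed event rate,
    via one-dimensional Newton iteration on the logistic log-likelihood."""
    delta = 0.0
    for _ in range(n_iter):
        preds = [1.0 / (1.0 + math.exp(-(lp + delta))) for lp in linear_predictors]
        score = sum(outcomes) - sum(preds)        # gradient in delta
        info = sum(p * (1.0 - p) for p in preds)  # observed information
        delta += score / info
    return delta

# Example: the old model under-predicts risk in the new data, so delta > 0.
delta = intercept_update([-1.0, 0.0, 1.0, 2.0], [0, 1, 1, 1])
```

After the update, the recalibrated risk for a new patient with linear predictor `lp` is `1 / (1 + exp(-(lp + delta)))`; slope updating and full model revision extend the same idea with more free parameters.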
Multiplicity is common in clinical studies and the current standard is to use the familywise error rate to ensure that the errors are kept at a prespecified level. In this paper, we will show that, in certain situations, familywise error rate control does not account for all errors made. To counteract this problem, we propose the use of the expected number of false claims (EFC). We will show that a (weighted) Bonferroni approach can be used to control the EFC, discuss how a study that uses the EFC can be powered for co-primary, exchangeable, and hierarchical endpoints, and show how the weights for the weighted Bonferroni test can be determined in this manner. © 2016 The Authors. Pharmaceutical Statistics published by John Wiley & Sons Ltd.
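The weighted Bonferroni rule referred to here is the standard one: hypothesis i is rejected when its p-value falls below alpha times its weight, with the weights summing to one. A minimal sketch (function name hypothetical):

```python
def weighted_bonferroni(p_values, weights, alpha=0.05):
    """Reject H_i when p_i <= alpha * w_i, with the weights w_i summing to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return [p <= alpha * w for p, w in zip(p_values, weights)]

# Half of alpha is spent on the first endpoint, a quarter on each of the rest.
rejections = weighted_bonferroni([0.01, 0.04, 0.20], [0.5, 0.25, 0.25])
# -> [True, False, False]
```

With equal weights this reduces to the usual Bonferroni correction; unequal weights let a study spend more of its error budget on the most important endpoints.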
In most medical research, treatment effectiveness is assessed using the average treatment effect or some version of subgroup analysis. The practice of individualized or precision medicine, however, requires new approaches that predict how an individual will respond to treatment, rather than relying on aggregate measures of effect. In this study, we present a conceptual framework for estimating individual treatment effects, referred to as predicted individual treatment effects. We first apply the predicted individual treatment effect approach to a randomized controlled trial designed to improve behavioral and physical symptoms. Despite trivial average effects of the intervention, we show substantial heterogeneity in predicted individual treatment response using the predicted individual treatment effect approach. The predicted individual treatment effects can be used to identify individuals for whom the intervention may be most effective (or harmful). Next, we conduct a Monte Carlo simulation...
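The core idea behind a predicted individual treatment effect can be illustrated with a deliberately simplified sketch (all names hypothetical, and a plain per-arm linear model standing in for the richer models the framework allows): fit an outcome model within each randomized arm, then predict each individual's outcome under both arms and take the difference.

```python
def fit_line(x, y):
    """Ordinary least squares for a single covariate: returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx, slope

def predicted_ite(x, y, arm, x_new):
    """Predicted individual treatment effect at each value in x_new:
    predicted outcome under treatment minus predicted outcome under control."""
    fit = {a: fit_line([xi for xi, ai in zip(x, arm) if ai == a],
                       [yi for yi, ai in zip(y, arm) if ai == a])
           for a in (0, 1)}
    (a0, b0), (a1, b1) = fit[0], fit[1]
    return [(a1 + b1 * xi) - (a0 + b0 * xi) for xi in x_new]

# Noise-free toy data: treated outcome 1 + 2x, control outcome 1 + x,
# so the true individual effect is x itself.
ite = predicted_ite([0, 1, 2, 0, 1, 2], [1, 3, 5, 1, 2, 3],
                    [1, 1, 1, 0, 0, 0], [0, 1, 2])
# -> [0.0, 1.0, 2.0]
```

Individuals with large positive (or negative) predicted effects are the candidates for whom an intervention may be most effective (or harmful), even when the average effect is trivial.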
This paper proposes a novel exploratory approach for assessing how the effects of level-2 predictors differ across level-1 units. Multilevel regression mixture models are used to identify latent classes at level 1 that differ in the effect of one or more level-2 predictors. Monte Carlo simulations are used to demonstrate the approach with different sample sizes and to demonstrate the consequences of constraining one of the random effects to zero. An application of the method to evaluate heterogeneity in the effects of classroom practices on students is used to show the types of research questions that can be answered with this method and the issues faced when estimating multilevel regression mixtures.
Telmisartan, an angiotensin receptor blocker, has beneficial effects on insulin resistance and cardiovascular health in non-HIV populations. This trial will evaluate whether telmisartan can reduce insulin resistance in HIV-positive individuals on combination antiretroviral therapy. This is a phase II, multicentre, randomised, open-label, dose-ranging trial of telmisartan in 336 HIV-positive individuals over a period of 48 weeks. The trial will use an adaptive design to inform the optimal dose of telmisartan. Patients will initially be randomised 1:1:1:1 to receive one of three doses of telmisartan (20, 40 and 80 mg) or no intervention (control). An interim analysis will be performed when half of the planned maximum of 336 patients have been followed up for at least 24 weeks. The second stage of the study will depend on the results of the interim analysis. The primary outcome measure is a reduction in insulin resistance (as measured by Homeostatic Model Assessment of Insulin Resistance)...
