Janet Wittes

Renowned statistician Janet Wittes will tackle a tricky topic when she visits PHRI to deliver the 5th Annual Janice Pogue Lectureship on Biostatistics, on Tuesday, June 28th, 5:00 – 6:30 pm ET, at PHRI’s headquarters in Hamilton, Ontario, and via Zoom (register here).

Her lecture is titled “Interim Analyses: Rules or Guidelines: A guide from and for the perplexed.” The American giant in statistics – founder and president emerita of WCG Statistics Collaborative – describes the challenge as follows:

“Those of us involved in randomized controlled trials – especially trials that test the effect of an intervention on a hard clinical outcome – are conversant with formal interim analyses permitting a DSMB to recommend stopping a trial if there is little hope that the experimental intervention will show convincing evidence of benefit (a.k.a. futility) or if the data show ‘overwhelming’ evidence of benefit.”

However, she adds, “hidden behind the word ‘overwhelming’ lurks the need to protect the Type I error rate whether one formulates the trial in a frequentist or Bayesian manner.”

“Sometimes, however, the data from a trial do not obey what the designers anticipated. A boundary may be crossed allowing a formal declaration of benefit, but the DSMB is hesitant to recommend stopping because it fears that the trial has still not answered important questions.”

“In other cases, the data show extremely strong evidence of benefit, but the trajectory of the observed trend has not crossed a boundary.”

In her Janice Pogue Lectureship, Wittes will use examples, including two from studies performed by PHRI, to address both problems:

  • Crossing a boundary that allows a formal declaration of efficacy, but not wanting to stop; and
  • Not crossing a boundary, but feeling the data are so strong that continuing is unscientific.

In these situations, the DSMB faces a perplexing question: Is the defined statistical boundary a rule or a guideline?
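The need to “protect the Type I error rate” across repeated looks at accumulating data can be illustrated with a small Monte Carlo sketch. The parameters below (5 equally spaced looks, a fixed two-sided 1.96 cutoff at every look) are illustrative assumptions, not material from the lecture; they show why DSMBs need formal boundaries rather than the unadjusted nominal threshold:

```python
# Sketch: Type I error inflation from repeated interim looks under the null.
# Each trial accumulates 5 equal blocks of null data; the cumulative
# z-statistic is checked at every look against the unadjusted 1.96 cutoff.
import math
import random

def rejection_rates(n_trials=20000, looks=5, z_crit=1.96, seed=1):
    random.seed(seed)
    single, repeated = 0, 0
    for _ in range(n_trials):
        s = 0.0
        crossed = False
        for k in range(1, looks + 1):
            s += random.gauss(0.0, 1.0)   # new block of null data
            z = s / math.sqrt(k)          # cumulative z-statistic at look k
            if abs(z) > z_crit:
                crossed = True            # would "stop for benefit/harm"
        single += abs(s / math.sqrt(looks)) > z_crit  # one final analysis
        repeated += crossed                            # any of the 5 looks
    return single / n_trials, repeated / n_trials

one_look, five_looks = rejection_rates()
print(one_look, five_looks)  # roughly 0.05 vs. roughly 0.14
```

A single analysis rejects about 5% of the time, as intended, while testing at all five looks roughly triples the false-positive rate; group-sequential boundaries (e.g., O’Brien-Fleming) exist precisely to bring that rate back down to the nominal level.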

Wittes completed her Ph.D. in Statistics at Harvard University in 1970 and, after serving in several academic positions, joined the National Heart, Lung, and Blood Institute’s Biostatistics Research Branch as its Chief in 1983. In 1990 she founded her consulting firm, Statistics Collaborative, and is currently its President Emerita.

She is well known for her research on the design of clinical trials. She has served as president of the Society for Clinical Trials and as Editor-in-Chief of the society’s journal, Controlled Clinical Trials. With Mike Proschan and Gordon Lan, she co-authored the seminal book Statistical Monitoring of Clinical Trials.

She is a Fellow of the American Statistical Association, the American Association for the Advancement of Science, and the Society for Clinical Trials, and an elected member of the International Statistical Institute. Wittes received the 2006 Janet L. Norwood Award for Outstanding Achievement by a Woman in the Statistical Sciences.

The Janice Pogue Lectureship is named in honour of the memory of Janice Pogue, who created the statistical group at PHRI.

Once in Love with Hazard Ratios

On Monday, June 27th, 4 – 5 pm, Janet Wittes will also visit the Health Research Methods, Evidence & Impact (HEI) Department on the McMaster University campus to give a talk titled Once In Love with Hazard Ratios. That talk will also be offered via Zoom (register here).

Her abstract for this academic seminar is as follows:

In the old days, we used Kaplan-Meier curves (1958) to summarize data from time-to-event trials. We might have tacked on Greenwood-Yule methods (1920) to report estimated differences in survival probabilities at specific times. Unfortunately, we had no way of characterizing the curves in their entirety.

Mantel’s log-rank test (1966) provided a method for assigning a p-value to the comparison of the curves, but we still could not summarize in one number the difference between the two curves. (Some disciplines used medians, but they were poor summaries of the entire curve.)

Cox’s proportional hazards model (1972) gave us the tool we were looking for: we could now calculate something called the hazard ratio (HR) which summarized in one number the effect of an experimental treatment relative to control over the entire period covered by the Kaplan-Meier curves.

We had everything we needed – a visual representation of survival over time; a way of testing the difference in the curves at specific times; a method for assigning a p-value to assess the degree to which the difference between the curves was inconsistent with chance; and a summary statistic, the HR, to describe the magnitude of the difference between curves. We knew that the log-rank test was optimal when the curves had proportional hazards, but valid even when they did not.

The Cox model, on the other hand, required proportional hazards, but if the hazards were not far from proportionality, the model was good enough to use. We invented sloppy language to describe what an HR was, suspecting that many non-statisticians would not understand the technical language of “hazard.”
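For readers who have not computed one by hand, the Kaplan-Meier estimator the abstract describes can be sketched in a few lines of pure Python. The data here are made up for illustration; real analyses would use a survival package (and a Cox model to obtain the HR):

```python
# Minimal Kaplan-Meier estimator (1958): at each distinct event time t,
# the survival estimate is multiplied by (1 - deaths at t / number at risk).
def kaplan_meier(times, events):
    """times: event or censoring times; events: 1 = event, 0 = censored.
    Returns [(event_time, estimated survival probability), ...]."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    ts = [times[j] for j in order]
    es = [events[j] for j in order]
    at_risk, surv, curve, i = len(ts), 1.0, [], 0
    while i < len(ts):
        t, d, c = ts[i], 0, 0
        while i < len(ts) and ts[i] == t:   # group ties at time t
            d += es[i]                      # events at t
            c += 1 - es[i]                  # censored at t
            i += 1
        if d:
            surv *= 1 - d / at_risk
            curve.append((t, surv))
        at_risk -= d + c
    return curve

print(kaplan_meier([1, 2, 3, 4, 5], [1, 1, 0, 1, 0]))
# step function dropping at the event times 1, 2, and 4
```

Note how the censored observation at time 3 produces no step in the curve but still shrinks the risk set, which is why the drop at time 4 is so large; summarizing such a whole step function in one number is exactly the role the hazard ratio came to play.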

Wittes’ talk at McMaster will address:

  • What a Hazard Ratio really is, and how it applies in the case of non-proportional hazards.
  • Whether we should be summarizing estimates from survival curves with other statistics – e.g., perhaps back to comparison of events at specific times, or the increasingly popular restricted mean survival time.
