Cox Proportional-Hazards Regression for Survival Data
John Fox
15 June 2008 (small corrections)
1 Introduction
Survival analysis examines and models the time it takes for events to occur. The prototypical such event
is death, from which the name ‘survival analysis’ and much of its terminology derives, but the ambit of
application of survival analysis is much broader. Essentially the same methods are employed in a variety
of disciplines under various rubrics – for example, ‘event-history analysis’ in sociology. In this appendix,
therefore, terms such as survival are to be understood generically.
Survival analysis focuses on the distribution of survival times. Although there are well known methods for
estimating unconditional survival distributions, most interesting survival modeling examines the relationship
between survival and one or more predictors, usually termed covariates in the survival-analysis literature.
The subject of this appendix is the Cox proportional-hazards regression model (introduced in a seminal
paper by Cox, 1972), a broadly applicable and the most widely used method of survival analysis. Although
I will not discuss them here, the survival library in R and S-PLUS also contains all of the other commonly
employed tools of survival analysis.1
As is the case for the other appendices to An R and S-PLUS Companion to Applied Regression, I assume
that you have read the main text and are therefore familiar with S. In addition, I assume familiarity with
Cox regression. I nevertheless begin with a review of basic concepts, primarily to establish terminology
and notation. The second section of the appendix takes up the Cox proportional-hazards model with time-
independent covariates. Time-dependent covariates are introduced in the third section. A fourth and final
section deals with diagnostics.
There are many texts on survival analysis: Cox and Oakes (1984) is a classic (if now slightly dated)
source. Allison (1995) presents a highly readable introduction to the subject based on the SAS statistical
package, but nevertheless of general interest. The major example in this appendix is adapted from Allison.
A book by Therneau and Grambsch (2000) is also worthy of mention here because Therneau is the author
of the survival library for S. Extensive documentation for the survival library may be found in Therneau
(1999).
1 In R, the survival library is among the recommended packages, and is included with the standard Windows distribution; it must be attached prior to use, however.
Survival analysis typically examines the hazard function, h(t), which assesses the instantaneous risk that the event occurs at time t, conditional on survival to that time:

h(t) = lim(∆t→0) Pr[(t ≤ T < t + ∆t) | T ≥ t] / ∆t

Modeling of survival data usually employs the hazard function or the log hazard. For example, assuming a constant hazard, h(t) = ν, implies an exponential distribution of survival times, with density function p(t) = νe^(−νt). Other common hazard models include

log h(t) = ν + ρt

which leads to the Gompertz distribution of survival times, and

log h(t) = ν + ρ log t

which leads to the Weibull distribution of survival times. (See, for example, Cox and Oakes, 1984: Sec. 2.3, for these and other possibilities.) In both the Gompertz and Weibull distributions, the hazard can either increase or decrease with time; moreover, in both instances, setting ρ = 0 yields the exponential model.
A nearly universal feature of survival data is censoring, the most common form of which is right-censoring:
Here, the period of observation expires, or an individual is removed from the study, before the event occurs –
for example, some individuals may still be alive at the end of a clinical trial, or may drop out of the study for
various reasons other than death prior to its termination. An observation is left-censored if its initial time at
risk is unknown. Indeed, the same observation may be both right and left-censored, a circumstance termed
interval-censoring. Censoring complicates the likelihood function, and hence the estimation, of survival
models.
Moreover, conditional on the value of any covariates in a survival model and on an individual’s survival
to a particular time, censoring must be independent of the future value of the hazard for the individual. If
this condition is not met, then estimates of the survival distribution can be seriously biased. For example,
if individuals tend to drop out of a clinical trial shortly before they die, and therefore their deaths go
unobserved, survival time will be over-estimated. Censoring that meets this requirement is noninformative.
A common instance of noninformative censoring occurs when a study terminates at a predetermined date.
log hi(t) = α + β1xi1 + β2xi2 + · · · + βkxik

or, equivalently,

hi(t) = exp(α + β1xi1 + β2xi2 + · · · + βkxik)
that is, as a linear model for the log-hazard or as a multiplicative model for the hazard. Here, i is a subscript
for observation, and the x’s are the covariates. The constant α in this model represents a kind of log-baseline
hazard, since log hi (t) = α [or hi (t) = eα ] when all of the x’s are zero. There are similar parametric regression
models based on the other survival distributions described in the preceding section.2
The Cox model, in contrast, leaves the baseline hazard function α(t) = log h0(t) unspecified:

log hi(t) = α(t) + β1xi1 + β2xi2 + · · · + βkxik

or, again equivalently,

hi(t) = h0(t) exp(β1xi1 + β2xi2 + · · · + βkxik)

2 The survreg function in the survival library fits parametric survival regression models. Because the Cox model is now used much more frequently than parametric survival regression models, I will not describe survreg in this appendix. Enter help(survreg) and see Therneau (1999) for details.
This model is semi-parametric because while the baseline hazard can take any form, the covariates enter the model linearly. Consider, now, two observations i and i′ that differ in their x-values, with the corresponding linear predictors

ηi = β1xi1 + β2xi2 + · · · + βkxik

and

ηi′ = β1xi′1 + β2xi′2 + · · · + βkxi′k

The hazard ratio for these two observations,

hi(t) / hi′(t) = [h0(t) e^ηi] / [h0(t) e^ηi′] = e^ηi / e^ηi′

is independent of time t. Consequently, the Cox model is a proportional-hazards model.
Remarkably, even though the baseline hazard is unspecified, the Cox model can still be estimated by
the method of partial likelihood, developed by Cox (1972) in the same paper in which he introduced the
Cox model. Although the resulting estimates are not as efficient as maximum-likelihood estimates for a
correctly specified parametric hazard regression model, not having to make arbitrary, and possibly incorrect,
assumptions about the form of the baseline hazard is a compensating virtue of Cox’s specification. Having
fit the model, it is possible to extract an estimate of the baseline hazard (see below).
Most of the arguments to coxph, including formula, data, weights, subset, na.action, singular.ok,
model, x and y, are familiar from lm (see Chapter 4 of the text, especially Section 4.7), although the
formula argument requires special consideration: The right-hand side of the model formula for coxph is the
same as for a linear model.3 The left-hand side is a survival object, created by the Surv function. In the
simple case of right-censored data, the call to Surv takes the form Surv(time, event), where time is either
the event time or the censoring time, and event is a dummy variable coded 1 if the event is observed or 0
if the observation is censored. See the on-line help for Surv for other possibilities.
Among the remaining arguments to coxph:
• init (initial values) and control are technical arguments: See the on-line help for coxph for details.
• method indicates how to handle observations that have tied (i.e., identical) survival times. The default
"efron" method is generally preferred to the once-popular "breslow" method; the "exact" method
is much more computationally intensive.
3 There are, however, special functions cluster and strata that may be included on the right side of the model formula: The cluster function is used to specify non-independent observations (such as several individuals in the same family); the strata function may be used to divide the data into sub-groups with potentially different baseline hazard functions, as explained in Section 5.1.
• If robust is TRUE, coxph calculates robust coefficient-variance estimates. The default is FALSE, unless
the model includes non-independent observations, specified by the cluster function in the model
formula. I do not describe Cox regression for clustered data in this appendix.
After changing to the directory containing the data, I read the data file into a data frame, and print the first few observations (omitting the variables emp1 through emp52, which are in columns 11–62 of the data frame):
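The input command itself does not survive in this excerpt; assuming the white-space-delimited file Rossi.txt named in the footnote, it would take roughly the following form:

```r
# Sketch of the omitted input step; Rossi.txt is the file named in footnote 4
Rossi <- read.table("Rossi.txt", header=TRUE)
Rossi[1:5, 1:10]  # first few observations, omitting emp1-emp52 (columns 11-62)
```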
Thus, for example, the first individual was arrested in week 20 of the study, while the fourth individual was
never rearrested, and hence has a censoring time of 52.
Following Allison, a Cox regression of time to rearrest on the time-constant covariates is specified as
follows:
4 The data file Rossi.txt is available at <http://socserv.mcmaster.ca/jfox/Books/Companion/Rossi.txt>.
> mod.allison <- coxph(Surv(week, arrest) ~ fin + age + race + wexp + mar + paro + prio,
+ data=Rossi)
> mod.allison
Call:
coxph(formula = Surv(week, arrest) ~ fin + age + race + wexp +
mar + paro + prio, data = Rossi)
The summary method for Cox models produces a more complete report:
> summary(mod.allison)
Call:
coxph(formula = Surv(week, arrest) ~ fin + age + race + wexp +
mar + paro + prio, data = Rossi)
n= 432
• The column marked z in the output records the ratio of each regression coefficient to its standard error,
a Wald statistic which is asymptotically standard normal under the hypothesis that the corresponding β
is zero. The covariates age and prio (prior convictions) have highly statistically significant coefficients,
while the coefficient for fin (financial aid – the focus of the study) is marginally significant.
Figure 1: Estimated survival function for the Cox regression of time to rearrest, with a point-wise 95-percent confidence band. The vertical axis gives the Proportion Not Rearrested (from 0.70 to 1.00); the horizontal axis gives time in Weeks (0 to 50).
• The exponentiated coefficients in the second column of the first panel (and in the first column of the
second panel) of the output are interpretable as multiplicative effects on the hazard. Thus, for example,
holding the other covariates constant, an additional year of age reduces the weekly hazard of rearrest by
a factor of e^b2 = 0.944 on average – that is, by 5.6 percent. Similarly, each prior conviction increases
the hazard by a factor of 1.096, or 9.6 percent.
• The likelihood-ratio, Wald, and score chi-square statistics at the bottom of the output are asymptoti-
cally equivalent tests of the omnibus null hypothesis that all of the β’s are zero. In this instance, the
test statistics are in close agreement, and the hypothesis is soundly rejected.
Having fit a Cox model to the data, it is often of interest to examine the estimated distribution of
survival times. The survfit function estimates S(t), by default at the mean values of the covariates. The
plot method for objects returned by survfit graphs the estimated survival function, along with a point-wise
95-percent confidence band. For example, for the model just fit to the recidivism data:
This command produces Figure 1. [The limits for the vertical axis, set by ylim=c(.7, 1), were selected
after examining an initial plot.]
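The command itself is omitted from this excerpt; given the ylim value quoted above and the axis labels in Figure 1, it was presumably of roughly this form:

```r
# Sketch of the omitted plotting command; axis labels taken from Figure 1
plot(survfit(mod.allison), ylim=c(.7, 1),
    xlab="Weeks", ylab="Proportion Not Rearrested")
```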
Even more cogently, we may wish to display how estimated survival depends upon the value of a covariate.
Because the principal purpose of the recidivism study was to assess the impact of financial aid on rearrest,
let us focus on this covariate. I construct a new data frame with two rows, one for each value of fin; the
other covariates are fixed to their average values. (In the case of a dummy covariate, such as race, the
average value is the proportion coded 1 in the data set – in the case of race, the proportion of blacks).
This data frame is passed to survfit via the newdata argument:
> attach(Rossi)
> Rossi.fin <- data.frame(fin=c(0,1), age=rep(mean(age),2), race=rep(mean(race),2),
Figure 2: Estimated survival functions for those receiving (fin = 1) and not receiving (fin = 0) financial aid.
Other covariates are fixed at their average values. Each estimate is accompanied by a point-wise 95-percent
confidence envelope.
I specified two additional arguments to plot: lty=c(1,2) indicates that the survival function for the first
group (i.e., for fin = 0) will be plotted with a solid line, while that for the second group (fin = 1) will be
plotted with a broken line; conf.int=T requests that confidence envelopes be drawn around each estimated
survival function (which is not the default when more than one survival function is plotted). Notice, as well,
the use of the legend function (along with locator) to place a legend on the plot: Click the left mouse
button to position the legend.5 The resulting graph, which appears in Figure 2, shows the higher estimated
‘survival’ of those receiving financial aid, but the two confidence envelopes overlap substantially, even after
52 weeks.
4 Time-Dependent Covariates
The coxph function handles time-dependent covariates by requiring that each time period for an individual
appear as a separate observation – that is, as a separate row (or record) in the data set. Consider, for
example, the Rossi data frame, and imagine that we want to treat weekly employment as a time-dependent
predictor of time to rearrest. As is often the case, however, the data for each individual appears as a single
row, with the weekly employment indicators as 52 columns in the data frame, with names emp1 through
emp52; for example, for the first person in the study:
5 The plot method for survfit objects can also draw a legend on the plot, but separate use of the legend function provides
greater flexibility. Legends, line types, and other aspects of constructing graphs in S are described in Chapter 7 of the text.
> Rossi[1,]
week arrest fin age race wexp mar paro prio educ emp1 emp2
1 20 1 0 27 1 0 0 1 3 3 0 0
emp3 emp4 emp5 emp6 emp7 emp8 emp9 emp10 emp11 emp12 emp13
1 0 0 0 0 0 0 0 0 0 0 0
emp14 emp15 emp16 emp17 emp18 emp19 emp20 emp21 emp22 emp23
1 0 0 0 0 0 0 0 NA NA NA
emp24 emp25 emp26 emp27 emp28 emp29 emp30 emp31 emp32 emp33
1 NA NA NA NA NA NA NA NA NA NA
emp34 emp35 emp36 emp37 emp38 emp39 emp40 emp41 emp42 emp43
1 NA NA NA NA NA NA NA NA NA NA
emp44 emp45 emp46 emp47 emp48 emp49 emp50 emp51 emp52
1 NA NA NA NA NA NA NA NA NA
>
Notice that the employment indicators are missing after week 20, when individual 1 was rearrested.
To put the data in the requisite form, we need to write one row for each non-missing period of observation.
Here is a simple sequence of commands that accomplishes this purpose:
• First, noting that the employment indicators are in columns 11 through 62, I calculate the number of
non-missing records that will be required:
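Although the command is omitted here, this count is a one-liner; the column range 11 through 62 is stated above, and the result must agree with the 19,809 rows allocated in the next step:

```r
sum(!is.na(Rossi[, 11:62]))  # number of non-missing weekly records
```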
• Next, I create a matrix, initially filled with 0’s, to hold the data. This matrix has 19,809 rows, one
for each record, and 14 columns, to contain the first 10 variables in the original data frame; the start
and stop times of each weekly record; a time-dependent indicator variable (arrest.time) set to 1 if
a rearrest occurs during the current week, or 0 otherwise; and another indicator (employed) set to 1
if the individual was employed during the current week, and 0 if he was not. This last variable is the
time-dependent covariate. If there were more than one time-dependent covariate, then there would be
a column in the new data set for each.
> Rossi.2 <- matrix(0, 19809, 14) # to hold new data set
> colnames(Rossi.2) <- c('start', 'stop', 'arrest.time', names(Rossi)[1:10], 'employed')
>
• Finally, I loop over the observations in the original data set, and over the weeks of the year within each
observation, to construct the new data set:6
+ }
> Rossi.2 <- as.data.frame(Rossi.2)
> remove(i, j, row, start, stop, arrest.time) # clean up
>
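The body of the double loop is largely omitted from this excerpt (only its closing brace survives above). Under the column layout just described, and with the variable names taken from the clean-up command, a sketch of the computation might look like this:

```r
row <- 0  # row counter for Rossi.2
for (i in 1:nrow(Rossi)) {         # loop over individuals
    for (j in 11:62) {             # loop over the weekly employment columns
        if (is.na(Rossi[i, j])) next  # weeks after the event/censoring time
        row <- row + 1
        start <- j - 11            # column j holds week j - 10, spanning (j-11, j-10]
        stop <- start + 1
        # the time-dependent event indicator is 1 only in the week of rearrest
        arrest.time <- if (stop == Rossi[i, "week"] && Rossi[i, "arrest"] == 1) 1 else 0
        Rossi.2[row, ] <- c(start, stop, arrest.time,
            unlist(Rossi[i, 1:10]), Rossi[i, j])
    }
}
```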
This procedure is very inefficient computationally, taking more than four minutes under R 1.4.1 on my
Windows 2000 computer, which has an 800 MHz processor and plenty of memory. But the programming
was very simple, requiring perhaps five minutes to write and debug: A time expenditure of about 10 minutes
is insignificant in preparing data for analysis.7
If, however, we often want to perform these operations, it makes sense to encapsulate them in a function,
and to spend some programming time to make the computation more efficient. I have written such a function,
named fold; the function is included with the script file for this appendix, and takes the following arguments:
• data: A data frame or numeric matrix (with column names) to be ‘folded.’ For reasons of efficiency,
if there are factors in data these will be converted to numeric variables in the output data frame.
• time: The quoted name of the event/censoring-time variable in data.
• event: The quoted name of the event/censoring indicator variable in data.
• cov: A vector giving the column numbers of the time-dependent covariate in data, or a list of vectors
if there is more than one time-dependent covariate.
• cov.names: A character string or character vector giving the name(s) to be assigned to the time-
dependent covariate(s) in the output data set.
• suffix: The suffix to be attached to the name of the time-to-event variable in the output data set;
defaults to ’.time’.
• cov.times: The observation times for the covariate values, including the start time. This argument
can take several forms:
— The default is the vector of integers from 0 to the number of covariate values (i.e., containing one
more entry – the initial time of observation – than the length of each vector in cov).
— An arbitrary numerical vector with one more entry than the length of each vector in cov.
— The columns in the input data set that give the (potentially different) covariate observation times
for each individual. There should be one more column than the length of each vector in cov.
• common.times: A logical value indicating whether the times of observation are the same for all indi-
viduals; defaults to TRUE.
• lag: Number of observation periods to lag each value of the time-dependent covariate(s); defaults to
0. The use of lag is described later in this section.
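The call to fold that produced the timing reported below is omitted from this excerpt; given the argument list above, it presumably resembled:

```r
# Hypothetical reconstruction of the omitted fold call
Rossi.2 <- fold(Rossi, time='week', event='arrest',
    cov=11:62, cov.names='employed')
```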
This command required less than 16 seconds on my computer – still not impressively efficient, but (not
counting programming effort) much better than the brute-force approach that I took previously. Here are
the first 50 of the nearly 20,000 records in the data frame Rossi.2:
7 See the discussion of ‘quick and dirty’ programming in Chapter 8 of the text.
> Rossi.2[1:50,]
start stop arrest.time week arrest fin age race wexp mar paro prio educ employed
1.1 0 1 0 20 1 0 27 1 0 0 1 3 3 0
1.2 1 2 0 20 1 0 27 1 0 0 1 3 3 0
. . .
1.19 18 19 0 20 1 0 27 1 0 0 1 3 3 0
1.20 19 20 1 20 1 0 27 1 0 0 1 3 3 0
2.1 0 1 0 17 1 0 18 1 0 0 1 8 4 0
2.2 1 2 0 17 1 0 18 1 0 0 1 8 4 0
. . .
2.16 15 16 0 17 1 0 18 1 0 0 1 8 4 0
2.17 16 17 1 17 1 0 18 1 0 0 1 8 4 0
3.1 0 1 0 25 1 0 19 0 1 0 1 13 3 0
3.2 1 2 0 25 1 0 19 0 1 0 1 13 3 0
. . .
3.13 12 13 0 25 1 0 19 0 1 0 1 13 3 0
Once the data set is constructed, it is simple to use coxph to fit a model with time-dependent covariates.
The right-hand-side of the model is essentially the same as before, but both the start and end times of
each interval are specified in the call to Surv, in the form Surv(start, stop, event). Here, event is the
time-dependent version of the event indicator variable, equal to 1 only in the time-period during which the
event occurs. For the example:
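The call itself does not survive in this excerpt; following the Surv(start, stop, event) pattern just described, it would take roughly this form (the model name is hypothetical):

```r
mod.allison.2 <- coxph(Surv(start, stop, arrest.time) ~
        fin + age + race + wexp + mar + paro + prio + employed,
    data=Rossi.2)
summary(mod.allison.2)
```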
n= 19809
Likelihood ratio test= 68.7 on 8 df, p=9.11e-12
Wald test = 56.1 on 8 df, p=2.63e-09
Score (logrank) test = 64.5 on 8 df, p=6.1e-11
There are, however, differences in efficiency: In S-PLUS 2000 for Windows, the brute-force approach
to constructing the time-dependent data set required nearly nine minutes to execute, while the fold
function required 27 seconds; in S-PLUS 6.0 for Windows, the first operation required a little more than
six minutes, while the second consumed more than two minutes. All of these timings were on the same
800 MHz Windows PC used for R.
n= 19377
mar 0.708 1.412 0.334 1.501
paro 0.954 1.048 0.649 1.402
prio 1.096 0.912 1.036 1.160
employed 0.455 2.197 0.297 0.698
5 Model Diagnostics
As is the case for a linear or generalized linear model (see Chapter 6 of the text), it is desirable to determine
whether a fitted Cox regression model adequately describes the data. I will briefly consider three kinds of
diagnostics: for violation of the assumption of proportional hazards; for influential data; and for nonlinearity
in the relationship between the log hazard and the covariates. All of these diagnostics use the residuals
method for coxph objects, which calculates several kinds of residuals (along with some quantities that are
not normally thought of as residuals). Details are in Therneau (1999).
has a statistically significant interaction with time, which manifests itself as nonproportional hazards. I leave it to the reader
to check for this possibility using the model fit originally to the recidivism data.
> cox.zph(mod.allison.4)
rho chisq p
fin -0.00657 0.00507 0.9432
age -0.20976 6.54118 0.0105
prio -0.08003 0.77263 0.3794
GLOBAL NA 7.12999 0.0679
There is, therefore, strong evidence of non-proportional hazards for age, while the global test (on 3 degrees
of freedom) is not quite statistically significant. These tests are sensitive to linear trends in the hazard.
Plotting the object returned by cox.zph produces graphs of the scaled Schoenfeld residuals against
transformed time (see Figure 3):
> par(mfrow=c(2,2))
> plot(cox.zph(mod.allison.4))
Warning messages:
1: Collapsing to unique x values in: approx(xx, xtime,
seq(min(xx), max(xx), length = 17)[2 * (1:8)])
2: Collapsing to unique x values in: approx(xtime, xx, temp)
Interpretation of these graphs is greatly facilitated by smoothing, for which purpose cox.zph uses a smoothing
spline, shown on each graph by a solid line; the broken lines represent ± 2-standard-error envelopes around the
fit. Systematic departures from a horizontal line are indicative of non-proportional hazards. The assumption
of proportional hazards appears to be supported for the covariates fin (which is, recall, a dummy variable,
accounting for the two bands in the graph) and prio, but there appears to be a trend in the plot for age,
with the age effect declining with time; this effect was detected in the test reported above.
One way of accommodating non-proportional hazards is to build interactions between covariates and time
into the Cox regression model; such interactions are themselves time-dependent covariates. For example,
based on the diagnostics just examined, it seems reasonable to consider a linear interaction of time and age;
using the previously constructed Rossi.2 data frame:
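The fitting command is omitted from this excerpt; a linear age-by-time interaction can be specified with an age:stop term, since stop gives the end time of each weekly interval (the model name is hypothetical, and the covariate list is inferred from the cox.zph output above):

```r
# Sketch: age:stop is the time-dependent age-by-time interaction
mod.allison.5 <- coxph(Surv(start, stop, arrest.time) ~
        fin + age + age:stop + prio,
    data=Rossi.2)
```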
As expected, the coefficient for the interaction is negative and highly statistically significant: The effect of
age declines with time.9 Notice that the model does not require a ‘main-effect’ term for stop (i.e., time);
such a term would be redundant, since the time effect is the baseline hazard.
An alternative to incorporating an interaction in the model is to divide the data into strata based on the
value of one or more covariates. Each stratum is permitted to have a different baseline hazard function, while
the coefficients of the remaining covariates are assumed to be constant across strata. An advantage of this
9 That is, initially, age has a positive partial effect on the hazard (given by the age coefficient, 0.032), but this effect gets
progressively smaller with time (at the rate −0.0038 per week), becoming negative after about 8 weeks.
Figure 3: Plots of scaled Schoenfeld residuals against transformed time for each covariate in a model fit to
the recidivism data. The solid line is a smoothing-spline fit to the plot, with the broken lines representing a
± 2-standard-error band around the fit.
approach is that we do not have to assume a particular form of interaction between the stratifying covariates
and time. A disadvantage is the resulting inability to examine the effects of the stratifying covariates.
Stratification is most natural when a covariate takes on only a few distinct values, and when the effect of the
stratifying variable is not of direct interest. In our example, age takes on many different values, but we can
create categories by arbitrarily dissecting the variable into class intervals. After examining the distribution
of age, I decided to define four intervals: 19 or younger; 20 to 25; 26 to 30; and 31 or older. I use the recode
function in the car library to categorize age:10
> library(car)
. . .
> Rossi$age.cat <- recode(Rossi$age, " lo:19=1; 20:25=2; 26:30=3; 31:hi=4 ")
> table(Rossi$age.cat)
1 2 3 4
66 236 66 64
Most of the individuals in the data set are in the second age category, 20 to 25, but since this is a reasonably
narrow range of ages, I did not feel the need to sub-divide the category.
A stratified Cox regression model is fit by including a call to the strata function on the right-hand side
of the model formula. The arguments to this function are one or more stratifying variables; if there is more
than one such variable, then the strata are formed from their cross-classification. In the current illustration,
there is just one stratifying variable:
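The fitting command itself is omitted from this excerpt; with age.cat as the single stratifying variable (replacing age as a covariate), it would be of roughly this form (the model name is hypothetical):

```r
mod.allison.strat <- coxph(Surv(week, arrest) ~
        fin + prio + strata(age.cat),
    data=Rossi)
mod.allison.strat
```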
Figure 4: Index plots of dfbeta for the Cox regression of time to rearrest on fin, age, and prio.
The index plots produced by these commands appear in Figure 4. Comparing the magnitudes of the largest
dfbeta values to the regression coefficients suggests that none of the observations is terribly influential
individually (even though some of the dfbeta values for age are large compared with the others11).
5.3 Nonlinearity
Nonlinearity – that is, an incorrectly specified functional form in the parametric part of the model – is
a potential problem in Cox regression as it is in linear and generalized linear models (see Sections 6.4 and
6.6 of the text). The martingale residuals may be plotted against covariates to detect nonlinearity, and may
11 As an exercise, the reader may wish to identify these observations and, in particular, examine their ages.
also be used to form component-plus-residual (or partial-residual) plots, again in the manner of linear and
generalized linear models.
For the regression of time to rearrest on financial aid, age, and number of prior arrests, let us examine
plots of martingale residuals and partial residuals against the last two of these covariates; nonlinearity is not
an issue for financial aid, because this covariate is dichotomous:
> par(mfrow=c(2,2))
> res <- residuals(mod.allison.4, type=’martingale’)
> X <- as.matrix(Rossi[,c("age", "prio")]) # matrix of covariates
> for (j in 1:2) { # residual plots
+ plot(X[,j], res, xlab=c("age", "prio")[j], ylab="residuals")
+ abline(h=0, lty=2)
+ lines(lowess(X[,j], res, iter=0))
+ }
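The commands for the component-plus-residual plots in the bottom panels of Figure 5 are not shown in this excerpt; following the same pattern, and consistent with the figure caption (broken line fit by least squares, solid line by lowess), they would look roughly like this (the coefficient positions are assumptions):

```r
b <- coef(mod.allison.4)[c(2, 3)]  # coefficients for age and prio (positions assumed)
for (j in 1:2) {  # component-plus-residual plots
    plot(X[, j], b[j]*X[, j] + res,
        xlab=c("age", "prio")[j], ylab="component+residual")
    abline(lm(b[j]*X[, j] + res ~ X[, j]), lty=2)     # linear least-squares line
    lines(lowess(X[, j], b[j]*X[, j] + res, iter=0))  # local linear smooth
}
```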
The resulting residual and component-plus-residual plots appear in Figure 5. As in the plots of Schoenfeld
residuals, smoothing these plots is important to their interpretation; the smooths in Figure 5 are
produced by local linear regression (using the lowess function). Nonlinearity, it appears, is slight here.
References
Allison, P. D. 1995. Survival Analysis Using the SAS System: A Practical Guide. Cary NC: SAS Institute.
Cox, D. R. 1972. “Regression Models and Life Tables (with Discussion).” Journal of the Royal Statistical
Society, Series B 34:187–220.
Cox, D. R. & D. Oakes. 1984. Analysis of Survival Data. London: Chapman and Hall.
Rossi, P. H., R. A. Berk & K. J. Lenihan. 1980. Money, Work and Crime: Some Experimental Results. New
York: Academic Press.
Therneau, T. M. 1999. A Package for Survival Analysis in S. Technical Report
<http://www.mayo.edu/hsr/people/therneau/survival.ps> Mayo Foundation.
Therneau, T. M. & P. M. Grambsch. 2000. Modeling Survival Data: Extending the Cox Model. New York:
Springer.
Figure 5: Martingale-residual plots (top) and component-plus-residual plots (bottom) for the covariates age
and prio. The broken lines on the residual plots are at the vertical value 0, and on the component-plus-
residual plots are fit by linear least-squares; the solid lines are fit by local linear regression (lowess).