After last week's post on Theoph I noticed that the MEMSS library contains more PK data sets. The Cefamandole data is an intravenous (IV) data set with six subjects. It is somewhat challenging, since there appear to be several elimination rates.
Data
Data are displayed below. Notice that the lines seem somewhat curved; there is clearly more going on than just simple elimination.
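For anyone who wants to look at the data directly, a quick peek (assuming MEMSS is installed) shows the structure used in the code further down:

library(MEMSS)
str(Cefamandole)           # Subject, Time and conc, the three columns used below
summary(Cefamandole$conc)  # concentration range, mcg/ml per the plot labels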
Models
The first model fits concentration as a function of two eliminations. The problem here is that the eliminations need to be ordered: all faster eliminations should get one prior, all slower ones another. After much fiddling around I found that I could not force the ordering at the subject level with priors or limits, so I chose a different way out. The model gets an extra component: it predicts 0 if the parameters are incorrectly ordered. This indeed gave the desired result in terms of model behavior.
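In JAGS this is just a multiplication with an ordering indicator; the relevant line from model 1 (the full listing is under 'Code used') is:

pred[i] <- (ke1[subject[i]]<ke2[subject[i]])*
    (c1star[subject[i]]*exp(-ke1[subject[i]]*time[i])+
        c2star[subject[i]]*exp(-ke2[subject[i]]*time[i]))

For a subject with ke1 >= ke2 the indicator is 0, the prediction becomes 0, and the likelihood steers the sampler back toward correctly ordered values.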
The model itself, though, was less to my liking. It goes way too low at the later times. On reflection, since the model assumes homoscedastic error, it does not matter much to the likelihood that the fit at the lower end is poor. After all, an error of 1 is nothing compared to errors of 10 or 20 at the upper end of the scale. In addition, it can be argued that the error is not homoscedastic; if it were, the underlying data would show much more variation at the lower end of the scale than they do. It would need confirmation from the measuring team, but for now I will assume proportional error.
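A quick back-of-the-envelope check in R illustrates the point; the residual standard deviation of 15 is only an assumption for the illustration:

sigma.guess <- 15   # assumed residual sd, for illustration only
dnorm(1,  mean=0, sd=sigma.guess, log=TRUE)   # residual of 1 at the low end
dnorm(20, mean=0, sd=sigma.guess, log=TRUE)   # residual of 20 at the high end
# the residual of 1 costs the log likelihood almost nothing compared to a
# perfect fit, while the residual of 20 dominates; a homoscedastic model
# happily sacrifices the low concentrations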
Model 2: Proportional error
The proportional error is implemented by using a log-normal distribution for the concentration. This was pretty simple to code: the log of the expected concentration is used in combination with dlnorm(). The only remaining issue is that the expectation of 0 for incorrectly ordered parameters has to be shifted a bit, since log(0) is a bit of an issue.
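Concretely, the expectation in model 2 (full listing below) becomes:

pred[i] <- log(
    (ke1[subject[i]]<ke2[subject[i]])*
        (c1star[subject[i]]*exp(-ke1[subject[i]]*time[i])+
            c2star[subject[i]]*exp(-ke2[subject[i]]*time[i]))
    +.001*(ke1[subject[i]]>ke2[subject[i]]))
conc[i] ~ dlnorm(pred[i],tau)

The 0.001 term only kicks in when the ordering is violated, so the argument of log() is never 0.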
This model looks pretty decent. There are still some things to do, similar to last week: translate the parameters c1star and c2star into parameters which can be interpreted from a PK perspective (a small step in that direction is sketched at the end of the code section).
Model outputs
Model 1
Inference for Bugs model at "C:/Users/Kees/AppData/Local/Temp/RtmpSe5nAn/modele207c187661.txt", fit using jags,
4 chains, each with 10000 iterations (first 5000 discarded), n.thin = 10
n.sims = 2000 iterations saved
mu.vect sd.vect 2.5% 25% 50% 75% 97.5% Rhat n.eff
C[1] 91.473 24.508 51.690 76.444 88.708 103.600 144.987 1.016 160
C[2] 324.947 132.978 147.584 253.433 304.913 366.175 626.711 1.007 460
Ke[1] 0.020 0.002 0.016 0.018 0.019 0.021 0.024 1.045 76
Ke[2] 0.141 0.192 0.081 0.116 0.132 0.147 0.202 1.072 180
c1star[1] 58.783 8.381 43.892 53.222 58.514 63.406 76.601 1.047 64
c1star[2] 59.841 18.962 32.887 47.487 56.324 67.606 108.087 1.048 72
c1star[3] 92.337 10.569 70.647 85.720 92.805 99.469 112.338 1.041 74
c1star[4] 89.554 12.223 67.975 81.086 88.831 96.932 116.699 1.027 110
c1star[5] 146.419 21.504 106.395 132.153 145.199 159.149 193.558 1.032 84
c1star[6] 121.228 14.872 87.712 112.706 122.379 130.819 148.106 1.092 42
c2star[1] 426.241 107.777 278.572 355.121 406.845 471.583 703.121 1.038 92
c2star[2] 201.118 35.861 149.478 179.179 197.081 217.084 278.248 1.033 830
c2star[3] 489.294 218.162 268.493 349.208 419.178 537.325 1125.062 1.088 42
c2star[4] 449.402 68.164 340.627 402.410 440.666 487.652 603.535 1.038 75
c2star[5] 402.518 43.391 335.760 373.724 397.703 423.902 498.253 1.017 380
c2star[6] 140.893 46.737 79.548 114.343 133.240 157.967 237.080 1.024 650
ke1[1] 0.020 0.002 0.016 0.018 0.020 0.021 0.025 1.042 74
ke1[2] 0.021 0.004 0.016 0.019 0.020 0.023 0.032 1.060 57
ke1[3] 0.018 0.002 0.014 0.017 0.018 0.019 0.022 1.031 99
ke1[4] 0.020 0.002 0.017 0.019 0.020 0.021 0.025 1.023 120
ke1[5] 0.020 0.002 0.016 0.019 0.020 0.021 0.024 1.036 74
ke1[6] 0.019 0.002 0.015 0.018 0.019 0.020 0.022 1.071 46
ke2[1] 0.167 0.025 0.128 0.150 0.164 0.181 0.225 1.050 64
ke2[2] 0.101 0.026 0.067 0.085 0.096 0.111 0.164 1.050 91
ke2[3] 0.181 0.042 0.122 0.151 0.173 0.200 0.283 1.076 43
ke2[4] 0.143 0.019 0.110 0.130 0.141 0.155 0.181 1.040 74
ke2[5] 0.112 0.017 0.086 0.100 0.110 0.121 0.151 1.027 130
ke2[6] 0.118 0.038 0.062 0.093 0.114 0.137 0.205 1.051 61
deviance 464.708 7.996 451.264 458.919 464.029 469.608 482.206 1.006 550
For each parameter, n.eff is a crude measure of effective sample size,
and Rhat is the potential scale reduction factor (at convergence, Rhat=1).
DIC info (using the rule, pD = var(deviance)/2)
pD = 31.8 and DIC = 496.6
DIC is an estimate of expected predictive error (lower deviance is better).
Model 2
Inference for Bugs model at "C:/Users/Kees/AppData/Local/Temp/RtmpSe5nAn/modele2045997fac.txt", fit using jags,
4 chains, each with 10000 iterations (first 5000 discarded), n.thin = 10
n.sims = 2000 iterations saved
mu.vect sd.vect 2.5% 25% 50% 75% 97.5% Rhat n.eff
C[1] 23.273 10.942 8.344 16.933 21.737 27.259 48.399 1.004 710
C[2] 159.545 32.361 106.916 141.343 157.165 172.618 230.957 1.004 1200
Ke[1] 0.010 0.001 0.009 0.010 0.010 0.011 0.012 1.010 270
Ke[2] 0.040 0.005 0.032 0.036 0.039 0.042 0.050 1.006 660
c1star[1] 14.141 3.164 8.481 11.970 13.923 16.049 20.826 1.012 230
c1star[2] 13.177 4.777 6.178 9.622 12.276 16.064 24.394 1.020 210
c1star[3] 34.030 6.876 20.963 29.102 33.778 38.458 47.854 1.006 450
c1star[4] 12.399 4.038 6.525 9.482 11.712 14.524 22.656 1.019 130
c1star[5] 40.359 8.756 24.398 34.248 40.002 45.836 58.609 1.007 360
c1star[6] 38.415 9.194 21.683 32.043 37.885 44.595 57.469 1.006 440
c2star[1] 123.292 17.872 93.250 110.418 121.625 134.114 164.930 1.000 2000
c2star[2] 134.174 18.465 103.312 121.677 132.580 145.038 176.803 1.004 680
c2star[3] 138.541 24.539 97.522 122.254 135.654 151.958 195.143 1.000 2000
c2star[4] 184.399 20.868 148.465 170.310 183.027 197.048 229.782 1.002 1500
c2star[5] 247.359 38.113 178.403 222.054 244.317 269.412 330.096 1.003 2000
c2star[6] 150.684 22.359 111.345 135.290 149.545 164.626 198.568 1.002 2000
ke1[1] 0.010 0.001 0.008 0.010 0.010 0.011 0.012 1.013 210
ke1[2] 0.012 0.001 0.009 0.011 0.011 0.012 0.014 1.022 180
ke1[3] 0.010 0.001 0.008 0.009 0.010 0.010 0.011 1.005 540
ke1[4] 0.011 0.001 0.009 0.010 0.010 0.011 0.013 1.020 130
ke1[5] 0.010 0.001 0.008 0.010 0.010 0.011 0.012 1.007 350
ke1[6] 0.010 0.001 0.009 0.010 0.010 0.011 0.012 1.005 540
ke2[1] 0.042 0.005 0.034 0.038 0.041 0.044 0.055 1.004 690
ke2[2] 0.041 0.006 0.032 0.037 0.040 0.044 0.056 1.013 230
ke2[3] 0.042 0.008 0.032 0.037 0.040 0.045 0.062 1.006 440
ke2[4] 0.037 0.003 0.031 0.034 0.036 0.038 0.044 1.008 320
ke2[5] 0.040 0.005 0.032 0.037 0.040 0.043 0.053 1.003 1100
ke2[6] 0.036 0.006 0.027 0.033 0.036 0.040 0.048 1.003 1200
deviance 372.564 7.382 359.811 367.354 372.140 376.777 388.833 1.001 2000
For each parameter, n.eff is a crude measure of effective sample size,
and Rhat is the potential scale reduction factor (at convergence, Rhat=1).
DIC info (using the rule, pD = var(deviance)/2)
pD = 27.3 and DIC = 399.8
DIC is an estimate of expected predictive error (lower deviance is better).
Code used
library(MEMSS)
library(ggplot2)
library(R2jags)
png('Cefamandole.png')
# raw data per subject on a log concentration scale
qplot(y=conc, x=Time, col=Subject, data=Cefamandole) +
    scale_y_log10('cefamandole (mcg/ml)') +
    geom_line() +
    theme(legend.position='bottom')
dev.off()
# data for JAGS: observations plus an integer subject index
datain <- list(
    time = Cefamandole$Time,
    conc = Cefamandole$conc,
    n = nrow(Cefamandole),
    subject = c(1:nlevels(Cefamandole$Subject))[Cefamandole$Subject],
    nsubject = nlevels(Cefamandole$Subject)
)
model1 <- function() {
    tau <- 1/pow(sigma,2)
    sigma ~ dunif(0,100)
    for (i in 1:n) {
        # the indicator (ke1<ke2) enforces the ordering: a wrongly ordered
        # draw predicts 0 and is penalized by the likelihood
        pred[i] <- (ke1[subject[i]]<ke2[subject[i]])*
            (c1star[subject[i]]*exp(-ke1[subject[i]]*time[i])+
                c2star[subject[i]]*exp(-ke2[subject[i]]*time[i]))
        conc[i] ~ dnorm(pred[i],tau)     # homoscedastic error
    }
    # subject level parameters around the population values
    for(i in 1:nsubject) {
        ke1[i] ~ dlnorm(ke[1],kemtau[1])
        ke2[i] ~ dlnorm(ke[2],kemtau[2])
        c1star[i] ~ dlnorm(cm[1],ctau[1])
        c2star[i] ~ dlnorm(cm[2],ctau[2])
    }
    # population level: two elimination rates and two concentration scales
    for (i in 1:2) {
        kem[i] ~ dunif(-10,10)
        kemtau[i] <- 1/pow(kesd[i],2)
        kesd[i] ~ dunif(0,10)
        Ke[i] <- exp(ke[i])
        cm[i] ~ dunif(-10,20)
        ctau[i] <- 1/pow(csd[i],2)
        csd[i] ~ dunif(0,100)
        C[i] <- exp(cm[i])
    }
    ke <- sort(kem)   # order the population log rates: ke[1] <= ke[2]
}
parameters <- c('Ke','ke1','ke2','c1star','c2star','C')
inits <- function() {
    list(ke1 = rnorm(6,0.13,.03),
        ke2 = rnorm(6,0.0180,.005),
        kem = log(c(rnorm(1,0.13,.03),rnorm(1,0.0180,.005))),
        kesd = runif(2,.1,.4),
        cm = log(rnorm(2,25,5)),
        c1star = rnorm(6,25,3),
        c2star = rnorm(6,25,3)
    )}
jagsfit1 <- jags(datain, model=model1,
inits=inits,
parameters=parameters,progress.bar="gui",
n.iter=10000,
n.chains=4,n.thin=10)
jagsfit1
#plot(jagsfit1)
#traceplot(jagsfit1)
# time grid and 90% credible band for the population curve of model 1
Time <- seq(0,max(Cefamandole$Time))
la1 <- sapply(1:nrow(jagsfit1$BUGSoutput$sims.matrix),
    function(i) {
        C1 <- jagsfit1$BUGSoutput$sims.matrix[i,'C[1]']
        C2 <- jagsfit1$BUGSoutput$sims.matrix[i,'C[2]']
        k1 <- jagsfit1$BUGSoutput$sims.matrix[i,'Ke[1]']
        k2 <- jagsfit1$BUGSoutput$sims.matrix[i,'Ke[2]']
        y=C1*exp(-k1*Time)+C2*exp(-k2*Time)
    })
res1 <- data.frame(
    Time=Time,
    Conc=apply(la1,1,mean),
    C05=apply(la1,1,FUN=function(x) quantile(x,0.05)),
    C95=apply(la1,1,FUN=function(x) quantile(x,0.95)))
png('cef1.png')
ggplot(res1, aes(x=Time)) +
geom_line(aes(y=Conc))+
scale_y_log10('cefamandole (mcg/ml)')+
geom_line(aes(y=conc,x=Time,col=Subject),alpha=1,data=Cefamandole)+
geom_ribbon(aes(ymin=C05, ymax=C95),alpha=.2)+
theme(legend.position='bottom')
dev.off()
#########
#
# model 2: proportional error
#
#########
model2 <- function() {
    tau <- 1/pow(sigma,2)
    sigma ~ dunif(0,100)
    for (i in 1:n) {
        # log of the expected concentration; the 0.001 term is only non-zero
        # when ke1>ke2, so log() never gets a 0 argument
        pred[i] <- log(
            (ke1[subject[i]]<ke2[subject[i]])*
                (c1star[subject[i]]*exp(-ke1[subject[i]]*time[i])+
                    c2star[subject[i]]*exp(-ke2[subject[i]]*time[i]))
            +.001*(ke1[subject[i]]>ke2[subject[i]]))
        conc[i] ~ dlnorm(pred[i],tau)   # log normal gives proportional error
    }
    # subject level parameters around the population values
    for(i in 1:nsubject) {
        ke1[i] ~ dlnorm(ke[1],kemtau[1])
        ke2[i] ~ dlnorm(ke[2],kemtau[2])
        c1star[i] ~ dlnorm(cm[1],ctau[1])
        c2star[i] ~ dlnorm(cm[2],ctau[2])
    }
    # population level: two elimination rates and two concentration scales
    for (i in 1:2) {
        kem[i] ~ dunif(-10,10)
        kemtau[i] <- 1/pow(kesd[i],2)
        kesd[i] ~ dunif(0,10)
        Ke[i] <- exp(ke[i])
        cm[i] ~ dunif(-10,20)
        ctau[i] <- 1/pow(csd[i],2)
        csd[i] ~ dunif(0,100)
        C[i] <- exp(cm[i])
    }
    ke <- sort(kem)   # order the population log rates: ke[1] <= ke[2]
}
parameters <- c('Ke','ke1','ke2','c1star','c2star','C')
inits <- function() {
    list(ke1 = rnorm(6,0.13,.03),
        ke2 = rnorm(6,0.0180,.005),
        kem = log(c(rnorm(1,0.13,.03),rnorm(1,0.0180,.005))),
        kesd = runif(2,.1,.4),
        cm = log(rnorm(2,25,5)),
        c1star = rnorm(6,25,3),
        c2star = rnorm(6,25,3)
    )}
jagsfit2 <- jags(datain, model=model2,
inits=inits,
parameters=parameters,progress.bar="gui",
n.iter=10000,
n.chains=4,n.thin=10)
jagsfit2
la2 <- sapply(1:nrow(jagsfit2$BUGSoutput$sims.matrix),
function(i) {
C1 <- jagsfit2$BUGSoutput$sims.matrix[i,'C[1]']
C2 <- jagsfit2$BUGSoutput$sims.matrix[i,'C[2]']
k1 <- jagsfit2$BUGSoutput$sims.matrix[i,'Ke[1]']
k2 <- jagsfit2$BUGSoutput$sims.matrix[i,'Ke[2]']
y=C1*exp(-k1*Time)+C2*exp(-k2*Time)
})
res2 <- data.frame(
Time=Time,
Conc=apply(la2,1,mean),
C05=apply(la2,1,FUN=function(x) quantile(x,0.05)),
C95=apply(la2,1,FUN=function(x) quantile(x,0.95)))
png('cef2.png')
ggplot(res2, aes(x=Time)) +
geom_line(aes(y=Conc))+
scale_y_log10('cefamandole (mcg/ml)')+
geom_line(aes(y=conc,x=Time,col=Subject),alpha=1,data=Cefamandole)+
geom_ribbon(aes(ymin=C05, ymax=C95),alpha=.2)+
theme(legend.position='bottom')
dev.off()
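As a small step toward the PK interpretation mentioned earlier, the population elimination rates of model 2 can be turned into half-lives via t1/2 = log(2)/ke. This is only an illustration on the posterior draws already present in jagsfit2, not the full translation of c1star and c2star:

# half-lives (in the time unit of Time) for the two population rates of model 2
ke.draws <- jagsfit2$BUGSoutput$sims.matrix[,c('Ke[1]','Ke[2]')]
t.half <- log(2)/ke.draws
apply(t.half, 2, quantile, probs=c(0.05, 0.5, 0.95))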