Sunday, October 27, 2013

Symmetry in Williams designs

Based on some comments I am looking at using symmetry to obtain Williams-style designs. Symmetry reduces the number of combinations to examine and hence the calculation time. Two avenues are examined. Both work for a low number of treatments. For eight treatments they give different sets, which together cover all 12 possible designs.

Horizontal reflection

It is well known that in a Williams design with an even number of treatments each row is also present reversed. But apn noticed something else: within a row there is a horizontal symmetry; the values in cells i and n+1-i add up to n+1. That is a bit abstract, so let's look at a six-treatment design. Cell [2,2] is 4 and cell [2,5] is 3; they add to 7. And so it continues. If this holds, the number of permutations to check decreases enormously, especially since cell [2,2], via the reversal of rows, also equals cell [5,5], and hence cell [5,2] is 3.
     [,1] [,2] [,3] [,4] [,5] [,6]
[1,]    1    2    3    4    5    6
[2,]    2    4    1    6    3    5
[3,]    3    1    5    2    6    4
[4,]    4    6    2    5    1    3
[5,]    5    3    6    1    4    2
[6,]    6    5    4    3    2    1
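As a quick sanity check, both symmetries are easy to verify in R on the matrix above; `des` is just a hypothetical name for that 6x6 design.
des <- matrix(c(1,2,3,4,5,6,
                2,4,1,6,3,5,
                3,1,5,2,6,4,
                4,6,2,5,1,3,
                5,3,6,1,4,2,
                6,5,4,3,2,1),nrow=6,byrow=TRUE)
n <- ncol(des)
# horizontal semi-reflection: cells i and n+1-i within a row sum to n+1
all(des + des[,n:1] == n+1)
# row reversal: each row, reversed, is again a row of the design
all(apply(des[,n:1],1,function(rw) any(apply(des,1,function(r) all(r==rw)))))
Both checks should return TRUE.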
After putting this in an algorithm it appeared to work. The run time for 8 treatments is only about 100 milliseconds.
microbenchmark(gendesign(4),gendesign(6),gendesign(8))
Unit: milliseconds
         expr        min         lq     median         uq        max neval
 gendesign(4)   1.576118   1.609474   1.638796   1.717969   2.896391   100
 gendesign(6)   8.695041   8.876843   9.054615   9.229821  14.714345   100
 gendesign(8) 100.170001 102.489826 106.144955 108.703397 121.015079   100 
The algorithm gave 8 designs for 8 treatments and 192 for 10 treatments, the same as previously.
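For reference, those counts can be reproduced with the gendesign() defined in the Code section below; the second call takes noticeably longer.
length(gendesign(8))    # 8 designs, per the text above
length(gendesign(10))   # 192 designs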

Symmetrical

I am also looking for a smart way to tackle designs for an odd number of treatments. There is a nine-treatment solution, and I want to be able to find it. On a whim, I decided to check whether symmetrical designs provide the solution. They do not. However, this approach can also be used for an even number of treatments. Again there are 8 designs for 8 treatments. Four of these have the horizontal semi-reflection, four do not. Hence all 12 designs which I found previously by brute force are covered. With a run time of 13 seconds for 8 treatments, I did not check the number of solutions for 10 treatments.
Note 1: I do know that 'symmetrical' has no meaning in a Williams design. I only use the designs with rows as subjects, columns as periods/rounds and numbers as treatments, and I do randomize subjects over rows and treatments over numbers. Symmetry is thus just a means to an end; a sketch of the randomization follows Note 2.
Note 2: The 9-treatment design? It was found via group theory. Apn gave a link: Some New Row-Complete Latin Squares, by Archdeacon, Dinitz, Stinson and Tillson.
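As a rough sketch of the randomization mentioned in Note 1 (my own illustration, not part of the design generation): subjects are assigned to rows in random order and treatments are assigned to the numbers by a random permutation; randomize_design() is a hypothetical helper name.
randomize_design <- function(des, treatments = LETTERS[1:ncol(des)]) {
  des <- des[sample(nrow(des)), ]          # randomize subjects over rows
  relabel <- sample(treatments)            # randomize treatments over numbers
  matrix(relabel[des], nrow = nrow(des))   # map numbers 1..n to treatment labels
}
# e.g. randomize_design(gendesign(6)[[1]])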

Code

The two sets of code are quite similar, which is only logical since one is derived from the other. Each section should run on its own as displayed here.

Horizontal semi-reflection

nextpos <- function(desmat) which(desmat==0,arr.ind=TRUE)

gendesign <- function(n=6) {
  nr <- as.integer(n)
  nc <- nr
  
  # pre-fill the border: first row/column ascending, last row/column descending
  desmat <- matrix(0L,nrow=nr,ncol=nc)
  desmat[1,] <- 1L:nc
  desmat[,1] <- 1L:nr
  desmat[n,] <- nc:1L
  desmat[,n] <- nr:1L
  desobject <- list(desmat=desmat,nc=1L:n,n=n,np=n+1L)
  desresult <- list()
  addpoint(desobject,desresult)
}

modify <- function(desobject,row,col,i) {
  # place i at [row,col]; the horizontal semi-reflection puts np-i at [row,np-col],
  # and since row np-row is this row reversed, [np-row,col] and [np-row,np-col] follow
  desobject$desmat[desobject$np-row,desobject$np-col] <- 
      desobject$desmat[row,col] <- i
  desobject$desmat[desobject$np-row,col] <- 
      desobject$desmat[row,desobject$np-col] <- desobject$np-i
  desobject}

addpoint <- function(desobject,desresult) {
  if (!cocheck(desobject)) return(desresult)
  rc <- nextpos(desobject$desmat)
  if (length(rc)==0) {
    # design complete; keep it only when all column sums are equal
    if (var(colSums(desobject$desmat))==0) {
      l <- length(desresult)
      desresult[[l+1]] <- desobject$desmat
    }
    return(desresult)
  }
  row <- rc[1,1]
  col <- rc[1,2]
  avoid <- c(desobject$desmat[row,],
      desobject$desmat[,col])
  nc <- desobject$nc[!is.element(desobject$nc,avoid) ]
  for (i in nc) {
       desresult <- addpoint(modify(desobject,row,col,i)
       ,desresult)
  }
  desresult
}

cocheck <- function(desobject) {
  # carry-over check: encode each ordered pair of successive treatments within a row
  # as a single number and require every occurring pair to occur only once
  dm2 <- desobject$desmat*desobject$n
  dm2 <- dm2[,-1]
  dm3 <- desobject$desmat[,-desobject$n]
  dm4 <- dm2+dm3
  dm4 <- c(dm4[dm2!=0 & dm3!=0])
  all(table(dm4)==1)
}

Symmetrical

nextpos <- function(desmat) which(desmat==0,arr.ind=TRUE)

gendesign <- function(n=5) {
  nr <- as.integer(n)
  nc <- nr
  desmat <- matrix(0L,nrow=nr,ncol=nc)
  desmat[1,] <- 1L:nc
  desmat[,1] <- 1L:nr
  desobject <- list(desmat=desmat,nc=1L:n,n=n,np=n+1L)
  desresult <- list()
  addpoint(desobject,desresult)
}

modify <- function(desobject,row,col,i) {
  # fill symmetrically about the main diagonal: [col,row] gets the same value as [row,col]
  desobject$desmat[col,row] <- 
      desobject$desmat[row,col] <- i
  desobject}

addpoint <- function(desobject,desresult) {
  if (!cocheck(desobject)) return(desresult)
  rc <- nextpos(desobject$desmat)
  if (length(rc)==0) {
    if (var(colSums(desobject$desmat))==0) {
      l <- length(desresult)
      desresult[[l+1]] <- desobject$desmat
    }
    return(desresult)
  }
  row <- rc[1,1]
  col <- rc[1,2]
  avoid <- c(desobject$desmat[row,],
      desobject$desmat[,col])
  nc <- desobject$nc[!is.element(desobject$nc,avoid) ]
  for (i in nc) {
    desresult <- addpoint(modify(desobject,row,col,i)
        ,desresult)
  }
  desresult
}

cocheck <- function(desobject) {
  dm2 <- desobject$desmat[,-1]*desobject$n
  dm3 <- desobject$desmat[,-desobject$n]
  dm4 <- dm2+dm3
  dm4 <- c(dm4[dm2!=0 & dm3!=0])  
  all(table(dm4)==1)
}
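The 'symmetrical' here means symmetric about the main diagonal: modify() always fills cell [col,row] together with [row,col]. A small check of the output along these lines should confirm that, and, per the text above, that four of the eight 8-treatment designs also carry the horizontal semi-reflection; this is a sketch of my own, not part of the generation code.
res <- gendesign(8)
# each design equals its transpose
all(sapply(res, function(d) identical(d, t(d))))
# count the designs with the horizontal semi-reflection (expected: 4)
sum(sapply(res, function(d) all(d + d[, ncol(d):1] == ncol(d) + 1)))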

Saturday, October 19, 2013

Carry-over balanced designs for 8 treatments

Those are Williams designs, you might say, but it has become clear to me that Williams designs are just a subset of all carry-over balanced designs. This is not through hard work of mine; comments by Apn on my previous post Creating Williams designs with even number of products led to this. Among other things, Apn claimed there were designs which did not conform to the well-known symmetry that each row is the reversal of another row. Related to that was a claim that there is a nine-treatment square carry-over balanced design. I made my algorithm faster, and in this post I can confirm the former, but the latter is too long a calculation for the algorithm I used.

Designs

There are 12 designs, as given below. Numbers 4 and 5 can be obtained from each other by a suitable recoding of the treatments. The last four are the ones without the row-reversal symmetry; a small check after the listing can verify this.
[[1]]
     [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
[1,]    1    2    3    4    5    6    7    8
[2,]    2    4    1    6    3    8    5    7
[3,]    3    1    5    2    7    4    8    6
[4,]    4    6    2    8    1    7    3    5
[5,]    5    3    7    1    8    2    6    4
[6,]    6    8    4    7    2    5    1    3
[7,]    7    5    8    3    6    1    4    2
[8,]    8    7    6    5    4    3    2    1

[[2]]
     [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
[1,]    1    2    3    4    5    6    7    8
[2,]    2    4    8    3    6    1    5    7
[3,]    3    1    4    7    2    5    8    6
[4,]    4    6    2    8    1    7    3    5
[5,]    5    3    7    1    8    2    6    4
[6,]    6    8    5    2    7    4    1    3
[7,]    7    5    1    6    3    8    4    2
[8,]    8    7    6    5    4    3    2    1

[[3]]
     [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
[1,]    1    2    3    4    5    6    7    8
[2,]    2    4    1    3    6    8    5    7
[3,]    3    8    4    7    2    5    1    6
[4,]    4    6    2    8    1    7    3    5
[5,]    5    3    7    1    8    2    6    4
[6,]    6    1    5    2    7    4    8    3
[7,]    7    5    8    6    3    1    4    2
[8,]    8    7    6    5    4    3    2    1

[[4]]
     [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
[1,]    1    2    3    4    5    6    7    8
[2,]    2    4    8    6    3    1    5    7
[3,]    3    8    5    2    7    4    1    6
[4,]    4    6    2    8    1    7    3    5
[5,]    5    3    7    1    8    2    6    4
[6,]    6    1    4    7    2    5    8    3
[7,]    7    5    1    3    6    8    4    2
[8,]    8    7    6    5    4    3    2    1

[[5]]
     [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
[1,]    1    2    3    4    5    6    7    8
[2,]    2    5    1    6    3    8    4    7
[3,]    3    1    4    2    7    5    8    6
[4,]    4    6    2    8    1    7    3    5
[5,]    5    3    7    1    8    2    6    4
[6,]    6    8    5    7    2    4    1    3
[7,]    7    4    8    3    6    1    5    2
[8,]    8    7    6    5    4    3    2    1

[[6]]
     [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
[1,]    1    2    3    4    5    6    7    8
[2,]    2    5    8    3    6    1    4    7
[3,]    3    1    5    7    2    4    8    6
[4,]    4    6    2    8    1    7    3    5
[5,]    5    3    7    1    8    2    6    4
[6,]    6    8    4    2    7    5    1    3
[7,]    7    4    1    6    3    8    5    2
[8,]    8    7    6    5    4    3    2    1

[[7]]
     [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
[1,]    1    2    3    4    5    6    7    8
[2,]    2    5    1    3    6    8    4    7
[3,]    3    8    5    7    2    4    1    6
[4,]    4    6    2    8    1    7    3    5
[5,]    5    3    7    1    8    2    6    4
[6,]    6    1    4    2    7    5    8    3
[7,]    7    4    8    6    3    1    5    2
[8,]    8    7    6    5    4    3    2    1

[[8]]
     [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
[1,]    1    2    3    4    5    6    7    8
[2,]    2    5    8    6    3    1    4    7
[3,]    3    8    4    2    7    5    1    6
[4,]    4    6    2    8    1    7    3    5
[5,]    5    3    7    1    8    2    6    4
[6,]    6    1    5    7    2    4    8    3
[7,]    7    4    1    3    6    8    5    2
[8,]    8    7    6    5    4    3    2    1

[[9]]
     [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
[1,]    1    2    3    4    5    6    7    8
[2,]    2    7    1    8    3    5    4    6
[3,]    3    1    5    7    6    8    2    4
[4,]    4    8    7    5    2    1    6    3
[5,]    5    3    6    2    8    4    1    7
[6,]    6    5    8    1    4    7    3    2
[7,]    7    4    2    6    1    3    8    5
[8,]    8    6    4    3    7    2    5    1

[[10]]
     [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
[1,]    1    2    3    4    5    6    7    8
[2,]    2    7    6    3    8    1    5    4
[3,]    3    6    8    5    2    4    1    7
[4,]    4    3    5    7    1    8    6    2
[5,]    5    8    2    1    3    7    4    6
[6,]    6    1    4    8    7    3    2    5
[7,]    7    5    1    6    4    2    8    3
[8,]    8    4    7    2    6    5    3    1

[[11]]
     [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
[1,]    1    2    3    4    5    6    7    8
[2,]    2    8    5    1    7    3    6    4
[3,]    3    5    4    6    1    8    2    7
[4,]    4    1    6    8    3    7    5    2
[5,]    5    7    1    3    2    4    8    6
[6,]    6    3    8    7    4    2    1    5
[7,]    7    6    2    5    8    1    4    3
[8,]    8    4    7    2    6    5    3    1

[[12]]
     [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
[1,]    1    2    3    4    5    6    7    8
[2,]    2    8    5    7    4    1    3    6
[3,]    3    5    2    6    8    7    1    4
[4,]    4    7    6    2    1    5    8    3
[5,]    5    4    8    1    6    3    2    7
[6,]    6    1    7    5    3    8    4    2
[7,]    7    3    1    8    2    4    6    5
[8,]    8    6    4    3    7    2    5    1
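To see which of the listed designs have the row-reversal symmetry (each row also present reversed), a check along these lines can be used; `designs` is assumed to hold the above list of 12 matrices as returned by gendesign(8).
has_row_reversal <- sapply(designs, function(d) {
  n <- ncol(d)
  all(apply(d[, n:1], 1, function(rw) any(apply(d, 1, function(r) all(r == rw)))))
})
which(!has_row_reversal)   # per the text, this should be the last four designs (9 to 12)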

Calculation speed

Up to 5 treatments, 100 repeats are done in the blink of an eye. For 6 it is doable (0.025 sec per run). For 7 treatments I did 10 repeats (1.6 sec per run), while for 8 a single run took almost 800 sec. Clearly 9 treatments is too much.
> microbenchmark(gendesign(3),gendesign(4),gendesign(5),gendesign(6))
Note: no visible global function definition for 'error'
Unit: microseconds
         expr       min         lq    median         uq       max neval
 gendesign(3)   107.029   113.2605   119.492   125.3565   240.449   100
 gendesign(4)   343.081   355.5430   369.472   405.7585   544.677   100
 gendesign(5)  1141.403  1163.0290  1202.615  1258.6955  1472.755   100
 gendesign(6) 24753.851 25352.7755 25595.058 26917.8980 35721.433   100
> system.time(gendesign(7))
   user  system elapsed
   1.64    0.00    1.67
> microbenchmark(gendesign(7),times=10L)
Unit: seconds
         expr      min       lq   median       uq      max neval
 gendesign(7) 1.624484 1.647051 1.657468 1.669608 1.768613    10
> system.time(gendesign(8))
   user  system elapsed
 793.72    0.17  805.82

Code

Code is below. Enabling the JIT compiler saves half of the run time. It is basically a simplified version of my old algorithm, with a lot of small modifications for speed: avoiding loops and ifs, plus a load of trial and error.
library(microbenchmark)

nextpos <- function(desmat) which(desmat==0,arr.ind=TRUE)

gendesign <- function(n=6) {
  nr <- as.integer(n)
  nc <- nr
  
  desmat <- matrix(0L,nrow=nr,ncol=nc)
  desmat[1,] <- 1L:nc
  desmat[,1] <- 1L:nr
  
  # carover[i,j] stays TRUE while the carry-over pair i followed by j is unused;
  # the pairs from the pre-filled first row are marked as used
  carover <- matrix(TRUE,nrow=nr,ncol=nc)
  for (i in 1L:(nc-1L))  carover[i,i+1] <- FALSE
  todo <- nextpos(desmat)
  
  desobject <- list(desmat=desmat,carover = carover,nc=1L:n,n=n,
      index =1L,npos=nrow(todo),
      row=todo[,1L],
      col=todo[,2L])
  desresult <- list()
  addpoint(desobject,desresult)
}

modify <- function(desobject,row,col,i,previous) {
  # place treatment i and mark the carry-over pair previous -> i as used
  desobject$desmat[row,col] <- i
  desobject$carover[previous,i] <- FALSE
  desobject$index <- desobject$index + 1L
  desobject}

addpoint <- function(desobject,desresult) {
  if (desobject$index>desobject$npos) {
    l <- length(desresult)
    desresult[[l+1]] <- desobject$desmat
  #  cat('#')
    return(desresult)
  } 
  row <- desobject$row[desobject$index]
  col <- desobject$col[desobject$index]
  previous <- desobject$desmat[row,col-1L]
  avoid <- c(desobject$desmat[row,],
      desobject$desmat[,col])
  nc <- desobject$nc[!is.element(desobject$nc,avoid) ]
  nc <- nc[desobject$carover[previous,nc]]
  for (i in nc) {
       desresult <- addpoint(modify(desobject,row,col,i,previous)
       ,desresult)
  }
  desresult
}
library(compiler)
enableJIT(3)
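For completeness, a small verification sketch of my own (not part of the algorithm above) of what carry-over balance means for the output: within the rows of each square, every ordered pair of treatments occurs exactly once in adjacent columns.
check_carryover <- function(d) {
  n <- ncol(d)
  pairs <- paste(d[, -n], d[, -1])   # (previous, current) pairs from adjacent columns
  all(table(pairs) == 1)             # n*(n-1) slots, so this forces every pair exactly once
}
# all(sapply(gendesign(8), check_carryover))   # should be TRUE for all 12 designs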

Sunday, October 13, 2013

Prices of houses in the Netherlands

Over the last couple of days I have read a number of times about stabilization in house prices, which had been dropping due to the crisis. You get hit by numbers such as the change against Q2 2013 or Q3 2012, accompanied by reasons why this or that quarter may be special, so the changes may be off. To be honest, after years of crisis I for one have no clue how prices have changed overall. Hence this bit of good news was a good reason to grab some data and look for myself.

Data

The data is obtained from the website of NVM, the largest Dutch organization of real estate agents. Their website contains loads of reports and data on the housing market, most of them in Dutch. Since I wanted to analyze data I started by grabbing historical data: house prices since 1985. Note that the good news was that the number of transactions seemed to increase, not the prices.

The regions are chosen because they reflect some specific properties. These descriptions are purely my own.
Amsterdam is the financial heart and may be sensitive to the financial markets.
Den Haag (The Hague) is the government and related institutions.
Rotterdam is the harbor, with logistics, and may be sensitive to neighbor Germany.
Waterland is a typical location if you work in Amsterdam but don't want to live there.
Almere is another typical location to work in Amsterdam from; a new growth city that has almost doubled in size since 1995.
Zuidwest Drenthe is the countryside.
Ede eo is food valley; the more high-tech food companies may be found there.
Zeeuwse eilanden is more tourism and fishing.
Eindhoven is a region with more high-tech industries.

The historical data is one .pdf per NVM region, covering 1985-2012. These are somewhat strange files: I was unable to copy-paste the second page, at least not into LibreOffice Calc. Plan B was to convert via pdftotext, which can be found as part of xpdf. The resulting .txt file is no beauty either, with loads of data on one line, separated by some keywords, and then data every other line.
Data for 2013 is not historical. Q3 is in a small report per region (the new data), which luckily also contains Q2. Q1 was extracted by getting the same report for Q2. It seemed easiest to manually put these in a table and then read them into R.
The code for reading the data is in the appendix.

Analysis

Raw data 

The first plot contains all data, on a logarithmic scale for price. From this plot it seems prices have increased enormously, from just over €50000 to €250000 in 30 years. It also seems there is a linear phase (hence exponential growth) from, say, 1985 to 1995, followed by an even faster increase until 2002, which flattened until 2007, after which the decrease set in. A long-term interpretation might be that in the last couple of years prices corrected after the crazy times around the turn of the century.

library(ggplot2)
p <- ggplot(all, aes(x=time, y=price))
p + geom_line() +
    scale_y_log10(limits=c(50,300),breaks=seq(50,300,by=50)) + 
    scale_x_continuous(breaks=seq(1990,2010,by=10),'Year') +
    facet_wrap(~region)

Smoothed data

I love my smoothers, so using them is almost second nature. I also took the opportunity to restrict myself to the years 2000 to 2013 and to move to a linear scale. These plots do suggest that Amsterdam, Den Haag and Eindhoven have found their way up again, but there are just as many regions which keep on dropping in price.
p + stat_smooth(method='loess',span=.2) +
    scale_y_continuous(limits=c(130,300),
        'Price (thousands Euros)') + 
    scale_x_continuous(limits=c(2000,2013.8),'Year') +
    facet_wrap(~region)

Forecasting

I have seen many a blog using the forecast package, all the more reason to try it. It was easy to use. However, I have some fears about anything 'auto': it hides a lot. Further analysis (not shown) showed many competing models, and models which could not be estimated. Some of these models are close in AICc, yet predict differently. However, my time series training is a bit too long ago to really believe I can do better within a reasonable time. The good thing about 'auto' is that my personal desire to obtain certain effects is also out of the picture.
The plots show an increase in Den Haag, and maybe in Amsterdam. However, the intervals are rather wide.

library(forecast)
par(mfrow=c(3,4),mar=c(1, 2.5, 2.5, .1) + 0.1)
sapply(unique(all$regionnum),function(x) {
      regio <- all[all$regionnum==x,]
      regio <- regio[order(regio$time),]
      top <-ts(regio$price,start=c(min(regio$year),1),
          end=c(max(regio$year),3),
          deltat=0.25)
      fit <- auto.arima(top,stepwise=FALSE,approximation=FALSE)
      LH.pred <- forecast(fit,h=4)   # forecast() takes the horizon as h, not n.ahead
      plot(LH.pred,xlim=c(2000,2015),
          ylim=c(min(min(LH.pred$lower),min(regio$price[regio$year>2000])),
              max(max(LH.pred$upper),max(regio$price[regio$year>2000]))),
          main=regio$region[1])
    })
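One way to inspect the competing models mentioned above (my addition, not part of the original analysis) is to let auto.arima() print its search: with trace=TRUE every candidate model is listed with its AICc. For a single region, something like this:
regio <- all[all$regionnum == unique(all$regionnum)[1],]
regio <- regio[order(regio$time),]
top <- ts(regio$price, start=c(min(regio$year),1), end=c(max(regio$year),3), deltat=0.25)
auto.arima(top, stepwise=FALSE, approximation=FALSE, trace=TRUE)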

Conclusion

The prices indicate that we may indeed be close to the bottom. We are getting close to the longer-term trend. However, just as prices swung above the long-term trend, they might swing under it. We do see some prices bottoming out. Unfortunately, knowing for sure whether Q3 2013 was the point where it stopped getting worse may take a year or more, by which time the answer is not of much practical importance anymore.

Appendix: reading data

vect <- function(x)   unlist(strsplit(x,' '))
dd <- dir(pattern='nvm.*pdf')
la <- lapply(dd[1:9], function(x) {
      #http://www.foolabs.com/xpdf/
      system(paste('pdftotext',x))
      lin1 <- readLines(  gsub('pdf','txt',x))
      region <- lin1[3]
      # Dutch labels in the .txt: 'jaar' = year, 'zuiver kwartaal' = plain quarter
      # figure, 'voortschrijdend kwartaal' = moving (rolling) quarter figure
      blok <- grep('^jaar',lin1,value=TRUE)
      jaar <- substr(blok,regexpr('jaar',blok)+5,regexpr('zuiver',blok)-2)
      blok <- substring(blok,regexpr('zuiver kwartaal',blok)+16)
      zk1 <- substr(blok,1,regexpr('voortschrijdend',blok)-2)
      blok <- substring(blok,regexpr('kwartaal',blok)+9)
      vk1 <- substr(blok,1,regexpr('jaar',blok)-2)
      zk2 <- substring(blok,regexpr('kwartaal',blok)+9)
      vk2 <- grep('^voortschrijdend kwartaal',lin1,value=TRUE)[1]
      vk2 <- substring(vk2,26,regexpr('jaar',vk2)-2)
      
      part1 <- data.frame(jaar=vect(jaar),
          vk1=vect(vk1),
          zk1=vect(zk1),
          vk2=vect(vk2),
          zk2=vect(zk2))
      
      blok <- grep('^periode',lin1,value=TRUE)
      jaar <- substr(blok,9,regexpr('zuiver',blok)-2)
      blok <- substring(blok,regexpr('zuiver kwartaal',blok)+16)
      zk1 <- substr(blok,1,regexpr('voortschrijdend',blok)-2)
      blok <- substring(blok,regexpr('kwartaal',blok)+9)
      vk1 <- substr(blok,1,regexpr('jaar',blok)-2)
      zk2 <- substring(blok,regexpr('kwartaal',blok)+9)
      vk2 <- grep('^voortschrijdend kwartaal',lin1,value=TRUE)[2]
      vk2 <- substring(vk2,26,regexpr('jaar',vk2)-2)
      part2 <- data.frame(jaar=vect(jaar),
          vk1=vect(vk1),
          zk1=vect(zk1),
          vk2=vect(vk2),
          zk2=vect(zk2))
      parts <- rbind(part1,part2)
      parts$region <- region
      parts
    })

# extracting data for Netherlands from region data
nl <- data.frame(
    jaar=la[[1]]$jaar,
    zk1=la[[1]]$zk2,
    region='Regio 77 Netherlands'
)
# putting together phase 1 

la2 <- lapply(la,function(x) x[,c(1,3,6)]) 
la2[[10]] <- nl
all <- do.call(rbind,la2)

# current data
new <- read.table(textConnection('
"Regio 12 Zuidwest-Drenthe" 184 189 194
"Regio 31 Waterland" 190 209 201
"Regio 34 Amsterdam" 219 224 228
"Regio 37 Almere" 168 171 161
"Regio 46 Den Haag" 213 215 202
"Regio 49 Rotterdam" 172 181 169
"Regio 55 Ede eo" 214 206 224
"Regio 65 Zeeuwse Eilanden" 184 182 184
"Regio 71 Eindhoven eo" 204 211 210
"Regio 77 Netherlands" 205 207 205
'), col.names=c('region','13-01','13-02','13-03')
)

new <- reshape(new,
    varying=list(names(new[-1])),
    v.names='price',
    timevar='jaar', 
    idvar='region',
    times=c('13-1','13-2','13-3'),
    direction='long',
)
#combining 2
all$price <- as.numeric(sub(',','.',as.character(all$zk1)))
all <- subset(all,select=-zk1)
all <- rbind(all,new)
# variable coding & conversion
all$year <- as.numeric(substr(all$jaar,1,2))
all$year <- ifelse(all$year<50,all$year+2000,all$year+1900)
all$Quarter <- substr(all$jaar,4,4)
all$time <- all$year+as.numeric(all$Quarter)/4-1/8
all$region <- factor(all$region)
all$regionnum <- as.numeric(substr(all$region,7,9))
levels(all$region) <- substr(levels(all$region),10,100)

Sunday, October 6, 2013

Influence Analysis for Repeated Measures Data

I am trying exercise 59.8 (page 5057) of the SAS/STAT User's Guide 12.3 in R. The interesting thing is that influence is investigated at the subject level rather than at the individual observation level. The diagnostics in nlme do not do leave-subject-out, at least not that I know of. MCMCglmm hardly has any diagnostics. This does not mean no validation is possible; this is R, where programming is not optional but rather expected. Hence with a little bit of work it is possible to estimate PRESS, Cook's D and the effects on the fixed effects. From this it follows that extensive validation is possible, provided we can extract the underlying variables from the model fit object.

Data

The data is the same as in exercise 59.2 (exercise in R).
r1 <- read.table(textConnection('
1 F 21.0 20.0 21.5 23.0
2 F 21.0 21.5 24.0 25.5
3 F 20.5 24.0 24.5 26.0
4 F 23.5 24.5 25.0 26.5
5 F 21.5 23.0 22.5 23.5
6 F 20.0 21.0 21.0 22.5
7 F 21.5 22.5 23.0 25.0
8 F 23.0 23.0 23.5 24.0
9 F 20.0 21.0 22.0 21.5
10 F 16.5 19.0 19.0 19.5
11 F 24.5 25.0 28.0 28.0
12 M 26.0 25.0 29.0 31.0
13 M 21.5 22.5 23.0 26.5
14 M 23.0 22.5 24.0 27.5
15 M 25.5 27.5 26.5 27.0
16 M 20.0 23.5 22.5 26.0
17 M 24.5 25.5 27.0 28.5
18 M 22.0 22.0 24.5 26.5
19 M 24.0 21.5 24.5 25.5
20 M 23.0 20.5 31.0 26.0
21 M 27.5 28.0 31.0 31.5
22 M 23.0 23.0 23.5 25.0
23 M 21.5 23.5 24.0 28.0
24 M 17.0 24.5 26.0 29.5
25 M 22.5 25.5 25.5 26.0
26 M 23.0 24.5 26.0 30.0
27 M 22.0 21.5 23.5 25.0
'),col.names=c('Person','Gender','Age8','Age10','Age12','Age14'),
colClasses=c('factor','factor',rep('numeric',4)))
rm <- reshape(r1,direction='long',
    varying=list(c('Age8','Age10','Age12','Age14')),
    timevar='Age',idvar=c('Person','Gender'),
    v.names='y',
    times=c(8,10,12,14))
rm$Gender <- relevel(rm$Gender,ref='M')
rm$fage=factor(rm$Age)
rm$Person <- factor(rm$Person,levels=format(1:27,trim=TRUE))
rm <- rm[order(rm$Person,rm$Age),]

nlme

The analysis and standard plot are not too difficult.
library(nlme)
lSymm <- lme(y ~  Age * Gender,
    data=rm, random= list(Person =pdSymm(~ fage-1)),method='ML')

plot(lSymm, resid(., type = "p") ~ Age | Person)

Subject level influence

I have chosen to display three items: PRESS, Cook's D and the effect on the fixed parameters. PRESS is reasonably straightforward; the numbers more or less match. Cook's D, on the other hand, is not. I took the formula from the SAS/STAT guide, which calculates it, in my wording, as the Mahalanobis distance of the leave-subject-out fixed parameters from the full-data estimates. However, the numbers don't match. Part of that may be that I recalculate the random parameters too. Compare my figure with Output 59.8.3 top left; it seems quite similar.
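To make the formula concrete: Cook's D is computed below as (b - b_(i))' V^-1 (b - b_(i)) / p, with b the full-data fixed effects, b_(i) the leave-subject-out estimates, V = vcov(b) and p = 4 fixed effects. mahalanobis() returns exactly that quadratic form; the numbers in this toy illustration are made up.
b  <- c(16.3, 0.78, 1.0, -0.30)          # hypothetical full-data fixed effects
bi <- c(16.1, 0.80, 1.2, -0.28)          # hypothetical leave-subject-out estimates
V  <- diag(c(0.60, 0.004, 0.95, 0.006))  # hypothetical vcov of the fixed effects
mahalanobis(bi, b, V)/4
# identical value via the explicit quadratic form
drop(t(bi - b) %*% solve(V) %*% (bi - b))/4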
coefFulllme <- as.numeric(coef(lSymm)[1,1:4])
VCMlme <- vcov(lSymm)
lSymmLSO <- sapply(levels(rm$Person), function(x) {
      rloo <- rm[rm$Person !=x,]
      lSymm <- lme(y ~  Age * Gender,
          data=rloo, random= list(Person =pdSymm(~ fage-1)),
          method='ML')
      coef <- as.numeric(coef(lSymm)[1,1:4])
      genderF <- rm[rm$Person==x,'Gender'][1]=='F'
      pred <- coef[1]+
          c(8,10,12,14)*coef[2]+
          genderF*coef[3]+
          c(8,10,12,14)*coef[4]*genderF
      obs <- rm[rm$Person ==x,'y']
      CD=mahalanobis(coef,coefFulllme,VCMlme)/4
      c(press=sum((obs-pred)^2),cd=CD,coef)
    })
lSymmLSO <- t(lSymmLSO)
lSymmLSO[,c(1,2)]
       press          cd
1  10.323531 0.020169407
2   3.839932 0.043571109
3  10.899537 0.032741341
4  24.050823 0.045570019
5   1.690000 0.016239351
6  11.866688 0.016490548
7   1.191555 0.005405341
8   4.678438 0.027992197
9  13.488546 0.042166906
10 85.581754 0.155169369
11 68.465309 0.106829228
12 39.449016 0.022192784
13 12.920225 0.007427751
14  6.119853 0.001987303
15 26.126968 0.277545188
16 21.065821 0.018968667
17 10.050641 0.016763371
18  7.800734 0.007611981
19 15.196358 0.008490818
20 43.256052 0.395086129
21 96.107295 0.115497649
22 13.879678 0.033635096
23  4.961204 0.011979580
24 41.758892 0.905440147
25  4.643721 0.095851759
26  8.019698 0.026621476
27 19.920691 0.017942506
plot(y=lSymmLSO[,2],x=1:27,main="Cook's D",xlab='Subject',type='h')

Fixed-Effects Deletion Estimates

Having done all the pre-work, the fixed-effects deletion estimates are just a plot away. They do look slightly different from PROC MIXED, as the model in the SAS/STAT Guide is a bit different.
par(mfrow=c(2,2))
dummy <- sapply(1:4,function(x) {
  plot(y=lSymmLSO[,x+2],x=1:27,main=names(coef(lSymm))[x],ylab='',xlab='Subject')
  abline(h=coefFulllme[x])
})

MCMCglmm

The model is easy enough to fit, and the approach used for nlme is easy enough to convert. Only the first 5 subjects' results are shown for brevity.
library(MCMCglmm)
prior1 <- list(R=list(V=diag(4),nu=.01),
    G=list(G1=list(V=diag(1),nu=.01) ))

m1 <- MCMCglmm(y ~ Age* Gender , 
    random= ~ Person ,
    rcov=~ us(fage) :Person,
    data=rm,family='gaussian',
    nitt=500000,thin=20,burnin=50000,
    verbose=FALSE
 ,   prior=prior1
)

VCMM1 <- cov(m1$Sol)
coefFullM1 <- colMeans(m1$Sol)
m1LSO <- sapply(levels(rm$Person), function(x) {
      rloo <- rm[rm$Person !=x,]
      m1 <- MCMCglmm(y ~ Age* Gender , 
          random= ~ Person ,
          rcov=~ us(fage) :Person,
          data=rloo,family='gaussian',
     nitt=500000,thin=20,burnin=50000,
          verbose=FALSE
          ,   prior=prior1
      )
      coef <- colMeans(m1$Sol)
      genderF <- rm[rm$Person==x,'Gender'][1]=='F'
      pred <- coef[1]+
          c(8,10,12,14)*coef[2]+
          genderF*coef[3]+
          c(8,10,12,14)*coef[4]*genderF
      obs <- rm[rm$Person ==x,'y']
      CD=mahalanobis(coefFullM1,coef,VCMM1)/4
      c(press=sum((obs-pred)^2),cd=CD,coef)
    })
m1LSO <- t(m1LSO)
m1LSO[1:5,1:2]

      press         cd
1 10.133019 0.01117458
2  3.835048 0.03347745
3 10.891120 0.02740768
4 24.167881 0.03618814
5  1.694508 0.01317537
For both PRESS and Cook's D the result is close. For completeness, the plot. Not exactly the same, but the trend and interpretation are clear enough.
plot(y=m1LSO[,2]*4,x=1:27,main="Cook's D",xlab='Subject',type='h')

Fixed-Effects Deletion Estimates

Following the template from nlme, the result is similar. 
par(mfrow=c(2,2))
dummy <- sapply(1:4,function(x) {
      plot(y=m1LSO[,x+2],x=1:27,main=names(coefFullM1)[x],ylab='',xlab='Subject')
      abline(h=coefFullM1[x])
    })