Sunday, September 28, 2014

Bayesian models in R

There are many ways to run general Bayesian calculations in or from R. The best known are JAGS, OpenBUGS and STAN. Some time ago Rasmus Bååth had a post, Three ways to run Bayesian models in R, in which he mentioned LaplacesDemon (not on CRAN) on top of those. A check of the Bayes task view adds 'MCMCpack (...) contains a generic Metropolis sampler that can be used to fit arbitrary models', 'The mcmc package consists of an R function for a random-walk Metropolis algorithm for a continuous random vector' and 'Note that rcppbugs is a package that attempts to provide a pure R alternative to using OpenBUGS/WinBUGS/JAGS for MCMC'.
Clearly there is a wealth of approaches, and each will have its strengths and weaknesses. To get an idea of how they compare I decided to run a number of calculations through all of them. The source of these calculations will be the fine SAS manual, where PROC MCMC has 21 examples. The first example, which I will try this week, is drawing samples from a number of distributions; from these I selected the mixture of three normal distributions.

Task

Draw samples from a mixture of three normal distributions. Let me start by stating that I would not use anything as complex as an MCMC sampler for this task; my personal SAS solution would be:
data p1;
    do i=1 to 1000;
        s=rand('table', .3, .4); /* draws 1, 2 or 3; the third probability is implied */
        select (s);
            when (1) r=rand('normal', -3, 2);
            when (2) r=rand('normal', 2, 1);
            when (3) r=rand('normal', 10, 4);
        end;
        output;
    end;
    keep r;
run;

proc sgplot data=p1;
    density r / type=kernel (c=.5);
run;
The equivalent R code:
library(plyr)
library(dplyr)
nu <- c(-3,2,10)   # means of the three normals
sd <- c(2,1,4)     # standard deviations
tau <- sd^-2       # precisions, for JAGS later on
wgt <- c(.3,.4,.3) # mixture weights

sampls <- sample(1:3,1000,replace=TRUE,prob=wgt)
sampls <- rnorm(1000,nu[sampls],sd[sampls])
density(sampls) %>% plot
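As a quick check, the kernel density of the draws can be compared with the analytic mixture density (a small addition of mine):
plot(density(sampls))
curve(.3*dnorm(x,-3,2) + .4*dnorm(x,2,1) + .3*dnorm(x,10,4),
    add=TRUE, col='red')  # analytic density of the mixture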

MCMCpack

Just having a (log) density was enough for MCMCpack. The samples have quite a long autocorrelation, but they do follow the distribution (not shown).
library(MCMCpack)
# MCMCmetrop1R expects the log of the (unnormalized) density
mydens <- function(x,...) {
    step1 <- sum(wgt*dnorm(x,nu,sd))
    log(step1)
}
mcmcpack <- MCMCmetrop1R(mydens,theta.init=1)
acf(mcmcpack)
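To quantify that autocorrelation, the coda package (loaded with MCMCpack) can report the effective sample size, and thinning is one remedy; a small sketch of mine, with the thinning interval chosen by eye:
library(coda)
effectiveSize(mcmcpack)               # effective draws among the kept samples
mcmcthin <- window(mcmcpack,thin=25)  # thin to reduce the autocorrelation
acf(mcmcthin)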

mcmc

The mcmc package has the property that its result is not the samples themselves, but rather summary statistics (batch means) of the samples. Hence it does not directly give samples from the desired distribution.
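For completeness, a minimal metrop() call might look like this (my sketch; metrop() also takes a log unnormalized density, and its output component batch holds the batch means — with the default blen=1 each batch is a single iteration):
library(mcmc)
out <- metrop(mydens,initial=1,nbatch=10000)
plot(density(out$batch))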

JAGS

The JAGS samples had no autocorrelation and followed the distribution.
library(R2jags)
jmodel <- function() {
    i ~ dcat(wgt)
    nunu <- (i==1)*nu[1]+
        (i==2)*nu[2]+
        (i==3)*nu[3]
    tautau <- (i==1)*tau[1]+
        (i==2)*tau[2]+
        (i==3)*tau[3]
    p ~ dnorm(nunu,tautau)
}

datain <- list(nu=nu,tau=tau,wgt=wgt)
parameters <- c('i','p')

jj <- jags(model.file=jmodel,data=datain,parameters=parameters,DIC=FALSE)
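The draws can be pulled from the fit for checking; sims.list is where R2jags keeps them (my addition):
psamp <- as.vector(jj$BUGSoutput$sims.list$p)
plot(density(psamp))
acf(psamp)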

STAN

I did not manage to get STAN to give me the samples.
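In hindsight, one route that should work is to marginalize out the discrete component, since Stan has no discrete parameters. A sketch, untested by me, in the same pre-2.10 syntax as the models further down:
library(rstan)
smodel <- "
parameters {
    real p;
}
model {
    vector[3] lp;
    // log weight plus log density for each mixture component
    lp[1] <- log(.3) + normal_log(p,-3,2);
    lp[2] <- log(.4) + normal_log(p,2,1);
    lp[3] <- log(.3) + normal_log(p,10,4);
    increment_log_prob(log_sum_exp(lp));
}
"
sfit <- stan(model_code=smodel)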

rcppbugs

I did not manage to get rcppbugs to give me the samples.

LaplacesDemon

LaplacesDemon feels like the most complex of the direct R solutions.
library('LaplacesDemon')
mon.names <- "LP"
parm.names <- 'x'
MyData <- list(mon.names=mon.names, parm.names=parm.names)
N <- 1
# the model function returns the log posterior plus some bookkeeping
Model <- function(parm, Data)
{
    x <- parm[1]
    LL <- sum(wgt*dnorm(x,nu,sd)) %>% log
    Modelout <- list(LP=LL, Dev=-2*LL, Monitor=LL,
        yhat=x, parm=parm)
    return(Modelout)
}
Initial.Values <- 1
Fit <- LaplacesDemon(Model, Data=MyData, Initial.Values)
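The draws end up in the returned object; as I understand the package, Posterior1 holds all samples and Posterior2 those considered stationary:
plot(density(Fit$Posterior1[,1]))  # column 1 is parameter x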

Sunday, September 21, 2014

Trying dplyr on triathlon data

There was a triathlon in Almere last week, as there has been every year since 1983. I pulled the data of all years to get some idea of how things have changed in that sport. To get a visual I decided to plot the best 10% of athletes. Later I decided this was an ideal moment to look at plyr and dplyr again, so I rewrote everything using those tools. I must say I like them very much and intend to use them again.

Data

Data come from triathlon-uitslagen.nl, this year's website www.challenge-almere.com, and Wikipedia, which told me there was one lady in 1983. The data sit in a bunch of spreadsheets (.xls and .xlsx) with different columns, different levels of detail, and even those merged cells which make things look nice. Some years listed, besides the finishers, those who did not finish, broken down by the milestone they reached; some years had teams next to individuals. It was a bit too much variation to code around in R, so I did a first cleaning, removing spurious columns and rows, in LibreOffice and made relatively uniform .xls files.
During import R coded most times as date-times, some as character. The date-times had a date part (1899-12-31), so I converted the times to difftimes and retained the original as character. The actual year of the race is pulled from the spreadsheet name.
library(XLConnect)
library(plyr)
library(dplyr)
library(lattice)
setwd('C:\\Users\\Kees\\Documents\\r hobby\\almeretriatlon')
dd <- dir(pattern='.xls')

readyear <- function(filename) {
  print(filename)
  wb <- loadWorkbook(filename)
  rw <- readWorksheet(wb,sheet=1)
  names(rw) <- tolower(names(rw))
  names(rw)[names(rw)=='total'] <- 'time'
  mutate(rw,
      # keep a character version of the time and a difftime version
      timec=if (class(rw$time)[1]=='character') {
            time
          } else format(time,'%H:%M:%S') ,
      time=as.difftime(timec),
      # the year of the race is encoded in the file name
      year=as.numeric(gsub('[A-Za-z\\. ]','',filename))
  )
}
years0 <- lapply(dd,readyear)
The resulting data may have a gender variable and/or a cat (category) variable. The cat variable may be character and contain gender plus the age bracket as digits (or team, or PRO or ELI), but it can also contain only the age bracket and hence be numerical. Gender can be D, H, M, V or W, to be read as dame, heer, man, vrouw, woman respectively (female, male, male, female, female).
For plotting purposes a two-digit Year factor was added, running from 83 to 14.
triatlon <- ldply(years0,function(x) {
# first unify cat info & gender 
          x <- if ('cat' %in% names(x)) {
                if (class(x$cat)[1]=='character') {
                  mutate(x,
                      gender=substr(cat,1,1),
                      cat=substring(cat,2))
                } else  mutate(x,
                      cat=as.character(cat))
              } else mutate(x,cat=NA)
        }) %>% 
    # drop teams (T) and PRO (P) entries
    filter(.,!(gender %in% c('T','P'))) %>%
    mutate(.,
        timen=as.numeric(time) , 
        gender=factor(
            ifelse(gender %in% c('H','M'),
                'Male','Female')) ,
        # two-digit Year factor covering all years, 83 to 14
        Year=factor(substr(format(year),3,4),
            levels=substr(format(
                    seq(min(year),max(year))),3,4))) %>%
    select(.,
        gender,
        cat,
        year,
        Year,
        time,
        timec,
        timen)
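A quick sanity check that all years and both genders survived the reshaping (my addition):
group_by(triatlon,year,gender) %>%
    summarise(.,n=n())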

Plot 

This is actually where plyr/dplyr worked best to get nice clean code; it was as if they were made for this purpose: define groups, define the selection, and plot. The choice for lattice plots was made before I went to recode, and I did not feel like changing it.
The plot shows times steadily decreasing through the previous century and stabilizing around the turn of the century. 2006 was a European championship and 2008 a world championship, and these years do not stick out as being faster. Hence these times should be fairly representative of what can be achieved in the Netherlands. There is quite some year-to-year variation, but that can be attributed to weather and the tracks used.
group_by(triatlon,year,gender) %>% 
    filter(.,percent_rank(time)<0.1) %>%
    bwplot(timen ~ Year | gender,
        data=.,
        ylab='Time (h)',
        xlab='Year',
        horizontal=FALSE,
        layout=c(1,2,1),
        drop.unused.levels=FALSE,
        scales=list(cex=.7))

Sunday, September 14, 2014

Trying a prefmap

Preference mapping is a key technique in sensory and consumer research. It links the sensory perception of products to the liking of those products and hence provides clues for the development of new, good-tasting products. Even though it is a key technique, how to perform such an analysis well is a long-standing problem. In R the SensoMineR package provides a prefmap procedure. This post attempts to create such an analysis with Stan.

Data

Data are the cocktail data from the SensoMineR package. After conversion to a scale of 0 to 10, with 10 most liked, the product means are:
   means
1   5.03
2   3.02
3   5.42
4   3.55
5   5.67
6   5.74
7   3.84
8   3.75
9   4.17
10  4.26
11  3.20
12  3.88
13  5.98
14  3.95
15  6.47
16  4.90


Model

The model builds upon my post of last week: Mapping products in a space. What is added is a consumer section. Each consumer's preference is modeled as an ideal point where liking is maximal, with points further away liked less and less. In mathematical terms the distance-dependent function is maxliking * exp(-distance * scale). The ideal point is captured by three numbers: its liking and its two coordinates. The scale parameter is, for now, common to all consumers. In addition there is a lot of code for the administration of all parameters.
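As a plain-R illustration of this response function (my own sketch, not part of the model code; the maximum and scale values are arbitrary):
likingfun <- function(distance,maxliking,scale)
    maxliking*exp(-distance*scale)
curve(likingfun(x,maxliking=6,scale=1),from=0,to=4,
    xlab='distance to ideal point',ylab='liking')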
model1 <- "
data {
        int<lower=0> ndescriptor;
        int<lower=0> nproduct;
        matrix[nproduct,ndescriptor] profile;
    int<lower=0> nconsumer;
    matrix[nproduct,nconsumer] liking;
}
parameters {
    row_vector[nproduct] prodsp1;
    row_vector[nproduct] prodsp2;
    real<lower=0,upper=1> sigma1;
    real<lower=0,upper=1> sigma2;
    matrix [nconsumer,2] optim;
    vector <lower=0> [nconsumer] maxima;
    real <lower=0> scale;
    real <lower=0> sliking;
}
transformed parameters {
   vector [ndescriptor] descrsp1;
   vector [ndescriptor] descrsp2;
     matrix[nproduct,ndescriptor] expected1;  
     matrix[nproduct,ndescriptor] expected2;  
     matrix[nproduct,ndescriptor] residual1;  
     vector[nproduct] distances;
     matrix[nproduct,nconsumer] likepred;


   descrsp1 <- profile'*prodsp1';
   expected1 <- (descrsp1*prodsp1)';
   residual1 <- profile-expected1;
   descrsp2 <- profile'*prodsp2';
   expected2 <- (descrsp2*prodsp2)';
   for (i in 1:nconsumer) {
      for (r in 1:nproduct) {
      distances[r] <- sqrt(square(prodsp1[r]-optim[i,1])+
                           square(prodsp2[r]-optim[i,2]));
      likepred[r,i] <- maxima[i]*exp(-distances[r]*scale);
      }
   }
}
model {  
     for (r in 1:nproduct) {
        prodsp1[r] ~ normal(0,1);
        prodsp2[r] ~ normal(0,1);
        for (c in 1:ndescriptor) {
           profile[r,c] ~ normal(expected1[r,c],sigma1);
           residual1[r,c] ~ normal(expected2[r,c],sigma2);
        }
        for (i in 1:nconsumer) {
           liking[r,i] ~ normal(likepred[r,i],sliking);
           optim[i,1] ~ normal(0,2);
           optim[i,2] ~ normal(0,2);
        }
    // note: these prior statements sit inside the product loop,
    // so each is effectively applied nproduct times
    scale ~ normal(1,.1);
    maxima ~ normal(5,2);
    sliking ~ normal(2,1);
    }
}
generated quantities {
   vector [ndescriptor] descrspace1;
   vector [ndescriptor] descrspace2;
   row_vector [nproduct] prodspace1;
   row_vector [nproduct] prodspace2;
   matrix [nconsumer,2] optima;

   prodspace1 <-(
                     ((prodsp1[12]>0)*prodsp1)-
                     ((prodsp1[12]<0)*prodsp1)
                  );
   prodspace2 <-(
                     ((prodsp2[12]>0)*prodsp2)-
                     ((prodsp2[12]<0)*prodsp2)
                  ); 
   descrspace1 <-(
                     ((prodsp1[12]>0)*descrsp1)-
                     ((prodsp1[12]<0)*descrsp1)
                  );
   descrspace2 <-(
                     ((prodsp2[12]>0)*descrsp2)-
                     ((prodsp2[12]<0)*descrsp2)
                  ); 
   for (i in 1:nconsumer) {
      optima[i,1] <- (
                        ((prodsp1[12]>0)*optim[i,1])-
                        ((prodsp1[12]<0)*optim[i,1])
                     );
      optima[i,2] <- (
                        ((prodsp2[12]>0)*optim[i,2])-
                        ((prodsp2[12]<0)*optim[i,2])
                     );
   }
}
"

Analysis results

SensoMineR's result

For comparison, the plot resulting from SensoMineR's carto() function is shown. I have followed the parameter settings from the SensoMineR package to get this plot. Color is liking, numbered dots are products. The blue zone is best liked, as can be seen from the products with the highest means residing there.

New method

In the plot the blue dots are samples of ideal points, the bigger black numbers are the locations of products and the smaller red numbers are consumers' ideal points.
This is different from the SensoMineR map: the consumers have pulled well-liked products such as 13 and 15 to the center. In a way, I suspect that in this analysis the consumers' preference has overruled most of the information from the sensory space. Given that, I will be splitting the consumers.

Three groups of consumers

Three groups of consumers were created via k-means clustering. From a sensory and consumer insight point of view the clusters may describe three different ways to experience these particular products. Obviously a clustering upon demographics or marketing segments might be equally valid, but I don't have that information. The cluster sizes are 15, 52 and 33 respectively.

Cluster 1

This cluster is characterized by liking for products 8 to 11. Compared to the original space, this cluster does not like products 13 and 15 so much, and does not dislike products 4 and 12 so much.

Cluster 2

These are the bulk of the consumers, and the pattern of the all-consumer result is more pronounced here. However, product 1 has shifted quite a distance towards liked.

Cluster 3

This plot is again fairly similar to the all-consumer plot. What is noticeable here is that there is a void in the center: the center of the most liked region is not occupied.

Next Steps

There are still some things to improve in this approach: better tuning of the various priors in the model; modeling the range of each consumer's liking rather than solely its maximum; perhaps making the scale parameter subject-dependent; a better way to extract the dimensions from the sensory space, thereby avoiding the Jacobian warning and using estimated standard deviations of the sensory profiling data; and finally, improved graphics.

Code

# Reading and first map

# senso.cocktail
# hedo.cocktail
library(SensoMineR)
data(cocktail)
res.pca <- PCA(senso.cocktail,graph=FALSE)
# SensoMineR does a dev.new for each graph, hence captured like this.
dev.new <- function() png('carto.png')
res.carto <- carto(res.pca$ind$coord[,1:2],
    graph.tree=FALSE,
    graph.corr=FALSE,
    hedo.cocktail)
dev.off()
# reset default graph settings
rm(dev.new)
dev.new()

# model

library(rstan)
nprod <- 16
ndescr <- 13
nconsumer <- 100
sprofile <- as.matrix(scale(senso.cocktail))
datain <- list(
    nproduct=nprod,
    ndescriptor=ndescr,
    profile=sprofile,
    nconsumer=nconsumer,
    liking = as.matrix(10-hedo.cocktail[,1:nconsumer])
)
data.frame(means=rowMeans(10-hedo.cocktail))

model1 <- "
data {
        int<lower=0> ndescriptor;
        int<lower=0> nproduct;
        matrix[nproduct,ndescriptor] profile;
    int<lower=0> nconsumer;
    matrix[nproduct,nconsumer] liking;
}
parameters {
    row_vector[nproduct] prodsp1;
    row_vector[nproduct] prodsp2;
    real<lower=0,upper=1> sigma1;
    real<lower=0,upper=1> sigma2;
    matrix [nconsumer,2] optim;
    vector <lower=0> [nconsumer] maxima;
    real <lower=0> scale;
    real <lower=0> sliking;
}
transformed parameters {
   vector [ndescriptor] descrsp1;
   vector [ndescriptor] descrsp2;
     matrix[nproduct,ndescriptor] expected1;  
     matrix[nproduct,ndescriptor] expected2;  
     matrix[nproduct,ndescriptor] residual1;  
     vector[nproduct] distances;
     matrix[nproduct,nconsumer] likepred;


   descrsp1 <- profile'*prodsp1';
   expected1 <- (descrsp1*prodsp1)';
   residual1 <- profile-expected1;
   descrsp2 <- profile'*prodsp2';
   expected2 <- (descrsp2*prodsp2)';
   for (i in 1:nconsumer) {
      for (r in 1:nproduct) {
      distances[r] <- sqrt(square(prodsp1[r]-optim[i,1])+
                           square(prodsp2[r]-optim[i,2]));
      likepred[r,i] <- maxima[i]*exp(-distances[r]*scale);
      }
   }
}
model {  
     for (r in 1:nproduct) {
        prodsp1[r] ~ normal(0,1);
        prodsp2[r] ~ normal(0,1);
        for (c in 1:ndescriptor) {
           profile[r,c] ~ normal(expected1[r,c],sigma1);
           residual1[r,c] ~ normal(expected2[r,c],sigma2);
        }
        for (i in 1:nconsumer) {
           liking[r,i] ~ normal(likepred[r,i],sliking);
           optim[i,1] ~ normal(0,2);
           optim[i,2] ~ normal(0,2);
        }
    scale ~ normal(1,.1);
    maxima ~ normal(5,2);
    sliking ~ normal(2,1);
    }
}
generated quantities {
   vector [ndescriptor] descrspace1;
   vector [ndescriptor] descrspace2;
   row_vector [nproduct] prodspace1;
   row_vector [nproduct] prodspace2;
   matrix [nconsumer,2] optima;

   prodspace1 <-(
                     ((prodsp1[12]>0)*prodsp1)-
                     ((prodsp1[12]<0)*prodsp1)
                  );
   prodspace2 <-(
                     ((prodsp2[12]>0)*prodsp2)-
                     ((prodsp2[12]<0)*prodsp2)
                  ); 
   descrspace1 <-(
                     ((prodsp1[12]>0)*descrsp1)-
                     ((prodsp1[12]<0)*descrsp1)
                  );
   descrspace2 <-(
                     ((prodsp2[12]>0)*descrsp2)-
                     ((prodsp2[12]<0)*descrsp2)
                  ); 
   for (i in 1:nconsumer) {
      optima[i,1] <- (
                        ((prodsp1[12]>0)*optim[i,1])-
                        ((prodsp1[12]<0)*optim[i,1])
                     );
      optima[i,2] <- (
                        ((prodsp2[12]>0)*optim[i,2])-
                        ((prodsp2[12]<0)*optim[i,2])
                     );
   }
}
"

pars <- c('prodspace1','prodspace2','optima','scale','maxima')

fit <- stan(model_code = model1,
    data = datain,
    pars=pars)

# plotting

fitsamps <- as.data.frame(fit)

combiplot <- function(fitsamps,datain,labs) {
    prod <- reshape(fitsamps,
        drop=names(fitsamps)[33:ncol(fitsamps)],
        direction='long',
        varying=list(names(fitsamps)[1:16],
            names(fitsamps)[17:32]),
        timevar='sample',
        times=1:16,
        v.names=c('PDim1','PDim2')
    )
    sa <- sapply(1:16,function(x)
        c(sample=x,
            Dim1=mean(prod$PDim1[prod$sample==x]),
            Dim2=mean(prod$PDim2[prod$sample==x])))
    sa <- as.data.frame(t(sa))

    optimindex <- grep('optima',names(fitsamps))
    noptim <- datain$nconsumer
    loc <- reshape(fitsamps,
        drop=names(fitsamps)[(1:ncol(fitsamps))[-optimindex]],
        direction='long',
        varying=list(names(fitsamps)[optimindex[1:noptim]],
            names(fitsamps)[optimindex[(1:noptim)+noptim]]),
        timevar='subject',
        times=1:noptim,
        v.names=c('Dim1','Dim2')
    )
    locx <- loc[sample(nrow(loc),60000),]  # subsample the cloud for a lighter plot
    plot(locx$Dim1,locx$Dim2,
        col='blue',
        pch=46,
        cex=2,
        xlim=c(-1,1)*.7,
        ylim=c(-1,1)*.7)
    sa2 <- sapply(1:noptim,function(x)
            c(sample=x,
                Dim1=mean(loc$Dim1[loc$subject==x]),
                Dim2=mean(loc$Dim2[loc$subject==x])))
    sa2 <- as.data.frame(t(sa2))
    text(x=sa2$Dim1,y=sa2$Dim2,labels=labs,cex=.8,col='red')
    text(x=sa$Dim1,y=sa$Dim2,labels=sa$sample,cex=1.5)
    invisible(fitsamps)
}

combiplot(fitsamps,datain,1:100)

# three clusters

tlik <- t(scale(hedo.cocktail))
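# note (my addition): kmeans() depends on its random initialization;
# a set.seed() before this call would make the 15/52/33 split reproducible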
km <- kmeans(tlik,centers=3)
table(km$cluster)


datain1 <- list(
    nproduct=nprod,
    ndescriptor=ndescr,
    profile=sprofile,
    nconsumer=sum(km$cluster==1),
    liking = as.matrix(10-hedo.cocktail[,km$cluster==1])
)
fit1 <- stan(model_code = model1,
    data = datain1,
    fit=fit,
    pars=pars)

fitsamps1 <- as.data.frame(fit1)
#

datain2 <- list(
    nproduct=nprod,
    ndescriptor=ndescr,
    profile=sprofile,
    nconsumer=sum(km$cluster==2),
    liking = as.matrix(10-hedo.cocktail[,km$cluster==2])
)
fit2 <- stan(model_code = model1,
    data = datain2,
    fit=fit,
    pars=pars)

fitsamps2 <- as.data.frame(fit2)
##
datain3 <- list(
    nproduct=nprod,
    ndescriptor=ndescr,
    profile=sprofile,
    nconsumer=sum(km$cluster==3),
    liking = as.matrix(10-hedo.cocktail[,km$cluster==3])
)
fit3 <- stan(model_code = model1,
    data = datain3,
    fit=fit,
    pars=pars)

fitsamps3 <- as.data.frame(fit3)
combiplot(fitsamps1,datain1,which(km$cluster==1))
combiplot(fitsamps2,datain2,which(km$cluster==2))
combiplot(fitsamps3,datain3,which(km$cluster==3))

Sunday, September 7, 2014

Mapping products in a space

I have read about people doing a Bayesian PCA at some point and always wondered how that would work. Then, at some point, I thought of a way to do so. As the ideas evolved my interest became not PCA as such, but rather a prefmap. As a first step towards that, this post contains the mapping from a sensory space to a two-dimensional space. For a prefmap this step is commonly done via a PCA.

Data

Data are the cocktail data from the SensoMineR package.

Algorithm

The calculation is mostly inspired by the many PLS algorithms to which I was exposed when I was doing chemometrics. Scores and loadings may be obtained from each other by multiplying with the data matrix. In this case it means I just take a set of product scores and obtain the associated descriptor loadings via a simple matrix multiplication. The resulting product and descriptor vectors can be used to reconstruct the original matrix; the best solution minimizes the difference between the reconstructed and original data. For dimension two, subtract the reconstructed data from the original data and repeat on the residuals, as sketched below.
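In plain R the alternating step looks like this power-iteration (NIPALS-style) sketch of mine, which only illustrates the idea and is not the Stan model; it assumes the cocktail data are loaded as below:
X <- as.matrix(scale(senso.cocktail,scale=FALSE))
t1 <- rnorm(nrow(X))              # arbitrary starting product scores
for (i in 1:100) {
    p1 <- t(X) %*% t1 / sum(t1^2) # descriptor loadings from scores
    t1 <- X %*% p1 / sum(p1^2)    # product scores from loadings
}
Xhat <- t1 %*% t(p1)              # rank-one reconstruction
E <- X - Xhat                     # residuals: input for dimension two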

Scaling

PCA has the possibility to have unit-length scores or loadings, or R and Q mode if that is your favorite jargon. Viewed as a singular value decomposition, it is just a question of where the eigenvalues go. At this point I made the choice to put them in the variable space.

Unique solution

PCA is known not to have one unique solution; each solution is equivalent to its mirror image. It seemed most elegant to handle this completely at the end. After inspection of the data, the location of product 12 seemed suitable for making the solution unique, since it is extreme on both dimensions. The final step (generated quantities) forces its location into the top-right quadrant of the reported data.

Code

library(rstan)
nprod <- 16
ndescr <- 13
sprofile <- as.matrix(scale(senso.cocktail,scale=FALSE))
datain <- list(
    nproduct=nprod,
    ndescriptor=ndescr,
    profile=sprofile
    )
   
model1 <- "
data {
        int<lower=0> ndescriptor;
        int<lower=0> nproduct;
        matrix[nproduct,ndescriptor] profile;
}
parameters {
    row_vector[nproduct] prodsp1;
    row_vector[nproduct] prodsp2;
    real<lower=0,upper=1> sigma1;
    real<lower=0,upper=1> sigma2;
}
transformed parameters {
   vector [ndescriptor] descrsp1;
   vector [ndescriptor] descrsp2;
   matrix[nproduct,ndescriptor] expected1;  
   matrix[nproduct,ndescriptor] expected2;  
   matrix[nproduct,ndescriptor] residual1;  

   descrsp1 <- profile'*prodsp1';
   expected1 <- (descrsp1*prodsp1)';
   residual1 <- profile-expected1;
   descrsp2 <- profile'*prodsp2';
   expected2 <- (descrsp2*prodsp2)';
}
model {  
     for (r in 1:nproduct) {
        prodsp1[r] ~ normal(0,1);
        prodsp2[r] ~ normal(0,1);
        for (c in 1:ndescriptor) {
           profile[r,c] ~ normal(expected1[r,c],sigma1);
           residual1[r,c] ~ normal(expected2[r,c],sigma2);
        }
     }
}
generated quantities {
   vector [ndescriptor] descrspace1;
   vector [ndescriptor] descrspace2;
   row_vector [nproduct] prodspace1;
   row_vector [nproduct] prodspace2;
   prodspace1 <-(
                     ((prodsp1[12]>0)*prodsp1)-
                     ((prodsp1[12]<0)*prodsp1)
                  );
   prodspace2 <-(
                     ((prodsp2[12]>0)*prodsp2)-
                     ((prodsp2[12]<0)*prodsp2)
                  ); 
   descrspace1 <-(
                     ((prodsp1[12]>0)*descrsp1)-
                     ((prodsp1[12]<0)*descrsp1)
                  );
   descrspace2 <-(
                     ((prodsp2[12]>0)*descrsp2)-
                     ((prodsp2[12]<0)*descrsp2)
                  ); 
}
"
pars <- c('prodspace1','prodspace2','descrspace1','descrspace2')
fit1 <- stan(model_code = model1,
    data = datain,
    pars=pars)

Results

For comparison, first a standard biplot.

Product space

It is not difficult to extract the samples and plot them; see the end of the post. One notable property of the plot is that the products are in ellipses with the minor axis towards the center. Apparently part of the variation between MCMC samples is rotational freedom between the dimensions. Other than that, the solution is actually pretty close to the PCA.

Descriptor space

The rotational freedom is even clearer here.

Additional code

data

library(SensoMineR)
data(cocktail)

biplot

pr <- prcomp(senso.cocktail) 
plot(pr)
biplot(pr)

product plot

fit1samps <- as.data.frame(fit1)

prod <- reshape(fit1samps,
    drop=names(fit1samps)[33:59],
    direction='long',
    varying=list(names(fit1samps)[1:16],
        names(fit1samps)[17:32]),
    timevar='sample',
    times=1:16,
    v.names=c('PDim1','PDim2')
)
   
prod <- prod[order(prod$PDim1),]
plot(prod$PDim1,prod$PDim2,
    col=c(2,17,3,4,6,5,7:10,13,12,11,14:16)[prod$sample],
    pch=46,
    cex=2,
    xlim=c(-1,1)*.75,
    ylim=c(-1,1)*.75)
sa <- sapply(1:16,function(x)
        c(sample=x,
            Dim1=mean(prod$PDim1[prod$sample==x]),
            Dim2=mean(prod$PDim2[prod$sample==x])))
sa <- as.data.frame(t(sa))
text(x=sa$Dim1,y=sa$Dim2,labels=sa$sample,cex=1.5)

descriptor plot

descr <- reshape(fit1samps,
    drop=names(fit1samps)[c(1:32,59)],
    direction='long',
    varying=list(names(fit1samps)[33:45],
        names(fit1samps)[46:58]),
    timevar='sample',
    times=1:13,
    v.names=c('DDim1','DDim2')
)

descr <- descr[order(descr$DDim1),]
plot(descr$DDim1,descr$DDim2,
    col=c(2,1,3:13)[descr$sample],
    pch=46,
    cex=2,
    xlim=c(-1,1)*9,
    ylim=c(-1,1)*9)
sa <- sapply(1:13,function(x)
        c(sample=x,
            Dim1=mean(descr$DDim1[descr$sample==x]),
            Dim2=mean(descr$DDim2[descr$sample==x])))
sa <- as.data.frame(t(sa))
text(x=sa$Dim1,y=sa$Dim2,labels=names(senso.cocktail))