Sunday, February 23, 2014

Unemployment revisited

Approximately a year ago I made a post graphing unemployment in Europe and other locations. I have wanted to do this again ever since, not because the R code is so interesting, but simply because I wanted to see the plots. In the meantime I tried to redo it in Julia rather than R, but I could not get the result good enough, so this is, alas, the R version.

Data

Data are from Eurostat. Or, if you are lazy, Google une_rt_m, which is the name of the table. There is a bit of pre-processing, mostly making the country names decent for plotting. The plots shown are unemployment and its first derivative, both smoothed.
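The smoothing used in the code below is local polynomial regression via locpoly() from KernSmooth; drv=0 gives the smoothed series, drv=1 its first derivative. A minimal sketch on made-up data shows the idea:

library(KernSmooth)
x <- 1:100
y <- sin(x/15) + rnorm(100,sd=0.1)
sm <- locpoly(x,y,drv=0,bandwidth=5)  # smoothed values
d1 <- locpoly(x,y,drv=1,bandwidth=10) # smoothed first derivative
plot(sm$x,sm$y,type='l')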

Plots

Smoothed data

First derivative


Code

library(ggplot2)
library(KernSmooth)
library(plyr)
library(scales) # to access breaks/formatting functions

r1 <- read.csv("une_rt_m_1_Data.csv",na.strings=':')
# shorten the country/aggregate labels so they fit in facet strips
levels(r1$GEO) <- sub(' countries)',')' ,levels(r1$GEO),fixed=TRUE)
levels(r1$GEO) <- sub('European Union','EU' ,levels(r1$GEO))
levels(r1$GEO)[levels(r1$GEO)=='Euro area (EA11-2000, EA12-2006, EA13-2007, EA15-2008, EA16-2010, EA17-2013, EA18)'] <- "EAll"
levels(r1$GEO)[levels(r1$GEO)=='United Kingdom'] <- 'UK'
levels(r1$GEO)[levels(r1$GEO)=='United States'] <- 'US'
levels(r1$GEO)[levels(r1$GEO)=='Germany (until 1990 former territory of the FRG)'] <- 'Germany'
levels(r1$GEO)
# drop the intermediate EU/EA aggregates (EU12, EU15, EA17, ...)
grep('12|13|15|16|17|25|27',x=levels(r1$GEO),value=TRUE)
r1 <- r1[!(r1$GEO %in% grep('12|13|15|16|17|25|27',x=levels(r1$GEO),value=TRUE)),]
r1$GEO <- factor(r1$GEO)
# reorder the age categories for display
r1$Age <- factor(r1$AGE,levels=levels(r1$AGE)[c(2,1,3)])
# TIME is of the form 2013M01; convert to a proper Date
r1$Date <- as.Date(paste(gsub('M','-',as.character(r1$TIME)),'-01',sep=''))

# split countries into three groups by their maximum unemployment,
# so the facets within one plot have comparable scales
maxi <- aggregate(r1$Value,by=list(GEO=r1$GEO),FUN=max,na.rm=TRUE)
qs <- quantile(maxi$x,c(1/3,2/3))
# a list rather than a data.frame: the groups need not be equally sized
parts <- list(
    low    = maxi$GEO[maxi$x<=qs[1]]
    ,middle = maxi$GEO[maxi$x>qs[1] & maxi$x<=qs[2]]
    ,high   = maxi$GEO[maxi$x>qs[2]]
)
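As a quick sanity check, not part of the original workflow, one can verify that every country landed in exactly one group:

sapply(parts,length)
sum(sapply(parts,length)) == nrow(maxi)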


r1$class <- interaction(r1$GEO,r1$Age)
r3 <- r1[complete.cases(r1),]
r3$class <- factor(r3$class)
Perc <- ddply(.data=r3,.variables=.(class),
    function(piece,...) {
        # smooth unemployment within each GEO*Age combination
        lp <- locpoly(x=as.numeric(piece$Date),y=piece$Value,
            drv=0,bandwidth=90)
        data.frame(Date=as.Date(lp$x,origin='1970-01-01'),
            sPerc=lp$y,Age=piece$Age[1],GEO=piece$GEO[1])
    }
    ,.inform=FALSE
)
for (i in c('low','middle','high')) {
    png(paste(i,'.png',sep=''))
    print(
        ggplot(Perc[Perc$GEO %in% parts[[i]] ,],
                aes(x=Date,y=sPerc,colour=Age)) +
            facet_wrap( ~ GEO, drop=TRUE) +
            geom_line()  +
            theme(legend.position = "bottom")+
            ylab('% Unemployment') + xlab('Year') +
            scale_x_date(breaks = date_breaks("5 years"),
                labels = date_format("%y"))
    )
    dev.off()
}

dPerc <- ddply(.data=r3,.variables=.(class),
    function(piece,...) {
        # first derivative of the smooth; wider bandwidth for stability
        lp <- locpoly(x=as.numeric(piece$Date),y=piece$Value,
            drv=1,bandwidth=365/2)
        data.frame(Date=as.Date(lp$x,origin='1970-01-01'),
            dPerc=lp$y,Age=piece$Age[1],GEO=piece$GEO[1])
    }
    ,.inform=FALSE
)

for (i in c('low','middle','high')) {
    png(paste('d',i,'.png',sep=''))
    print(
        ggplot(dPerc[dPerc$GEO %in% parts[[i]] ,],
                aes(x=Date,y=dPerc,colour=Age)) +
            facet_wrap( ~ GEO, drop=TRUE) +
            geom_line()  +
            theme(legend.position = "bottom")+
            ylab('Change in % Unemployment') + xlab('Year')+
            scale_x_date(breaks = date_breaks("5 years"),
                labels = date_format("%y"))
    )
    dev.off()
}

Sunday, February 16, 2014

Bayesian analysis of sensory profiling data III


Last week I extended my Bayesian model, so this week I wanted to test it on different data. There is one other profiling data set in R: french fries in the reshape package. 'This data was collected from a sensory experiment conducted at Iowa State University in 2004. The investigators were interested in the effect of using three different fryer oils had on the taste of the fries'. In this post I try to analyze these data; however, the documentation I found is so limited that I had to second-guess the objectives of the experiment.

Data

The data include a treatment and a time variable. The time is in weeks and is crossed with the other variables, suggesting it was varied systematically and hence is itself a factor of interest. In the end I decided to cross treatment and time to generate a product factor with 30 levels. In addition, I crossed time with the repetition factor so that this effect can function as a random variable. As a bonus there are some missing data, which Stan does not accept, so they are removed before analysis.
data(french_fries,package='reshape')
head(french_fries)
vars <- names(french_fries)
fries <- reshape(french_fries,
        idvar=vars[1:4],
        varying=list(vars[5:9]),
        v.names='Score',
        direction='long',
        timevar='Descriptor',
        times=vars[5:9])
head(fries)
fries$Product <- interaction(fries$treatment,fries$time)
fries$Session <- interaction(fries$time,fries$rep)
fries$Descriptor <- factor(fries$Descriptor)
fries <- fries[complete.cases(fries),]
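As an aside, the number of scores lost this way can be counted on the original wide data; each NA cell there becomes one dropped row in the long format:

sum(is.na(french_fries))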

Model

The model is a variation on last week's: there is no rounds information here, but there are sessions. In addition, I made the range for the shift variable a bit larger, because some panelists appeared to have rather extreme behavior.
model1 <-
        '
    data {
        int<lower=0> npanelist;
        int<lower=0> nobs;
        int<lower=0> nsession;
        int<lower=0> ndescriptor;
        int<lower=0> nproduct;
        vector[nobs] y;
        int<lower=1,upper=npanelist> panelist[nobs];
        int<lower=1,upper=nproduct> product[nobs];
        int<lower=1,upper=ndescriptor> descriptor[nobs];
        int<lower=1,upper=nsession> session[nobs];
        real maxy;
    }
    parameters {
        matrix<lower=0,upper=maxy> [nproduct,ndescriptor] profile;
        vector<lower=-maxy/2,upper=maxy/2>[npanelist] shift[ndescriptor];
        vector<lower=-maxy/2,upper=maxy/2>[npanelist] sitting[nsession];  
        vector<lower=-1,upper=1> [npanelist] logsensitivity;
        real<lower=0,upper=maxy/5> varr;
        vector [npanelist] lpanelistvar;
        vector [ndescriptor] ldescriptorvar;
        real<lower=0,upper=maxy/5> sigmashift;
        real<lower=0,upper=maxy/5> sigmasitting;
    }
    transformed parameters {
        vector [nobs] expect;
        vector[npanelist] sensitivity;
        real mlogsens;
        real mlpanelistvar;
        real mldescriptorvar;
        real meanshift[ndescriptor];
        real meansit[nsession];
        vector [nobs] sigma;
        mlogsens <- mean(logsensitivity);
        for (i in 1:ndescriptor) {
            meanshift[i] <- mean(shift[i]);
        }
        mlpanelistvar <- mean(lpanelistvar);
        mldescriptorvar <- mean(ldescriptorvar);   
        for (i in 1:nsession) {
            meansit[i] <- mean(sitting[i]);
        }
        for (i in 1:npanelist) {
            sensitivity[i] <- pow(10,logsensitivity[i]-mlogsens);
        }
        for (i in 1:nobs) {
            expect[i] <- profile[product[i],descriptor[i]]
                *sensitivity[panelist[i]]
                + shift[descriptor[i],panelist[i]]-meanshift[descriptor[i]]
                + sitting[session[i],panelist[i]]-meansit[session[i]];
            sigma[i] <- sqrt(varr
                *exp(lpanelistvar[panelist[i]]-mlpanelistvar)
                *exp(ldescriptorvar[descriptor[i]]-mldescriptorvar));
            }
    }
    model {
        logsensitivity ~ normal(0,0.1);
        for (i in 1: ndescriptor) {
            shift[i] ~ normal(0,sigmashift);
            ldescriptorvar[i] ~ normal(0,1);
        }
        for (i in 1:nsession)
            sitting[i] ~ normal(0,sigmasitting);
        for (i in 1:npanelist)
            lpanelistvar[i] ~ normal(0,1);
        y ~ normal(expect,sigma);
    } 
    generated quantities    {
        vector [npanelist] panelistsd;
        vector [ndescriptor] descriptorsd;
        for (i in 1:npanelist) {
            panelistsd[i] <- sqrt(exp(lpanelistvar[i]));
        }
        for (i in 1:ndescriptor) {
            descriptorsd[i] <- sqrt(exp(ldescriptorvar[i]));
        }
    }
        '

Additional code

This is just the stuff needed to get Stan running.
datainfries <- with(fries,list(
                panelist=(1:nlevels(subject))[subject],
                npanelist=nlevels(subject),
                session=(1:nlevels(Session))[Session],
                nsession=nlevels(Session),
                product=(1:nlevels(Product))[Product],
                nproduct=nlevels(Product),
                descriptor=(1:nlevels(Descriptor))[Descriptor],
                ndescriptor=nlevels(Descriptor),
                y=Score,
                nobs=length(Score),
                maxy=15))

pars <- c('profile','sensitivity','shift','sitting',
        'panelistsd','descriptorsd')
library(rstan)
fitfries <- stan(model_code = model1,
        data = datainfries,
        pars=pars,
        iter = 1100,
        warmup=100,
        chains = 4)

Results

The traceplot is not shown, but the general plot provides some overview.
The profile plot has been adapted to the design. It should convey the message the data contain: as time progresses the flavor becomes more rancid and painty, and less potato. This process starts at circa 4 weeks. Treatment 3 seems least affected, but the difference is minute.
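Both plots can be had from rstan's built-ins; the parameter selection in the traceplot is just an example:

plot(fitfries)
traceplot(fitfries,pars='descriptorsd')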

samples <- extract(fitfries,'profile')$profile
nsamp <- dim(samples)[1]
profile <- expand.grid(Product=levels(fries$Product),
        Descriptor=levels(fries$Descriptor))
profile$des <- as.numeric(profile$Descriptor)
profile$prod <- as.numeric(profile$Product)
profs <- as.data.frame(t(sapply(1:nrow(profile),function(i){
            subsamples <- c(samples[1:nsamp,profile$prod[i],profile$des[i]])
            c(mean=mean(subsamples),quantile(subsamples,c(.1,.9)))
        })))
names(profs) <- c('score','lmin','lmax')
profile <- cbind(profile,profs)
profile <- merge(profile,
           unique(fries[,c('Product','treatment','time')])
           )

library(ggplot2)

profile$timen <- as.numeric(profile$time) # week number for the x-axis
profile <- profile[order(profile$timen),]

p1 <- ggplot(profile, aes(y=score, x=timen,color=treatment))

p1 + geom_path() +
        scale_x_continuous(
                breaks=1:nlevels(profile$time),
                labels=levels(profile$time)) +
        xlab('') +
         geom_ribbon(aes(ymin=lmin, ymax=lmax,fill=treatment),
                 alpha=.15,linetype='blank') +
        facet_grid(  Descriptor ~ . )

Sunday, February 9, 2014

Bayesian analysis of sensory profiling data, part 2

Last week I made the core of a Bayesian model for sensory profiling data. This week the extras need to be added. That is, there are a bunch of extra interactions and the error is dependent on panelists and descriptors.
Note that where last week I pointed to the influence of Procrustes and STATIS on these models, I probably should have mentioned Per Brockhoff's work too.

Data

See last week

Model

A few features were added compared to last week. The round effect used to be common over all descriptors; it is now descriptor dependent. As normalization, the round effects within a descriptor are fixed to sum to 0. Similarly, a shift effect used to be defined per panelist; it is now per panelist*descriptor combination, again normalized to sum to 0 within each descriptor. The residual error used to be one variable for all data; it is now descriptor and panelist dependent, so I added variance terms for both. It turned out to be quite a complex model. Running time was about 100 samples per minute on my hardware: too long to sit waiting for it, but it fits in during a meeting or lunch.
model1 <-
'
data {
    int<lower=0> npanelist;
    int<lower=0> nobs;
    int<lower=0> nsession;
    int<lower=0> nround;
    int<lower=0> ndescriptor;
    int<lower=0> nproduct;
    vector[nobs] y;
    int<lower=1,upper=npanelist> panelist[nobs];
    int<lower=1,upper=nproduct> product[nobs];
    int<lower=1,upper=ndescriptor> descriptor[nobs];
    int<lower=1,upper=nround> rounds[nobs];
    real maxy;
    }
parameters {
    matrix<lower=0,upper=maxy> [nproduct,ndescriptor] profile;
    vector<lower=-maxy/3,upper=maxy/3>[npanelist] shift[ndescriptor];
    vector<lower=-1,upper=1> [npanelist] logsensitivity;
    vector<lower=-maxy/3,upper=maxy/3> [nround] roundeffect[ndescriptor];
    real<lower=0,upper=maxy/5> varr;
    vector [npanelist] lpanelistvar;
    vector [ndescriptor] ldescriptorvar;
    }
transformed parameters {
    vector [nobs] expect;
    vector[npanelist] sensitivity;
    real mlogsens;
    real mlpanelistvar;
    real mldescriptorvar;
    real mroundeff[ndescriptor];
    real meanshift[ndescriptor];
    vector [nobs] sigma;

    mlogsens <- mean(logsensitivity);
    for (i in 1:ndescriptor) {
       mroundeff[i] <- mean(roundeffect[i]);
       meanshift[i] <- mean(shift[i]);
    }
    mlpanelistvar <- mean(lpanelistvar);
    mldescriptorvar <- mean(ldescriptorvar);   

    for (i in 1:npanelist) {
        sensitivity[i] <- pow(10,logsensitivity[i]-mlogsens);
        }
    for (i in 1:nobs) {
        expect[i] <- profile[product[i],descriptor[i]]
            *sensitivity[panelist[i]]
            + shift[descriptor[i],panelist[i]]-meanshift[descriptor[i]]
            + roundeffect[descriptor[i],rounds[i]]-mroundeff[descriptor[i]];
        sigma[i] <- sqrt(varr
                         *exp(lpanelistvar[panelist[i]]-mlpanelistvar)
                         *exp(ldescriptorvar[descriptor[i]]-mldescriptorvar));
         }
    }
model {
    logsensitivity ~ normal(0,0.1);
    for (i in 1: ndescriptor) {
       roundeffect[i] ~ normal(0,maxy/10);
       shift[i] ~ normal(0,maxy/10);
       ldescriptorvar[i] ~ normal(0,1);
    }
    for (i in 1:npanelist)
       lpanelistvar[i] ~ normal(0,1);
    y ~ normal(expect,sigma);
    } 
generated quantities    {
    vector [npanelist] panelistsd;
    vector [ndescriptor] descriptorsd;
    for (i in 1:npanelist) {
        panelistsd[i] <- sqrt(exp(lpanelistvar[i]));
        }
    for (i in 1:ndescriptor) {
         descriptorsd[i] <- sqrt(exp(ldescriptorvar[i]));
        }
    }
'

model call

pars <- c('profile','shift','roundeffect','sensitivity',
         'panelistsd','descriptorsd')
datainchoc <- with(choc,list(
                panelist=(1:nlevels(Panelist))[Panelist],
                npanelist=nlevels(Panelist),
                session=(1:nlevels(Session))[Session],
                nsession=nlevels(Session),
                rounds=(1:nlevels(Rank))[Rank],
                nround=nlevels(Rank),
                product=(1:nlevels(Product))[Product],
                nproduct=nlevels(Product),
                descriptor=(1:nlevels(Descriptor))[Descriptor],
                ndescriptor=nlevels(Descriptor),
                y=Score,
                nobs=length(Score),
                maxy=10))

fitchoc <- stan(model_code = model1,
        data = datainchoc,
        pars=pars,
        iter = 500,
        warmup=100,
        chains = 4)

Results

I do not think there is much point in showing all printed output, but the summary plot is interesting. There is something odd with the eighth level of some of the factors; a few extra samples might not be unwelcome.
The error is more dependent on panelist than on descriptor; panelists 7, 20 and 29 might benefit from some training.
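The summary plot is simply rstan's default plot of the fit:

plot(fitchoc)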

profile

The code for the profile has been slightly modified compared to last week (where I initially used only a few of the samples). For me the intervals look nice and sharp. Other than that, choc3 is very different from the others: more sweet and milk, less cocoa. Choc1 is very bitter and cocoa.

rounds

It is premature to conclude from one data set, but the round effects do seem minimal. If this turned out to be structural across more data sets, this term might be removed.

Panelists' shift

The plot shows that the shift is substantial, so this is a factor which should be in the model. Getting it under control might cost more than it is worth.

Code for plots

library(ggplot2)
samples <- extract(fitchoc,'profile')$profile
nsamp <- dim(samples)[1]
profile <- expand.grid(Product=levels(choc$Product),
        Descriptor=levels(choc$Descriptor))
profile$des <- as.numeric(profile$Descriptor)
profile$prod <- as.numeric(profile$Product)
profs <- as.data.frame(t(sapply(1:nrow(profile),function(i){
            subsamples <- c(samples[1:nsamp,profile$prod[i],profile$des[i]])
            c(mean=mean(subsamples),quantile(subsamples,c(.1,.9)))
        })))
names(profs) <- c('score','lmin','lmax')
profile <- cbind(profile,profs)
p1 <- ggplot(profile, aes(y=score, x=des,color=Product))
p1 + geom_path() +
        scale_x_continuous(
                breaks=1:nlevels(profile$Descriptor),
                labels=levels(profile$Descriptor)) +
    xlab('') +
    geom_ribbon(aes(ymin=lmin, ymax=lmax,fill=Product),
                    alpha=.15,linetype='blank') +
    coord_flip()           
##########
samples <- extract(fitchoc,'roundeffect')$roundeffect
roundeffect <- expand.grid(Round=levels(choc$Rank),
        Descriptor=levels(choc$Descriptor))
roundeffect$des <- as.numeric(roundeffect$Descriptor)
roundeffect$round <- as.numeric(roundeffect$Round)
rounds <- as.data.frame(t(sapply(1:nrow(roundeffect),function(i){
            subsamples <- c(samples[1:nsamp,roundeffect$des[i],roundeffect$round[i]])
            c(mean=mean(subsamples),quantile(subsamples,c(.1,.9)))
        })))
names(rounds) <- c('score','lmin','lmax')
roundeffect <- cbind(roundeffect,rounds)
library(ggplot2)
p1 <- ggplot(roundeffect, aes(y=score, x=des,color=Round))
p1 + geom_path() +
        scale_x_continuous(
                breaks=1:nlevels(roundeffect$Descriptor),
                labels=levels(roundeffect$Descriptor)) +
        xlab('') +
        geom_ribbon(aes(ymin=lmin, ymax=lmax,fill=Round),
                alpha=.15,linetype='blank') +
        coord_flip()           
#########
samples <- extract(fitchoc,'shift')$shift
shift <- expand.grid(Panelist=levels(choc$Panelist),
        Descriptor=levels(choc$Descriptor))
shift$des <- as.numeric(shift$Descriptor)
shift$pnlst <- as.numeric(shift$Panelist)
shifts <- as.data.frame(t(sapply(1:nrow(shift),function(i){
                            subsamples <- c(samples[1:nsamp,shift$des[i],shift$pnlst[i]])
                            c(mean=mean(subsamples),quantile(subsamples,c(.1,.9)))
                        })))
names(shifts) <- c('score','lmin','lmax')
shift <- cbind(shift,shifts)
p1 <- ggplot(shift, aes(y=score, x=des))
p1 + geom_path() +
        scale_x_continuous(
                breaks=1:nlevels(shift$Descriptor),
                labels=levels(shift$Descriptor)) +
        xlab('') +
        geom_ribbon(aes(ymin=lmin, ymax=lmax),
                alpha=.15,linetype='blank') +
        coord_flip() +
        facet_wrap(~ Panelist)

Sunday, February 2, 2014

Bayesian analysis of sensory profiling data

I looked at Bayesian analysis of sensory profiling data in May and June 2012. I remember not being totally happy with the result, and the computations took a bit more time than I wanted. But now it is 2014: I can use Stan, and I have thought a bit more about the model I want. Hence a fresh start.

Data

The data are the chocolate data from SensoMineR. I find it more convenient to have the data in long format, so the first action is a reshape. Score gets an as.numeric(), since it was integer and I do not know how that affects Stan calculations.
data(chocolates,package='SensoMineR')
vars <- names(sensochoc)
choc <- reshape(sensochoc,
        idvar=vars[1:4],
        varying=list(vars[5:18]),
        v.names='Score',
        direction='long',
        timevar='Descriptor',
        times=vars[5:18])
choc$Descriptor <- as.factor(choc$Descriptor)
choc$Score <- as.numeric(choc$Score)

Model

sensory panel data

To explain the model, I first need to explain sensory panel data. The objective of a sensory experiment is to obtain a profile: a table which shows how strongly a food scores on selected descriptors (pre-defined properties). To obtain this, panelists taste the foodstuffs and score their strength on the descriptors. The order of tasting is registered as rounds. Unfortunately panelists have 'errors' in their scores. To minimize the effect of these errors, 10 to 20 panelists are used, possibly with repetitions.
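The chocolate data reshaped above have exactly this structure: each row is one panelist scoring one product on one descriptor in one round. A quick inspection, nothing model-related:

head(choc)
xtabs(~ Product + Rank,data=choc)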

standard analysis

In general these data are analyzed per descriptor with ANOVA models such as:
score ~ product + panelist + round + session + error,
possibly with second-order interaction terms, of which panelist*product would be the largest.
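As a sketch of such an analysis on the current data, for one descriptor (Rank holds the rounds; the exact terms just mirror the formula above, not a prescription):

aa <- aov(Score ~ Product + Panelist + Rank + Session,
    data=choc[choc$Descriptor=='CocoaA',])
summary(aa)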

Alternatively, methods such as Procrustes or STATIS are used. These methods try to find communality in the n-dimensional configurations which the products occupy in the descriptor space. In particular, Procrustes tries to find a common configuration by shifting, scaling and rotating the individual configurations so that they all align. Note that this process destroys the link between products and descriptors.

building a model

The aim of this post is to build the bare bones of a model which combines the good parts of Procrustes with the features of a linear model. The model must analyze all descriptors in one go, hence one part of the model is the profile itself. Panelist effects are covered by a shift and a shrinking of the profile. At this point the shift and scale are each covered by one parameter per panelist; this will probably change in a second version of the model. A rounds effect has been added, common to all descriptors and all panelists, another spot for model extensions.
A few priors have been added on scale usage; the values seemed reasonable for a trained panel.
model1 <-
'
data {
    int<lower=0> npanelist;
    int<lower=0> nobs;
    int<lower=0> nsession;
    int<lower=0> nround;
    int<lower=0> ndescriptor;
    int<lower=0> nproduct;
    vector[nobs] y;
    int<lower=1,upper=npanelist> panelist[nobs];
    int<lower=1,upper=nproduct> product[nobs];
    int<lower=1,upper=ndescriptor> descriptor[nobs];
    int<lower=1,upper=nround> rounds[nobs];
    real maxy;
    }
parameters {
    matrix<lower=0,upper=maxy> [nproduct,ndescriptor] profile;
    vector<lower=-maxy/3,upper=maxy/3>[npanelist] shift;
    vector<lower=-1,upper=1> [npanelist] logsensitivity;
    vector<lower=-maxy/3,upper=maxy/3> [nround] roundeffect;
    real<lower=0,upper=maxy/5> sigma;
    }
transformed parameters {
    vector [nobs] expect;
    vector[npanelist] sensitivity;
// the following variables are not strictly needed but
// greatly increase speed and # effective samples
    real meanshift;
    real mlogsens;
    real mroundeff;
    meanshift <- mean(shift);
    mlogsens <- mean(logsensitivity);
    mroundeff <- mean(roundeffect);
// end of definitions of variables for speed
    for (i in 1:npanelist) {
        sensitivity[i] <- pow(10,logsensitivity[i]-mlogsens);
        }
    for (i in 1:nobs) {
        expect[i] <- profile[product[i],descriptor[i]]
            *sensitivity[panelist[i]]
            + shift[panelist[i]]-meanshift
            + roundeffect[rounds[i]]-mroundeff;
        }
    }
model {
    logsensitivity ~ normal(0,0.1);
    shift ~ normal(0,maxy/10);
    roundeffect ~ normal(0,maxy/10);
    y ~ normal(expect,sigma);
    } 
'

running the model

These are just the parts needed to get rstan running.
library(rstan)
datainchoc <- with(choc,list(
                panelist=(1:nlevels(Panelist))[Panelist],
                npanelist=nlevels(Panelist),
                session=(1:nlevels(Session))[Session],
                nsession=nlevels(Session),
                rounds=(1:nlevels(Rank))[Rank],
                nround=nlevels(Rank),
                product=(1:nlevels(Product))[Product],
                nproduct=nlevels(Product),
                descriptor=(1:nlevels(Descriptor))[Descriptor],
                ndescriptor=nlevels(Descriptor),
                y=Score,
                nobs=length(Score),
                maxy=10))

pars <- c('profile','shift','roundeffect','sensitivity','sigma')

fitchoc <- stan(model_code = model1,
        data = datainchoc,
        pars=pars,
        iter = 1100,
        warmup=100,
        chains = 4)

results 

Plotting of chains revealed that 100 warmup samples was sufficient. Quick plot of the output:
plot(fitchoc)
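The chain inspection mentioned above can be done with rstan's traceplot(); the parameter shown is just an example:

traceplot(fitchoc,pars='sigma')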
Obviously, for a sensory analyst it is all about the profile. And what is nicer than to use ggplot2 for that and include intervals:
samples <- extract(fitchoc,'profile')$profile
nsamp <- dim(samples)[1]
profile <- expand.grid(Product=levels(choc$Product),
        Descriptor=levels(choc$Descriptor))
profile$des <- as.numeric(profile$Descriptor)
profile$prod <- as.numeric(profile$Product)
profs <- as.data.frame(t(sapply(1:nrow(profile),function(i){
            subsamples <- c(samples[1:nsamp,profile$prod[i],profile$des[i]])
            c(mean=mean(subsamples),quantile(subsamples,c(.1,.9)))
        })))
names(profs) <- c('score','lmin','lmax')
profile <- cbind(profile,profs)
library(ggplot2)
p1 <- ggplot(profile, aes(y=score, x=des,color=Product))
p1 + geom_path() +
     scale_x_continuous(
           breaks=1:nlevels(profile$Descriptor),
           labels=levels(profile$Descriptor)) +
    xlab('') +
    geom_ribbon(aes(ymin=lmin, ymax=lmax,fill=Product),
                    alpha=.15,linetype='blank') +
    coord_flip()