Sunday, October 26, 2014

Tuning Laplaces Demon III

This is the third post on tuning LaplacesDemon: same problem, different algorithms. For the introduction and the rest of the code, see this post. The current post covers the algorithms from Independence Metropolis to Reflective Slice Sampler.

Independence Metropolis

Independence Metropolis expects to be fed the results of a preceding run of, for example, LaplaceApproximation(). It should be noted that LaplaceApproximation() did not report convergence, but I continued anyway.
LA <- LaplaceApproximation(Model,
    parm=Initial.Values,
    Data=MyData)
LA$Summary1[,1]# mode
LA$Covar       # covariance
Fit <- LaplacesDemon(Model,
    Data=MyData,
    Covar=LA$Covar,
    Algorithm='IM',
    Specs=list(mu=LA$Summary1[,1]),
    Initial.Values = Initial.Values
)

Call:
LaplacesDemon(Model = Model, Data = MyData, Initial.Values = Initial.Values,
    Covar = LA$Covar, Algorithm = "IM", Specs = list(mu = LA$Summary1[,
        1]))

Acceptance Rate: 0.4799
Algorithm: Independence Metropolis
Covariance Matrix: (NOT SHOWN HERE; diagonal shown instead)
     beta[1]      beta[2]
1.3472722602 0.0009681572

Covariance (Diagonal) History: (NOT SHOWN HERE)
Deviance Information Criterion (DIC):
        All Stationary
Dbar 43.136     43.136
pD    0.244      0.244
DIC  43.380     43.380
Initial Values:
[1] -10   0

Iterations: 10000
Log(Marginal Likelihood): -22.97697
Minutes of run-time: 0.05
Model: (NOT SHOWN HERE)
Monitor: (NOT SHOWN HERE)
Parameters (Number of): 2
Posterior1: (NOT SHOWN HERE)
Posterior2: (NOT SHOWN HERE)
Recommended Burn-In of Thinned Samples: 0
Recommended Burn-In of Un-thinned Samples: 0
Recommended Thinning: 10
Specs: (NOT SHOWN HERE)
Status is displayed every 100 iterations
Summary1: (SHOWN BELOW)
Summary2: (SHOWN BELOW)
Thinned Samples: 1000
Thinning: 10


Summary of All Samples
                Mean         SD        MCSE  ESS        LB      Median
beta[1]  -11.2837801 1.16059157 0.035012520 1000 -13.56153 -11.3195620
beta[2]    0.2797468 0.02984741 0.000896778 1000   0.21760   0.2803396
Deviance  43.1362705 0.69853281 0.023083591 1000  42.45458  42.9472450
LP       -30.3781418 0.34871038 0.011551050 1000 -31.29227 -30.2797709
                  UB
beta[1]   -8.8565459
beta[2]    0.3395894
Deviance  44.9102538
LP       -30.0383777


Summary of Stationary Samples
                Mean         SD        MCSE  ESS        LB      Median
beta[1]  -11.2837801 1.16059157 0.035012520 1000 -13.56153 -11.3195620
beta[2]    0.2797468 0.02984741 0.000896778 1000   0.21760   0.2803396
Deviance  43.1362705 0.69853281 0.023083591 1000  42.45458  42.9472450
LP       -30.3781418 0.34871038 0.011551050 1000 -31.29227 -30.2797709
                  UB
beta[1]   -8.8565459
beta[2]    0.3395894
Deviance  44.9102538
LP       -30.0383777

Interchain Adaptation

This works on cluster computers only, hence is skipped.

Metropolis-Adjusted Langevin Algorithm

MALA has specs; the defaults are used. It ended up recommending a thinning of 1000, so that is what I took.
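For reference, the call written out as a code block (copied from the Call in the output below; the Specs shown are the defaults):

Fit <- LaplacesDemon(Model,
    Data=MyData,
    Iterations=60000,
    Status=2000,
    Thinning=1000,
    Algorithm='MALA',
    Specs=list(A=1e7, alpha.star=0.574, gamma=1,
        delta=1, epsilon=c(1e-6, 1e-7)),
    Initial.Values = Initial.Values
)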
Call:
LaplacesDemon(Model = Model, Data = MyData, Initial.Values = Initial.Values,
    Iterations = 60000, Status = 2000, Thinning = 1000, Algorithm = "MALA",
    Specs = list(A = 1e+07, alpha.star = 0.574, gamma = 1, delta = 1,
        epsilon = c(1e-06, 1e-07)))

Acceptance Rate: 0.6699
Algorithm: Metropolis-Adjusted Langevin Algorithm
Covariance Matrix: (NOT SHOWN HERE; diagonal shown instead)
    beta[1]     beta[2]
3.974723987 0.002746137

Covariance (Diagonal) History: (NOT SHOWN HERE)
Deviance Information Criterion (DIC):
        All Stationary
Dbar 44.871     45.323
pD    3.009      3.719
DIC  47.880     49.043
Initial Values:
[1] -10   0

Iterations: 60000
Log(Marginal Likelihood): NA
Minutes of run-time: 0.51
Model: (NOT SHOWN HERE)
Monitor: (NOT SHOWN HERE)
Parameters (Number of): 2
Posterior1: (NOT SHOWN HERE)
Posterior2: (NOT SHOWN HERE)
Recommended Burn-In of Thinned Samples: 30
Recommended Burn-In of Un-thinned Samples: 30000
Recommended Thinning: 1000
Specs: (NOT SHOWN HERE)
Status is displayed every 2000 iterations
Summary1: (SHOWN BELOW)
Summary2: (SHOWN BELOW)
Thinned Samples: 60
Thinning: 1000


Summary of All Samples
                Mean         SD        MCSE ESS          LB      Median
beta[1]  -11.5549438 2.12680454 0.262494275  60 -15.4489211 -11.6879298
beta[2]    0.2862209 0.05387208 0.006838854  60   0.1775704   0.2869203
Deviance  44.8713764 2.45311900 0.298991760  60  42.5123135  44.0150320
LP       -31.2503453 1.22704758 0.149839992  60 -34.4462764 -30.8362411
                  UB
beta[1]   -7.2809707
beta[2]    0.3806284
Deviance  51.2537115
LP       -30.0625250


Summary of Stationary Samples
                Mean         SD        MCSE ESS          LB      Median
beta[1]  -11.4682552 2.28578268 0.340770357  30 -14.6449856 -11.7783155
beta[2]    0.2821535 0.05793802 0.008335686  30   0.1672378   0.2887207
Deviance  45.3234960 2.72744685 0.429600835  30  42.4870991  44.3327542
LP       -31.4757075 1.36111721 0.149839992  30 -34.8777641 -30.9992113
                  UB
beta[1]   -6.9659465
beta[2]    0.3579202
Deviance  52.1258699
LP       -30.0519847

Metropolis-Coupled Markov Chain Monte Carlo

This algorithm is suitable for multi-modal distributions, which this specific problem does not have. It also requires at least two cores, which brings it just within my current computing capability. The two cores are used to keep two chains between which some swapping is done. Examining my processor occupancy showed that neither core was taxed to the max; as a consequence, the algorithm does not run very fast. From what I read, the approach is to make a first run, take the covariance of that run as the basis for the second run, and so on. The first run can be started with a small diagonal covariance matrix. There was still quite some change from run one to run two, so I did a third and final run. I did not do a fourth run to apply the recommended thinning.

Fit1 <- LaplacesDemon(Model,
    Data=MyData,
    Algorithm='MCMCMC',
    Covar=diag(.001,nrow=2),
    Specs=list(lambda=1,CPUs=2,Packages=NULL,
        Dyn.libs=NULL),
    Initial.Values = Initial.Values
)
Fit2 <- LaplacesDemon(Model,
    Data=MyData,
    Algorithm='MCMCMC',
    Covar=var(Fit1$Posterior2),
    Specs=list(lambda=1,CPUs=2,Packages=NULL,
        Dyn.libs=NULL),
    Initial.Values = apply(Fit1$Posterior2,2,median)
)
Fit3 <- LaplacesDemon(Model,
    Data=MyData,
    Algorithm='MCMCMC',
    Covar=var(Fit2$Posterior2),
    Specs=list(lambda=1,CPUs=2,Packages=NULL,
        Dyn.libs=NULL),
    Initial.Values = apply(Fit2$Posterior2,2,median)
)

Call:
LaplacesDemon(Model = Model, Data = MyData, Initial.Values = apply(Fit2$Posterior2,
    2, median), Covar = var(Fit2$Posterior2), Algorithm = "MCMCMC",
    Specs = list(lambda = 1, CPUs = 2, Packages = NULL, Dyn.libs = NULL))

Acceptance Rate: 0.5628
Algorithm: Metropolis-Coupled Markov Chain Monte Carlo
Covariance Matrix: (NOT SHOWN HERE; diagonal shown instead)
    beta[1]     beta[2]
3.857317095 0.002593814

Covariance (Diagonal) History: (NOT SHOWN HERE)
Deviance Information Criterion (DIC):
        All Stationary
Dbar 44.441     44.441
pD    1.975      1.975
DIC  46.416     46.416
Initial Values:
    beta[1]     beta[2]
-11.5172818   0.2863507

Iterations: 10000
Log(Marginal Likelihood): -22.56206
Minutes of run-time: 7.65
Model: (NOT SHOWN HERE)
Monitor: (NOT SHOWN HERE)
Parameters (Number of): 2
Posterior1: (NOT SHOWN HERE)
Posterior2: (NOT SHOWN HERE)
Recommended Burn-In of Thinned Samples: 0
Recommended Burn-In of Un-thinned Samples: 0
Recommended Thinning: 20
Specs: (NOT SHOWN HERE)
Status is displayed every 100 iterations
Summary1: (SHOWN BELOW)
Summary2: (SHOWN BELOW)
Thinned Samples: 1000
Thinning: 10


Summary of All Samples
                Mean         SD        MCSE      ESS          LB      Median
beta[1]  -11.7429958 2.08203881 0.083365310 767.1824 -16.6468867 -11.5906172
beta[2]    0.2913961 0.05394533 0.002179448 771.8377   0.2007124   0.2880483
Deviance  44.4411762 1.98744454 0.080077425 764.5512  42.4935429  43.8303185
LP       -31.0373786 1.00385816 0.040582713 761.9922 -33.7860907 -30.7307093
                  UB
beta[1]   -8.1852016
beta[2]    0.4208138
Deviance  49.7875520
LP       -30.0545128


Summary of Stationary Samples
                Mean         SD        MCSE      ESS          LB      Median
beta[1]  -11.7429958 2.08203881 0.083365310 767.1824 -16.6468867 -11.5906172
beta[2]    0.2913961 0.05394533 0.002179448 771.8377   0.2007124   0.2880483
Deviance  44.4411762 1.98744454 0.080077425 764.5512  42.4935429  43.8303185
LP       -31.0373786 1.00385816 0.040582713 761.9922 -33.7860907 -30.7307093
                  UB
beta[1]   -8.1852016
beta[2]    0.4208138
Deviance  49.7875520
LP       -30.0545128

Multiple-Try Metropolis

The first run took half an hour. During the run I could see the LP was way off from the target distribution. On a whim, I did a second run, with a covariance matrix from LaplaceApproximation() as added information. This run was better but took another half hour.
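The second run as a code block, reconstructed from the Call in the output below; LA is the LaplaceApproximation() fit from the Independence Metropolis section:

Fit <- LaplacesDemon(Model,
    Data=MyData,
    Covar=LA$Covar,   # covariance from LaplaceApproximation()
    Algorithm='MTM',
    Specs=list(K=4, CPUs=2, Packages=NULL, Dyn.libs=NULL),
    Initial.Values = Initial.Values
)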

Call:
LaplacesDemon(Model = Model, Data = MyData, Initial.Values = Initial.Values,
    Covar = LA$Covar, Algorithm = "MTM", Specs = list(K = 4,
        CPUs = 2, Packages = NULL, Dyn.libs = NULL))

Acceptance Rate: 0.61155
Algorithm: Multiple-Try Metropolis
Covariance Matrix: (NOT SHOWN HERE; diagonal shown instead)
    beta[1]     beta[2]
21.36403719  0.01435576

Covariance (Diagonal) History: (NOT SHOWN HERE)
Deviance Information Criterion (DIC):
         All Stationary
Dbar  52.923     51.254
pD   372.901     50.142
DIC  425.824    101.396
Initial Values:
[1] -10   0

Iterations: 10000
Log(Marginal Likelihood): -48.98845
Minutes of run-time: 29.71
Model: (NOT SHOWN HERE)
Monitor: (NOT SHOWN HERE)
Parameters (Number of): 2
Posterior1: (NOT SHOWN HERE)
Posterior2: (NOT SHOWN HERE)
Recommended Burn-In of Thinned Samples: 600
Recommended Burn-In of Un-thinned Samples: 6000
Recommended Thinning: 250
Specs: (NOT SHOWN HERE)
Status is displayed every 100 iterations
Summary1: (SHOWN BELOW)
Summary2: (SHOWN BELOW)
Thinned Samples: 1000
Thinning: 10


Summary of All Samples
                Mean         SD      MCSE       ESS          LB      Median
beta[1]  -12.3004483  4.6238653 1.0357315  25.37966 -22.0066143 -11.4795448
beta[2]    0.3046245  0.1194878 0.0265865  26.73186   0.1186699   0.2820595
Deviance  52.9226543 27.3093850 1.3945959 616.53888  42.6646332  48.6409842
LP       -35.2933429 13.6579453 0.6986543 615.73612 -51.0643998 -33.1211349
                  UB
beta[1]   -5.1607724
beta[2]    0.5596043
Deviance  84.2719610
LP       -30.1367784


Summary of Stationary Samples
                Mean        SD      MCSE      ESS          LB      Median
beta[1]  -12.4106362  4.446175 1.5145833 11.35760 -23.4916627 -11.4983716
beta[2]    0.3073741  0.114014 0.0387409 13.37635   0.1412665   0.2842726
Deviance  51.2540481 10.014169 0.9717966 64.31246  42.5915491  48.1501360
LP       -34.4595816  5.031687 0.6986543 61.84508 -50.7860991 -32.8935736
                  UB
beta[1]   -5.8450580
beta[2]    0.5940932
Deviance  83.6594896
LP       -30.1021917

NUTS

It gave an error, hence there are no results.

pCN

To get close to the target acceptance rate, a beta of 0.015 was used. Even so, with these settings it seemed unable to find the target distribution.
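The call as a code block, taken from the Call in the output below:

Fit <- LaplacesDemon(Model,
    Data=MyData,
    Iterations=80000,
    Status=2000,
    Thinning=30,
    Algorithm='pCN',
    Specs=list(beta=0.015),  # beta tuned to approach the target acceptance rate
    Initial.Values = Initial.Values
)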
Call:
LaplacesDemon(Model = Model, Data = MyData, Initial.Values = Initial.Values,
    Iterations = 80000, Status = 2000, Thinning = 30, Algorithm = "pCN",
    Specs = list(beta = 0.015))

Acceptance Rate: 0.24766
Algorithm: Preconditioned Crank-Nicolson
Covariance Matrix: (NOT SHOWN HERE; diagonal shown instead)
 beta[1]  beta[2]
2.835066 2.835066

Covariance (Diagonal) History: (NOT SHOWN HERE)
Deviance Information Criterion (DIC):
        All Stationary
Dbar 54.321         NA
pD   17.889         NA
DIC  72.209         NA
Initial Values:
[1] -10   0

Iterations: 80000
Log(Marginal Likelihood): NA
Minutes of run-time: 0.19
Model: (NOT SHOWN HERE)
Monitor: (NOT SHOWN HERE)
Parameters (Number of): 2
Posterior1: (NOT SHOWN HERE)
Posterior2: (NOT SHOWN HERE)
Recommended Burn-In of Thinned Samples: 2666
Recommended Burn-In of Un-thinned Samples: 79980
Recommended Thinning: 34
Specs: (NOT SHOWN HERE)
Status is displayed every 2000 iterations
Summary1: (SHOWN BELOW)
Summary2: (SHOWN BELOW)
Thinned Samples: 2666
Thinning: 30


Summary of All Samples
                Mean         SD        MCSE      ESS           LB     Median
beta[1]   -5.9954631 1.51357515 0.333997311 3.200112  -9.78692631  -5.660027
beta[2]    0.1446406 0.03873508 0.008466866 3.168661   0.09195439   0.135685
Deviance  54.3206874 5.98143077 1.127801083 3.341341  43.55137352  54.443521
LP       -35.9251051 2.98165148 0.562093182 3.349492 -41.03715268 -35.982949
                 UB
beta[1]   -4.021785
beta[2]    0.239560
Deviance  64.568537
LP       -30.570165


Summary of Stationary Samples
        Mean SD MCSE ESS LB Median UB
beta[1]   NA NA   NA  NA NA     NA NA
beta[2]   NA NA   NA  NA NA     NA NA

Oblique Hyperrectangle Slice Sampler

I tried to get the number of adaptive samples (spec A) close to, or over, the number of samples discarded before stationarity. As before, that point varied from run to run. I did not feel like running a thinning of 390, so I stopped here. To be precise, the spec variable A counts thinned samples, which in this case translates to 12000 un-thinned samples.
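The final call, taken from the Call in the output below; the comment spells out the sample arithmetic:

Fit <- LaplacesDemon(Model,
    Data=MyData,
    Iterations=20000,
    Status=2000,
    Thinning=30,
    Algorithm='OHSS',
    Specs=list(A=400, n=0),  # A=400 thinned samples x Thinning 30 = 12000 un-thinned
    Initial.Values = Initial.Values
)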

Call:
LaplacesDemon(Model = Model, Data = MyData, Initial.Values = Initial.Values,
    Iterations = 20000, Status = 2000, Thinning = 30, Algorithm = "OHSS",
    Specs = list(A = 400, n = 0))

Acceptance Rate: 1
Algorithm: Oblique Hyperrectangle Slice Sampler
Covariance Matrix: (NOT SHOWN HERE; diagonal shown instead)
    beta[1]     beta[2]
2.141405086 0.001421053

Covariance (Diagonal) History: (NOT SHOWN HERE)
Deviance Information Criterion (DIC):
        All Stationary
Dbar 44.670     44.670
pD    4.873      4.873
DIC  49.543     49.543
Initial Values:
[1] -10   0

Iterations: 20000
Log(Marginal Likelihood): -22.7557
Minutes of run-time: 0.17
Model: (NOT SHOWN HERE)
Monitor: (NOT SHOWN HERE)
Parameters (Number of): 2
Posterior1: (NOT SHOWN HERE)
Posterior2: (NOT SHOWN HERE)
Recommended Burn-In of Thinned Samples: 0
Recommended Burn-In of Un-thinned Samples: 0
Recommended Thinning: 390
Specs: (NOT SHOWN HERE)
Status is displayed every 2000 iterations
Summary1: (SHOWN BELOW)
Summary2: (SHOWN BELOW)
Thinned Samples: 666
Thinning: 30


Summary of All Samples
               Mean         SD        MCSE      ESS          LB      Median
beta[1]  -11.723257 2.12131635 0.364333658 63.12568 -15.6882712 -11.7426108
beta[2]    0.291164 0.05530033 0.009836046 53.03589   0.1892168   0.2903171
Deviance  44.669607 3.12190842 0.485345818 68.11717  42.4967868  43.7647781
LP       -31.151444 1.55954405 0.242282520 68.32591 -34.0145598 -30.6944097
                  UB
beta[1]   -7.6953473
beta[2]    0.3978456
Deviance  50.3649578
LP       -30.0590489


Summary of Stationary Samples
               Mean         SD        MCSE      ESS          LB      Median
beta[1]  -11.723257 2.12131635 0.364333658 63.12568 -15.6882712 -11.7426108
beta[2]    0.291164 0.05530033 0.009836046 53.03589   0.1892168   0.2903171
Deviance  44.669607 3.12190842 0.485345818 68.11717  42.4967868  43.7647781
LP       -31.151444 1.55954405 0.242282520 68.32591 -34.0145598 -30.6944097
                  UB
beta[1]   -7.6953473
beta[2]    0.3978456
Deviance  50.3649578
LP       -30.0590489

Random Dive Metropolis-Hastings

The manual states that 'RDMH fails in the obscure case when the origin has positive probability'. That obscure case included my initial values (beta[2] started at 0), so I moved them slightly away from the origin. Apart from that, it could not determine a point where the sampling was stationary.

Call:
LaplacesDemon(Model = Model, Data = MyData, Initial.Values = c(-10,
    -0.1), Iterations = 60000, Status = 2000, Thinning = 30,
    Algorithm = "RDMH")

Acceptance Rate: 0.10038
Algorithm: Random Dive Metropolis-Hastings
Covariance Matrix: (NOT SHOWN HERE; diagonal shown instead)
    beta[1]     beta[2]
2.706896237 0.001829059

Covariance (Diagonal) History: (NOT SHOWN HERE)
Deviance Information Criterion (DIC):
        All Stationary
Dbar 44.946         NA
pD    4.953         NA
DIC  49.899         NA
Initial Values:
[1] -10.0  -0.1

Iterations: 60000
Log(Marginal Likelihood): NA
Minutes of run-time: 0.28
Model: (NOT SHOWN HERE)
Monitor: (NOT SHOWN HERE)
Parameters (Number of): 2
Posterior1: (NOT SHOWN HERE)
Posterior2: (NOT SHOWN HERE)
Recommended Burn-In of Thinned Samples: 2000
Recommended Burn-In of Un-thinned Samples: 60000
Recommended Thinning: 33
Specs: (NOT SHOWN HERE)
Status is displayed every 2000 iterations
Summary1: (SHOWN BELOW)
Summary2: (SHOWN BELOW)
Thinned Samples: 2000
Thinning: 30


Summary of All Samples
              Mean         SD        MCSE      ESS          LB      Median
beta[1]   -9.95876 1.64567600 0.376396772 13.94920 -12.6734418 -10.1840709
beta[2]    0.24500 0.04207685 0.009640869 14.62719   0.1510536   0.2512075
Deviance  44.94647 3.14729140 0.488713442 61.09106  42.4926465  43.9340089
LP       -31.26984 1.56429369 0.241990158 62.00287 -35.1596359 -30.7595949
                  UB
beta[1]   -6.3057001
beta[2]    0.3146071
Deviance  52.7751852
LP       -30.0571998


Summary of Stationary Samples
        Mean SD MCSE ESS LB Median UB
beta[1]   NA NA   NA  NA NA     NA NA
beta[2]   NA NA   NA  NA NA     NA NA

Random-Walk Metropolis

This algorithm requires tuning of the covariance matrix, which can be done from previous runs. Hence I started with a diagonal matrix and did three runs in total. The second run had no stationary samples, so I used all of its samples. I did not think the recommended Thinning of 20 was worth the effort of a fourth run for this exercise (though I would have done one if the result mattered, rather than just training myself on these algorithms).

Fit1 <- LaplacesDemon(Model,
    Data=MyData,
    Algorithm='RWM',
    Covar=diag(.001,nrow=2),
    Initial.Values = Initial.Values
)
Fit2 <- LaplacesDemon(Model,
    Data=MyData,
    Algorithm='RWM',
    Covar=var(Fit1$Posterior2),
    Initial.Values = apply(Fit1$Posterior2,2,median)
)
Fit3 <- LaplacesDemon(Model,
    Data=MyData,
    Algorithm='RWM',
    Covar=var(Fit2$Posterior1),
    Initial.Values = apply(Fit2$Posterior1,2,median)
)


Call:
LaplacesDemon(Model = Model, Data = MyData, Initial.Values = apply(Fit2$Posterior1,
    2, median), Covar = var(Fit2$Posterior1), Algorithm = "RWM")

Acceptance Rate: 0.5369
Algorithm: Random-Walk Metropolis
Covariance Matrix: (NOT SHOWN HERE; diagonal shown instead)
    beta[1]     beta[2]
4.852137624 0.003251339

Covariance (Diagonal) History: (NOT SHOWN HERE)
Deviance Information Criterion (DIC):
        All Stationary
Dbar 44.466     44.466
pD    1.974      1.974
DIC  46.440     46.440
Initial Values:
    beta[1]     beta[2]
-11.7220763   0.2901719

Iterations: 10000
Log(Marginal Likelihood): -23.33119
Minutes of run-time: 0.03
Model: (NOT SHOWN HERE)
Monitor: (NOT SHOWN HERE)
Parameters (Number of): 2
Posterior1: (NOT SHOWN HERE)
Posterior2: (NOT SHOWN HERE)
Recommended Burn-In of Thinned Samples: 0
Recommended Burn-In of Un-thinned Samples: 0
Recommended Thinning: 20
Specs: (NOT SHOWN HERE)
Status is displayed every 100 iterations
Summary1: (SHOWN BELOW)
Summary2: (SHOWN BELOW)
Thinned Samples: 1000
Thinning: 10


Summary of All Samples
                Mean         SD        MCSE      ESS          LB     Median
beta[1]  -11.7487675 2.08295510 0.090017378 801.7285 -16.0709597 -11.634496
beta[2]    0.2916762 0.05365258 0.002304645 808.2413   0.1890284   0.289995
Deviance  44.4658467 1.98718601 0.088248062 692.4438  42.5001258  43.836932
LP       -31.0497836 0.99971429 0.044403544 690.7914 -33.6598820 -30.735747
                 UB
beta[1]   -7.737687
beta[2]    0.398553
Deviance  49.737799
LP       -30.058261


Summary of Stationary Samples
                Mean         SD        MCSE      ESS          LB     Median
beta[1]  -11.7487675 2.08295510 0.090017378 801.7285 -16.0709597 -11.634496
beta[2]    0.2916762 0.05365258 0.002304645 808.2413   0.1890284   0.289995
Deviance  44.4658467 1.98718601 0.088248062 692.4438  42.5001258  43.836932
LP       -31.0497836 0.99971429 0.044403544 690.7914 -33.6598820 -30.735747
                 UB
beta[1]   -7.737687
beta[2]    0.398553
Deviance  49.737799
LP       -30.058261

Reflective Slice Sampler

The manual describes this as a difficult algorithm to tune. Indeed, my runs did not seem to give the desired result.
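The call I ended up with, taken from the Call in the output below; the step sizes w are scaled separately for each parameter:

Fit <- LaplacesDemon(Model,
    Data=MyData,
    Iterations=80000,
    Status=2000,
    Thinning=30,
    Algorithm='RSS',
    Specs=list(m=5, w=0.05*c(0.1, 0.002)),  # per-parameter step sizes
    Initial.Values = Initial.Values
)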


Call:
LaplacesDemon(Model = Model, Data = MyData, Initial.Values = Initial.Values,
    Iterations = 80000, Status = 2000, Thinning = 30, Algorithm = "RSS",
    Specs = list(m = 5, w = 0.05 * c(0.1, 0.002)))

Acceptance Rate: 1
Algorithm: Reflective Slice Sampler
Covariance Matrix: (NOT SHOWN HERE; diagonal shown instead)
    beta[1]     beta[2]
2.275190007 0.002086958

Covariance (Diagonal) History: (NOT SHOWN HERE)
Deviance Information Criterion (DIC):
          All Stationary
Dbar   49.146     43.048
pD   1498.707      0.109
DIC  1547.853     43.158
Initial Values:
[1] -10   0

Iterations: 80000
Log(Marginal Likelihood): -20.72254
Minutes of run-time: 1.2
Model: (NOT SHOWN HERE)
Monitor: (NOT SHOWN HERE)
Parameters (Number of): 2
Posterior1: (NOT SHOWN HERE)
Posterior2: (NOT SHOWN HERE)
Recommended Burn-In of Thinned Samples: 2128
Recommended Burn-In of Un-thinned Samples: 63840
Recommended Thinning: 34
Specs: (NOT SHOWN HERE)
Status is displayed every 2000 iterations
Summary1: (SHOWN BELOW)
Summary2: (SHOWN BELOW)
Thinned Samples: 2666
Thinning: 30


Summary of All Samples
                Mean          SD        MCSE       ESS          LB      Median
beta[1]  -12.0360018  1.50814086 0.326429373  5.612834 -14.1006967 -12.4484258
beta[2]    0.2963282  0.04532995 0.009314235 13.964938   0.2059285   0.3089917
Deviance  49.1455311 54.74864402 8.132535667 63.021710  42.4742009  43.1081636
LP       -33.3920123 27.37147853 4.065587914 63.029321 -31.5206568 -30.3749779
                 UB
beta[1]   -8.702122
beta[2]    0.350354
Deviance  45.428853
LP       -30.045841


Summary of Stationary Samples
                Mean         SD        MCSE       ESS          LB      Median
beta[1]  -12.5560430 0.63515155 0.228104908  8.833540 -13.8216246 -12.5461040
beta[2]    0.3114146 0.01605936 0.005788805  6.079825   0.2801353   0.3112473
Deviance  43.0483178 0.46778514 0.105615710 21.341014  42.4904018  42.9445137
LP       -30.3488683 0.24012061 4.065587914 20.065718 -30.9052213 -30.2957432
                  UB
beta[1]  -11.2869843
beta[2]    0.3434128
Deviance  44.1323841
LP       -30.0562172

Sunday, October 19, 2014

Tuning Laplaces Demon II

I am continuing with my attempt to try all algorithms of LaplacesDemon. It is actually quite a bit more work than I expected, but I do find that some things are getting clearer. Now that I am close to the end of this second batch, I have learned that there are loads of adaptive algorithms. The point of those adaptations is not so much getting the correct posterior distribution, but rather gathering enough information to set up the non-adaptive algorithms, which can then get the desired posterior. For example, in this post DRAM is the adaptive version of DRM; together they form such a pairing of algorithms.
Given all that, I may redo this same exercise with a different estimation problem, but that is yet to be decided.

Adaptive-Mixture Metropolis

No specs




Call:
LaplacesDemon(Model = Model, Data = MyData, Initial.Values = Initial.Values,
    Algorithm = "AMM")

Acceptance Rate: 0.284
Algorithm: Adaptive-Mixture Metropolis
Covariance Matrix: (NOT SHOWN HERE; diagonal shown instead)
   beta[1]    beta[2]
2.73756468 0.00197592

Covariance (Diagonal) History: (NOT SHOWN HERE)
Deviance Information Criterion (DIC):
         All Stationary
Dbar  45.095     44.425
pD   234.487      2.231
DIC  279.582     46.656
Initial Values:
[1] -10   0

Iterations: 10000
Log(Marginal Likelihood): NA
Minutes of run-time: 0.05
Model: (NOT SHOWN HERE)
Monitor: (NOT SHOWN HERE)
Parameters (Number of): 2
Posterior1: (NOT SHOWN HERE)
Posterior2: (NOT SHOWN HERE)
Recommended Burn-In of Thinned Samples: 500
Recommended Burn-In of Un-thinned Samples: 5000
Recommended Thinning: 150
Specs: (NOT SHOWN HERE)
Status is displayed every 100 iterations
Summary1: (SHOWN BELOW)
Summary2: (SHOWN BELOW)
Thinned Samples: 1000
Thinning: 10


Summary of All Samples
                Mean          SD        MCSE       ESS          LB      Median
beta[1]  -10.8694827  1.66603784 0.154095363  57.79995 -15.5489329 -10.2511972
beta[2]    0.2682103  0.04423248 0.004039057  58.80563   0.2003859   0.2543845
Deviance  45.0951406 21.65580992 1.178504393 534.75650  42.5189305  43.4676853
LP       -31.3536988 10.82813264 0.589266937 534.75770 -33.8231778 -30.5333506
                  UB
beta[1]   -8.3168119
beta[2]    0.3923192
Deviance  50.0480146
LP       -30.0738495


Summary of Stationary Samples
               Mean        SD        MCSE      ESS          LB      Median
beta[1]  -11.574823 2.1300652 0.205608788 285.1749 -16.3533092 -11.3357688
beta[2]    0.286758 0.0553065 0.005306451 287.9282   0.1924482   0.2804307
Deviance  44.425140 2.1124891 0.191862970 191.2321  42.4735763  43.8636196
LP       -31.027498 1.0677294 0.589266937 190.2838 -34.0072489 -30.7386292
                  UB
beta[1]   -7.9334688
beta[2]    0.4128574
Deviance  50.4694327
LP       -30.0463562

Affine-Invariant Ensemble Sampler

It seems to go somewhere, then gets stuck without an exit.


Call:
LaplacesDemon(Model = Model, Data = MyData, Initial.Values = Initial.Values,
    Iterations = 20000, Status = 2000, Thinning = 35, Algorithm = "AIES",
    Specs = list(Nc = 16, Z = NULL, beta = 1.1, CPUs = 1, Packages = NULL,
        Dyn.libs = NULL))

Acceptance Rate: 0.9773
Algorithm: Affine-Invariant Ensemble Sampler
Covariance Matrix: (NOT SHOWN HERE; diagonal shown instead)
     beta[1]      beta[2]
0.5252284175 0.0004811633

Covariance (Diagonal) History: (NOT SHOWN HERE)
Deviance Information Criterion (DIC):
        All Stationary
Dbar 43.004     43.005
pD    0.053      0.000
DIC  43.057     43.005
Initial Values:
[1] -10   0

Iterations: 20000
Log(Marginal Likelihood): NA
Minutes of run-time: 0.8
Model: (NOT SHOWN HERE)
Monitor: (NOT SHOWN HERE)
Parameters (Number of): 2
Posterior1: (NOT SHOWN HERE)
Posterior2: (NOT SHOWN HERE)
Recommended Burn-In of Thinned Samples: 399
Recommended Burn-In of Un-thinned Samples: 13965
Recommended Thinning: 27
Specs: (NOT SHOWN HERE)
Status is displayed every 2000 iterations
Summary1: (SHOWN BELOW)
Summary2: (SHOWN BELOW)
Thinned Samples: 571
Thinning: 35


Summary of All Samples
                Mean         SD        MCSE       ESS          LB      Median
beta[1]  -10.2521485 0.72528513 0.153054828  9.623682 -12.7424753  -9.9662793
beta[2]    0.2513389 0.01927108 0.004080774 11.791793   0.2404582   0.2438793
Deviance  43.0041950 0.32410474 0.046924334 74.412690  42.5190044  43.0023753
LP       -30.3005775 0.16647750 0.024331033 74.198162 -30.8672783 -30.2965116
                  UB
beta[1]   -9.8180273
beta[2]    0.3153106
Deviance  44.0738314
LP       -30.0671736


Summary of Stationary Samples
                Mean           SD         MCSE      ESS          LB      Median
beta[1]   -9.9558233 0.0082421078 2.797169e-03 12.56157  -9.9743833  -9.9550952
beta[2]    0.2436365 0.0001725992 5.836733e-05 12.63574   0.2433173   0.2436223
Deviance  43.0047636 0.0021518709 7.092913e-04 12.84743  43.0002894  43.0048552
LP       -30.2976030 0.0009940593 2.433103e-02 12.86979 -30.2995034 -30.2976427
                  UB
beta[1]   -9.9405874
beta[2]    0.2440232
Deviance  43.0088727
LP       -30.2955405

Componentwise Hit-And-Run Metropolis

This was never able to get to the target.

Call:
LaplacesDemon(Model = Model, Data = MyData, Initial.Values = Initial.Values,
    Iterations = 40000, Status = 2000, Thinning = 30, Algorithm = "CHARM")

Acceptance Rate: 0.31229
Algorithm: Componentwise Hit-And-Run Metropolis
Covariance Matrix: (NOT SHOWN HERE; diagonal shown instead)
    beta[1]     beta[2]
3.580895236 0.002467357

Covariance (Diagonal) History: (NOT SHOWN HERE)
Deviance Information Criterion (DIC):
        All Stationary
Dbar 44.445     45.021
pD    2.023      2.256
DIC  46.468     47.278
Initial Values:
[1] -10   0

Iterations: 40000
Log(Marginal Likelihood): NA
Minutes of run-time: 0.18
Model: (NOT SHOWN HERE)
Monitor: (NOT SHOWN HERE)
Parameters (Number of): 2
Posterior1: (NOT SHOWN HERE)
Posterior2: (NOT SHOWN HERE)
Recommended Burn-In of Thinned Samples: 1064
Recommended Burn-In of Un-thinned Samples: 31920
Recommended Thinning: 31
Specs: (NOT SHOWN HERE)
Status is displayed every 2000 iterations
Summary1: (SHOWN BELOW)
Summary2: (SHOWN BELOW)
Thinned Samples: 1333
Thinning: 30


Summary of All Samples
                Mean         SD       MCSE      ESS          LB      Median
beta[1]  -10.9964257 1.89283881 0.49785194 13.06079 -14.8229746 -10.9766992
beta[2]    0.2717979 0.04913034 0.01300998 11.03343   0.1856506   0.2705021
Deviance  44.4449406 2.01148697 0.18601589 82.06199  42.4984709  43.8291949
LP       -31.0303916 1.00254773 0.09222924 83.71010 -33.6460481 -30.7196890
                  UB
beta[1]   -7.6255135
beta[2]    0.3698866
Deviance  49.6364683
LP       -30.0586484


Summary of Stationary Samples
                Mean         SD       MCSE       ESS        LB     Median
beta[1]   -9.5579858 1.34107513 0.62509204  4.739982 -12.03957  -9.313436
beta[2]    0.2340237 0.03463118 0.01639968  4.804878   0.18134   0.227444
Deviance  45.0214688 2.12434347 0.28430825 16.656844  42.51149  44.655282
LP       -31.3029682 1.05482519 0.09222924 17.255007 -33.92636 -31.139132
                 UB
beta[1]   -7.398433
beta[2]    0.297938
Deviance  50.284536
LP       -30.061842

Delayed Rejection Adaptive Metropolis

This is an interesting algorithm. During sampling one can see the algorithm shift from a faster to a slower sampling approach; the same shift in gears is seen in the plot. Notice the recommended thinning of 270; in fact, I had runs where it proposed a thinning of 1000. The manual also states, on using DRAM as a final algorithm: 'DRAM may be used if diminishing adaptation occurs and adaptation ceases effectively'. Given these texts and effects, I tried a different experiment, starting with wrong initial values. Indeed, the algorithm was able to get close to the true values in all such runs.



Call:
LaplacesDemon(Model = Model, Data = MyData, Initial.Values = Initial.Values,
    Thinning = 30, Algorithm = "DRAM")

Acceptance Rate: 0.5221
Algorithm: Delayed Rejection Adaptive Metropolis
Covariance Matrix: (NOT SHOWN HERE; diagonal shown instead)
     beta[1]      beta[2]
11.556472479  0.007722216

Covariance (Diagonal) History: (NOT SHOWN HERE)
Deviance Information Criterion (DIC):
             All Stationary
Dbar     470.735     48.093
pD   1803475.978     35.962
DIC  1803946.712     84.055
Initial Values:
[1] -10   0

Iterations: 10000
Log(Marginal Likelihood): NA
Minutes of run-time: 0.2
Model: (NOT SHOWN HERE)
Monitor: (NOT SHOWN HERE)
Parameters (Number of): 2
Posterior1: (NOT SHOWN HERE)
Posterior2: (NOT SHOWN HERE)
Recommended Burn-In of Thinned Samples: 165
Recommended Burn-In of Un-thinned Samples: 4950
Recommended Thinning: 270
Specs: (NOT SHOWN HERE)
Status is displayed every 100 iterations
Summary1: (SHOWN BELOW)
Summary2: (SHOWN BELOW)
Thinned Samples: 333
Thinning: 30


Summary of All Samples
                 Mean           SD         MCSE       ESS           LB
beta[1]   -11.7526275    2.3119401   0.34580302  43.51566   -17.027894
beta[2]     0.2000943    0.4891327   0.02860336 130.61590    -1.487342
Deviance  470.7346820 1899.1977136 105.17220909 100.44119    42.511693
LP       -244.1848392  949.6008219  52.58640512 100.44148 -3481.777935
              Median           UB
beta[1]  -11.6929022   -7.8194808
beta[2]    0.2842366    0.4423707
Deviance  44.3755257 6945.9261923
LP       -31.0009124  -30.0634427


Summary of Stationary Samples
                Mean         SD       MCSE ESS         LB      Median
beta[1]  -11.7250338 2.48894626  0.2213773 168 -17.044268 -11.6851217
beta[2]    0.2921779 0.06547124  0.0053019 168   0.166793   0.2909701
Deviance  48.0932958 8.48081577  0.7474430 168  42.527075  45.2507580
LP       -32.8641423 4.24023747 52.5864051 168 -47.454476 -31.4673576
                  UB
beta[1]   -7.5769778
beta[2]    0.4250974
Deviance  77.3040995
LP       -30.0696373

Delayed Rejection Metropolis

The manual's instruction for this algorithm is to use a covariance matrix from, for instance, DRAM. So I pulled the DRAM covariance matrix and used its summary of stationary samples for the initial values.
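A sketch of how these inputs could be assembled; the object covar in the Call below is not defined in the post, so the extraction from the DRAM fit (here hypothetically named FitDRAM) is my assumption:

covar <- FitDRAM$Covar  # covariance matrix carried over from the DRAM run (assumed)
Fit <- LaplacesDemon(Model,
    Data=MyData,
    Initial.Values=c(-11.72, 0.29),  # rounded stationary means from the DRAM summary
    Covar=covar,
    Algorithm='DRM'
)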


Call:
LaplacesDemon(Model = Model, Data = MyData, Initial.Values = c(-11.72,
    0.29), Covar = covar, Algorithm = "DRM")

Acceptance Rate: 0.5659
Algorithm: Delayed Rejection Metropolis
Covariance Matrix: (NOT SHOWN HERE; diagonal shown instead)
     beta[1]      beta[2]
11.556472479  0.007722216

Covariance (Diagonal) History: (NOT SHOWN HERE)
Deviance Information Criterion (DIC):
         All Stationary
Dbar  48.417     48.417
pD    59.001     59.001
DIC  107.419    107.419
Initial Values:
[1] -11.72   0.29

Iterations: 10000
Log(Marginal Likelihood): -38.65114
Minutes of run-time: 0.09
Model: (NOT SHOWN HERE)
Monitor: (NOT SHOWN HERE)
Parameters (Number of): 2
Posterior1: (NOT SHOWN HERE)
Posterior2: (NOT SHOWN HERE)
Recommended Burn-In of Thinned Samples: 0
Recommended Burn-In of Un-thinned Samples: 0
Recommended Thinning: 10
Specs: (NOT SHOWN HERE)
Status is displayed every 100 iterations
Summary1: (SHOWN BELOW)
Summary2: (SHOWN BELOW)
Thinned Samples: 1000
Thinning: 10


Summary of All Samples
                Mean          SD        MCSE      ESS        LB      Median
beta[1]  -11.6326715  2.89304045 0.111808577 891.8417 -18.12638 -11.5067562
beta[2]    0.2883743  0.07495814 0.002893592 894.3397   0.13874   0.2834856
Deviance  48.4174842 10.86289496 0.377770754 897.9490  42.52784  44.7049759
LP       -33.0262590  5.43029899 0.188915825 897.4927 -47.25058 -31.1763877
                 UB
beta[1]   -6.014343
beta[2]    0.452027
Deviance  76.915884
LP       -30.075104


Summary of Stationary Samples
                Mean          SD        MCSE      ESS        LB      Median
beta[1]  -11.6326715  2.89304045 0.111808577 891.8417 -18.12638 -11.5067562
beta[2]    0.2883743  0.07495814 0.002893592 894.3397   0.13874   0.2834856
Deviance  48.4174842 10.86289496 0.377770754 897.9490  42.52784  44.7049759
LP       -33.0262590  5.43029899 0.188915825 897.4927 -47.25058 -31.1763877
                 UB
beta[1]   -6.014343
beta[2]    0.452027
Deviance  76.915884
LP       -30.075104

Differential Evolution Markov Chain

Following the LP during the run, one can see this algorithm stepping towards the target distribution. The same is visible in the samples.
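The call, taken from the Call in the output below:

Fit <- LaplacesDemon(Model,
    Data=MyData,
    Iterations=70000,
    Status=2000,
    Thinning=36,
    Algorithm='DEMC',
    Specs=list(Nc=3, Z=NULL, gamma=0, w=0.1),  # Nc=3 chains
    Initial.Values = Initial.Values
)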


Call:
LaplacesDemon(Model = Model, Data = MyData, Initial.Values = Initial.Values,
    Iterations = 70000, Status = 2000, Thinning = 36, Algorithm = "DEMC",
    Specs = list(Nc = 3, Z = NULL, gamma = 0, w = 0.1))

Acceptance Rate: 0.94571
Algorithm: Differential Evolution Markov Chain
Covariance Matrix: (NOT SHOWN HERE; diagonal shown instead)
    beta[1]     beta[2]
90.26633832  0.04206898

Covariance (Diagonal) History: (NOT SHOWN HERE)
Deviance Information Criterion (DIC):
          All Stationary
Dbar   89.209     43.944
pD   5238.430      1.706
DIC  5327.639     45.650
Initial Values:
[1] -10   0

Iterations: 70000
Log(Marginal Likelihood): NA
Minutes of run-time: 0.73
Model: (NOT SHOWN HERE)
Monitor: (NOT SHOWN HERE)
Parameters (Number of): 2
Posterior1: (NOT SHOWN HERE)
Posterior2: (NOT SHOWN HERE)
Recommended Burn-In of Thinned Samples: 1164
Recommended Burn-In of Un-thinned Samples: 41904
Recommended Thinning: 32
Specs: (NOT SHOWN HERE)
Status is displayed every 2000 iterations
Summary1: (SHOWN BELOW)
Summary2: (SHOWN BELOW)
Thinned Samples: 1944
Thinning: 36


Summary of All Samples
               Mean          SD        MCSE      ESS           LB      Median
beta[1]  -17.482864   9.5017889  2.35451645 2.787539  -36.9570864 -13.2979369
beta[2]    0.410902   0.2049482  0.04994949 4.145223    0.1652045   0.3204944
Deviance  89.209031 102.3565307 23.04234800 7.261840   42.4986747  44.6939882
LP       -53.548197  51.3373427 11.56914508 7.198904 -167.9946428 -31.1456515
                  UB
beta[1]   -7.1204957
beta[2]    0.7804724
Deviance 317.1315784
LP       -30.0563707


Summary of Stationary Samples
                Mean        SD         MCSE       ESS          LB      Median
beta[1]  -11.9431454 1.7007792  0.215271093 125.13410 -15.7999987 -11.9807022
beta[2]    0.2969431 0.0441033  0.005730276 118.23767   0.2259635   0.2946702
Deviance  43.9443086 1.8471515  0.371329880  63.98536  42.4849394  43.3846382
LP       -30.7905955 0.9340381 11.569145079  63.51772 -33.2807813 -30.5163527
                  UB
beta[1]   -9.0646733
beta[2]    0.4059097
Deviance  48.8635918
LP       -30.0522792

Elliptical Slice Sampler

The manual states: 'This algorithm is applicable only to models in which the prior mean of all parameters is zero.' That is true for my prior, yet I am not impressed at all. Maybe I should center the data or some such, but the current formulation was not a success.
Call:
LaplacesDemon(Model = Model, Data = MyData, Initial.Values = Initial.Values,
    Iterations = 60000, Status = 2000, Thinning = 1000, Algorithm = "ESS")

Acceptance Rate: 1
Algorithm: Elliptical Slice Sampler
Covariance Matrix: (NOT SHOWN HERE; diagonal shown instead)
    beta[1]     beta[2]
1.514016386 0.001094917

Covariance (Diagonal) History: (NOT SHOWN HERE)
Deviance Information Criterion (DIC):
        All Stationary
Dbar 53.903     53.806
pD   11.574     13.346
DIC  65.477     67.152
Initial Values:
[1] -10   0

Iterations: 60000
Log(Marginal Likelihood): NA
Minutes of run-time: 0.77
Model: (NOT SHOWN HERE)
Monitor: (NOT SHOWN HERE)
Parameters (Number of): 2
Posterior1: (NOT SHOWN HERE)
Posterior2: (NOT SHOWN HERE)
Recommended Burn-In of Thinned Samples: 18
Recommended Burn-In of Un-thinned Samples: 18000
Recommended Thinning: 1000
Specs: (NOT SHOWN HERE)
Status is displayed every 2000 iterations
Summary1: (SHOWN BELOW)
Summary2: (SHOWN BELOW)
Thinned Samples: 60
Thinning: 1000


Summary of All Samples
                Mean         SD        MCSE      ESS           LB     Median
beta[1]   -5.9519978 1.12538724 0.199063822 34.62487  -8.11739884  -5.904983
beta[2]    0.1419102 0.02788798 0.004825592 38.79318   0.09329411   0.141184
Deviance  53.9025233 4.81129661 0.854043909 46.96804  46.49740770  53.653605
LP       -35.7152403 2.39947768 0.425934607 46.93833 -41.22733361 -35.594423
                  UB
beta[1]   -3.9669525
beta[2]    0.1932619
Deviance  64.9487765
LP       -32.0237607


Summary of Stationary Samples
                Mean         SD        MCSE      ESS           LB      Median
beta[1]   -5.9962946 1.24391453 0.253903583 22.52123  -8.27636391  -5.9679392
beta[2]    0.1430514 0.03108438 0.006168722 27.21658   0.09456836   0.1467105
Deviance  53.8060933 5.16634523 1.088725764 34.31438  46.18528394  53.6113477
LP       -35.6674227 2.57618453 0.425934607 34.28162 -40.73404524 -35.5728340
                  UB
beta[1]   -4.0287456
beta[2]    0.1938871
Deviance  63.9614942
LP       -31.8687620

Gibbs Sampler

This requires the full conditional distributions to be supplied, hence it is skipped.

Griddy Gibbs

This takes a grid from which a density is estimated and on which sampling is based. It may be a bit difficult for this problem, since the two parameters have different scales while the same grid is used for both. With only two parameters it was possible to take a rather high value for the number of grid points. Even so, I am not so happy with the final outcome.
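The call, taken from the Call in the output below; note the single Grid used for both parameters:

Fit <- LaplacesDemon(Model,
    Data=MyData,
    Iterations=30000,
    Status=2000,
    Thinning=100,
    Algorithm='GG',
    Specs=list(Grid=seq(from=-0.25, to=0.25, len=13),  # same grid for both parameters
        dparm=NULL, CPUs=1, Packages=NULL, Dyn.libs=NULL),
    Initial.Values = Initial.Values
)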
Call:
LaplacesDemon(Model = Model, Data = MyData, Initial.Values = Initial.Values,
    Iterations = 30000, Status = 2000, Thinning = 100, Algorithm = "GG",
    Specs = list(Grid = seq(from = -0.25, to = 0.25, len = 13),
        dparm = NULL, CPUs = 1, Packages = NULL, Dyn.libs = NULL))

Acceptance Rate: 1
Algorithm: Griddy-Gibbs
Covariance Matrix: (NOT SHOWN HERE; diagonal shown instead)
     beta[1]      beta[2]
11.378198005  0.008486228

Covariance (Diagonal) History: (NOT SHOWN HERE)
Deviance Information Criterion (DIC):
          All Stationary
Dbar   66.161     66.161
pD   1339.075   1339.075
DIC  1405.236   1405.236
Initial Values:
[1] -10   0

Iterations: 30000
Log(Marginal Likelihood): NA
Minutes of run-time: 2.09
Model: (NOT SHOWN HERE)
Monitor: (NOT SHOWN HERE)
Parameters (Number of): 2
Posterior1: (NOT SHOWN HERE)
Posterior2: (NOT SHOWN HERE)
Recommended Burn-In of Thinned Samples: 0
Recommended Burn-In of Un-thinned Samples: 0
Recommended Thinning: 900
Specs: (NOT SHOWN HERE)
Status is displayed every 2000 iterations
Summary1: (SHOWN BELOW)
Summary2: (SHOWN BELOW)
Thinned Samples: 300
Thinning: 100


Summary of All Samples
              Mean         SD       MCSE      ESS            LB      Median
beta[1]  -11.00845  3.3782928 0.77873105  23.0348  -18.26815566 -10.7755255
beta[2]    0.27170  0.0909315 0.01994284  30.0812    0.09175425   0.2612613
Deviance  66.16122 51.7508405 2.84979150 300.0000   42.82591844  50.8991200
LP       -41.89256 25.8754878 1.42498409 300.0000 -139.85980754 -34.2665415
                 UB
beta[1]   -4.870858
beta[2]    0.450951
Deviance 262.096671
LP       -30.229348


Summary of Stationary Samples
              Mean         SD       MCSE      ESS            LB      Median
beta[1]  -11.00845  3.3782928 0.77873105  23.0348  -18.26815566 -10.7755255
beta[2]    0.27170  0.0909315 0.01994284  30.0812    0.09175425   0.2612613
Deviance  66.16122 51.7508405 2.84979150 300.0000   42.82591844  50.8991200
LP       -41.89256 25.8754878 1.42498409 300.0000 -139.85980754 -34.2665415
                 UB
beta[1]   -4.870858
beta[2]    0.450951
Deviance 262.096671
LP       -30.229348


Hamiltonian Monte Carlo

A set of specs was found. The acceptance rate is a bit high compared to what the manual recommends.
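The specs that worked, written out as a code block (taken from the Call in the output below):

Fit <- LaplacesDemon(Model,
    Data=MyData,
    Thinning=100,
    Algorithm='HMC',
    Specs=list(epsilon=0.9*c(0.1, 0.01), L=11),  # per-parameter step sizes, 11 leapfrog steps
    Initial.Values = Initial.Values
)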

Call:
LaplacesDemon(Model = Model, Data = MyData, Initial.Values = Initial.Values,
    Thinning = 100, Algorithm = "HMC", Specs = list(epsilon = 0.9 *
        c(0.1, 0.01), L = 11))

Acceptance Rate: 0.8385
Algorithm: Hamiltonian Monte Carlo
Covariance Matrix: (NOT SHOWN HERE; diagonal shown instead)
    beta[1]     beta[2]
3.515108412 0.003083421

Covariance (Diagonal) History: (NOT SHOWN HERE)
Deviance Information Criterion (DIC):
        All Stationary
Dbar 44.429     44.562
pD    1.941      1.869
DIC  46.369     46.431
Initial Values:
[1] -10   0

Iterations: 10000
Log(Marginal Likelihood): NA
Minutes of run-time: 0.59
Model: (NOT SHOWN HERE)
Monitor: (NOT SHOWN HERE)
Parameters (Number of): 2
Posterior1: (NOT SHOWN HERE)
Posterior2: (NOT SHOWN HERE)
Recommended Burn-In of Thinned Samples: 80
Recommended Burn-In of Un-thinned Samples: 8000
Recommended Thinning: 100
Specs: (NOT SHOWN HERE)
Status is displayed every 100 iterations
Summary1: (SHOWN BELOW)
Summary2: (SHOWN BELOW)
Thinned Samples: 100
Thinning: 100


Summary of All Samples
               Mean         SD        MCSE ESS          LB      Median
beta[1]  -11.421073 1.87894067 0.242559120 100 -15.4741232 -11.3104311
beta[2]    0.283175 0.04808956 0.006413119 100   0.1975276   0.2818055
Deviance  44.428764 1.97004886 0.159856183 100  42.5400966  43.7163906
LP       -31.027024 0.98807551 0.080511201 100 -33.5265769 -30.6632451
                  UB
beta[1]   -7.9608191
beta[2]    0.3807289
Deviance  49.3945952
LP       -30.0829425


Summary of Stationary Samples
                Mean         SD        MCSE ESS          LB      Median
beta[1]  -11.0974590 1.92886822 0.226325971  20 -15.4741232 -10.9792610
beta[2]    0.2750898 0.04775153 0.005911688  20   0.2034193   0.2713854
Deviance  44.5623988 1.93322645 0.408444645  20  42.5740058  44.0794655
LP       -31.0902147 0.97037005 0.080511201  20 -33.0194587 -30.8456818
                  UB
beta[1]   -8.2355095
beta[2]    0.3807289
Deviance  48.3972203
LP       -30.0962034

Another set of specs
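As a code block (taken from the Call in the output below): larger steps for beta[1], smaller for beta[2], and more leapfrog steps:

Fit <- LaplacesDemon(Model,
    Data=MyData,
    Thinning=100,
    Algorithm='HMC',
    Specs=list(epsilon=3*c(0.1, 0.001), L=18),
    Initial.Values = Initial.Values
)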


Call:
LaplacesDemon(Model = Model, Data = MyData, Initial.Values = Initial.Values,
    Thinning = 100, Algorithm = "HMC", Specs = list(epsilon = 3 *
        c(0.1, 0.001), L = 18))

Acceptance Rate: 0.8855
Algorithm: Hamiltonian Monte Carlo
Covariance Matrix: (NOT SHOWN HERE; diagonal shown instead)
    beta[1]     beta[2]
3.640714435 0.003207219

Covariance (Diagonal) History: (NOT SHOWN HERE)
Deviance Information Criterion (DIC):
        All Stationary
Dbar 44.404     44.404
pD    2.051      2.051
DIC  46.455     46.455
Initial Values:
[1] -10   0

Iterations: 10000
Log(Marginal Likelihood): NA
Minutes of run-time: 0.96
Model: (NOT SHOWN HERE)
Monitor: (NOT SHOWN HERE)
Parameters (Number of): 2
Posterior1: (NOT SHOWN HERE)
Posterior2: (NOT SHOWN HERE)
Recommended Burn-In of Thinned Samples: 0
Recommended Burn-In of Un-thinned Samples: 0
Recommended Thinning: 100
Specs: (NOT SHOWN HERE)
Status is displayed every 100 iterations
Summary1: (SHOWN BELOW)
Summary2: (SHOWN BELOW)
Thinned Samples: 100
Thinning: 100


Summary of All Samples
                Mean         SD        MCSE ESS          LB      Median
beta[1]  -11.5949171 1.91103354 0.200246790 100 -15.6570246 -11.5727273
beta[2]    0.2867121 0.04916803 0.005146306 100   0.2097083   0.2865395
Deviance  44.4043210 2.02528350 0.193624072 100  42.4813611  43.7159364
LP       -31.0168639 1.01912132 0.097084186 100 -33.7046786 -30.6665144
                  UB
beta[1]   -8.4758710
beta[2]    0.3936658
Deviance  49.8533556
LP       -30.0520014


Summary of Stationary Samples
                Mean         SD        MCSE ESS          LB      Median
beta[1]  -11.5949171 1.91103354 0.200246790 100 -15.6570246 -11.5727273
beta[2]    0.2867121 0.04916803 0.005146306 100   0.2097083   0.2865395
Deviance  44.4043210 2.02528350 0.193624072 100  42.4813611  43.7159364
LP       -31.0168639 1.01912132 0.097084186 100 -33.7046786 -30.6665144
                  UB
beta[1]   -8.4758710
beta[2]    0.3936658
Deviance  49.8533556
LP       -30.0520014

Sunday, October 12, 2014

Tuning LaplacesDemon

I was continuing with my Bayesian-algorithms-in-R exercise. For these exercises I port SAS PROC MCMC examples to the various R solutions. However, the next example was a logit model, and that's just too simple, especially after last week's Jacobian for the Box-Cox transformation; examples for logit and probit models abound on the web. Hence I took the opportunity to experiment a bit with the various algorithms which LaplacesDemon offers, on the logit model. I did nine of them, so I might do some more next week.
One thing I noticed for all algorithms: LaplacesDemon has a built-in calculation which chooses the length of the burn-in and suggests a thinning. This does not work very well. Sometimes it seems to have many samples on the target distribution, yet indicates that all samples are burn-in. Other times it thinks the burn-in is done, yet does not find the expected results. Running several chains, combined with human judgement, seems more efficient than these long chains with unclear automated decisions.
It is a bit scary and disappointing that even though this is a very small and simple model and the number of samples is sometimes large, the posterior sometimes does not contain the frequentist solution. How would I know the answer is wrong for more complex models, where the answer is not so obvious?

Data

Data are from example 3 of SAS PROC MCMC. The PROC MCMC estimates are -11.77 and 0.292, respectively. It is a logistic model. The glm() estimates are thus:
Call:  glm(formula = cbind(y, n - y) ~ x, family = binomial, data = set2)

Coefficients:
(Intercept)            x 
   -11.2736       0.2793 

Degrees of Freedom: 19 Total (i.e. Null);  18 Residual
Null Deviance:        71.8
Residual Deviance: 15.25     AIC: 46.44
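
The actual data and Model definition are in the earlier post; purely for orientation, here is a minimal sketch of what they plausibly look like. The binomial logit and the zero prior means follow from the text (the Elliptical Slice Sampler section requires zero prior means); the prior standard deviation and the column names of set2 are assumptions:

### Sketch only: the real definitions are in the earlier post
MyData <- list(x=set2$x, y=set2$y, n=set2$n,
    mon.names='LP', parm.names=c('beta[1]', 'beta[2]'))
Model <- function(parm, Data) {
    beta <- parm
    ### log-prior: zero-mean normal priors (sd=1000 is an assumption)
    beta.prior <- sum(dnorm(beta, 0, 1000, log=TRUE))
    ### log-likelihood: binomial with logit link
    p <- plogis(beta[1] + beta[2] * Data$x)
    LL <- sum(dbinom(Data$y, Data$n, p, log=TRUE))
    LP <- LL + beta.prior
    list(LP=LP, Dev=-2*LL, Monitor=LP, yhat=p, parm=parm)
}
Initial.Values <- c(-10, 0)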

Setup

For each algorithm I started with the default number of samples and thinning. Based on the output, the thinning and number of samples were then adapted until the thinning was at least as large as the recommended thinning and there was a decent number of samples after burn-in.

MWG


MWG needed 80000 samples before it gave answers. Notice that beta[2] was not mixing very well. The estimates seem off.
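The call, taken from the Call in the output below; there is no Algorithm argument because Metropolis-within-Gibbs is the default:

Fit <- LaplacesDemon(Model,
    Data=MyData,
    Iterations=80000,
    Status=2000,
    Thinning=30,
    Initial.Values = Initial.Values
)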


Call:
LaplacesDemon(Model = Model, Data = MyData, Initial.Values = Initial.Values,
    Iterations = 80000, Status = 2000, Thinning = 30)

Acceptance Rate: 0.24453
Algorithm: Metropolis-within-Gibbs
Covariance Matrix: (NOT SHOWN HERE; diagonal shown instead)
 beta[1]  beta[2]
2.835066 2.835066

Covariance (Diagonal) History: (NOT SHOWN HERE)
Deviance Information Criterion (DIC):
        All Stationary
Dbar 45.242     45.360
pD    1.966      1.955
DIC  47.208     47.315
Initial Values:
[1] -10   0

Iterations: 80000
Log(Marginal Likelihood): -23.51154
Minutes of run-time: 0.35
Model: (NOT SHOWN HERE)
Monitor: (NOT SHOWN HERE)
Parameters (Number of): 2
Posterior1: (NOT SHOWN HERE)
Posterior2: (NOT SHOWN HERE)
Recommended Burn-In of Thinned Samples: 1862
Recommended Burn-In of Un-thinned Samples: 55860
Recommended Thinning: 34
Specs: (NOT SHOWN HERE)
Status is displayed every 2000 iterations
Summary1: (SHOWN BELOW)
Summary2: (SHOWN BELOW)
Thinned Samples: 2666
Thinning: 30


Summary of All Samples
                Mean         SD        MCSE       ESS          LB      Median
beta[1]   -9.0326306 1.04621897 0.220388172  7.322802 -11.2499597  -8.9042795
beta[2]    0.2209152 0.02658052 0.005719566  7.354199   0.1730201   0.2174406
Deviance  45.2421985 1.98273668 0.194161638 22.653492  42.5636767  44.7799650
LP       -31.4080976 0.98541260 0.095926332 23.196296 -33.8514305 -31.1737818
                  UB
beta[1]   -7.1874098
beta[2]    0.2772263
Deviance  50.1501559
LP       -30.0892713


Summary of Stationary Samples
                Mean         SD        MCSE      ESS          LB      Median
beta[1]   -8.9359418 0.99836162 0.349118267 3.269911 -10.6298830  -8.8780160
beta[2]    0.2188631 0.02541074 0.009077136 2.979785   0.1818618   0.2180453
Deviance  45.3601829 1.97741388 0.197527290 5.636751  42.7114169  45.2011226
LP       -31.4661714 0.98254393 0.095926332 5.760722 -33.7158111 -31.3894625
                  UB
beta[1]   -7.4135378
beta[2]    0.2609228
Deviance  49.8800864
LP       -30.1541170


HARM

No specs

I ended with 160000 samples. From the plot I'd say that is somewhat overkill.

Call:
LaplacesDemon(Model = Model, Data = MyData, Initial.Values = Initial.Values,
    Iterations = 160000, Status = 2000, Thinning = 36, Algorithm = "HARM")

Acceptance Rate: 0.06213
Algorithm: Hit-And-Run Metropolis
Covariance Matrix: (NOT SHOWN HERE; diagonal shown instead)
    beta[1]     beta[2]
2.863159336 0.001961247

Covariance (Diagonal) History: (NOT SHOWN HERE)
Deviance Information Criterion (DIC):
        All Stationary
Dbar 44.449     44.449
pD    1.704      1.704
DIC  46.153     46.153
Initial Values:
[1] -10   0

Iterations: 160000
Log(Marginal Likelihood): -23.13795
Minutes of run-time: 0.38
Model: (NOT SHOWN HERE)
Monitor: (NOT SHOWN HERE)
Parameters (Number of): 2
Posterior1: (NOT SHOWN HERE)
Posterior2: (NOT SHOWN HERE)
Recommended Burn-In of Thinned Samples: 0
Recommended Burn-In of Un-thinned Samples: 0
Recommended Thinning: 36
Specs: (NOT SHOWN HERE)
Status is displayed every 2000 iterations
Summary1: (SHOWN BELOW)
Summary2: (SHOWN BELOW)
Thinned Samples: 4444
Thinning: 36


Summary of All Samples
                Mean         SD        MCSE       ESS          LB     Median
beta[1]  -12.5516619 1.69184474 0.251682993  43.36212 -15.7525684 -12.537176
beta[2]    0.3121025 0.04404281 0.006471107  46.72382   0.2306382   0.312301
Deviance  44.4489336 1.84619493 0.166569318 221.64136  42.5029914  43.927156
LP       -31.0503518 0.93299371 0.085590782 211.80050 -33.5115921 -30.783807
                  UB
beta[1]   -9.3734152
beta[2]    0.3968852
Deviance  49.3254872
LP       -30.0617562


Summary of Stationary Samples
                Mean         SD        MCSE       ESS          LB     Median
beta[1]  -12.5516619 1.69184474 0.251682993  43.36212 -15.7525684 -12.537176
beta[2]    0.3121025 0.04404281 0.006471107  46.72382   0.2306382   0.312301
Deviance  44.4489336 1.84619493 0.166569318 221.64136  42.5029914  43.927156
LP       -31.0503518 0.93299371 0.085590782 211.80050 -33.5115921 -30.783807
                  UB
beta[1]   -9.3734152
beta[2]    0.3968852
Deviance  49.3254872
LP       -30.0617562

Specs: list(alpha.star=0.234, B=NULL)

This sometimes got stuck, making all samples the same. Such faulty runs were repeated. In addition, it seems the algorithm is not able to detect a good run. When I came to a whopping 1280000 samples, I decided enough was enough.
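The final call, taken from the Call in the output below:

Fit <- LaplacesDemon(Model,
    Data=MyData,
    Iterations=1280000,
    Status=8000,
    Thinning=42,
    Algorithm='HARM',
    Specs=list(alpha.star=0.234, B=NULL),  # adapt towards a 23.4% acceptance rate
    Initial.Values = Initial.Values
)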



Call:
LaplacesDemon(Model = Model, Data = MyData, Initial.Values = Initial.Values,
    Iterations = 1280000, Status = 8000, Thinning = 42, Algorithm = "HARM",
    Specs = list(alpha.star = 0.234, B = NULL))

Acceptance Rate: 0.23416
Algorithm: Hit-And-Run Metropolis
Covariance Matrix: (NOT SHOWN HERE; diagonal shown instead)
    beta[1]     beta[2]
3.639417089 0.002441388

Covariance (Diagonal) History: (NOT SHOWN HERE)
Deviance Information Criterion (DIC):
        All Stationary
Dbar 44.359     44.328
pD    1.597      1.532
DIC  45.956     45.860
Initial Values:
[1] -10   0

Iterations: 1280000
Log(Marginal Likelihood): -23.38909
Minutes of run-time: 2.91
Model: (NOT SHOWN HERE)
Monitor: (NOT SHOWN HERE)
Parameters (Number of): 2
Posterior1: (NOT SHOWN HERE)
Posterior2: (NOT SHOWN HERE)
Recommended Burn-In of Thinned Samples: 6094
Recommended Burn-In of Un-thinned Samples: 255948
Recommended Thinning: 44
Specs: (NOT SHOWN HERE)
Status is displayed every 8000 iterations
Summary1: (SHOWN BELOW)
Summary2: (SHOWN BELOW)
Thinned Samples: 30476
Thinning: 42


Summary of All Samples
                Mean         SD        MCSE       ESS         LB      Median
beta[1]  -11.8540232 1.90772737 0.141469541  28.83441 -15.618594 -11.8276283
beta[2]    0.2941959 0.04938246 0.003631577  29.75273   0.206555   0.2931946
Deviance  44.3593200 1.78715332 0.031531123 190.88005  42.500061  43.8415966
LP       -30.9974154 0.90059957 0.016062596 182.03381 -33.375122 -30.7354077
                 UB
beta[1]   -8.489146
beta[2]    0.392589
Deviance  49.076360
LP       -30.058998


Summary of Stationary Samples
                Mean         SD        MCSE       ESS          LB      Median
beta[1]  -11.7694571 1.87121305 0.153383946  24.00847 -15.4323928 -11.8230837
beta[2]    0.2920393 0.04844789 0.003937189  24.51596   0.2027361   0.2930413
Deviance  44.3276492 1.75044211 0.033218178 182.18810  42.5044563  43.8258628
LP       -30.9805115 0.87993039 0.016062596 176.04438 -33.2870801 -30.7284209
                  UB
beta[1]   -8.3411645
beta[2]    0.3869563
Deviance  48.9102683
LP       -30.0612554

Adaptive Directional Metropolis-within-Gibbs

No specs

This algorithm seemed able to get stuck outside the target region. As with other algorithms, it seems that increasing the thinning only leads to suggestions to increase it further.
 

Call:
LaplacesDemon(Model = Model, Data = MyData, Initial.Values = Initial.Values,
    Iterations = 250000, Status = 2000, Thinning = 35, Algorithm = "ADMG")

Acceptance Rate: 0.0841
Algorithm: Adaptive Directional Metropolis-within-Gibbs
Covariance Matrix: (NOT SHOWN HERE; diagonal shown instead)
    beta[1]     beta[2]
3.694667553 0.002472681

Covariance (Diagonal) History: (NOT SHOWN HERE)
Deviance Information Criterion (DIC):
        All Stationary
Dbar 44.362     44.362
pD    1.790      1.790
DIC  46.152     46.152
Initial Values:
[1] -10   0

Iterations: 250000
Log(Marginal Likelihood): NA
Minutes of run-time: 1.82
Model: (NOT SHOWN HERE)
Monitor: (NOT SHOWN HERE)
Parameters (Number of): 2
Posterior1: (NOT SHOWN HERE)
Posterior2: (NOT SHOWN HERE)
Recommended Burn-In of Thinned Samples: 0
Recommended Burn-In of Un-thinned Samples: 0
Recommended Thinning: 38
Specs: (NOT SHOWN HERE)
Status is displayed every 2000 iterations
Summary1: (SHOWN BELOW)
Summary2: (SHOWN BELOW)
Thinned Samples: 7142
Thinning: 35


Summary of All Samples
                Mean         SD        MCSE       ESS          LB      Median
beta[1]  -11.5514694 1.92227432 0.252091958  40.70347 -15.7178588 -11.4718511
beta[2]    0.2863682 0.04966254 0.006471524  40.98657   0.1965967   0.2829988
Deviance  44.3618406 1.89210234 0.068523057 227.09714  42.4858150  43.7976307
LP       -30.9951604 0.95027797 0.034622622 221.64249 -33.4547190 -30.7101949
                  UB
beta[1]   -8.0953413
beta[2]    0.3936748
Deviance  49.2894939
LP       -30.0514927


Summary of Stationary Samples
                Mean         SD        MCSE       ESS          LB      Median
beta[1]  -11.5514694 1.92227432 0.252091958  40.70347 -15.7178588 -11.4718511
beta[2]    0.2863682 0.04966254 0.006471524  40.98657   0.1965967   0.2829988
Deviance  44.3618406 1.89210234 0.068523057 227.09714  42.4858150  43.7976307
LP       -30.9951604 0.95027797 0.034622622 221.64249 -33.4547190 -30.7101949
                  UB
beta[1]   -8.0953413
beta[2]    0.3936748
Deviance  49.2894939
LP       -30.0514927

The second run uses Specs: list(n = 0, Periodicity = 100).

This seems to be a run which does not mix very well in beta[1]. The posterior is not on the target either; however, I do not see an obvious characteristic in the output from which to notice the latter.
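A more formal check than eyeballing, as a sketch: the BMK diagnostic in LaplacesDemon computes Hellinger distances between batches of samples, and values of roughly 0.5 or more point at non-stationarity.
# Hellinger distances between 10 batches of the thinned samples;
# values near or above 0.5 suggest non-stationarity
BMK.Diagnostic(Fit$Posterior1, batches=10)
# Consort(Fit) prints the full advice (burn-in, thinning, stationarity)
Consort(Fit)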

Call:
LaplacesDemon(Model = Model, Data = MyData, Initial.Values = Initial.Values,
    Iterations = 64000, Status = 2000, Thinning = 33, Algorithm = "ADMG",
    Specs = list(n = 0, Periodicity = 100))

Acceptance Rate: 0.09229
Algorithm: Adaptive Directional Metropolis-within-Gibbs
Covariance Matrix: (NOT SHOWN HERE; diagonal shown instead)
    beta[1]     beta[2]
4.718765252 0.003150388

Covariance (Diagonal) History: (NOT SHOWN HERE)
Deviance Information Criterion (DIC):
        All Stationary
Dbar 44.625     44.368
pD    1.848      0.914
DIC  46.473     45.282
Initial Values:
[1] -10   0

Iterations: 64000
Log(Marginal Likelihood): NA
Minutes of run-time: 0.36
Model: (NOT SHOWN HERE)
Monitor: (NOT SHOWN HERE)
Parameters (Number of): 2
Posterior1: (NOT SHOWN HERE)
Posterior2: (NOT SHOWN HERE)
Recommended Burn-In of Thinned Samples: 1544
Recommended Burn-In of Un-thinned Samples: 50952
Recommended Thinning: 32
Specs: (NOT SHOWN HERE)
Status is displayed every 2000 iterations
Summary1: (SHOWN BELOW)
Summary2: (SHOWN BELOW)
Thinned Samples: 1939
Thinning: 33


Summary of All Samples
               Mean         SD       MCSE      ESS          LB      Median
beta[1]  -11.314847 2.17303791 0.51002442 11.27064 -15.1041582 -11.4860852
beta[2]    0.279974 0.05601643 0.01310839 10.57305   0.1747337   0.2834938
Deviance  44.625141 1.92250413 0.19216964 59.12247  42.5136669  44.1268753
LP       -31.124617 0.95756642 0.09526603 60.52000 -33.5147916 -30.8800217
                  UB
beta[1]   -7.2420670
beta[2]    0.3795564
Deviance  49.4135552
LP       -30.0677263


Summary of Stationary Samples
                Mean         SD        MCSE       ESS          LB      Median
beta[1]  -13.3566117 0.72016481 0.299751317  5.473664 -15.0713103 -13.1769958
beta[2]    0.3325182 0.01971691 0.007968755  6.324196   0.3014903   0.3294776
Deviance  44.3682834 1.35175074 0.137999589 30.199161  42.7948298  44.0306538
LP       -31.0192877 0.68081646 0.095266030 28.411038 -32.6277457 -30.8439440
                  UB
beta[1]  -12.2282779
beta[2]    0.3766297
Deviance  47.5762853
LP       -30.2209601

Adaptive Griddy-Gibbs

A thinning of 1000 was proposed.
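The Grid in the Specs consists of the three Gauss-Hermite quadrature nodes; spelled out, the call below reads:
Fit <- LaplacesDemon(Model,
    Data=MyData,
    Initial.Values=Initial.Values,
    Iterations=80000, Status=2000, Thinning=1000,
    Algorithm='AGG',
    Specs=list(Grid=GaussHermiteQuadRule(3)$nodes, dparm=NULL,
        smax=Inf, CPUs=1, Packages=NULL, Dyn.libs=NULL))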

Call:
LaplacesDemon(Model = Model, Data = MyData, Initial.Values = Initial.Values,
    Iterations = 80000, Status = 2000, Thinning = 1000, Algorithm = "AGG",
    Specs = list(Grid = GaussHermiteQuadRule(3)$nodes, dparm = NULL,
        smax = Inf, CPUs = 1, Packages = NULL, Dyn.libs = NULL))

Acceptance Rate: 1
Algorithm: Adaptive Griddy-Gibbs
Covariance Matrix: (NOT SHOWN HERE; diagonal shown instead)
    beta[1]     beta[2]
19.42384441  0.01546567

Covariance (Diagonal) History: (NOT SHOWN HERE)
Deviance Information Criterion (DIC):
          All Stationary
Dbar   53.739     53.739
pD   2320.547   2320.547
DIC  2374.286   2374.286
Initial Values:
[1] -10   0

Iterations: 80000
Log(Marginal Likelihood): NA
Minutes of run-time: 2.73
Model: (NOT SHOWN HERE)
Monitor: (NOT SHOWN HERE)
Parameters (Number of): 2
Posterior1: (NOT SHOWN HERE)
Posterior2: (NOT SHOWN HERE)
Recommended Burn-In of Thinned Samples: 0
Recommended Burn-In of Un-thinned Samples: 0
Recommended Thinning: 1000
Specs: (NOT SHOWN HERE)
Status is displayed every 2000 iterations
Summary1: (SHOWN BELOW)
Summary2: (SHOWN BELOW)
Thinned Samples: 80
Thinning: 1000


Summary of All Samples
               Mean         SD       MCSE ESS          LB      Median
beta[1]  -10.922145  4.4338567 0.50560810  80 -16.5329510 -11.2359881
beta[2]    0.269604  0.1214608 0.01367873  80   0.1295182   0.2783504
Deviance  53.738973 68.1255818 7.52694767  80  42.5397709  44.3774010
LP       -35.684516 34.0782560 3.76525972  80 -40.1716354 -31.0031726
                  UB
beta[1]   -5.6863543
beta[2]    0.4200815
Deviance  62.5858557
LP       -30.0729465


Summary of Stationary Samples
               Mean         SD       MCSE ESS          LB      Median
beta[1]  -10.922145  4.4338567 0.50560810  80 -16.5329510 -11.2359881
beta[2]    0.269604  0.1214608 0.01367873  80   0.1295182   0.2783504
Deviance  53.738973 68.1255818 7.52694767  80  42.5397709  44.3774010
LP       -35.684516 34.0782560 3.76525972  80 -40.1716354 -31.0031726
                  UB
beta[1]   -5.6863543
beta[2]    0.4200815
Deviance  62.5858557
LP       -30.0729465

Adaptive Hamiltonian Monte Carlo
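
AHMC takes a step size per parameter (epsilon), a number of leapfrog steps (L) and an adaptation period; as the Call echo below shows, this run was started as:
Fit <- LaplacesDemon(Model,
    Data=MyData,
    Initial.Values=Initial.Values,
    Iterations=150000, Status=2000, Thinning=36,
    Algorithm='AHMC',
    Specs=list(epsilon=rep(0.02, length(Initial.Values)),
        L=2, Periodicity=10))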


Call:
LaplacesDemon(Model = Model, Data = MyData, Initial.Values = Initial.Values,
    Iterations = 150000, Status = 2000, Thinning = 36, Algorithm = "AHMC",
    Specs = list(epsilon = rep(0.02, length(Initial.Values)),
        L = 2, Periodicity = 10))

Acceptance Rate: 0.36918
Algorithm: Adaptive Hamiltonian Monte Carlo
Covariance Matrix: (NOT SHOWN HERE; diagonal shown instead)
     beta[1]      beta[2]
1.1442983280 0.0008202951

Covariance (Diagonal) History: (NOT SHOWN HERE)
Deviance Information Criterion (DIC):
        All Stationary
Dbar 43.736     43.682
pD    1.214      1.272
DIC  44.950     44.954
Initial Values:
[1] -10   0

Iterations: 150000
Log(Marginal Likelihood): NA
Minutes of run-time: 1.74
Model: (NOT SHOWN HERE)
Monitor: (NOT SHOWN HERE)
Parameters (Number of): 2
Posterior1: (NOT SHOWN HERE)
Posterior2: (NOT SHOWN HERE)
Recommended Burn-In of Thinned Samples: 2912
Recommended Burn-In of Un-thinned Samples: 104832
Recommended Thinning: 36
Specs: (NOT SHOWN HERE)
Status is displayed every 2000 iterations
Summary1: (SHOWN BELOW)
Summary2: (SHOWN BELOW)
Thinned Samples: 4166
Thinning: 36


Summary of All Samples
                Mean         SD        MCSE        ESS          LB      Median
beta[1]  -11.4418192 1.06961398 0.191704524   7.357239 -14.1113189 -11.4442594
beta[2]    0.2837334 0.02830491 0.004949075   9.071151   0.2346106   0.2835194
Deviance  43.7356469 1.55816164 0.036118007 526.351697  42.4689017  43.1991684
LP       -30.6795260 0.78105728 0.018367981 492.409683 -32.7826226 -30.4108318
                  UB
beta[1]   -9.6350028
beta[2]    0.3515657
Deviance  47.9431739
LP       -30.0443227


Summary of Stationary Samples
                Mean         SD       MCSE        ESS          LB      Median
beta[1]  -11.9854873 0.56317415 0.15499286   8.169076 -12.9867511 -11.9369396
beta[2]    0.2982346 0.01576288 0.00396860  11.739663   0.2675003   0.2982497
Deviance  43.6824281 1.59499752 0.04774315 545.963385  42.4718352  43.1363939
LP       -30.6588754 0.79841066 0.01836798 518.072910 -33.0652295 -30.3900004
                  UB
beta[1]  -10.8925810
beta[2]    0.3269875
Deviance  48.4916661
LP       -30.0465224

Adaptive Metropolis

Thinning is high at 800, but the mixing is nice.
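If I read the Specs right, adaptation begins after iteration 200 and recurs every 200 iterations:
Fit <- LaplacesDemon(Model,
    Data=MyData,
    Initial.Values=Initial.Values,
    Iterations=80000, Status=2000, Thinning=800,
    Algorithm='AM',
    Specs=list(Adaptive=200, Periodicity=200))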

Call:
LaplacesDemon(Model = Model, Data = MyData, Initial.Values = Initial.Values,
    Iterations = 80000, Status = 2000, Thinning = 800, Algorithm = "AM",
    Specs = list(Adaptive = 200, Periodicity = 200))

Acceptance Rate: 0.26118
Algorithm: Adaptive Metropolis
Covariance Matrix: (NOT SHOWN HERE; diagonal shown instead)
     beta[1]      beta[2]
11.850286307  0.008006239

Covariance (Diagonal) History: (NOT SHOWN HERE)
Deviance Information Criterion (DIC):
        All Stationary
Dbar 44.437     44.437
pD    1.630      1.630
DIC  46.067     46.067
Initial Values:
[1] -10   0

Iterations: 80000
Log(Marginal Likelihood): NA
Minutes of run-time: 0.36
Model: (NOT SHOWN HERE)
Monitor: (NOT SHOWN HERE)
Parameters (Number of): 2
Posterior1: (NOT SHOWN HERE)
Posterior2: (NOT SHOWN HERE)
Recommended Burn-In of Thinned Samples: 0
Recommended Burn-In of Un-thinned Samples: 0
Recommended Thinning: 800
Specs: (NOT SHOWN HERE)
Status is displayed every 2000 iterations
Summary1: (SHOWN BELOW)
Summary2: (SHOWN BELOW)
Thinned Samples: 100
Thinning: 800


Summary of All Samples
                Mean         SD        MCSE ESS          LB      Median
beta[1]  -11.7026349 1.97444818 0.188498237 100 -15.4468354 -11.6906840
beta[2]    0.2905268 0.04989136 0.004767928 100   0.2047257   0.2898906
Deviance  44.4368803 1.80543310 0.166862161 100  42.5326068  43.8582816
LP       -31.0345215 0.90945995 0.084411163 100 -33.4044687 -30.7391583
                  UB
beta[1]   -8.3034612
beta[2]    0.3801087
Deviance  49.1461781
LP       -30.0717992


Summary of Stationary Samples
                Mean         SD        MCSE ESS          LB      Median
beta[1]  -11.7026349 1.97444818 0.188498237 100 -15.4468354 -11.6906840
beta[2]    0.2905268 0.04989136 0.004767928 100   0.2047257   0.2898906
Deviance  44.4368803 1.80543310 0.166862161 100  42.5326068  43.8582816
LP       -31.0345215 0.90945995 0.084411163 100 -33.4044687 -30.7391583
                  UB
beta[1]   -8.3034612
beta[2]    0.3801087
Deviance  49.1461781
LP       -30.0717992

Adaptive Metropolis-within-Gibbs

A nice example where LaplacesDemon first made me increase the number of iterations, and then I suddenly got a run where it decided no burn-in was needed. Needless to say, with over a million iterations I did not increase the thinning any further.
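The final call, written out:
Fit <- LaplacesDemon(Model,
    Data=MyData,
    Initial.Values=Initial.Values,
    Iterations=1200000, Status=2000, Thinning=41,
    Algorithm='AMWG',
    Specs=list(Periodicity=200))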

Call:
LaplacesDemon(Model = Model, Data = MyData, Initial.Values = Initial.Values,
    Iterations = 1200000, Status = 2000, Thinning = 41, Algorithm = "AMWG",
    Specs = list(Periodicity = 200))

Acceptance Rate: 0.24915
Algorithm: Adaptive Metropolis-within-Gibbs
Covariance Matrix: (NOT SHOWN HERE; diagonal shown instead)
 beta[1]  beta[2]
2.835066 2.835066

Covariance (Diagonal) History: (NOT SHOWN HERE)
Deviance Information Criterion (DIC):
        All Stationary
Dbar 44.407     44.407
pD    1.896      1.896
DIC  46.304     46.304
Initial Values:
[1] -10   0

Iterations: 1200000
Log(Marginal Likelihood): NA
Minutes of run-time: 4.95
Model: (NOT SHOWN HERE)
Monitor: (NOT SHOWN HERE)
Parameters (Number of): 2
Posterior1: (NOT SHOWN HERE)
Posterior2: (NOT SHOWN HERE)
Recommended Burn-In of Thinned Samples: 0
Recommended Burn-In of Un-thinned Samples: 0
Recommended Thinning: 44
Specs: (NOT SHOWN HERE)
Status is displayed every 2000 iterations
Summary1: (SHOWN BELOW)
Summary2: (SHOWN BELOW)
Thinned Samples: 29268
Thinning: 41


Summary of All Samples
               Mean         SD        MCSE       ESS          LB     Median
beta[1]  -11.826585 1.96434369 0.146404922  39.11735 -15.8139146 -11.715120
beta[2]    0.293541 0.05072947 0.003808978  38.10170   0.1983134   0.290513
Deviance  44.407436 1.94752859 0.026815581 197.45010  42.4872692  43.822249
LP       -31.021258 0.98017230 0.013578417 192.69216 -33.7006842 -30.727256
                  UB
beta[1]   -8.0854510
beta[2]    0.3961088
Deviance  49.7428566
LP       -30.0525790


Summary of Stationary Samples
               Mean         SD        MCSE       ESS          LB     Median
beta[1]  -11.826585 1.96434369 0.146404922  39.11735 -15.8139146 -11.715120
beta[2]    0.293541 0.05072947 0.003808978  38.10170   0.1983134   0.290513
Deviance  44.407436 1.94752859 0.026815581 197.45010  42.4872692  43.822249
LP       -31.021258 0.98017230 0.013578417 192.69216 -33.7006842 -30.727256
                  UB
beta[1]   -8.0854510
beta[2]    0.3961088
Deviance  49.7428566
LP       -30.0525790

Common Code

All calls use the same data, model and plotting code, as given below.
# data come as triples: n (number of trials), y (successes), x (covariate)
s1 <- scan(what=list(integer(),double(),double()),text='
6 0 25.7  8 2 35.9 5 2 32.9 7 7 50.4 6 0 28.3 
7 2 32.3  5 1 33.2 8 3 40.9 6 0 36.5 6 1 36.5
6 6 49.6  6 3 39.8 6 4 43.6 6 1 34.1 7 1 37.4
8 2 35.2  6 6 51.3 5 3 42.5 7 0 31.3 3 2 40.6')
set2 <- data.frame(n=s1[[1]],x=s1[[3]],y=s1[[2]])
set2
glm(cbind(y,n-y)~ x,data=set2,family=binomial)
library(LaplacesDemon)
J <- 2 #Number of parameters
mon.names <- "LP"
parm.names <- c("beta[1]","beta[2]")
# PGF: parameter-generating function; draws initial values from the prior
PGF <- function(Data) return(rnormv(Data$J,0,1000))
MyData <- list(J=J, PGF=PGF, n=set2$n, mon.names=mon.names,
    parm.names=parm.names, x=set2$x, y=set2$y)

Model <- function(parm, Data)
{
    ### Parameters
    beta <- parm[1:Data$J]
    ### Log-Prior
    beta.prior <- sum(dnormv(beta, 0, 1000, log=TRUE))
    ### Log-Likelihood
    mu <- beta[1] + beta[2]*Data$x
    p <- invlogit(mu)
    LL <- sum(dbinom(Data$y, Data$n, p, log=TRUE))
    ### Log-Posterior
    LP <- LL + beta.prior
    Modelout <- list(LP=LP, Dev=-2*LL, Monitor=LP,
        yhat=rbinom(length(p), Data$n, p), parm=parm)
    return(Modelout)
}
Initial.Values <- c(-10,0)
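Not part of the original code, but a cheap sanity check is to evaluate the model once at the initial values before handing it to a sampler:
# should give a finite log-posterior and deviance
out <- Model(Initial.Values, MyData)
out$LP
out$Dev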


# trace plots: left column Posterior1 (all thinned samples), right column
# Posterior2 (stationary samples); top row beta[1], bottom row beta[2]
myplot <- function(where='none') {
    if (where!='none') {
        png(paste(where,'.png',sep=''))
    }
    par(mfrow=c(2,2),mar=rep(2,4),oma=c(0,0,2,0))
    plot(ts(Fit$Posterior1[,1]),ylab='')
    plot(ts(Fit$Posterior2[,1]),ylab='')
    plot(ts(Fit$Posterior1[,2]),ylab='')
    plot(ts(Fit$Posterior2[,2]),ylab='')
    title(main=Fit$Algorithm,outer=TRUE)
    if (where!='none') dev.off()
}
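myplot() picks up Fit from the global environment, so after any of the runs above the plots are made by, for instance:
myplot()        # plot to the active graphics device
myplot('AMWG')  # or write the same four panels to AMWG.png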