Nested sampling manual

==Introduction==

Generally, nested sampling is used for calculating integrals, e.g. the evidence in Bayesian model selection problems. This program concentrates only on the problem of selecting the number of components in a Mixture of [Spherical] Gaussians (MOGs) given observed data. In this problem, the likelihood is a product of MOGs, and we assume that the prior is uniform (or truncated log-uniform for the standard-deviation parameter) over the parameter space.
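Concretely, the quantity being estimated is the evidence

:<math> Z = \int L(\theta)\,\pi(\theta)\,d\theta, </math>

where <math>L(\theta)</math> is the MOG likelihood of the observed data and <math>\pi(\theta)</math> is the (truncated) uniform prior described above; model selection then compares <math>Z</math> across different numbers of components.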

===NOTE for Oli===

To test the correctness of our implementations, I also provide a simple MOG likelihood for us [see the developer note under <math>Nclus</math> below]: a mixture of three spherical Gaussians with <math>\sigma = 1</math>, so that the integral result will be <math>3 \cdot (2\pi)^{D/2}</math>, where <math>D</math> is the number of dimensions in the parameter space.
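As a quick sanity check of that value (assuming the test likelihood is the unnormalized sum of three unit-variance spherical Gaussian bumps centred at some <math>c_1, c_2, c_3</math>, and taking the prior density as 1 over the integration region):

:<math> \int_{\mathbb{R}^D} \sum_{k=1}^{3} \exp\!\left(-\tfrac{1}{2}\|\theta - c_k\|^2\right) d\theta \;=\; \sum_{k=1}^{3} (2\pi)^{D/2} \;=\; 3 \cdot (2\pi)^{D/2}, </math>

since each unnormalized spherical Gaussian with <math>\sigma = 1</math> integrates to <math>(2\pi)^{D/2}</math>.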

==Program parameters==

===Main parameters===

Normally, nested sampling is controlled by 4 main parameters:

====<math>Nclus</math> (no default value)====

This defines the number of dimensions of the parameter space, <math>D</math>.

# In our problem of learning (spherical) MOGs, <math>D = (2+d) \cdot Nclus</math>, where <math>d</math> is the dimension of the data space; to visualize the result, I usually use <math>d = 2</math> (see the sketch after this list).
# (**just for developers**) If the likelihood is a simple MOG, I define <math>D = Nclus</math> (see (6.2)).
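To make the <math>D = (2+d) \cdot Nclus</math> count concrete, here is a minimal sketch of one plausible flat parameter layout, assuming each component contributes <math>d</math> mean coordinates, one standard deviation, and one mixing weight (the function name and field order are illustrative, not the program's actual convention):

<pre>
import numpy as np

def unpack_mog_params(theta, Nclus, d):
    """Illustrative split of a flat vector of length D = (2 + d) * Nclus into
    per-component means, standard deviations and mixing weights.
    (Hypothetical layout; the real program may order fields differently.)"""
    theta = np.asarray(theta, dtype=float)
    assert theta.size == (2 + d) * Nclus
    means   = theta[: d * Nclus].reshape(Nclus, d)   # d mean coordinates per component
    sigmas  = theta[d * Nclus : (d + 1) * Nclus]     # one std. dev. per component
    weights = theta[(d + 1) * Nclus :]               # one mixing weight per component
    return means, sigmas, weights

# Example: Nclus = 3 components in d = 2 data dimensions gives D = (2 + 2) * 3 = 12.
means, sigmas, weights = unpack_mog_params(np.arange(12.0), Nclus=3, d=2)
</pre>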


====<math>Next</math> (default = <math>150 \cdot D \log D</math>)====

The degree of the extreme value distribution (Skilling, 2006; eq. (17)). This is the number of initial points for each nested sampling iteration, which we use to solve the problem of 'sampling from a truncated prior' (my MCMCMC paper). This number also controls the stability of nested sampling (greater means more stable).
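For reference, the extreme value distribution in question is the standard nested-sampling shrinkage result (quoted here as background, not taken from the program): if <math>Next</math> points are drawn uniformly with respect to the prior inside the current likelihood constraint, the fraction <math>t</math> of prior mass enclosed by the worst of them is distributed as the largest of <math>Next</math> Uniform(0,1) variables,

:<math> P(t) = Next \cdot t^{\,Next-1}, \qquad \mathbb{E}[\ln t] = -\frac{1}{Next}, </math>

so a larger <math>Next</math> gives a smaller and more predictable compression of prior mass per iteration, which is why it controls stability.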


====<math>Nwalk</math> (default = <math>D \log D</math>)====

The so-called 'burn-in' parameter in the MCMC literature. This parameter is used to solve the problem of 'sampling from a truncated prior'.


====<math>Niter</math> (default = <math>2 \cdot Next \cdot Nwalk</math>)====

The (estimated) maximum number of nested sampling iterations.

I will write some theoretical/heuristic explanations about all default values later...

===Minor parameters===

(5) test_likelihood (default [undefined]): if defined, the program will switch the likelihood to the simple mixture of spherical Gaussians explained above.


(6) walk (default 1): determines the type of random walk:

(6.1) walk = 1: use slice sampling with hyperrectangles (Neal, 2003; section 5.1); see the sketch after (6.2).

(6.2) walk = 2: use a Metropolis-Hastings sampler with a GP proposal (our project).
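For walk = 1, here is a minimal sketch of one hyperrectangle slice-sampling step in the spirit of Neal (2003, section 5.1), adapted to the hard constraint <math>L(\theta) > L^*</math> of nested sampling; the function signature and variable names are illustrative assumptions, not the program's actual interface:

<pre>
import numpy as np

def hyperrect_slice_step(x, loglike, logL_star, widths, lo, hi, rng):
    """One slice-sampling move with a shrinking hyperrectangle (cf. Neal 2003, sec. 5.1).
    Leaves the uniform distribution on the slice
    {theta in [lo, hi] : loglike(theta) > logL_star} invariant.
    The current point x must already lie in the slice, so the shrinkage loop terminates."""
    def in_slice(theta):
        # The slice is the prior box intersected with the likelihood constraint.
        return np.all(theta >= lo) and np.all(theta <= hi) and loglike(theta) > logL_star

    D = x.size
    # Randomly position a hyperrectangle of edge lengths `widths` around x.
    left = x - widths * rng.random(D)
    right = left + widths
    while True:
        prop = left + (right - left) * rng.random(D)
        if in_slice(prop):
            return prop
        # Rejected: shrink each axis toward the current point, keeping x inside.
        shrink = prop < x
        left[shrink] = prop[shrink]
        right[~shrink] = prop[~shrink]
</pre>

Applying <math>Nwalk</math> such steps in sequence is the burn-in walk that produces a roughly independent draw from the truncated prior.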


(7) Ngp: if walk = 2, we have to define this number. It defines the number of pseudo-walks that use the GP as an approximation of the real likelihood.