
Dirichlet-multinomial distribution versus Pitman-Yor-multinomial
March 18, 2015

If you’re familiar with the Dirichlet distribution as the workhorse for discrete Bayesian modelling, then you should know about the Dirichlet-multinomial. This is what happens when you combine a Dirichlet with a multinomial and integrate/marginalise out the common probability vector.
Graphically, you have a probability vector $\mu$ that is the mean for a probability vector $\theta$. You then have data, a count vector $n = (n_1, \ldots, n_K)$, drawn using a multinomial distribution with mean $\theta$. If we integrate out the vector $\theta$ then we get the Dirichlet-multinomial.
The standard formulation is to have:

$$\theta \sim \text{Dirichlet}_K(\alpha\,\mu), \qquad n \sim \text{Multinomial}(N, \theta), \qquad \text{where } N = \sum_{k=1}^K n_k .$$
Manipulating this, we get the distribution for the Dirichlet-multinomial:

$$p(n \,|\, \alpha, \mu) \;=\; \binom{N}{n_1 \cdots n_K} \, \frac{\text{Beta}(\alpha\mu + n)}{\text{Beta}(\alpha\mu)} ,$$

where $\text{Beta}(\cdot)$ is the multivariate Beta function, $\text{Beta}(x) = \prod_k \Gamma(x_k) \big/ \Gamma\!\left(\sum_k x_k\right)$.
Now we will rewrite this using the Pochhammer symbol notation we use for Pitman-Yor marginals, where

| symbol | definition |
| --- | --- |
| $(x \mid y)_n$ | the Pochhammer symbol (a generalised version is used, and a different notation), a form of rising factorial given by $(x \mid y)_n = \prod_{i=0}^{n-1} (x + i\,y)$ |
| $(x)_n$ | (special case) the related rising factorial given by $(x)_n = (x \mid 1)_n = x (x+1) \cdots (x+n-1)$ |
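To make the notation concrete, here is a minimal Python sketch of these two symbols (the function names `gen_pochhammer` and `pochhammer` are mine, not from any library):

```python
import math

def gen_pochhammer(x, y, n):
    """Generalised Pochhammer symbol (x|y)_n = x (x+y) (x+2y) ... (x+(n-1)y)."""
    result = 1.0
    for i in range(n):
        result *= x + i * y
    return result

def pochhammer(x, n):
    """Rising factorial (x)_n = (x|1)_n = x (x+1) ... (x+n-1)."""
    return gen_pochhammer(x, 1.0, n)

# Quick check against the Gamma-function identity (x)_n = Gamma(x+n) / Gamma(x).
x, n = 2.5, 4
assert abs(pochhammer(x, n) - math.gamma(x + n) / math.gamma(x)) < 1e-6
```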
The Beta functions simplify to yield the Dirichlet-multinomial in Pochhammer symbol notation:

$$p(n \,|\, \alpha, \mu) \;=\; \binom{N}{n_1 \cdots n_K} \, \frac{\prod_k (\alpha\mu_k)_{n_k}}{(\alpha)_N} .$$
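As a sanity check on that simplification, here is a short sketch (in Python; the helper names are mine, and only `scipy.special.gammaln` is assumed) that evaluates the log-probability both via the Beta-function ratio and via the Pochhammer products, and confirms they agree:

```python
import numpy as np
from scipy.special import gammaln

def log_dirmult_beta(n, alpha, mu):
    """log p(n | alpha, mu) via the ratio of multivariate Beta functions."""
    n = np.asarray(n, dtype=float)
    a = alpha * np.asarray(mu, dtype=float)
    N = n.sum()
    log_choose = gammaln(N + 1) - gammaln(n + 1).sum()
    log_beta_ratio = (gammaln(a + n).sum() - gammaln((a + n).sum())
                      - gammaln(a).sum() + gammaln(a.sum()))
    return log_choose + log_beta_ratio

def log_dirmult_pochhammer(n, alpha, mu):
    """log p(n | alpha, mu) via prod_k (alpha*mu_k)_{n_k} / (alpha)_N."""
    N = int(sum(n))
    log_choose = gammaln(N + 1) - sum(gammaln(n_k + 1) for n_k in n)
    log_num = sum(np.log(alpha * mu_k + i) for mu_k, n_k in zip(mu, n) for i in range(n_k))
    log_den = sum(np.log(alpha + i) for i in range(N))
    return log_choose + log_num - log_den

n, alpha, mu = [3, 0, 2], 1.5, [0.2, 0.3, 0.5]
assert np.isclose(log_dirmult_beta(n, alpha, mu), log_dirmult_pochhammer(n, alpha, mu))
```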
Now let’s do the same with the Pitman-Yor process (PYP).
The derivation of the combination is more detailed, but it can be found in the Buntine-Hutter arXiv report or my tutorials. For this, you have to introduce a new latent vector $t = (t_1, \ldots, t_K)$, where $t_k$ represents the subset of the count $n_k$ that will be passed up from the $\theta$-node up to the $\mu$-node to convey information about the data $n$. Keep this phrase in mind; it will be explained a bit later. These $t_k$ are constrained as follows:
| | |
| --- | --- |
| constraint | $0 \le t_k \le n_k$ |
| constraint | $t_k = 0$ if and only if $n_k = 0$ |
| total | $N = \sum_k n_k$ |
| total | $T = \sum_k t_k$ |
With these, we get the PYP-multinomial in Pochhammer symbol notation:

$$p(n, t \,|\, d, \alpha, \mu) \;=\; \binom{N}{n_1 \cdots n_K} \, \frac{(\alpha \mid d)_T}{(\alpha)_N} \, \prod_k S^{n_k}_{t_k, d} \, \mu_k^{t_k} ,$$

where $d$ is the discount and $\alpha$ the concentration parameter of the PYP, and $S^{n}_{t,d}$ is a generalised Stirling number.
You can see the three main differences. With the PYP:

- the terms in $\mu$ appear as simple powers, $\mu_k^{t_k}$, just like a multinomial likelihood, so this form is readily used in a hierarchy;
- you now have to work with the generalised Stirling numbers $S^{n_k}_{t_k,d}$ (a short sketch of computing these follows this list); and
- you have to introduce the new latent vector $t$.
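The generalised Stirling numbers can be tabulated with the two-term recurrence $S^{n+1}_{t,d} = S^{n}_{t-1,d} + (n - t\,d)\,S^{n}_{t,d}$, starting from $S^{0}_{0,d} = 1$; for $d = 0$ they reduce to the unsigned Stirling numbers of the first kind. Here is a minimal Python sketch of that tabulation (the function names are mine; real implementations, like the C library mentioned below, work in log space to avoid overflow):

```python
import numpy as np

def gen_stirling_table(n_max, d):
    """Table S[n, t] of generalised Stirling numbers S^n_{t,d}, 0 <= t <= n <= n_max,
    via the recurrence S^{n+1}_{t,d} = S^n_{t-1,d} + (n - t*d) * S^n_{t,d}."""
    S = np.zeros((n_max + 1, n_max + 1))
    S[0, 0] = 1.0
    for n in range(n_max):
        for t in range(1, n + 2):
            S[n + 1, t] = S[n, t - 1] + (n - t * d) * S[n, t]
    return S

def gen_pochhammer(x, y, m):
    """Generalised Pochhammer symbol (x|y)_m."""
    out = 1.0
    for i in range(m):
        out *= x + i * y
    return out

# Sanity check: summing out t recovers a plain Pochhammer symbol,
# i.e. sum_t S^n_{t,d} (x|d)_t = (x)_n.
n, d, x = 5, 0.3, 1.7
S = gen_stirling_table(n, d)
lhs = sum(S[n, t] * gen_pochhammer(x, d, t) for t in range(n + 1))
assert np.isclose(lhs, gen_pochhammer(x, 1.0, n))
```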
The key thing is that $\mu$ only appears in the expression $\prod_k \mu_k^{t_k}$, which is a multinomial likelihood. So, as said earlier, $t_k$ represents the subset of the count $n_k$ that will be passed up from the $\theta$-node up to the $\mu$-node. If $t_k = n_k$ for all $k$, then all the data is passed up, and the likelihood looks like $\prod_k \mu_k^{n_k}$, which is what you would get if $\theta = \mu$.
If we use a Dirichlet process (DP) rather than a PYP, the only simplification is that $(\alpha \mid d)_T$ simplifies to $\alpha^T$ (since the discount $d$ is zero). This small change means that the above formula can be rewritten as:

$$p(n, t \,|\, \alpha, \mu) \;=\; \binom{N}{n_1 \cdots n_K} \, \frac{1}{(\alpha)_N} \, \prod_k S^{n_k}_{t_k, 0} \, (\alpha\mu_k)^{t_k} .$$

This has quite broad implications as the $t_k$ are now independent variables! In fact, their posterior distribution takes the form:

$$p(t_k \,|\, n_k, \alpha, \mu_k) \;\propto\; S^{n_k}_{t_k, 0} \, (\alpha\mu_k)^{t_k} .$$
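Since the $t_k$ decouple in the DP case, each one can be resampled directly from this discrete posterior. Below is a minimal sketch, under my own naming, that enumerates the posterior over $t_k$ and draws from it; it is only meant to illustrate the formula, not the samplers actually used in practice:

```python
import numpy as np

def stirling_column(n, d=0.0):
    """S^n_{t,d} for t = 0..n, built row by row from the same recurrence as above."""
    S = np.zeros(n + 1)
    S[0] = 1.0
    for m in range(n):
        new = np.zeros(n + 1)
        for t in range(1, m + 2):
            new[t] = S[t - 1] + (m - t * d) * S[t]
        S = new
    return S

def sample_t(n_k, alpha, mu_k, rng):
    """Draw t_k from p(t_k | n_k, alpha, mu_k) proportional to S^{n_k}_{t_k,0} (alpha*mu_k)^{t_k}."""
    if n_k == 0:
        return 0                          # the constraint forces t_k = 0 when n_k = 0
    S = stirling_column(n_k, d=0.0)       # S^{n_k}_{0,0} = 0 here, so t_k >= 1 automatically
    t = np.arange(n_k + 1)
    w = S * (alpha * mu_k) ** t
    return rng.choice(n_k + 1, p=w / w.sum())

rng = np.random.default_rng(0)
print([sample_t(10, 0.5, 0.2, rng) for _ in range(5)])
```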
The penalty for using the PYP or DP is that you now have to work with the generalised Stirling numbers and the latent vector $t$. With the right software, the Stirling numbers are easy to handle. While my students have built their own Java packages for that, I use a C library available from MLOSS. The latent vectors $t$ at each node require different sampling algorithms.
Now one final note: this isn’t how we implement these. Sampling the full range of $t_k$ is too slow … there are too many possibilities. Moreover, if sampling $t_k$ alone, you ignore the remainder of the hierarchy. For fast mixing of Markov chains, you want to sample a cluster of related variables. The hierarchical CRP does this implicitly as it resamples and samples up and down the parent hierarchy. So to achieve this same effect using the marginalised posteriors above, we have to Booleanise (turn into a Boolean) the $t_k$’s and sample a cluster of related Booleans up and down the hierarchy. We figured out how to do this in 2011; the paper appeared at ECML.