Journal of Machine Learning Research 12 (2011) 1185-1224                   Submitted 3/10; Revised 3/11; Published 4/11

            The Indian Buffet Process: An Introduction and Review

Thomas L. Griffiths                                                         tom_griffiths@berkeley.edu
Department of Psychology
University of California, Berkeley
Berkeley, CA 94720-1650, USA
Zoubin Ghahramani∗                                                                   zoubin@eng.cam.ac.uk
Department of Engineering
University of Cambridge
Cambridge CB2 1PZ, UK

Editor: David M. Blei

                                                           Abstract
     The Indian buffet process is a stochastic process defining a probability distribution over equiva-
     lence classes of sparse binary matrices with a finite number of rows and an unbounded number of
     columns. This distribution is suitable for use as a prior in probabilistic models that represent objects
     using a potentially infinite array of features, or that involve bipartite graphs in which the size of at
     least one class of nodes is unknown. We give a detailed derivation of this distribution, and illustrate
     its use as a prior in an infinite latent feature model. We then review recent applications of the Indian
     buffet process in machine learning, discuss its extensions, and summarize its connections to other
     stochastic processes.
     Keywords: nonparametric Bayes, Markov chain Monte Carlo, latent variable models, Chinese
     restaurant processes, beta process, exchangeable distributions, sparse binary matrices

1. Introduction
Unsupervised learning aims to recover the latent structure responsible for generating observed data.
One of the key problems faced by unsupervised learning algorithms is thus determining the amount
of latent structure—the number of clusters, dimensions, or variables—needed to account for the
regularities expressed in the data. Often, this is treated as a model selection problem, choosing
the model with the dimensionality that results in the best performance. This treatment of the prob-
lem assumes that there is a single, finite-dimensional representation that correctly characterizes the
properties of the observed objects. An alternative is to assume that the amount of latent structure is
actually potentially unbounded, and that the observed objects only manifest a sparse subset of those
classes or features (Rasmussen and Ghahramani, 2001).
    The assumption that the observed data manifest a subset of an unbounded amount of latent
structure is often used in nonparametric Bayesian statistics, and has recently become increasingly
popular in machine learning. In particular, this assumption is made in Dirichlet process mixture
models, which are used for nonparametric density estimation (Antoniak, 1974; Escobar and West,
1995; Ferguson, 1983; Neal, 2000). Under one interpretation of a Dirichlet process mixture model,
each datapoint is assigned to a latent class, and each class is associated with a distribution over

 ∗. Also at the Machine Learning Department, Carnegie Mellon University, Pittsburgh PA 15213, USA.

© 2011 Thomas L. Griffiths and Zoubin Ghahramani.

observable properties. The prior distribution over assignments of datapoints to classes is specified
in such a way that the number of classes used by the model is bounded only by the number of
objects, making Dirichlet process mixture models “infinite” mixture models (Rasmussen, 2000).
     Recent work has extended Dirichlet process mixture models in a number of directions, making
it possible to use nonparametric Bayesian methods to discover the kinds of structure common in
machine learning: hierarchies (Blei et al., 2004; Heller and Ghahramani, 2005; Neal, 2003; Teh
et al., 2008), topics and syntactic classes (Teh et al., 2004) and the objects appearing in images
(Sudderth et al., 2006). However, the fact that all of these models are based upon the Dirichlet
process limits the kinds of latent structure that they can express. In many of these models, each
object described in a data set is associated with a latent variable that picks out a single class or
parameter responsible for generating that datapoint. In contrast, many models used in unsupervised
learning represent each object as having multiple features or being produced by multiple causes.
For instance, we could choose to represent each object with a binary vector, with entries indicating
the presence or absence of each feature (e.g., Ueda and Saito, 2003), allow each feature to take on
a continuous value, representing datapoints with locations in a latent space (e.g., Jolliffe, 1986), or
define a factorial model, in which each feature takes on one of a discrete set of values (e.g., Zemel
and Hinton, 1994; Ghahramani, 1995). Infinite versions of these models are difficult to define using
the Dirichlet process.
     In this paper, we summarize recent work exploring the extension of this nonparametric approach
to models in which objects are represented using an unknown number of latent features. Following
Griffiths and Ghahramani (2005, 2006), we provide a detailed derivation of a distribution that can be
used to define probabilistic models that represent objects with infinitely many binary features, and
can be combined with priors on feature values to produce factorial and continuous representations.
This distribution can be specified in terms of a simple stochastic process called the Indian buffet
process, by analogy to the Chinese restaurant process used in Dirichlet process mixture models. We
illustrate how the Indian buffet process can be used to specify prior distributions in latent feature
models, using a simple linear-Gaussian model to show how such models can be defined and used.
     The Indian buffet process can also be used to define a prior distribution in any setting where the
latent structure expressed in data can be expressed in the form of a binary matrix with a finite number
of rows and infinite number of columns, such as the adjacency matrix of a bipartite graph where one
class of nodes is of unknown size, or the adjacency matrix for a Markov process with an unbounded
set of states. As a consequence, this approach has found a number of recent applications within
machine learning. We review these applications, summarizing some of the innovations that have
been introduced in order to use the Indian buffet process in different settings, as well as extensions
to the basic model and alternative inference algorithms. We also describe some of the interesting
connections to other stochastic processes that have been identified. As for the Chinese restaurant
process, we can arrive at the Indian buffet process in a number of different ways: as the infinite limit
of a finite model, via the constructive specification of an infinite model, or by marginalizing out an
underlying measure. Each perspective provides different intuitions, and suggests different avenues
for designing inference algorithms and generalizations.
     The plan of the paper is as follows. Section 2 summarizes the principles behind infinite mixture
models, focusing on the prior on class assignments assumed in these models, which can be defined in
terms of a simple stochastic process—the Chinese restaurant process. We then develop a distribution
on infinite binary matrices by considering how this approach can be extended to the case where
objects are represented with multiple binary features. Section 3 discusses the role of such a


distribution in defining infinite latent feature models. Section 4 derives the distribution, making use
of the Indian buffet process. Section 5 illustrates how this distribution can be used as a prior in a
nonparametric Bayesian model, defining an infinite-dimensional linear-Gaussian model, deriving a
sampling algorithm for inference in this model, and applying it to two simple data sets. Section 6
describes further applications of this approach, both in latent feature models and for inferring graph
structures, and Section 7 discusses recent work extending the Indian buffet process and providing
connections to other stochastic processes. Section 8 presents conclusions and directions for future
work.

2. Latent Class Models
Assume we have N objects, with the ith object having D observable properties represented by a row
vector xi . In a latent class model, such as a mixture model, each object is assumed to belong to
a single class, ci , and the properties xi are generated from a distribution determined by that class.
Using the matrix X = [x_1^T x_2^T · · · x_N^T]^T to indicate the properties of all N objects, and the vector
c = [c_1 c_2 · · · c_N]^T to indicate their class assignments, the model is specified by a prior over assignment
vectors P(c), and a distribution over property matrices conditioned on those assignments, p(X|c).1
These two distributions can be dealt with separately: P(c) specifies the number of classes and their
relative probability, while p(X|c) determines how these classes relate to the properties of objects.
In this section, we will focus on the prior over assignment vectors, P(c), showing how such a prior
can be defined without placing an upper bound on the number of classes.

2.1 Finite Mixture Models
Mixture models assume that the assignment of an object to a class is independent of the assignments
of all other objects. If there are K classes, we have

                                          P(c|θ) = ∏_{i=1}^N P(c_i | θ) = ∏_{i=1}^N θ_{c_i},

where θ is a multinomial distribution over those classes, and θk is the probability of class k under
that distribution. Under this assumption, the probability of the properties of all N objects X can be
written as
                                         p(X|θ) = ∏_{i=1}^N ∑_{k=1}^K p(x_i | c_i = k) θ_k.                                              (1)

The distribution from which each xi is generated is thus a mixture of the K class distributions
p(xi |ci = k), with θk determining the weight of class k.
    The mixture weights θ can be treated as a parameter to be estimated. In Bayesian approaches
to mixture modeling, θ is assumed to follow a prior distribution p(θ), with a standard choice being
a symmetric Dirichlet distribution. The Dirichlet distribution on multinomials over K classes has
parameters α1 , α2 , . . . , αK , and is conjugate to the multinomial (e.g., Bernardo and Smith, 1994).

 1. We will use P(·) to indicate probability mass functions, and p(·) to indicate probability density functions. We will
    assume that xi ∈ RD , and p(X|c) is thus a density, although variants of the models we discuss also exist for discrete
    data.


The probability density for the parameter θ of a multinomial distribution is given by
                                        p(θ) = \frac{∏_{k=1}^K θ_k^{α_k − 1}}{D(α_1, α_2, . . . , α_K)},

in which D(α1 , α2 , . . . , αK ) is the Dirichlet normalizing constant
                                D(α_1, α_2, . . . , α_K) = ∫_{∆_K} ∏_{k=1}^K θ_k^{α_k − 1} dθ
                                                         = \frac{∏_{k=1}^K Γ(α_k)}{Γ(∑_{k=1}^K α_k)},                    (2)

where ∆_K is the simplex of multinomials over K classes, and Γ(·) is the gamma, or generalized
factorial, function, with Γ(m) = (m − 1)! for any positive integer m. In a symmetric Dirichlet
distribution, all α_k are equal. For example, we could take α_k = α/K for all k. In this case, Equation 2
becomes
                                      D(α/K, α/K, . . . , α/K) = \frac{Γ(α/K)^K}{Γ(α)},
and the mean of θ is the multinomial that is uniform over all classes.
   The probability model that we have defined is

                                      θ | α ∼ Dirichlet(α/K, α/K, . . . , α/K),
                                     c_i | θ ∼ Discrete(θ)

where Discrete(θ) is the multiple-outcome analogue of a Bernoulli event, where the probabilities
of the outcomes are specified by θ (i.e., P(ci = k|θ) = θk ). The dependencies among variables in
this model are shown in Figure 1. Having defined a prior on θ, we can simplify this model by
integrating over all values of θ rather than representing them explicitly. The marginal probability of
an assignment vector c, integrating over all values of θ, is
                              P(c) = ∫_{∆_K} ∏_{i=1}^N P(c_i | θ) p(θ) dθ
                                   = ∫_{∆_K} \frac{∏_{k=1}^K θ_k^{m_k + α/K − 1}}{D(α/K, α/K, . . . , α/K)} dθ
                                   = \frac{D(m_1 + α/K, m_2 + α/K, . . . , m_K + α/K)}{D(α/K, α/K, . . . , α/K)}
                                   = \frac{∏_{k=1}^K Γ(m_k + α/K)}{Γ(α/K)^K} \frac{Γ(α)}{Γ(N + α)},                            (3)

where m_k = ∑_{i=1}^N δ(c_i = k) is the number of objects assigned to class k. The tractability of this
integral is a result of the fact that the Dirichlet is conjugate to the multinomial.
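
As an illustrative aside, Equation 3 is straightforward to evaluate from the class counts. The short Python sketch below is our own (the function name and example values are chosen for illustration, not taken from the paper); it computes log P(c) and shows that permuting the objects leaves the probability unchanged.

```python
import numpy as np
from scipy.special import gammaln

def log_prob_assignment(c, K, alpha):
    """Log of Equation 3: marginal probability of the assignment vector c
    under a symmetric Dirichlet(alpha/K) prior on the mixing weights."""
    c = np.asarray(c)
    N = len(c)
    m = np.array([np.sum(c == k) for k in range(K)])        # class counts m_k
    return (np.sum(gammaln(m + alpha / K)) - K * gammaln(alpha / K)
            + gammaln(alpha) - gammaln(N + alpha))

# Exchangeability: permuting the objects leaves the probability unchanged.
print(log_prob_assignment([0, 0, 1], K=3, alpha=1.0))
print(log_prob_assignment([0, 1, 0], K=3, alpha=1.0))
```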
    Equation 3 defines a joint probability distribution for all class assignments c in which individual
class assignments are not independent. Rather, they are exchangeable (Bernardo and Smith, 1994),
with the probability of an assignment vector remaining the same when the indices of the objects are
permuted. Exchangeability is a desirable property in a distribution over class assignments, because



Figure 1: Graphical model for the Dirichlet-multinomial model used in defining the Chinese restau-
          rant process. Nodes are variables, arrows indicate dependencies, and plates (Buntine,
          1994) indicate replicated structures.

we have no special knowledge about the objects that would justify treating them differently from
one another. However, the distribution on assignment vectors defined by Equation 3 assumes an
upper bound on the number of classes of objects, since it only allows assignments of objects to up
to K classes.

2.2 Infinite Mixture Models
Intuitively, defining an infinite mixture model means that we want to specify the probability of X in
terms of infinitely many classes, modifying Equation 1 to become
                                   p(X|θ) = ∏_{i=1}^N ∑_{k=1}^∞ p(x_i | c_i = k) θ_k,

where θ is an infinite-dimensional multinomial distribution. In order to repeat the argument above,
we would need to define a prior, p(θ), on infinite-dimensional multinomials, and compute the prob-
ability of c by integrating over θ. This is essentially the strategy that is taken in deriving infinite
mixture models from the Dirichlet process (Antoniak, 1974; Ferguson, 1983; Ishwaran and James,
2001; Sethuraman, 1994). Instead, we will work directly with the distribution over assignment
vectors given in Equation 3, considering its limit as the number of classes approaches infinity (cf.,
Green and Richardson, 2001; Neal, 1992, 2000).
    Expanding the gamma functions in Equation 3 using the recursion Γ(x) = (x − 1)Γ(x − 1) and
cancelling terms produces the following expression for the probability of an assignment vector c:
                           P(c) = \left(\frac{α}{K}\right)^{K_+} \left( ∏_{k=1}^{K_+} ∏_{j=1}^{m_k − 1} (j + α/K) \right) \frac{Γ(α)}{Γ(N + α)},                       (4)

where K_+ is the number of classes for which m_k > 0, and we have re-ordered the indices such that
m_k > 0 for all k ≤ K_+. There are K^N possible values for c, which diverges as K → ∞. As this
happens, the probability of any single set of class assignments goes to 0. Since K_+ ≤ N and N is
finite, it is clear that P(c) → 0 as K → ∞, since 1/K^{K_+} → 0. Consequently, we will define a distribution
over equivalence classes of assignment vectors, rather than the vectors themselves.
     Specifically, we will define a distribution on partitions of objects. In our setting, a partition
is a division of the set of N objects into subsets, where each object belongs to a single subset
and the ordering of the subsets does not matter. Two assignment vectors that result in the same
division of objects correspond to the same partition. For example, if we had three objects, the class


assignments {c1 , c2 , c3 } = {1, 1, 2} would correspond to the same partition as {2, 2, 1}, since all that
differs between these two cases is the labels of the classes. A partition thus defines an equivalence
class of assignment vectors, which we denote [c], with two assignment vectors belonging to the same
equivalence class if they correspond to the same partition. A distribution over partitions is sufficient
to allow us to define an infinite mixture model, provided the prior distribution on the parameters is
the same for all classes. In this case, these equivalence classes of class assignments are the same as
those induced by identifiability: p(X|c) is the same for all assignment vectors c that correspond to
the same partition, so we can apply statistical inference at the level of partitions rather than the level
of assignment vectors.
    Assume we have a partition of N objects into K_+ subsets, and we have K = K_0 + K_+ class
labels that can be applied to those subsets. Then there are K!/K_0! assignment vectors c that belong to
the equivalence class defined by that partition, [c]. We can define a probability distribution over
partitions by summing over all class assignments that belong to the equivalence class defined by
each partition. The probability of each of those class assignments is equal under the distribution
specified by Equation 4, so we obtain

                       P([c]) = ∑_{c ∈ [c]} P(c)
                              = \frac{K!}{K_0!} \left(\frac{α}{K}\right)^{K_+} \left( ∏_{k=1}^{K_+} ∏_{j=1}^{m_k − 1} (j + α/K) \right) \frac{Γ(α)}{Γ(N + α)}.

Rearranging the first two terms, we can compute the limit of the probability of a partition as K → ∞,
which is
                        lim_{K→∞} α^{K_+} · \frac{K!}{K_0! K^{K_+}} · \left( ∏_{k=1}^{K_+} ∏_{j=1}^{m_k − 1} (j + α/K) \right) · \frac{Γ(α)}{Γ(N + α)}
                           = α^{K_+} ·  1  · \left( ∏_{k=1}^{K_+} (m_k − 1)! \right) · \frac{Γ(α)}{Γ(N + α)}.                    (5)

The details of the steps taken in computing this limit are given in Appendix A. These limiting
probabilities define a valid distribution over partitions, and thus over equivalence classes of class
assignments, providing a prior over class assignments for an infinite mixture model. Objects are
exchangeable under this distribution, just as in the finite case: the probability of a partition is not
affected by the ordering of the objects, since it depends only on the counts mk .
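
To make Equation 5 concrete, the following sketch (again illustrative code of our own, with a hypothetical function name) evaluates the limiting probability of a partition from its block sizes and checks that the probabilities of all partitions of three objects sum to one.

```python
import math

def log_prob_partition(block_sizes, alpha):
    """Log of Equation 5: limiting probability of a partition with block sizes m_k."""
    K_plus = len(block_sizes)
    N = sum(block_sizes)
    return (K_plus * math.log(alpha)
            + sum(math.lgamma(m) for m in block_sizes)      # log (m_k - 1)!
            + math.lgamma(alpha) - math.lgamma(N + alpha))

# For N = 3 there is 1 partition with block sizes {3}, 3 with {2, 1}, and 1 with
# {1, 1, 1}; their probabilities should sum to one.
alpha = 1.0
total = (math.exp(log_prob_partition([3], alpha))
         + 3 * math.exp(log_prob_partition([2, 1], alpha))
         + math.exp(log_prob_partition([1, 1, 1], alpha)))
print(total)   # 1.0 up to floating-point error
```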
    As noted above, the distribution over partitions specified by Equation 5 can be derived in a vari-
ety of ways—by taking limits (Green and Richardson, 2001; Neal, 1992, 2000), from the Dirichlet
process (Blackwell and MacQueen, 1973), or from other equivalent stochastic processes (Ishwaran
and James, 2001; Sethuraman, 1994). We will briefly discuss a simple process that produces the
same distribution over partitions: the Chinese restaurant process.

2.3 The Chinese Restaurant Process
The Chinese restaurant process (CRP) was named by Jim Pitman and Lester Dubins, based upon
a metaphor in which the objects are customers in a restaurant, and the classes are the tables at
which they sit (the process first appears in Aldous 1985, where it is attributed to Pitman, although


Figure 2: A partition induced by the Chinese restaurant process. Numbers indicate customers (ob-
          jects), circles indicate tables (classes).

it is identical to the extended Polya urn scheme introduced by Blackwell and MacQueen 1973).
Imagine a restaurant with an infinite number of tables, each with an infinite number of seats.2 The
customers enter the restaurant one after another, and each chooses a table at random. In the CRP
with parameter α, each customer chooses an occupied table with probability proportional to the
number of occupants, and chooses the next vacant table with probability proportional to α. For
example, Figure 2 shows the state of a restaurant after 10 customers have chosen tables using this
procedure. The first customer chooses the first table with probability α/α = 1. The second customer
chooses the first table with probability 1/(1 + α), and the second table with probability α/(1 + α). After the
second customer chooses the second table, the third customer chooses the first table with probability
1/(2 + α), the second table with probability 1/(2 + α), and the third table with probability α/(2 + α). This process
continues until all customers have seats, defining a distribution over allocations of people to tables,
and, more generally, objects to classes. Extensions of the CRP and connections to other stochastic
processes are pursued in depth by Pitman (2002).
     The distribution over partitions induced by the CRP is the same as that given in Equation 5. If
we assume an ordering on our N objects, then we can assign them to classes sequentially using the
method specified by the CRP, letting objects play the role of customers and classes play the role of
tables. The ith object would be assigned to the kth class with probability
                               P(c_i = k | c_1, c_2, . . . , c_{i−1}) = \begin{cases} \frac{m_k}{i − 1 + α} & k ≤ K_+ \\ \frac{α}{i − 1 + α} & k = K_+ + 1 \end{cases}

where mk is the number of objects currently assigned to class k, and K+ is the number of classes for
which mk > 0. If all N objects are assigned to classes via this process, the probability of a partition
of objects c is that given in Equation 5. The CRP thus provides an intuitive means of specifying a
prior for infinite mixture models, as well as revealing that there is a simple sequential process by
which exchangeable class assignments can be generated.
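
The sequential scheme is easy to simulate. The sketch below (a minimal illustration, not an implementation from the paper; the function name is ours) draws class assignments for N objects from the CRP with parameter α.

```python
import numpy as np

def sample_crp(N, alpha, rng=None):
    """Sample class assignments for N objects from the Chinese restaurant process.

    Customer i joins an occupied table k with probability m_k / (i - 1 + alpha)
    and starts a new table with probability alpha / (i - 1 + alpha)."""
    rng = np.random.default_rng() if rng is None else rng
    assignments = [0]          # the first customer sits at the first table
    counts = [1]               # occupants per table, m_k
    for i in range(2, N + 1):
        probs = np.array(counts + [alpha]) / (i - 1 + alpha)
        k = rng.choice(len(probs), p=probs)
        if k == len(counts):   # a previously unoccupied table
            counts.append(1)
        else:
            counts[k] += 1
        assignments.append(k)
    return assignments

print(sample_crp(10, alpha=1.0))
```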

2.4 Inference by Gibbs Sampling
Inference in an infinite mixture model is only slightly more complicated than inference in a mixture
model with a finite, fixed number of classes. The standard algorithm used for inference in infinite
mixture models is Gibbs sampling (Bush and MacEachern, 1996; Neal, 2000). Gibbs sampling
 2. Pitman and Dubins, both statisticians at the University of California, Berkeley, were inspired by the apparently infinite
    capacity of Chinese restaurants in San Francisco when they named the process.


is a Markov chain Monte Carlo (MCMC) method, in which variables are successively sampled
from their distributions when conditioned on the current values of all other variables (Geman and
Geman, 1984). This process defines a Markov chain, which ultimately converges to the distribution
of interest (see Gilks et al., 1996). Recent work has also explored variational inference algorithms
for these models (Blei and Jordan, 2006), a topic we will return to later in the paper.
    Implementing a Gibbs sampler requires deriving the full conditional distribution for all variables
to be sampled. In a mixture model, these variables are the class assignments c. The relevant full
conditional distribution is P(ci |c−i , X), the probability distribution over ci conditioned on the class
assignments of all other objects, c−i , and the data, X. By applying Bayes’ rule, this distribution can
be expressed as
                                P(ci = k|c−i , X) ∝ p(X|c)P(ci = k|c−i ),
where only the second term on the right hand side depends upon the distribution over class assign-
ments, P(c). Here we assume that the parameters associated with each class can be integrated out,
so that the probability of the data depends only on the class assignments. This is possible when a
conjugate prior is used on these parameters. For details, and alternative algorithms that can be used
when this assumption is violated, see Neal (2000).
    In a finite mixture model with P(c) defined as in Equation 3, we can compute P(ci = k|c−i ) by
integrating over θ, obtaining
                            P(c_i = k | c_{−i}) = ∫ P(c_i = k | θ) p(θ | c_{−i}) dθ
                                                = \frac{m_{−i,k} + α/K}{N − 1 + α},                                      (6)
where m−i,k is the number of objects assigned to class k, not including object i. This is the posterior
predictive distribution for a multinomial distribution with a Dirichlet prior.
     In an infinite mixture model with a distribution over class assignments defined as in Equation 5,
we can use exchangeability to find the full conditional distribution. Since it is exchangeable, P([c])
is unaffected by the ordering of objects. Thus, we can choose an ordering in which the ith object
is the last to be assigned to a class. It follows directly from the definition of the Chinese restaurant
process that

                              P(c_i = k | c_{−i}) = \begin{cases} \frac{m_{−i,k}}{N − 1 + α} & m_{−i,k} > 0 \\ \frac{α}{N − 1 + α} & k = K_{−i,+} + 1 \\ 0 & \text{otherwise} \end{cases}                           (7)
where K−i,+ is the number of classes for which m−i,k > 0. The same result can be found by taking
the limit of the full conditional distribution in the finite model, given by Equation 6 (Neal, 2000).
     When combined with some choice of p(X|c), Equations 6 and 7 are sufficient to define Gibbs
samplers for finite and infinite mixture models respectively. Demonstrations of Gibbs sampling
in infinite mixture models are provided by Neal (2000) and Rasmussen (2000). Similar MCMC
algorithms are presented in Bush and MacEachern (1996), West et al. (1994), Escobar and West
(1995) and Ishwaran and James (2001). Algorithms that go beyond the local changes in class
assignments allowed by a Gibbs sampler are given by Jain and Neal (2004) and Dahl (2003).
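
For concreteness, the following sketch shows how the prior term in Equation 7 would enter a single Gibbs update. The code is purely illustrative: the log-likelihood is left as a user-supplied placeholder, and the convention of using −1 to mark a previously unused class is ours.

```python
import numpy as np

def crp_conditional(c, i, alpha):
    """Prior term P(c_i = k | c_{-i}) from Equation 7 for the infinite model.

    Returns the labels of the classes occupied by the other objects together
    with the probabilities of joining each of them or a new class."""
    c_minus = np.delete(np.asarray(c), i)
    labels, m = np.unique(c_minus, return_counts=True)   # counts m_{-i,k}
    N = len(c)
    probs = np.append(m, alpha) / (N - 1 + alpha)
    return labels, probs

def gibbs_step(c, i, alpha, log_lik, rng):
    """One Gibbs update of c_i, combining the CRP prior with a user-supplied
    log-likelihood log_lik(k) for assigning object i to class k
    (k = -1 stands for a previously unused class)."""
    labels, prior = crp_conditional(c, i, alpha)
    options = list(labels) + [-1]
    logp = np.log(prior) + np.array([log_lik(k) for k in options])
    p = np.exp(logp - logp.max())
    c[i] = rng.choice(options, p=p / p.sum())
    return c
```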

2.5 Summary
Our review of infinite mixture models serves three purposes: it shows that infinite statistical models
can be defined by specifying priors over infinite combinatorial objects; it illustrates how these priors


can be derived by taking the limit of priors for finite models; and it demonstrates that inference in
these models can remain possible, despite the large hypothesis spaces they imply. However, infinite
mixture models are still fundamentally limited in their representation of objects, assuming that each
object can only belong to a single class. In the next two sections, we use the insights underlying
infinite mixture models to derive methods for representing objects in terms of infinitely many latent
features, leading us to derive a distribution on infinite binary matrices.

3. Latent Feature Models

In a latent feature model, each object is represented by a vector of latent feature values fi , and the
properties xi are generated from a distribution determined by those latent feature values. Latent fea-
ture values can be continuous, as in factor analysis (Roweis and Ghahramani, 1999) and probabilis-
tic principal component analysis (PCA; Tipping and Bishop, 1999), or discrete, as in cooperative
vector quantization (CVQ; Zemel and Hinton, 1994; Ghahramani, 1995). In the remainder of this
section, we will assume that feature values are continuous. Using the matrix F = [f_1^T f_2^T · · · f_N^T]^T to
indicate the latent feature values for all N objects, the model is specified by a prior over features,
p(F), and a distribution over observed property matrices conditioned on those features, p(X|F). As
with latent class models, these distributions can be dealt with separately: p(F) specifies the number
of features, their probability, and the distribution over values associated with each feature, while
p(X|F) determines how these features relate to the properties of objects. Our focus will be on p(F),
showing how such a prior can be defined without placing an upper bound on the number of features.
    We can break the matrix F into two components: a binary matrix Z indicating which features
are possessed by each object, with zik = 1 if object i has feature k and 0 otherwise, and a second
matrix V indicating the value of each feature for each object. F can be expressed as the elementwise
(Hadamard) product of Z and V, F = Z⊗V, as illustrated in Figure 3. In many latent feature models,
such as PCA and CVQ, objects have non-zero values on every feature, and every entry of Z is 1. In
sparse latent feature models (e.g., sparse PCA; d’Aspremont et al., 2004; Jolliffe and Uddin, 2003;
Zou et al., 2006) only a subset of features take on non-zero values for each object, and Z picks out
these subsets.
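
In code, the decomposition is a single elementwise product. The toy example below (with made-up values, in the spirit of Figure 3) is included only for illustration.

```python
import numpy as np

# Toy example: 3 objects, 4 latent features.
Z = np.array([[1, 0, 1, 0],     # which features each object possesses
              [0, 1, 1, 0],
              [1, 1, 0, 0]])
V = np.array([[ 0.9, -3.2,  0.4,  1.1],   # per-object feature values
              [ 1.8,  0.2, -2.8,  0.5],
              [-0.1,  1.4,  0.7,  2.0]])
F = Z * V   # elementwise (Hadamard) product, F = Z ⊗ V in the paper's notation
print(F)
```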
    A prior on F can be defined by specifying priors for Z and V separately, with p(F) = P(Z)p(V).
We will focus on defining a prior on Z, since the effective dimensionality of a latent feature model is
determined by Z. Assuming that Z is sparse, we can define a prior for infinite latent feature models
by defining a distribution over infinite binary matrices. Our analysis of latent class models provides
two desiderata for such a distribution: objects should be exchangeable, and inference should be
tractable. It also suggests a method by which these desiderata can be satisfied: start with a model
that assumes a finite number of features, and consider the limit as the number of features approaches
infinity.

4. A Distribution on Infinite Sparse Binary Matrices

In this section, we derive a distribution on infinite binary matrices by starting with a simple model
that assumes K features, and then taking the limit as K → ∞. The resulting distribution corresponds
to a simple generative process, which we term the Indian buffet process.



Figure 3: Feature matrices. A binary matrix Z, as shown in (a), can be used as the basis for sparse
          infinite latent feature models, indicating which features take non-zero values. Element-
          wise multiplication of Z by a matrix V of continuous values gives a representation like
          that shown in (b). If V contains discrete values, we obtain a representation like that shown
          in (c).

4.1 A Finite Feature Model
We have N objects and K features, and the possession of feature k by object i is indicated by a
binary variable zik . Each object can possess multiple features. The zik thus form a binary N × K
feature matrix, Z. We will assume that each object possesses feature k with probability πk , and that
the features are generated independently. In contrast to the class models discussed above, for which
∑k θk = 1, the probabilities πk can each take on any value in [0, 1]. Under this model, the probability
of a matrix Z given π = {π1 , π2 , . . . , πK }, is
                              P(Z|π) = ∏_{k=1}^K ∏_{i=1}^N P(z_{ik} | π_k) = ∏_{k=1}^K π_k^{m_k} (1 − π_k)^{N − m_k},

where m_k = ∑_{i=1}^N z_{ik} is the number of objects possessing feature k.
    We can define a prior on π by assuming that each πk follows a beta distribution. The beta
distribution has parameters r and s, and is conjugate to the binomial. The probability of any πk
under the Beta(r, s) distribution is given by

                                                p(π_k) = \frac{π_k^{r−1} (1 − π_k)^{s−1}}{B(r, s)},
where B(r, s) is the beta function,
                                     B(r, s) = ∫_0^1 π_k^{r−1} (1 − π_k)^{s−1} dπ_k
                                             = \frac{Γ(r)Γ(s)}{Γ(r + s)}.                                                       (8)
We will take r = α/K and s = 1, so Equation 8 becomes

                                               B(α/K, 1) = \frac{Γ(α/K)}{Γ(1 + α/K)} = \frac{K}{α},



Figure 4: Graphical model for the beta-binomial model used in defining the Indian buffet process.
          Nodes are variables, arrows indicate dependencies, and plates (Buntine, 1994) indicate
          replicated structures.

exploiting the recursive definition of the gamma function.3
   The probability model we have defined is
                                              π_k | α ∼ Beta(α/K, 1),
                                             z_{ik} | π_k ∼ Bernoulli(π_k).                                          (9)
Each zik is independent of all other assignments, conditioned on πk , and the πk are generated in-
dependently. A graphical model illustrating the dependencies among these variables is shown in
Figure 4. Having defined a prior on π, we can simplify this model by integrating over all values for
π rather than representing them explicitly. The marginal probability of a binary matrix Z is
                                P(Z) = ∏_{k=1}^K ∫ \left( ∏_{i=1}^N P(z_{ik} | π_k) \right) p(π_k) dπ_k
                                     = ∏_{k=1}^K \frac{B(m_k + α/K, N − m_k + 1)}{B(α/K, 1)}
                                     = ∏_{k=1}^K \frac{\frac{α}{K} Γ(m_k + α/K) Γ(N − m_k + 1)}{Γ(N + 1 + α/K)}.                                  (10)

Again, the result follows from conjugacy, this time between the binomial and beta distributions.
This distribution is exchangeable, depending only on the counts mk .
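
Equation 10 depends on Z only through the column counts m_k, so it is simple to evaluate. The sketch below is illustrative code of our own (the function name is hypothetical).

```python
import numpy as np
from scipy.special import gammaln

def log_prob_Z_finite(Z, alpha):
    """Log of Equation 10: marginal probability of a binary matrix Z under
    the finite beta-Bernoulli model with pi_k ~ Beta(alpha/K, 1)."""
    N, K = Z.shape
    m = Z.sum(axis=0)                      # column counts m_k
    a = alpha / K
    return np.sum(np.log(a) + gammaln(m + a) + gammaln(N - m + 1)
                  - gammaln(N + 1 + a))

Z = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 0]])
print(log_prob_Z_finite(Z, alpha=2.0))
```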
    This model has the important property that the expectation of the number of non-zero entries
in the matrix Z, E[1^T Z 1] = E[∑_{ik} z_{ik}], has an upper bound that is independent of K. Since each
column of Z is independent, the expectation is K times the expectation of the sum of a single
column, E[1^T z_k]. This expectation is easily computed,

                         E[1^T z_k] = ∑_{i=1}^N E(z_{ik}) = ∑_{i=1}^N ∫_0^1 π_k p(π_k) dπ_k = N \frac{α/K}{1 + α/K},                 (11)

where the result follows from the fact that the expectation of a Beta(r, s) random variable is \frac{r}{r + s}.
Consequently, E[1^T Z 1] = K E[1^T z_k] = \frac{Nα}{1 + α/K}. For finite K, the expectation of the number of entries
in Z is bounded above by Nα.
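
This bound is easy to check by simulation. The snippet below (our own verification, assuming the finite model of Equation 9) compares the average number of ones in sampled matrices to Nα/(1 + α/K).

```python
import numpy as np

def sample_Z(N, K, alpha, rng):
    """Draw Z from the finite model of Equation 9:
    pi_k ~ Beta(alpha/K, 1), z_ik ~ Bernoulli(pi_k)."""
    pi = rng.beta(alpha / K, 1.0, size=K)
    return (rng.random((N, K)) < pi).astype(int)

rng = np.random.default_rng(0)
N, K, alpha = 10, 50, 3.0
average_ones = np.mean([sample_Z(N, K, alpha, rng).sum() for _ in range(5000)])
print(average_ones, N * alpha / (1 + alpha / K))   # the two should roughly agree
```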
 3. The motivation for choosing r = α/K will be clear when we take the limit K → ∞ in Section 4.3, while the choice of
    s = 1 will be relaxed in Section 7.1.



Figure 5: Binary matrices and the left-ordered form. The binary matrix on the left is transformed
          into the left-ordered binary matrix on the right by the function lof(·). This left-ordered
          matrix was generated from the exchangeable Indian buffet process with α = 10. Empty
          columns are omitted from both matrices.

4.2 Equivalence Classes
In order to find the limit of the distribution specified by Equation 10 as K → ∞, we need to define
equivalence classes of binary matrices—the analogue of partitions for assignment vectors. Identi-
fying these equivalence classes makes it easier to be precise about the objects over which we are
defining probability distributions, but the reader who is satisfied with the intuitive idea of taking the
limit as K → ∞ can safely skip the technical details presented in this section.
     Our equivalence classes will be defined with respect to a function on binary matrices, lof(·).
This function maps binary matrices to left-ordered binary matrices. lof(Z) is obtained by order-
ing the columns of the binary matrix Z from left to right by the magnitude of the binary number
expressed by that column, taking the first row as the most significant bit. The left-ordering of a
binary matrix is shown in Figure 5. In the first row of the left-ordered matrix, the columns for which
z1k = 1 are grouped at the left. In the second row, the columns for which z2k = 1 are grouped at the
left of the sets for which z1k = 1. This grouping structure persists throughout the matrix.
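
The left-ordering operation amounts to reading each column as a binary number and sorting. The sketch below (illustrative code; the function name mirrors the notation but is otherwise ours) implements lof(·) for a finite matrix, taking the first row as the most significant bit.

```python
import numpy as np

def lof(Z):
    """Left-ordered form: sort columns by the binary number they encode,
    with the first row as the most significant bit, largest values to the left
    (so columns with a one in the first row are grouped at the left)."""
    N = Z.shape[0]
    weights = 2 ** np.arange(N - 1, -1, -1)       # row 0 is most significant
    keys = weights @ Z                             # binary value of each column
    order = np.argsort(-keys, kind="stable")
    return Z[:, order]

Z = np.array([[0, 1, 0, 1],
              [1, 1, 0, 0],
              [1, 0, 1, 0]])
print(lof(Z))
```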
     Considering the process of placing a binary matrix in left-ordered form motivates the defini-
tion of a further technical term. The history of feature k at object i is defined to be (z1k , . . . , z(i−1)k ).
Where no object is specified, we will use history to refer to the full history of feature k, (z1k , . . . , zNk ).
We will individuate the histories of features using the decimal equivalent of the binary numbers cor-
responding to the column entries. For example, at object 3, features can have one of four histories:
0, corresponding to a feature with no previous assignments, 1, being a feature for which z_{2k} = 1
but z_{1k} = 0, 2, being a feature for which z_{1k} = 1 but z_{2k} = 0, and 3, being a feature possessed by
both previous objects. K_h will denote the number of features possessing the history h, with K_0 being
the number of features for which m_k = 0 and K_+ = ∑_{h=1}^{2^N − 1} K_h being the number of features
for which m_k > 0, so K = K_0 + K_+. The function lof thus places the columns of a matrix in
ascending order of their histories.
     lof(·) is a many-to-one function: many binary matrices reduce to the same left-ordered form,
and there is a unique left-ordered form for every binary matrix. We can thus use lof(·) to define a
set of equivalence classes. Any two binary matrices Y and Z are lof-equivalent if lof(Y) = lof(Z),
that is, if Y and Z map to the same left-ordered form. The lof-equivalence class of a binary matrix
Z, denoted [Z], is the set of binary matrices that are lof-equivalent to Z. lof-equivalence classes


are preserved through permutation of either the rows or the columns of a matrix, provided the same
permutations are applied to the other members of the equivalence class. Performing inference at
the level of lof-equivalence classes is appropriate in models where feature order is not identifiable,
with p(X|F) being unaffected by the order of the columns of F. Any model in which the probability
of X is specified in terms of a linear function of F, such as PCA or CVQ, has this property.
     We need to evaluate the cardinality of [Z], being the number of matrices that map to the same
left-ordered form. The columns of a binary matrix are not guaranteed to be unique: since an object
can possess multiple features, it is possible for two features to be possessed by exactly the same set
of objects. The number of matrices in [Z] is reduced if Z contains identical columns, since some
re-orderings of the columns of Z result in exactly the same matrix. Taking this into account, the
cardinality of [Z] is \binom{K}{K_0 K_1 · · · K_{2^N − 1}} = \frac{K!}{∏_{h=0}^{2^N − 1} K_h!}, where K_h is the count of the number of columns with
full history h.
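
Both the history counts K_h and the resulting cardinality of [Z] can be computed directly from the columns of Z, as in the sketch below (our own illustrative code; the function name is hypothetical).

```python
import numpy as np
from math import factorial
from collections import Counter

def equivalence_class_size(Z):
    """Number of binary matrices in [Z]: K! / prod_h K_h!, where K_h counts
    the columns of Z whose full history is h."""
    N, K = Z.shape
    weights = 2 ** np.arange(N - 1, -1, -1)
    histories = Counter(int(h) for h in weights @ Z)   # decimal history per column
    size = factorial(K)
    for K_h in histories.values():
        size //= factorial(K_h)
    return size

Z = np.array([[1, 1, 0],
              [1, 1, 1]])
print(equivalence_class_size(Z))   # two identical columns -> 3!/2! = 3
```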
     lof-equivalence classes play the same role for binary matrices as partitions do for assignment
vectors: they collapse together all binary matrices (assignment vectors) that differ only in column
ordering (class labels). This relationship can be made precise by examining the lof-equivalence
classes of binary matrices constructed from assignment vectors. Define the class matrix generated
by an assignment vector c to be a binary matrix Z where zik = 1 if and only if ci = k. It is straight-
forward to show that the class matrices generated by two assignment vectors that correspond to the
same partition belong to the same lo f -equivalence class, and vice versa.

4.3 Taking the Infinite Limit
Under the distribution defined by Equation 10, the probability of a particular lof-equivalence class
of binary matrices, [Z], is

                         P([Z]) = ∑_{Z ∈ [Z]} P(Z)
                                = \frac{K!}{∏_{h=0}^{2^N − 1} K_h!} ∏_{k=1}^K \frac{\frac{α}{K} Γ(m_k + α/K) Γ(N − m_k + 1)}{Γ(N + 1 + α/K)}.                 (12)

In order to take the limit of this expression as K → ∞, we will divide the columns of Z into two
subsets, corresponding to the features for which mk = 0 and the features for which mk > 0. Re-
ordering the columns such that mk > 0 if k ≤ K+ , and mk = 0 otherwise, we can break the product
in Equation 12 into two parts, corresponding to these two subsets. The product thus becomes
                 ∏_{k=1}^K \frac{\frac{α}{K} Γ(m_k + α/K) Γ(N − m_k + 1)}{Γ(N + 1 + α/K)}
                     = \left( \frac{\frac{α}{K} Γ(α/K) Γ(N + 1)}{Γ(N + 1 + α/K)} \right)^{K − K_+} ∏_{k=1}^{K_+} \frac{\frac{α}{K} Γ(m_k + α/K) Γ(N − m_k + 1)}{Γ(N + 1 + α/K)}
                     = \left( \frac{\frac{α}{K} Γ(α/K) Γ(N + 1)}{Γ(N + 1 + α/K)} \right)^{K} ∏_{k=1}^{K_+} \frac{Γ(m_k + α/K) Γ(N − m_k + 1)}{Γ(α/K) Γ(N + 1)}
                     = \left( \frac{N!}{∏_{j=1}^N (j + α/K)} \right)^{K} \left(\frac{α}{K}\right)^{K_+} ∏_{k=1}^{K_+} \frac{(N − m_k)! ∏_{j=1}^{m_k − 1} (j + α/K)}{N!},              (13)


where we have used the fact that Γ(x) = (x − 1)Γ(x − 1) for x > 1. Substituting Equation 13 into
Equation 12 and rearranging terms, we can compute our limit
            lim_{K→∞} \frac{α^{K_+}}{∏_{h=1}^{2^N − 1} K_h!} · \frac{K!}{K_0! K^{K_+}} · \left( \frac{N!}{∏_{j=1}^N (j + α/K)} \right)^{K} · ∏_{k=1}^{K_+} \frac{(N − m_k)! ∏_{j=1}^{m_k − 1} (j + α/K)}{N!}
                = \frac{α^{K_+}}{∏_{h=1}^{2^N − 1} K_h!} ·  1  · \exp\{−α H_N\} · ∏_{k=1}^{K_+} \frac{(N − m_k)! (m_k − 1)!}{N!},                 (14)

where H_N is the Nth harmonic number, H_N = ∑_{j=1}^N \frac{1}{j}. The details of the steps taken in computing
this limit are given in Appendix A. Again, this distribution is exchangeable: neither the number of
identical columns nor the column sums are affected by the ordering on objects.
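
The limit can also be checked numerically, by evaluating Equation 12 for increasing K and comparing it to Equation 14. The sketch below does this for a small matrix; it is illustrative code of our own, with empty columns excluded from the stored matrix and accounted for implicitly. The finite values approach the limiting value as K grows.

```python
import numpy as np
from scipy.special import gammaln
from collections import Counter

def history_counts(Z):
    """Count K_h: the number of columns of Z having each full history h."""
    N = Z.shape[0]
    return Counter(int(h) for h in (2 ** np.arange(N - 1, -1, -1)) @ Z)

def log_P_eqclass_finite(Z, K, alpha):
    """Log of Equation 12 for a matrix whose K_+ non-empty columns are given
    by Z, padded out to K columns with empty (all-zero) columns."""
    N, K_plus = Z.shape
    m = np.concatenate([Z.sum(axis=0), np.zeros(K - K_plus)])
    a = alpha / K
    hist = history_counts(Z)
    hist[0] += K - K_plus                      # the empty columns share history 0
    log_multinom = gammaln(K + 1) - sum(gammaln(Kh + 1) for Kh in hist.values())
    return log_multinom + np.sum(np.log(a) + gammaln(m + a)
                                 + gammaln(N - m + 1) - gammaln(N + 1 + a))

def log_P_eqclass_limit(Z, alpha):
    """Log of Equation 14 (empty columns omitted from Z)."""
    N, K_plus = Z.shape
    m = Z.sum(axis=0)
    H_N = sum(1.0 / j for j in range(1, N + 1))
    hist = history_counts(Z)
    return (K_plus * np.log(alpha) - sum(gammaln(Kh + 1) for Kh in hist.values())
            - alpha * H_N
            + np.sum(gammaln(N - m + 1) + gammaln(m) - gammaln(N + 1)))

Z = np.array([[1, 1, 0],
              [1, 0, 1]])
for K in (10, 100, 1000, 10000):
    print(K, log_P_eqclass_finite(Z, K, alpha=1.0))
print("limit", log_P_eqclass_limit(Z, alpha=1.0))
```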

4.4 The Indian Buffet Process
The probability distribution defined in Equation 14 can be derived from a simple stochastic process.
As with the CRP, this process assumes an ordering on the objects, generating the matrix sequen-
tially using this ordering. We will also use a culinary metaphor in defining our stochastic process,
appropriately adjusted for geography.4 Many Indian restaurants offer lunchtime buffets with an
apparently infinite number of dishes. We can define a distribution over infinite binary matrices by
specifying a procedure by which customers (objects) choose dishes (features).
     In our Indian buffet process (IBP), N customers enter a restaurant one after another. Each cus-
tomer encounters a buffet consisting of infinitely many dishes arranged in a line. The first customer
starts at the left of the buffet and takes a serving from each dish, stopping after a Poisson(α) number
of dishes as his plate becomes overburdened. The ith customer moves along the buffet, sampling
dishes in proportion to their popularity, serving himself with probability m_k/i, where m_k is the number
of previous customers who have sampled that dish. Having reached the end of all previously sampled
dishes, the ith customer then tries a Poisson(α/i) number of new dishes.
     We can indicate which customers chose which dishes using a binary matrix Z with N rows and
infinitely many columns, where zik = 1 if the ith customer sampled the kth dish. Figure 6 shows
a matrix generated using the IBP with α = 10. The first customer tried 17 dishes. The second
customer tried 7 of those dishes, and then tried 3 new dishes. The third customer tried 3 dishes tried
by both previous customers, 5 dishes tried by only the first customer, and 2 new dishes. Vertically
concatenating the choices of the customers produces the binary matrix shown in the figure.
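
The generative process translates directly into code. The sketch below (a minimal illustration, with a function name of our choosing) generates a matrix of the kind shown in Figure 6.

```python
import numpy as np

def sample_ibp(N, alpha, rng=None):
    """Generate a binary feature matrix from the Indian buffet process.

    Customer i takes each previously sampled dish k with probability m_k / i,
    then tries a Poisson(alpha / i) number of new dishes."""
    rng = np.random.default_rng() if rng is None else rng
    counts = []                               # m_k for each dish sampled so far
    rows = []
    for i in range(1, N + 1):
        row = [int(rng.random() < m / i) for m in counts]
        new = rng.poisson(alpha / i)          # new dishes tried by customer i
        row.extend([1] * new)
        counts = [m + z for m, z in zip(counts, row)] + [1] * new
        rows.append(row)
    Z = np.zeros((N, len(counts)), dtype=int)
    for i, row in enumerate(rows):
        Z[i, :len(row)] = row
    return Z

Z = sample_ibp(20, alpha=10.0, rng=np.random.default_rng(1))
print(Z.shape)   # the total number of dishes is Poisson(alpha * H_20) distributed
```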
     Using K_1^{(i)} to indicate the number of new dishes sampled by the ith customer, the probability of
any particular matrix being produced by this process is

                        P(Z) = \frac{α^{K_+}}{∏_{i=1}^N K_1^{(i)}!} \exp\{−α H_N\} ∏_{k=1}^{K_+} \frac{(N − m_k)! (m_k − 1)!}{N!}.                       (15)

As can be seen from Figure 6, the matrices produced by this process are generally not in left-ordered
form. However, these matrices are also not ordered arbitrarily because the Poisson draws always
result in choices of new dishes that are to the right of the previously sampled dishes. Customers
are not exchangeable under this distribution, as the number of dishes counted as K_1^{(i)} depends upon
 4. This work was started when both authors were at the Gatsby Computational Neuroscience Unit in London, where the
    Indian buffet is the dominant culinary metaphor.



                 Figure 6: A binary matrix generated by the Indian buffet process with α = 10.

the order in which the customers make their choices. However, if we only pay attention to the
lof-equivalence classes of the matrices generated by this process, we obtain the exchangeable dis-
tribution P([Z]) given by Equation 14: \frac{∏_{i=1}^N K_1^{(i)}!}{∏_{h=1}^{2^N − 1} K_h!} matrices generated via this process map to the same
left-ordered form, and P([Z]) is obtained by multiplying P(Z) from Equation 15 by this quantity.
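
The bookkeeping in this correspondence is mechanical: count the new dishes K_1^{(i)} introduced by each customer and apply Equation 15. The sketch below (illustrative code of our own; it assumes Z is stored in the order generated by the IBP, with no empty columns) evaluates log P(Z); adding log ∏_i K_1^{(i)}! − log ∏_h K_h! then gives log P([Z]) from Equation 14, as described above.

```python
import numpy as np
from scipy.special import gammaln
from collections import Counter

def log_P_ibp_matrix(Z, alpha):
    """Log of Equation 15 for a matrix Z with columns in IBP order
    (every column non-empty, new dishes appearing to the right)."""
    N, K_plus = Z.shape
    m = Z.sum(axis=0)
    H_N = sum(1.0 / j for j in range(1, N + 1))
    K1 = Counter(Z.argmax(axis=0))      # K_1^{(i)}: dishes first taken by customer i
    return (K_plus * np.log(alpha)
            - sum(gammaln(k + 1) for k in K1.values())   # log prod_i K_1^{(i)}!
            - alpha * H_N
            + np.sum(gammaln(N - m + 1) + gammaln(m) - gammaln(N + 1)))
```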
     It is possible to define a similar sequential process that directly produces a distribution on lof-
equivalence classes in which customers are exchangeable, but this requires more effort on the part
of the customers. In the exchangeable Indian buffet process, the first customer samples a Poisson(α)
number of dishes, moving from left to right. The ith customer moves along the buffet, and makes
a single decision for each set of dishes with the same history. If there are K_h dishes with history h,
under which m_h previous customers have sampled each of those dishes, then the customer samples a
Binomial(m_h/i, K_h) number of those dishes, starting at the left. Having reached the end of all previously
sampled dishes, the ith customer then tries a Poisson(α/i) number of new dishes. Attending to the
history of the dishes and always sampling from the left guarantees that the resulting matrix is in
left-ordered form, and it is easy to show that the matrices produced by this process have the same
probability as the corresponding lo f -equivalence classes under Equation 14.

4.5 A Distribution over Collections of Histories
In Section 4.2, we noted that lof-equivalence classes of binary matrices generated from assignment
vectors correspond to partitions. Likewise, lof-equivalence classes of general binary matrices cor-
respond to simple combinatorial structures: vectors of non-negative integers. Fixing some ordering
of N objects, a collection of feature histories on those objects can be represented by a frequency

vector K = (K_1, . . . , K_{2^N - 1}), indicating the number of times each history appears in the collection.
A collection of feature histories can be translated into a left-ordered binary matrix by horizontally
concatenating an appropriate number of copies of the binary vector representing each history into
a matrix. A left-ordered binary matrix can be translated into a collection of feature histories by
counting the number of times each history appears in that matrix. Since partitions are a subset
of all collections of histories—namely those collections in which each object appears in only one
history—this process is strictly more general than the CRP.
    This connection between lof-equivalence classes of feature matrices and collections of feature
histories suggests another means of deriving the distribution specified by Equation 14, operating
directly on the frequencies of these histories. We can define a distribution on vectors of non-negative
integers K by assuming that each Kh is generated independently from a Poisson distribution with
parameter \alpha B(m_h, N - m_h + 1) = \alpha \frac{(m_h - 1)!(N - m_h)!}{N!}, where m_h is the number of non-zero elements in
the history h. This gives

    P(K) = \prod_{h=1}^{2^N - 1} \frac{\left(\alpha \frac{(m_h - 1)!(N - m_h)!}{N!}\right)^{K_h}}{K_h!} \exp\left\{-\alpha \frac{(m_h - 1)!(N - m_h)!}{N!}\right\}
         = \frac{\alpha^{\sum_{h=1}^{2^N - 1} K_h}}{\prod_{h=1}^{2^N - 1} K_h!} \exp\{-\alpha H_N\} \prod_{h=1}^{2^N - 1} \left(\frac{(m_h - 1)!(N - m_h)!}{N!}\right)^{K_h},

which is easily seen to be the same as P([Z]) in Equation 14. The harmonic number in the expo-
nential term is obtained by summing \frac{(m_h - 1)!(N - m_h)!}{N!} over all histories h. There are \binom{N}{j} histories for
which m_h = j, so we have

    \sum_{h=1}^{2^N - 1} \frac{(m_h - 1)!(N - m_h)!}{N!} = \sum_{j=1}^{N} \binom{N}{j} \frac{(j - 1)!(N - j)!}{N!} = \sum_{j=1}^{N} \frac{1}{j} = H_N.    (16)
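The identity in Equation 16 is easy to check numerically; the snippet below (ours, using only the Python standard library) sums the Poisson rates over all 2^N − 1 histories, grouped by m_h = j, and compares the total with αH_N.

from math import comb, factorial

N, alpha = 8, 2.0
# Sum of alpha * (j-1)!(N-j)!/N! over all histories, grouped by m_h = j.
rate_sum = sum(comb(N, j) * alpha * factorial(j - 1) * factorial(N - j) / factorial(N)
               for j in range(1, N + 1))
H_N = sum(1.0 / j for j in range(1, N + 1))
print(rate_sum, alpha * H_N)   # the two numbers agree, as in Equation 16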

4.6 Properties of this Distribution
These different views of the distribution specified by Equation 14 make it straightforward to derive
some of its properties. First, the effective dimension of the model, K+ , follows a Poisson(αHN )
distribution. This is easily shown using the generative process described in Section 4.5: K_+ = \sum_{h=1}^{2^N - 1} K_h,
so under this process K_+ is the sum of a set of independent Poisson random variables. Such a sum is
itself Poisson distributed, with parameter equal to the sum of the parameters of its components; by
Equation 16, this parameter is \alpha H_N. Alternatively, we can use the fact that the
number of new columns generated at the ith row is Poisson(α/i), with the total number of columns
being the sum of these quantities.
     A second property of this distribution is that the number of features possessed by each object
follows a Poisson(α) distribution. This follows from the definition of the exchangeable IBP. The
first customer chooses a Poisson(α) number of dishes. By exchangeability, all other customers must
also choose a Poisson(α) number of dishes, since we can always specify an ordering on customers
which begins with a particular customer.
     Finally, it is possible to show that Z remains sparse as K → ∞. The simplest way to do this is to
exploit the previous result: if the number of features possessed by each object follows a Poisson(α)
distribution, then the expected number of entries in Z is Nα. This is consistent with the quantity

obtained by taking the limit of this expectation in the finite model, which is given in Equation 11:
\lim_{K \to \infty} E\left[\mathbf{1}^T Z \mathbf{1}\right] = \lim_{K \to \infty} \frac{N\alpha}{1 + \alpha/K} = N\alpha.
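These properties are easy to confirm by simulation. The sketch below (ours, assuming NumPy) draws matrices from the simple buffet process of Section 4.4, in which each customer takes dish k with probability m_k/i and then Poisson(α/i) new dishes, and checks that the number of columns has mean αH_N, that row sums have mean α, and that the expected number of ones is Nα.

import numpy as np

def ibp_sample(N, alpha, rng):
    """One draw from the (non-exchangeable) Indian buffet process."""
    dishes = []                     # m_k counts for existing dishes
    rows = []                       # each customer's choices, as a set of dish indices
    for i in range(1, N + 1):
        chosen = {k for k, m in enumerate(dishes) if rng.random() < m / i}
        new = rng.poisson(alpha / i)
        chosen |= set(range(len(dishes), len(dishes) + new))
        dishes.extend([0] * new)
        for k in chosen:
            dishes[k] += 1
        rows.append(chosen)
    Z = np.zeros((N, len(dishes)), dtype=int)
    for i, chosen in enumerate(rows):
        Z[i, list(chosen)] = 1
    return Z

rng = np.random.default_rng(0)
N, alpha, reps = 20, 10.0, 2000
samples = [ibp_sample(N, alpha, rng) for _ in range(reps)]
H_N = sum(1.0 / i for i in range(1, N + 1))
print(np.mean([Z.shape[1] for Z in samples]), alpha * H_N)      # E[K_+] ~ alpha * H_N
print(np.mean([Z.sum(axis=1).mean() for Z in samples]), alpha)  # mean row sum ~ alpha
print(np.mean([Z.sum() for Z in samples]), N * alpha)           # E[total ones] ~ N * alpha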

4.7 Inference by Gibbs Sampling
We have defined a distribution over infinite binary matrices that satisfies one of our desiderata—
objects (the rows of the matrix) are exchangeable under this distribution. It remains to be shown
that inference in infinite latent feature models is tractable, as was the case for infinite mixture mod-
els. We will derive a Gibbs sampler for sampling from the distribution defined by the IBP, which
suggests a strategy for inference in latent feature models in which the exchangeable IBP is used as
a prior. We will consider alternative inference algorithms later in the paper.
    To sample from the distribution defined by the IBP, we need to compute the conditional distri-
bution P(zik = 1|Z−(ik) ), where Z−(ik) denotes the entries of Z other than zik . In the finite model,
where P(Z) is given by Equation 10, it is straightforward to compute the conditional distribution
for any z_ik. Integrating over \pi_k gives

    P(z_{ik} = 1 \mid \mathbf{z}_{-i,k}) = \int_0^1 P(z_{ik} \mid \pi_k)\, p(\pi_k \mid \mathbf{z}_{-i,k})\, d\pi_k
                                         = \frac{m_{-i,k} + \frac{\alpha}{K}}{N + \frac{\alpha}{K}},    (17)

where z−i,k is the set of assignments of other objects, not including i, for feature k, and m−i,k is the
number of objects possessing feature k, not including i. We need only condition on z−i,k rather than
Z−(ik) because the columns of the matrix are generated independently under this prior.
    In the infinite case, we can derive the conditional distribution from the exchangeable IBP. Choos-
ing an ordering on objects such that the ith object corresponds to the last customer to visit the buffet,
we obtain
    P(z_{ik} = 1 \mid \mathbf{z}_{-i,k}) = \frac{m_{-i,k}}{N},    (18)
for any k such that m−i,k > 0. The same result can be obtained by taking the limit of Equation 17
as K → ∞. Similarly, the number of new features associated with object i should be drawn from a
Poisson(α/N) distribution. This can also be derived from Equation 17, using the same kind of limiting
argument as that presented above to obtain the terms of the Poisson.
    This analysis results in a simple Gibbs sampling algorithm for generating samples from the
distribution defined by the IBP. We start with an arbitrary binary matrix. We then iterate through the
rows of the matrix, i. For each column k, if m−i,k is greater than 0 we set zik = 1 with probability
given by Equation 18. Otherwise, we delete that column. At the end of the row, we add Poisson(α/N)
new columns that have ones in that row. After sufficiently many passes through the rows, the
resulting matrix will be a draw from the distribution P(Z) given by Equation 15.
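In code, one sweep of this Gibbs sampler might look like the sketch below (ours, assuming NumPy). It resamples each existing entry from Equation 18, deletes columns held only by the current row, and appends Poisson(α/N) new columns for that row.

import numpy as np

def gibbs_sweep_prior(Z, alpha, rng):
    """One pass of the Gibbs sampler over the IBP prior (this section)."""
    N = Z.shape[0]
    for i in range(N):
        # Resample existing features for row i from Equation 18.
        m_minus = Z.sum(axis=0) - Z[i]            # m_{-i,k}
        keep = m_minus > 0                        # columns held by some other object
        Z = Z[:, keep]
        m_minus = m_minus[keep]
        Z[i] = (rng.random(Z.shape[1]) < m_minus / N).astype(int)
        # Add new features unique to row i.
        k_new = rng.poisson(alpha / N)
        if k_new > 0:
            new_cols = np.zeros((N, k_new), dtype=int)
            new_cols[i] = 1
            Z = np.hstack([Z, new_cols])
    return Z

# Usage: start from any binary matrix, e.g. Z = np.zeros((N, 0), dtype=int),
# and call gibbs_sweep_prior repeatedly; Z then converges to a draw from P(Z).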
    This algorithm suggests a heuristic strategy for sampling from the posterior distribution P(Z|X)
in a model that uses the IBP to define a prior on Z. In this case, we need to sample from the full
conditional distribution

                           P(zik = 1|Z−(ik) , X) ∝ p(X|Z)P(zik = 1|Z−(ik) )

where p(X|Z) is the likelihood function for the model, and we assume that parameters of the like-
lihood have been integrated out. We can proceed as in the Gibbs sampler given above, simply

incorporating the likelihood term when sampling zik for columns for which m−i,k is greater than 0
and drawing the new columns from a distribution where the prior is Poisson(α/N) and the likelihood
is given by P(X|Z).5
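A corresponding sweep of this heuristic posterior sampler can be written generically in terms of a user-supplied collapsed log-likelihood log p(X|Z) (for instance, the linear-Gaussian likelihood derived in Section 5). The sketch below is ours: the `log_lik` argument and the truncation of the new-column proposal at `k_max` are illustrative choices, not part of the paper.

import numpy as np
from scipy.special import gammaln

def gibbs_sweep_posterior(Z, X, alpha, log_lik, rng, k_max=4):
    """Heuristic Gibbs sweep for P(Z | X) with an IBP prior.
    log_lik(X, Z) must return the collapsed log-likelihood log p(X | Z)."""
    N = Z.shape[0]
    for i in range(N):
        m_minus = Z.sum(axis=0) - Z[i]
        keep = m_minus > 0
        Z = Z[:, keep]                              # drop features unique to row i
        m_minus = m_minus[keep]
        for k in range(Z.shape[1]):                 # resample each existing entry
            log_p = np.empty(2)
            for v in (0, 1):
                Z[i, k] = v
                prior = m_minus[k] / N if v == 1 else 1.0 - m_minus[k] / N
                log_p[v] = np.log(prior) + log_lik(X, Z)
            p1 = 1.0 / (1.0 + np.exp(log_p[0] - log_p[1]))
            Z[i, k] = int(rng.random() < p1)
        # Propose 0..k_max new columns for row i, weighted by Poisson(alpha/N) prior
        # times likelihood (the constant -alpha/N term cancels in the normalization).
        log_w = np.empty(k_max + 1)
        for j in range(k_max + 1):
            new_cols = np.zeros((N, j), dtype=int); new_cols[i] = 1
            log_w[j] = j * np.log(alpha / N) - gammaln(j + 1) + log_lik(X, np.hstack([Z, new_cols]))
        w = np.exp(log_w - log_w.max()); w /= w.sum()
        j = rng.choice(k_max + 1, p=w)
        new_cols = np.zeros((N, j), dtype=int); new_cols[i] = 1
        Z = np.hstack([Z, new_cols])
    return Z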

5. An Example: A Linear-Gaussian Latent Feature Model with Binary Features
We have derived a prior for infinite sparse binary matrices, and indicated how statistical inference
can be done in models defined using this prior. In this section, we will show how this prior can be
put to use in models for unsupervised learning, illustrating some of the issues that can arise in this
process. We will describe a simple linear-Gaussian latent feature model, in which the features are
binary. As above, we will start with a finite model and then consider the infinite limit.

5.1 A Finite Linear-Gaussian Model

In our finite model, the D-dimensional vector of properties of an object i, x_i, is generated from a
Gaussian distribution with mean zi A and covariance matrix ΣX = σ2X I, where zi is a K-dimensional
binary vector, and A is a K × D matrix of weights. In matrix notation, E [X] = ZA. If Z is a feature
matrix, this is a form of binary factor analysis. The distribution of X given Z, A, and σX is matrix
Gaussian:
    p(X \mid Z, A, \sigma_X) = \frac{1}{(2\pi\sigma_X^2)^{ND/2}} \exp\left\{-\frac{1}{2\sigma_X^2} \operatorname{tr}\left((X - ZA)^T (X - ZA)\right)\right\}    (19)

where tr(·) is the trace of a matrix. This makes it easy to integrate out the model parameters A. To
do so, we need to define a prior on A, which we also take to be matrix Gaussian:

    p(A \mid \sigma_A) = \frac{1}{(2\pi\sigma_A^2)^{KD/2}} \exp\left\{-\frac{1}{2\sigma_A^2} \operatorname{tr}(A^T A)\right\},    (20)

where σA is a parameter setting the diffuseness of the prior. The dependencies among the variables
in this model are shown in Figure 7.
    Combining Equations 19 and 20 results in an exponentiated expression involving the trace of

    \frac{1}{\sigma_X^2}(X - ZA)^T (X - ZA) + \frac{1}{\sigma_A^2} A^T A
        = \frac{1}{\sigma_X^2} X^T X - \frac{1}{\sigma_X^2} X^T ZA - \frac{1}{\sigma_X^2} A^T Z^T X + A^T \left(\frac{1}{\sigma_X^2} Z^T Z + \frac{1}{\sigma_A^2} I\right) A
        = \frac{1}{\sigma_X^2} X^T (I - ZMZ^T) X + (MZ^T X - A)^T (\sigma_X^2 M)^{-1} (MZ^T X - A),

 5. As was pointed out by an anonymous reviewer, this is a heuristic strategy rather than a valid algorithm for sampling
    from the posterior because it violates one of the assumptions of Markov chain Monte Carlo algorithms, with the order
    in which variables are sampled being dependent on the state of the Markov chain. This is not an issue in the algorithm
    for sampling from P(Z), since the columns of Z are independent, and the kernels corresponding to sampling from
    each of the conditional distributions thus act independently of one another.


[Figure omitted: graphical model relating α, Z, σ_A, A, σ_X, and X.]
           Figure 7: Graphical model for the linear-Gaussian model with binary features.

where I is the identity matrix, M = \left(Z^T Z + \frac{\sigma_X^2}{\sigma_A^2} I\right)^{-1}, and the last line is obtained by completing the
square for the quadratic term in A in the second line. We can then integrate out A to obtain

    p(X \mid Z, \sigma_X, \sigma_A)
        = \int p(X \mid Z, A, \sigma_X)\, p(A \mid \sigma_A)\, dA
        = \frac{1}{(2\pi)^{(N+K)D/2} \sigma_X^{ND} \sigma_A^{KD}} \exp\left\{-\frac{1}{2\sigma_X^2} \operatorname{tr}\left(X^T (I - ZMZ^T) X\right)\right\}
              \int \exp\left\{-\frac{1}{2} \operatorname{tr}\left((MZ^T X - A)^T (\sigma_X^2 M)^{-1} (MZ^T X - A)\right)\right\} dA
        = \frac{|\sigma_X^2 M|^{D/2}}{(2\pi)^{ND/2} \sigma_X^{ND} \sigma_A^{KD}} \exp\left\{-\frac{1}{2\sigma_X^2} \operatorname{tr}\left(X^T (I - ZMZ^T) X\right)\right\}
        = \frac{1}{(2\pi)^{ND/2} \sigma_X^{(N-K)D} \sigma_A^{KD} \left|Z^T Z + \frac{\sigma_X^2}{\sigma_A^2} I\right|^{D/2}}
              \exp\left\{-\frac{1}{2\sigma_X^2} \operatorname{tr}\left(X^T \left(I - Z\left(Z^T Z + \frac{\sigma_X^2}{\sigma_A^2} I\right)^{-1} Z^T\right) X\right)\right\}.    (21)

This result is intuitive: the exponentiated term is the difference between the inner product matrix
of the raw values of X and their projections onto the space spanned by Z, regularized to an extent
determined by the ratio of the variance of the noise in X to the variance of the prior on A. This is
simply the marginal likelihood for a Bayesian linear regression model (Minka, 2000).
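For reference, a direct NumPy transcription of the log of Equation 21 might look as follows (our sketch; a numerically safer implementation would work with Cholesky factors rather than explicit solves of the full matrix).

import numpy as np

def log_marginal_likelihood(X, Z, sigma_x, sigma_a):
    """Log of Equation 21: log p(X | Z, sigma_X, sigma_A) with A integrated out."""
    X = np.asarray(X, dtype=float)
    Z = np.asarray(Z, dtype=float)
    N, D = X.shape
    K = Z.shape[1]
    ratio = sigma_x**2 / sigma_a**2
    M_inv = Z.T @ Z + ratio * np.eye(K)               # Z^T Z + (sigma_X^2/sigma_A^2) I
    _, logdet = np.linalg.slogdet(M_inv)
    proj = X - Z @ np.linalg.solve(M_inv, Z.T @ X)    # (I - Z M Z^T) X
    quad = np.trace(X.T @ proj)                       # tr(X^T (I - Z M Z^T) X)
    return (-0.5 * N * D * np.log(2 * np.pi)
            - (N - K) * D * np.log(sigma_x)
            - K * D * np.log(sigma_a)
            - 0.5 * D * logdet
            - quad / (2 * sigma_x**2))

Under the stated assumptions, this function could also serve as the log_lik argument in the heuristic Gibbs sweep sketched in Section 4.7.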
     We can use this derivation of p(X|Z, σX , σA ) to infer Z from a set of observations X, provided
we have a prior on Z. The finite feature model discussed as a prelude to the IBP is such a prior. The
full conditional distribution for zik is given by:

                        P(zik |X, Z−(i,k) , σX , σA ) ∝ p(X|Z, σX , σA )P(zik |z−i,k ).             (22)

While evaluating p(X|Z, σX , σA ) always involves matrix multiplication, it need not always involve
a matrix inverse. Z^T Z can be rewritten as \sum_i z_i^T z_i, allowing us to use rank-one updates to efficiently

compute the inverse when only one z_i is modified. Defining M_{-i} = \left(\sum_{j \neq i} z_j^T z_j + \frac{\sigma_X^2}{\sigma_A^2} I\right)^{-1}, we have

    M_{-i} = (M^{-1} - z_i^T z_i)^{-1}
           = M - \frac{M z_i^T z_i M}{z_i M z_i^T - 1},    (23)

    M = (M_{-i}^{-1} + z_i^T z_i)^{-1}
      = M_{-i} - \frac{M_{-i} z_i^T z_i M_{-i}}{z_i M_{-i} z_i^T + 1}.    (24)

Iteratively applying these updates allows p(X|Z, σ_X, σ_A) to be computed via Equation 21 for dif-
ferent values of z_ik without requiring an excessive number of inverses, although a full-rank update
should be made occasionally to avoid accumulating numerical errors. The second part of Equation
22, P(z_ik | z_{-i,k}), can be evaluated using Equation 17.
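In code, the downdate and update of Equations 23 and 24 are a few lines each (a sketch, assuming NumPy row vectors z_i of shape (1, K)); the final comparison against direct inversion is only a sanity check.

import numpy as np

def downdate_M(M, z_i):
    """Equation 23: obtain M_{-i} from M by removing row z_i's contribution."""
    Mz = M @ z_i.T                                   # M z_i^T, shape (K, 1)
    return M - (Mz @ (z_i @ M)) / (z_i @ Mz - 1.0)

def update_M(M_minus, z_i):
    """Equation 24: recover M from M_{-i} by adding row z_i's contribution back."""
    Mz = M_minus @ z_i.T
    return M_minus - (Mz @ (z_i @ M_minus)) / (z_i @ Mz + 1.0)

# Sanity check against direct inversion.
rng = np.random.default_rng(1)
K, ratio = 5, 0.5
Z = rng.integers(0, 2, size=(10, K)).astype(float)
M = np.linalg.inv(Z.T @ Z + ratio * np.eye(K))
z_i = Z[3:4]                                         # row vector, shape (1, K)
M_minus = np.linalg.inv(Z.T @ Z - z_i.T @ z_i + ratio * np.eye(K))
print(np.allclose(downdate_M(M, z_i), M_minus))      # True
print(np.allclose(update_M(M_minus, z_i), M))        # True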

5.2 Taking the Infinite Limit
To make sure that we can define an infinite version of this model, we need to check that p(X|Z, σX , σA )
remains well-defined if Z has an unbounded number of columns. Z appears in two places in Equa-
tion 21: in \left|Z^T Z + \frac{\sigma_X^2}{\sigma_A^2} I\right| and in Z\left(Z^T Z + \frac{\sigma_X^2}{\sigma_A^2} I\right)^{-1} Z^T. We will examine how these behave as K → ∞.
   If Z is in left-ordered form, we can write it as [Z+ Z0 ], where Z+ consists of K+ columns with
sums mk > 0, and Z0 consists of K0 columns with sums mk = 0. It follows that the first of the two
expressions we are concerned with reduces to

    \left|Z^T Z + \frac{\sigma_X^2}{\sigma_A^2} I\right| = \left|\begin{bmatrix} Z_+^T Z_+ & 0 \\ 0 & 0 \end{bmatrix} + \frac{\sigma_X^2}{\sigma_A^2} I_K\right|
                                                         = \left(\frac{\sigma_X^2}{\sigma_A^2}\right)^{K_0} \left|Z_+^T Z_+ + \frac{\sigma_X^2}{\sigma_A^2} I_{K_+}\right|.    (25)
The appearance of K0 in this expression is not a problem, as we will see shortly. The abundance of
zeros in Z leads to a direct reduction of the second expression to

    Z\left(Z^T Z + \frac{\sigma_X^2}{\sigma_A^2} I\right)^{-1} Z^T = Z_+\left(Z_+^T Z_+ + \frac{\sigma_X^2}{\sigma_A^2} I_{K_+}\right)^{-1} Z_+^T,
which only uses the finite portion of Z. Combining these results yields the likelihood for the infinite
model
    p(X \mid Z, \sigma_X, \sigma_A) = \frac{1}{(2\pi)^{ND/2} \sigma_X^{(N - K_+)D} \sigma_A^{K_+ D} \left|Z_+^T Z_+ + \frac{\sigma_X^2}{\sigma_A^2} I_{K_+}\right|^{D/2}}
            \exp\left\{-\frac{1}{2\sigma_X^2} \operatorname{tr}\left(X^T \left(I - Z_+\left(Z_+^T Z_+ + \frac{\sigma_X^2}{\sigma_A^2} I_{K_+}\right)^{-1} Z_+^T\right) X\right)\right\}.    (26)

The K+ in the exponents of σA and σX appears as a result of introducing D/2 multiples of the factor
of \left(\frac{\sigma_X^2}{\sigma_A^2}\right)^{K_0} from Equation 25. The likelihood for the infinite model is thus just the likelihood for the
finite model defined on the first K+ columns of Z.
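In practice this means an implementation never has to represent the infinitely many all-zero columns: it can drop them and apply Equation 26 to Z_+ alone, with the σ exponents now involving K_+ rather than the unbounded K. A minimal sketch (ours, assuming NumPy):

import numpy as np

def log_marginal_likelihood_infinite(X, Z, sigma_x, sigma_a):
    """Log of Equation 26: the infinite-limit likelihood uses only the
    K_+ columns of Z that contain at least one non-zero entry."""
    X = np.asarray(X, dtype=float)
    Z = np.asarray(Z, dtype=float)
    Z_plus = Z[:, Z.sum(axis=0) > 0]                 # keep only active features
    N, D = X.shape
    K_plus = Z_plus.shape[1]
    ratio = sigma_x**2 / sigma_a**2
    M_inv = Z_plus.T @ Z_plus + ratio * np.eye(K_plus)
    _, logdet = np.linalg.slogdet(M_inv)
    proj = X - Z_plus @ np.linalg.solve(M_inv, Z_plus.T @ X)
    quad = np.trace(X.T @ proj)
    return (-0.5 * N * D * np.log(2 * np.pi)
            - (N - K_plus) * D * np.log(sigma_x)
            - K_plus * D * np.log(sigma_a)
            - 0.5 * D * logdet
            - quad / (2 * sigma_x**2))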
