Exponential Family Harmoniums with an Application to Information Retrieval


      Max Welling & Michal Rosen-Zvi                      Geoffrey Hinton
      Information and Computer Science            Department of Computer Science
           University of California                    University of Toronto
         Irvine CA 92697-3425 USA                 Toronto, 290G M5S 3G4, Canada
             welling@ics.uci.edu                       hinton@cs.toronto.edu

                                       Abstract

         Directed graphical models with one layer of observed random variables
         and one or more layers of hidden random variables have been the dom-
         inant modelling paradigm in many research fields. Although this ap-
         proach has met with considerable success, the causal semantics of these
         models can make it difficult to infer the posterior distribution over the
         hidden variables. In this paper we propose an alternative two-layer model
         based on exponential family distributions and the semantics of undi-
         rected models. Inference in these “exponential family harmoniums” is
         fast while learning is performed by minimizing contrastive divergence.
         A member of this family is then studied as an alternative probabilistic
         model for latent semantic indexing. In experiments it is shown that this
         model performs well on document retrieval tasks and provides an elegant
         solution to searching with keywords.

1 Introduction

Graphical models have become the basic framework for generative approaches to proba-
bilistic modelling. In particular models with latent variables have proven to be a powerful
way to capture hidden structure in the data. In this paper we study the important subclass
of models with one layer of observed units and one layer of hidden units.
Two-layer models can be subdivided into various categories depending on a number of
characteristics. An important property in that respect is given by the semantics of the
graphical model: either directed (Bayes net) or undirected (Markov random field). Most
two-layer models fall in the first category or are approximations derived from it: mixtures
of Gaussians (MoG), probabilistic PCA (pPCA), factor analysis (FA), independent compo-
nents analysis (ICA), sigmoid belief networks (SBN), latent trait models, latent Dirichlet
allocation (LDA, otherwise known as multinomial PCA, or mPCA) [1], exponential fam-
ily PCA (ePCA), probabilistic latent semantic indexing (pLSI) [6], non-negative matrix
factorization (NMF), and more recently the multiple multiplicative factor model (MMF)
[8].
Directed models enjoy important advantages such as easy (ancestral) sampling and easy
handling of unobserved attributes under certain conditions. Moreover, the semantics of
directed models dictates marginal independence of the latent variables, which is a suitable
modelling assumption for many datasets. However, it should also be noted that directed
models come with an important disadvantage: inference of the posterior distribution of the
latent variables given the observations (which is, for instance, needed within the context of
the EM algorithm) is typically intractable resulting in approximate or slow iterative proce-
dures. For important applications, such as latent semantic indexing (LSI), this drawback
may have serious consequences since we would like to swiftly search for documents that
are similar in the latent topic space.
A type of two-layer model that has not enjoyed much attention is the undirected analogue
of the above described family of models. It was first introduced in [10] where it was named
“harmonium”. Later papers have studied the harmonium under various names (the “com-
bination machine” in [4] and the “restricted Boltzmann machine” in [5]) and turned it into
a practical method by introducing efficient learning algorithms. Harmoniums have only
been considered in the context of discrete binary variables (in both hidden and observed
layers), and more recently in the Gaussian case [7]. The first contribution of this paper is to
extend harmoniums into the exponential family which will make them much more widely
applicable.
Harmoniums also enjoy a number of important advantages which are rather orthogonal to
the properties of directed models. Firstly, their product structure has the ability to produce
distributions with very sharp boundaries. Unlike mixture models, adding a new expert may
decrease or increase the variance of the distribution, which may be a major advantage in
high dimensions. Secondly, unlike directed models, inference in these models is very fast,
due to the fact that the latent variables are conditionally independent given the observations.
Thirdly, the latent variables of harmoniums produce distributed representations of the input.
This is much more efficient than the “grandmother-cell” representation associated with
mixture models where each observation is generated by a single latent variable. Their most
important disadvantage is the presence of a global normalization factor, which complicates
both the evaluation of probabilities of input vectors and the learning of free parameters from
examples (these probabilities are, however, easy to compute up to a constant, so it is still
possible to compare the probabilities of data points). The second objective of this paper is to show that the introduction of contrastive
divergence has greatly improved the efficiency of learning and paved the way for large scale
applications.
Whether a directed two-layer model or a harmonium is more appropriate for a particu-
lar application is an interesting question that will depend on many factors such as prior
(conditional) independence assumptions and/or computational issues such as efficiency of
inference. To expose the fact that harmoniums can be viable alternatives to directed mod-
els we introduce an entirely new probabilistic extension of latent semantic indexing (LSI)
[3] and show its usefulness in various applications. We do not want to claim superiority
of harmoniums over their directed cousins, but rather that harmoniums enjoy rather dif-
ferent advantages that deserve more attention and that may one day be combined with the
advantages of directed models.

2 Extending Harmoniums into the Exponential Family

Let xi , i = 1...Mx be the set of observed random variables and hj , j = 1...Mh be the set
of hidden (latent) variables. Both x and h can take values in either the continuous or the
discrete domain. In the latter case, each variable has states a = 1...D.
To construct an exponential family harmonium (EFH) we first choose M_x independent
distributions p_i(x_i) for the observed variables and M_h independent distributions p_j(h_j)
for the hidden variables from the exponential family, and combine them multiplicatively,

    p(\{x_i\}) = \prod_{i=1}^{M_x} r_i(x_i) \exp\Big[ \sum_a \theta_{ia} f_{ia}(x_i) - A_i(\{\theta_{ia}\}) \Big]          (1)

    p(\{h_j\}) = \prod_{j=1}^{M_h} s_j(h_j) \exp\Big[ \sum_b \lambda_{jb} g_{jb}(h_j) - B_j(\{\lambda_{jb}\}) \Big]          (2)

where {fia (xi ), gjb (hj )} are the sufficient statistics for the models (otherwise known as
features), {θia , λjb } the canonical parameters of the models and {Ai , Bj } the log-partition
functions (or log-normalization factors). In the following we will consider log(ri (xi )) and
log(sj (hj )) as additional features multiplied by a constant.
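For concreteness (a standard example, not spelled out in the paper): a binary unit x ∈ {0, 1} corresponds to the choice f(x) = x, r(x) = 1 and A(θ) = log(1 + e^θ), while a unit-variance Gaussian unit corresponds to f(x) = x, r(x) = exp(−x²/2)/√(2π) and A(θ) = θ²/2. Choosing such a member for each observed and hidden variable fixes the form of Eqns. 1 and 2.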
Next, we couple the random variables in the log-domain by the introduction of a quadratic
interaction term,
    p(\{x_i, h_j\}) \propto \exp\Big[ \sum_{ia} \theta_{ia} f_{ia}(x_i) + \sum_{jb} \lambda_{jb} g_{jb}(h_j) + \sum_{ijab} W^{jb}_{ia} f_{ia}(x_i)\, g_{jb}(h_j) \Big]          (3)

Note that we did not write the log-partition function for this joint model in order to indicate
our inability to compute it in general. For some combinations of exponential family
distributions it may be necessary to restrict the domain of W^{jb}_{ia} in order to maintain
normalizability of the joint probability distribution (e.g. W^{jb}_{ia} ≤ 0 or W^{jb}_{ia} ≥ 0). Although
we could also have mutually coupled the observed variables (and/or the hidden variables)
using similar interaction terms, we refrain from doing so in order to keep the learning and
inference procedures efficient. Consequently, by this construction the conditional probabil-
ity distributions are a product of independent distributions in the exponential family with
shifted parameters,
    p(\{x_i\}|\{h_j\}) = \prod_{i=1}^{M_x} \exp\Big[ \sum_a \hat{\theta}_{ia} f_{ia}(x_i) - A_i(\{\hat{\theta}_{ia}\}) \Big],
    \qquad \hat{\theta}_{ia} = \theta_{ia} + \sum_{jb} W^{jb}_{ia}\, g_{jb}(h_j)          (4)

    p(\{h_j\}|\{x_i\}) = \prod_{j=1}^{M_h} \exp\Big[ \sum_b \hat{\lambda}_{jb} g_{jb}(h_j) - B_j(\{\hat{\lambda}_{jb}\}) \Big],
    \qquad \hat{\lambda}_{jb} = \lambda_{jb} + \sum_{ia} W^{jb}_{ia}\, f_{ia}(x_i)          (5)
Finally, using the identity \sum_y \exp\big[ \sum_a \theta_a f_a(y) \big] = \exp\big[ A(\{\theta_a\}) \big], we can also
compute the marginal distributions of the observed and latent variables,

    p(\{x_i\}) \propto \exp\Big[ \sum_{ia} \theta_{ia} f_{ia}(x_i) + \sum_j B_j\big( \{ \lambda_{jb} + \sum_{ia} W^{jb}_{ia} f_{ia}(x_i) \} \big) \Big]          (6)

    p(\{h_j\}) \propto \exp\Big[ \sum_{jb} \lambda_{jb} g_{jb}(h_j) + \sum_i A_i\big( \{ \theta_{ia} + \sum_{jb} W^{jb}_{ia} g_{jb}(h_j) \} \big) \Big]          (7)

Note that 1) we can only compute the marginal distributions up to the normalization con-
stant and 2) in accordance with the semantics of undirected models, there is no marginal
independence between the variables (but rather conditional independence).
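To make the shifted-parameter conditionals of Eqns. 4 and 5 concrete, here is a minimal Python sketch (ours, not from the paper) for the all-binary special case, i.e. a restricted Boltzmann machine, where f_{ia}(x_i) = x_i, g_{jb}(h_j) = h_j and the log-partition functions are log(1 + e^z); the array shapes and parameter values are illustrative assumptions.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def p_hidden_given_visible(x, W, lam):
    # Eqn. 5, binary case: shifted parameter lambda_hat_j = lambda_j + sum_i W_ij x_i,
    # so p(h_j = 1 | x) = sigmoid(lambda_hat_j).
    return sigmoid(lam + x @ W)

def p_visible_given_hidden(h, W, theta):
    # Eqn. 4, binary case: shifted parameter theta_hat_i = theta_i + sum_j W_ij h_j,
    # so p(x_i = 1 | h) = sigmoid(theta_hat_i).
    return sigmoid(theta + W @ h)

# Toy usage with random parameters (hypothetical sizes).
rng = np.random.default_rng(0)
Mx, Mh = 6, 3
W = 0.1 * rng.standard_normal((Mx, Mh))
theta, lam = np.zeros(Mx), np.zeros(Mh)
x = rng.integers(0, 2, size=Mx).astype(float)
h = (rng.random(Mh) < p_hidden_given_visible(x, W, lam)).astype(float)  # one Gibbs half-step
x_recon = p_visible_given_hidden(h, W, theta)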

2.1 Training EF-Harmoniums using Contrastive Divergence

Let p̃({xi }) denote the data distribution (or the empirical distribution in case we observe a
finite dataset), and p the model distribution. Under the maximum likelihood objective the
learning rules for the EFH are conceptually simple (they are derived by taking derivatives
of the log-likelihood objective using Eqn. 6):

    \delta\theta_{ia} \propto \langle f_{ia}(x_i) \rangle_{\tilde{p}} - \langle f_{ia}(x_i) \rangle_{p},
    \qquad
    \delta\lambda_{jb} \propto \langle B'_{jb}(\hat{\lambda}_{jb}) \rangle_{\tilde{p}} - \langle B'_{jb}(\hat{\lambda}_{jb}) \rangle_{p}          (8)

    \delta W^{jb}_{ia} \propto \langle f_{ia}(x_i)\, B'_{jb}(\hat{\lambda}_{jb}) \rangle_{\tilde{p}} - \langle f_{ia}(x_i)\, B'_{jb}(\hat{\lambda}_{jb}) \rangle_{p}          (9)

where we have defined B'_{jb} = \partial B_j(\hat{\lambda}_{jb}) / \partial \hat{\lambda}_{jb}, with \hat{\lambda}_{jb} defined in Eqn. 5. One should note
that these learning rules are changing the parameters in an attempt to match the expected
sufficient statistics of the data distribution and the model distribution (while maximizing
entropy). Their simplicity is somewhat deceptive, however, since the averages ⟨·⟩_p are
intractable to compute analytically and Markov chain sampling or mean field calculations
are typically wheeled out to approximate them. Both have difficulties: mean field can only
represent one mode of the distribution and MCMC schemes are slow and suffer from high
variance in their estimates.
In the case of binary harmoniums (restricted BMs) it was shown in [5] that contrastive di-
vergence has the potential to greatly improve on the efficiency and reduce the variance of
the estimates needed in the learning rules. The idea is that instead of running the Gibbs
sampler to its equilibrium distribution we initialize Gibbs samplers on each data-vector
and run them for only one (or a few) steps in parallel. The averages ⟨·⟩_p in the learning rules
Eqns. 8, 9 are now replaced by averages ⟨·⟩_{p_CD}, where p_CD is the distribution of samples
that resulted from the truncated Gibbs chains. This idea is readily generalized to EFHs.
Due to space limitations we refer to [5] for more details on contrastive divergence learning.
(Non-believers in contrastive divergence are invited to simply run the Gibbs sampler to
equilibrium before they do an update of the parameters; they will find that, due to the special
bipartite structure of EFHs, learning is still more efficient than for general Boltzmann machines.)
Deterministic learning rules can also be derived straightforwardly by generalizing the
results described in [12] to the exponential family.
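For illustration, a sketch (again ours, for the binary setup of the previous snippet) of a single CD-1 update, in which the model averages of Eqns. 8 and 9 are replaced by averages over one-step Gibbs reconstructions; the learning rate and batch layout are assumptions.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_update(X, W, theta, lam, rng, lr=0.1):
    # X: (N, Mx) batch of binary data vectors; W: (Mx, Mh); theta: (Mx,); lam: (Mh,).
    # Positive phase: expected sufficient statistics under the data (Eqns. 8, 9).
    ph_data = sigmoid(lam + X @ W)                         # p(h = 1 | data)
    # Negative phase: one Gibbs step started at the data (truncated chain, p_CD).
    h = (rng.random(ph_data.shape) < ph_data).astype(float)
    px = sigmoid(theta + h @ W.T)
    x_cd = (rng.random(px.shape) < px).astype(float)
    ph_cd = sigmoid(lam + x_cd @ W)
    # Gradient steps: data averages minus truncated-chain averages.
    N = X.shape[0]
    W += lr * (X.T @ ph_data - x_cd.T @ ph_cd) / N
    theta += lr * (X.mean(axis=0) - x_cd.mean(axis=0))
    lam += lr * (ph_data.mean(axis=0) - ph_cd.mean(axis=0))
    return W, theta, lam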

3 A Harmonium Model for Latent Semantic Indexing
To illustrate the new possibilities that have opened up by extending harmoniums to the
exponential family we will next describe a novel model for latent semantic indexing (LSI).
This will represent the undirected counterpart of pLSI [6] and LDA [1].
One of the major drawbacks of LSI is that inherently discrete data (word counts) are being
modelled with variables in the continuous domain. The power of LSI on the other hand is
that it provides an efficient mapping of the input data into a lower dimensional (continuous)
latent space that has the effect of de-noising the input data and inferring semantic relation-
ships among words. To stay faithful to this idea and to construct a probabilistic model on
the correct (discrete) domain we propose the following EFH with continuous latent topic
variables, hj , and discrete word-count variables, xia ,
    p(\{h_j\}|\{x_{ia}\}) = \prod_{j=1}^{M_h} N_{h_j}\Big[ \sum_{ia} W^j_{ia} x_{ia},\; 1 \Big]          (10)

    p(\{x_{ia}\}|\{h_j\}) = \prod_{i=1}^{M_x} S_{\{x_{ia}\}}\Big[ \alpha_{ia} + \sum_j h_j W^j_{ia} \Big]          (11)

Note that {x_{ia}} represent indicator variables satisfying \sum_a x_{ia} = 1 ∀i, where x_{ia} = 1
means that word "i" in the vocabulary was observed "a" times. N_h[µ, σ] denotes a normal
distribution with mean µ and standard deviation σ, and S_{\{x_a\}}[γ_a] ∝ exp(\sum_{a=1}^D γ_a x_a)
is the softmax function defining a probability distribution over x. Using Eqn. 6 we can easily
deduce the marginal distribution of the input variables,

    p(\{x_{ia}\}) \propto \exp\Big[ \sum_{ia} \alpha_{ia} x_{ia} + \frac{1}{2} \sum_j \Big( \sum_{ia} W^j_{ia} x_{ia} \Big)^2 \Big]          (12)

We observe that the role of the components W^j_{ia} is that of templates or prototypes: input
vectors x_{ia} with large inner products \sum_{ia} W^j_{ia} x_{ia} ∀j will have high probability under this
model. Just like pLSI and LDA can be considered as natural generalizations of factor
analysis (which underlies LSI) into the class of directed models on the discrete domain, the
above model can be considered as the natural generalization of factor analysis into the class
of undirected models on the discrete domain. This idea is supported by the result that the
same model with Gaussian units in both hidden and observed layers is in fact equivalent to
factor analysis [7].
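As an illustration of how Eqns. 10 and 11 translate into computations (our own sketch; shapes are assumed, with W stored as an (M_h, M_x, D) array of components W^j_{ia}):

import numpy as np

def latent_mean(x, W):
    # Eqn. 10: p(h_j | x) = N[ sum_{ia} W^j_{ia} x_{ia}, 1 ]; return the vector of means.
    # x: (Mx, D) one-hot count indicators, W: (Mh, Mx, D).
    return np.tensordot(W, x, axes=([1, 2], [0, 1]))       # shape (Mh,)

def word_bin_distributions(h, W, alpha):
    # Eqn. 11: for each word i, a softmax over count bins a of alpha_{ia} + sum_j h_j W^j_{ia}.
    logits = alpha + np.tensordot(h, W, axes=1)            # shape (Mx, D)
    logits -= logits.max(axis=1, keepdims=True)            # numerical stability
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

The first function is also the mapping used later in the paper to place documents in latent topic space for retrieval.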

3.1 Identifiability

From the form of the marginal distribution Eqn.12 we can derive a number of transforma-
tions of the parameters that will leave the distribution invariant. First we note that the
components W^j_{ia} can be rotated and mirrored arbitrarily in latent space (technically, by
the Euclidean group of transformations): W^j_{ia} → \sum_k U^{jk} W^k_{ia} with U^T U = I.
Secondly, we note that the observed variables x_{ia} satisfy a constraint, \sum_a x_{ia} = 1 ∀i.
This results in a combined shift invariance for the components W^j_{ia} and the offsets α_{ia}.
Taken together, this results in the following set of transformations,

    W^j_{ia} \rightarrow \sum_k U^{jk} \big( W^k_{ia} + V^k_i \big),
    \qquad
    \alpha_{ia} \rightarrow (\alpha_{ia} + \beta_i) - \sum_j \Big( \sum_l V^j_l \Big) W^j_{ia}          (13)

where U^T U = I. Although these transformations leave the marginal distribution over the
observable variables invariant, they do change the latent representation and as such may
have an impact on retrieval performance (if we use a fixed similarity measure between
topic representations of documents). To fix the spurious degrees of freedom we have chosen
to impose conditions on the representations in latent space, h^n_j = \sum_{ia} W^j_{ia} x^n_{ia}. First,
we center the latent representations which has the effect of minimizing the “activity” of
the latent variables and moving as much log-probability as possible to the constant com-
ponent αia . Next we align the axes in latent space with the eigen-directions of the latent
covariance matrix. This has the effect of approximately decorrelating the marginal latent
activities. This follows because the marginal distribution in latent space can be approximated
by p(\{h_j\}) ≈ \frac{1}{N} \sum_n \prod_j N_{h_j}[\sum_{ia} W^j_{ia} x^n_{ia}, 1], where we have used Eqn. 10 and
replaced p(\{x_{ia}\}) by its empirical distribution. Denoting by µ and Σ = U^T Λ U the sample
mean and sample covariance of \{h^n_j\}, it is not hard to show that the following transformation
will have the desired effect (see footnote 5):

    W^j_{ia} \rightarrow \sum_k U^{jk} \Big( W^k_{ia} - \frac{1}{M_x} \mu^k \Big),
    \qquad
    \alpha_{ia} \rightarrow \alpha_{ia} + \sum_j \mu^j W^j_{ia}          (14)
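A minimal sketch (ours; same assumed array shapes as above) of this gauge-fixing step, given the latent representations h^n_j of the training documents:

import numpy as np

def gauge_fix(W, alpha, H):
    # W: (Mh, Mx, D) components, alpha: (Mx, D) offsets,
    # H: (N, Mh) latent codes h^n_j = sum_{ia} W^j_{ia} x^n_{ia} of the training data.
    Mh, Mx, D = W.shape
    mu = H.mean(axis=0)                                   # sample mean of the latent codes
    Sigma = np.cov(H, rowvar=False)                       # sample covariance (Mh, Mh)
    _, U = np.linalg.eigh(Sigma)                          # Sigma = U diag(.) U^T
    # Eqn. 14, taking the rotation U^{jk} to be U.T (rows = eigen-directions of Sigma):
    alpha_new = alpha + np.tensordot(mu, W, axes=1)       # alpha_{ia} + sum_j mu^j W^j_{ia}
    W_new = np.tensordot(U.T, W - (mu / Mx)[:, None, None], axes=1)
    return W_new, alpha_new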

One could go one step further than the de-correlation process described above by introduc-
ing covariances Σ in the conditional Gaussian distribution of the latent variables Eqn.10.
This would not result in a more general model because the effect of this on the marginal
distribution over the observed variables is given by W^j_{ia} → \sum_k K^{jk} W^k_{ia} with K K^T = Σ.
However, the extra freedom can be used to define axes in latent space for which the pro-
jected data become approximately independent and have the same scale in all directions.
Footnote 5: Some spurious degrees of freedom remain, since shifts β_i and shifts V^j_i that satisfy
\sum_i V^j_i = 0 will not affect the projection into latent space. One could decide to fix the remaining
degrees of freedom by, for example, requiring that the components are as small as possible in L2 norm
(subject to the constraint \sum_i V^j_i = 0), leading to the further shifts
W^j_{ia} → W^j_{ia} - \frac{1}{D} \sum_a W^j_{ia} + \frac{1}{D M_x} \sum_{ia} W^j_{ia} and α_{ia} → α_{ia} - \frac{1}{D} \sum_a α_{ia}.

Figure 1: Precision-recall curves when the query was (a) entire documents, (b) 1 keyword, (c) 2
keywords, for the EFH with and without 10 MF iterations, LSI, TF-IDF weighted words, and random
guessing. PR curves with more keywords looked very similar to (c). A marker at position k (counted
from the left along a curve) indicates that 2^{k-1} documents were retrieved.

4 Experiments
Newsgroups:
We have used the reduced version of the "20newsgroups" dataset prepared for MATLAB
by Roweis (http://www.cs.toronto.edu/~roweis/data.html). Documents are presented as
100-dimensional binary occurrence vectors and tagged as a member of 1 out of 4 domains.
Documents contain approximately 4% of the words, averaged across the 16242 postings.
An EFH model with 10 latent variables was trained on 12000 training cases using sto-
chastic gradient descent on mini-batches of 1000 randomly chosen documents (training
time approximately 1 hour on a 2GHz PC). A momentum term was added to speed up
convergence. To test the quality of the trained model we mapped the remaining 4242
query documents into latent space using h_j = \sum_{ia} W^j_{ia} x_{ia}, where \{W^j_{ia}, α_{ia}\} were
"gauged" as in Eqn. 14. Precision-recall curves were computed by comparing training
and query documents using the usual “cosine coefficient” (cosine of the angle between
documents) and reporting success when the retrieved document was in the same domain
as the query (results averaged over all queries). In figure 1a we compare the results
with LSI (also 10 dimensions) [3] where we preprocessed the data in the standard way
(x → log(1 + x) and entropy weighting of the words) and to similarity in word space
using TF-IDF weighting of the words. In figure 1b,c we show PR curves when only 1 or
2 keywords were provided corresponding to randomly observed words in the query docu-
ment. The EFH model allows a principled way to deal with unobserved entries by inferring
them using the model (in all other methods we insert 0 for the unobserved entries which
corresponds to ignoring them). We have used a few iterations of mean field to achieve that:
x̂_{ia} → exp[\sum_{jb}(\sum_k W^k_{ia} W^k_{jb} + α_{jb}) x̂_{jb}] / γ_i, where γ_i is a normalization constant
and where the x̂_{ia} represent probabilities: x̂_{ia} ∈ [0, 1], \sum_{a=1}^{D} x̂_{ia} = 1 ∀i. We note that this is
still highly efficient and achieves a significant improvement in performance. In all cases we
find that without any preprocessing or weighting EFH still outperforms the other methods
except when large numbers of documents were retrieved.
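A sketch (ours, not the authors' code) of the mean-field fill-in used above, applied to the unobserved words while the clamped keywords stay fixed at their observed one-hot values; names and shapes are assumptions, and the α term is kept exactly as in the printed update (it is constant across the bins of a given word and is absorbed by the per-word normalization γ_i):

import numpy as np

def mean_field_fill_in(x_hat, observed, W, alpha, n_iter=10):
    # x_hat: (Mx, D) current probabilities x̂_{ia}, each row summing to 1;
    # observed: (Mx,) boolean mask of clamped (observed) words;
    # W: (Mh, Mx, D) components W^j_{ia}; alpha: (Mx, D) offsets.
    Mh, Mx, D = W.shape
    Wf = W.reshape(Mh, Mx * D)                             # flatten the (word, bin) index
    for _ in range(n_iter):
        xf = x_hat.reshape(Mx * D)
        # logits_{ia} = sum_{jb} (sum_k W^k_{ia} W^k_{jb} + alpha_{jb}) x̂_{jb}
        logits = (Wf.T @ (Wf @ xf)).reshape(Mx, D) + float(alpha.reshape(-1) @ xf)
        logits -= logits.max(axis=1, keepdims=True)
        new = np.exp(logits)
        new /= new.sum(axis=1, keepdims=True)              # per-word normalization gamma_i
        x_hat = np.where(observed[:, None], x_hat, new)    # keep clamped words fixed
    return x_hat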
Figure 2: (a) Fraction of observed words that was correctly predicted by EFH, LSI and LDA using 5
and 10 latent variables when we vary the number of keywords (observed words that were "clamped"),
(b) latent 3-D representations of newsgroups data, (c) fraction of documents retrieved by EFH on the
NIPS dataset that was also retrieved by the TF-IDF method.

In the next experiment we compared the performance of EFH, LSI and LDA by training models
on a random subset of 15430 documents with 5 and 10 latent dimensions (this was found to be
close to optimal for LDA). The EFH and LSI models were trained as in the previous experiment,
while the training and testing details for LDA can be found in [9] (the approximate inference
procedure was implemented using Gibbs sampling). For the remaining test documents we
clamped a varying number of observed words and
asked the models to predict the remaining observed words in the documents by comput-
ing the probabilities for all words in the vocabulary to be present and ranking them (see
previous paragraph for details). By comparing the list of the R remaining observed words
in the document with the top-R ranked inferred words we computed the fraction of cor-
rectly predicted words. The results are shown in figure 2a as a function of the number of
clamped words. To provide anecdotal evidence that EFH can infer semantic relationships
we clamped the words ’drive’ ’driver’ and ’car’ which resulted in: ’car’ ’drive’ ’engine’
’dealer’ ’honda’ ’bmw’ ’driver’ ’oil’ as the most probable words in the documents. Also,
clamping ’pc’ ’driver’ and ’program’ resulted in: ’windows’ ’card’ ’dos’ ’graphics’ ’soft-
ware’ ’pc’ ’program’ ’files’.
NIPS Conference Papers:
Next we trained a model with 5 latent dimensions on the NIPS dataset (obtained from
http://www.cs.toronto.edu/~roweis/data.html), which has a large
vocabulary size (13649 words) and contains 1740 documents of which 1557 were used
for training and 183 for testing. Count values were redistributed into 12 bins. The array
W therefore contains 5 · 13649 · 12 = 818940 parameters. Training was completed in the
order of a few days. Due to the lack of document labels it is hard to assess the quality of the
trained model. We chose to compare performance on document retrieval with the "golden
standard”: cosine similarity in TF-IDF weighted word space. In figure 2c we depict the
fraction of documents retrieved by EFH that was also retrieved by TF-IDF as we vary the
number of retrieved documents. This correlation is indeed very high but note that EFH
computes similarity in a 5-D space while TF-IDF computes similarity in a 13649-D space.

5 Discussion
The main point of this paper was to show that there is a flexible family of 2-layer probabilis-
tic models that represents a viable alternative to 2-layer causal (directed) models. These
models enjoy very different properties and can be trained efficiently using contrastive di-
vergence. As an example we have studied an EFH alternative for latent semantic indexing
where we have found that the EFH has a number of favorable properties: fast inference al-
lowing fast document retrieval and a principled approach to retrieval with keywords. These
were preliminary investigations and it is likely that domain specific adjustments such as a
more intelligent choice of features or parameterization could further improve performance.
Previous examples of EFH include the original harmonium [10], Gaussian variants thereof
[7], and the PoT model [13] which couples a gamma distribution with the covariance of a
normal distribution. Some exponential family extensions of general Boltzmann machines
were proposed in [2], [14], but they do not have the bipartite structure that we study here.
While the components of the Gaussian-multinomial EFH act as prototypes or templates
for highly probable input vectors, the components of the PoT act as constraints (i.e. input
vectors with large inner product have low probability). This can be traced back to the
shape of the non-linearity B in Eqn.6. Although by construction B must be convex (it
is the log-partition function), for large input values it can be either positive (prototypes,
e.g. B(x) = x^2) or negative (constraints, e.g. B(x) = −log(1 + x)). It has proven
difficult to jointly model both prototypes and constraints in this formalism except for
the fully Gaussian case [11]. A future challenge is therefore to start the modelling process
with the desired non-linearity and to subsequently introduce auxiliary variables to facilitate
inference and learning.

References
 [1] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. Journal of Machine Learning
     Research, 3:993–1022, 2003.
 [2] C.K.I. Williams. Continuous valued Boltzmann machines. Technical report, 1993.
 [3] S.C. Deerwester, S.T. Dumais, T.K. Landauer, G.W. Furnas, and R.A. Harshman. Indexing by
     latent semantic analysis. Journal of the American Society for Information Science, 41(6):391–
     407, 1990.
 [4] Y. Freund and D. Haussler. Unsupervised learning of distributions of binary vectors using
     2-layer networks. In Advances in Neural Information Processing Systems, volume 4, pages
     912–919, 1992.
 [5] G.E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Com-
     putation, 14:1771–1800, 2002.
 [6] T. Hofmann. Probabilistic latent semantic analysis. In Proc. of Uncertainty in Artificial
     Intelligence, UAI’99, Stockholm, 1999.
 [7] T. K. Marks and J. R. Movellan. Diffusion networks, products of experts, and factor analysis.
     Technical Report UCSD MPLab TR 2001.02, University of California San Diego, 2001.
 [8] B. Marlin and R. Zemel. The multiple multiplicative factor model for collaborative filtering. In
     Proceedings of the 21st International Conference on Machine Learning, volume 21, 2004.
 [9] M. Rosen-Zvi, T. Griffiths, M. Steyvers, and P. Smyth. The author-topic model for authors
     and documents. In Proceedings of the Conference on Uncertainty in Artificial Intelligence,
     volume 20, 2004.
[10] P. Smolensky. Information processing in dynamical systems: foundations of harmony theory.
     In D.E. Rumelhart and J.L. McClelland, editors, Parallel Distributed Processing: Explorations
     in the Microstructure of Cognition. Volume 1: Foundations. McGraw-Hill, New York, 1986.
[11] M. Welling, F. Agakov, and C.K.I. Williams. Extreme components analysis. In Advances in
     Neural Information Processing Systems, volume 16, Vancouver, Canada, 2003.
[12] M. Welling and G.E. Hinton. A new learning algorithm for mean field Boltzmann machines.
     In Proceedings of the International Conference on Artificial Neural Networks, Madrid, Spain,
     2001.
[13] M. Welling, G.E. Hinton, and S. Osindero. Learning sparse topographic representations with
     products of student-t distributions. In Advances in Neural Information Processing Systems,
     volume 15, Vancouver, Canada, 2002.
[14] R. Zemel, C. Williams, and M. Mozer. Lending direction to neural networks. Neural Networks,
     8(4):503–512, 1995.