Adaptor Grammars: A Framework for Specifying Compositional Nonparametric Bayesian Models

Mark Johnson
Microsoft Research / Brown University
Mark_Johnson@Brown.edu

Thomas L. Griffiths
University of California, Berkeley
Tom_Griffiths@Berkeley.edu

Sharon Goldwater
Stanford University
sgwater@gmail.com

Abstract
This paper introduces adaptor grammars, a class of probabilistic models of language that generalize probabilistic context-free grammars (PCFGs). Adaptor
grammars augment the probabilistic rules of PCFGs with adaptors that can induce dependencies among successive uses. With a particular choice of adaptor,
based on the Pitman-Yor process, nonparametric Bayesian models of language
using Dirichlet processes and hierarchical Dirichlet processes can be written as
simple grammars. We present a general-purpose inference algorithm for adaptor
grammars, making it easy to define and use such models, and illustrate how several
existing nonparametric Bayesian models can be expressed within this framework.

1 Introduction

Probabilistic models of language make two kinds of substantive assumptions: assumptions about
the structures that underlie language, and assumptions about the probabilistic dependencies in the
process by which those structures are generated. Typically, these assumptions are tightly coupled.
For example, in probabilistic context-free grammars (PCFGs), structures are built up by applying a
sequence of context-free rewrite rules, where each rule in the sequence is selected independently at
random. In this paper, we introduce a class of probabilistic models, which we call adaptor grammars, that weaken the independence assumptions made in PCFGs. Adaptor grammars insert additional stochastic processes called adaptors into the procedure for generating structures, allowing the
expansion of a symbol to depend on the way in which that symbol has been rewritten in the past.
Introducing dependencies among the applications of rewrite rules extends the set of distributions
over linguistic structures that can be characterized by a simple grammar.
Adaptor grammars provide a simple framework for defining nonparametric Bayesian models of
language. With a particular choice of adaptor, based on the Pitman-Yor process [1, 2, 3], simple
context-free grammars specify distributions commonly used in nonparametric Bayesian statistics,
such as Dirichlet processes [4] and hierarchical Dirichlet processes [5]. As a consequence, many
nonparametric Bayesian models that have been used in computational linguistics, such as models of
morphology [6] and word segmentation [7], can be expressed as adaptor grammars. We introduce a
general-purpose inference algorithm for adaptor grammars, which makes it easy to define nonparametric Bayesian models that generate different linguistic structures and perform inference in those
models.
The rest of this paper is structured as follows. Section 2 introduces the key technical ideas we
will use. Section 3 defines adaptor grammars, while Section 4 presents some examples. Section 5
describes the Markov chain Monte Carlo algorithm we have developed to sample from the posterior

distribution over structures generated by an adaptor grammar. Software implementing this algorithm
is available from http://cog.brown.edu/mj/Software.htm.

2 Background

In this section, we introduce the two technical ideas that are combined in the adaptor grammars
discussed here: probabilistic context-free grammars, and the Pitman-Yor process. We adopt a nonstandard formulation of PCFGs in order to emphasize that they are a kind of recursive mixture, and
to establish the formal devices we use to specify adaptor grammars.
2.1 Probabilistic context-free grammars

A context-free grammar (CFG) is a quadruple (N, W, R, S) where N is a finite set of nonterminal symbols, W is a finite set of terminal symbols disjoint from N, R is a finite set of productions or rules of the form A → β where A ∈ N and β ∈ (N ∪ W)* (the Kleene closure of the terminal and nonterminal symbols), and S ∈ N is a distinguished nonterminal called the start symbol. A CFG associates with each symbol A ∈ N ∪ W a set T_A of finite, labeled, ordered trees. If A is a terminal symbol then T_A is the singleton set consisting of a unit tree (i.e., containing a single node) labeled A. The sets of trees associated with nonterminals are defined recursively as follows:
$$T_A \;=\; \bigcup_{A \to B_1 \ldots B_n \in R_A} \mathrm{TREE}_A(T_{B_1}, \ldots, T_{B_n})$$

where R_A is the subset of productions in R with left-hand side A, and TREE_A(T_{B_1}, ..., T_{B_n}) is the set of all trees whose root node is labeled A, that have n immediate subtrees, and where the ith subtree is a member of T_{B_i}. The set of trees generated by the CFG is T_S, and the language generated by the CFG is the set {YIELD(t) : t ∈ T_S} of terminal strings or yields of the trees in T_S.
A probabilistic context-free grammar (PCFG) is a quintuple (N, W, R, S, θ), where (N, W, R, S) is a CFG and θ is a vector of non-negative real numbers indexed by the productions in R such that

$$\sum_{A \to \beta \in R_A} \theta_{A \to \beta} \;=\; 1.$$

Informally, θ_{A→β} is the probability of expanding the nonterminal A using the production A → β.

θ is used to define a distribution G_A over the trees T_A for each symbol A. If A is a terminal symbol, then G_A is the distribution that puts all of its mass on the unit tree labeled A. The distributions G_A for nonterminal symbols are defined recursively over T_A as follows:

$$G_A \;=\; \sum_{A \to B_1 \ldots B_n \in R_A} \theta_{A \to B_1 \ldots B_n}\, \mathrm{TREEDIST}_A(G_{B_1}, \ldots, G_{B_n}) \qquad (1)$$

where TREEDIST_A(G_{B_1}, ..., G_{B_n}) is the distribution over TREE_A(T_{B_1}, ..., T_{B_n}) satisfying:

$$\mathrm{TREEDIST}_A(G_1, \ldots, G_n)\big(\mathrm{TREE}_A(t_1, \ldots, t_n)\big) \;=\; \prod_{i=1}^{n} G_i(t_i).$$

That is, T REE D ISTA (G1 , . . . , Gn ) is a distribution over trees where the root node is labeled A and
each subtree ti is generated independently from Gi ; it is this assumption that adaptor grammars
relax. The distribution over trees generated by the PCFG is GS , and the probability of a string is the
sum of the probabilities of all trees with that string as their yields.
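To make the recursive-mixture reading of Equation 1 concrete, here is a minimal illustrative sketch (not from the paper; the toy grammar, probabilities, and function names are our assumptions) that samples a tree by choosing a production for each nonterminal independently at random and recursing on its children:

```python
import random

# A toy PCFG: each nonterminal maps to a list of (right-hand side, probability)
# pairs; symbols absent from the dictionary are treated as terminals.
PCFG = {
    "S":  [(("NP", "VP"), 1.0)],
    "NP": [(("D", "N"), 1.0)],
    "VP": [(("V", "NP"), 0.7), (("V",), 0.3)],
    "D":  [(("the",), 1.0)],
    "N":  [(("dog",), 0.5), (("cat",), 0.5)],
    "V":  [(("saw",), 1.0)],
}

def sample_tree(symbol, grammar):
    """Sample a tree from G_symbol: pick a production independently at random,
    then sample each immediate subtree recursively (Equation 1)."""
    if symbol not in grammar:                      # terminal: unit tree
        return symbol
    rhs_options, probs = zip(*grammar[symbol])
    rhs = random.choices(rhs_options, weights=probs)[0]
    return (symbol,) + tuple(sample_tree(child, grammar) for child in rhs)

def tree_yield(tree):
    """YIELD(t): the terminal string of a tree."""
    if isinstance(tree, str):
        return [tree]
    return [w for child in tree[1:] for w in tree_yield(child)]

print(tree_yield(sample_tree("S", PCFG)))
```

It is exactly this independent, history-free choice of production at every node that adaptor grammars relax.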
2.2 The Pitman-Yor process

The Pitman-Yor process [1, 2, 3] is a stochastic process that generates partitions of integers. It is
most intuitively described using the metaphor of seating customers at a restaurant. Assume we have
a numbered sequence of tables, and zi indicates the number of the table at which the ith customer is
seated. Customers enter the restaurant sequentially. The first customer sits at the first table, z1 = 1,
and the (n+1)st customer chooses a table from the distribution

$$z_{n+1} \mid z_1, \ldots, z_n \;\sim\; \frac{m a + b}{n + b}\,\delta_{m+1} \;+\; \sum_{k=1}^{m} \frac{n_k - a}{n + b}\,\delta_k \qquad (2)$$

where m is the number of different indices appearing in the sequence z = (z_1, ..., z_n), n_k is the number of times k appears in z, and δ_k is the Kronecker delta function, i.e., the distribution that puts all of its mass on k. The process is specified by two real-valued parameters, a ∈ [0, 1] and b ≥ 0. The probability of a particular sequence of assignments z, with a corresponding vector of table counts n = (n_1, ..., n_m), is

$$P(z) \;=\; \mathrm{PY}(n \mid a, b) \;=\; \frac{\prod_{k=1}^{m}\left( (a(k-1) + b)\, \prod_{j=1}^{n_k - 1}(j - a) \right)}{\prod_{i=0}^{n-1}(i + b)}. \qquad (3)$$
From this it is easy to see that the distribution produced by the Pitman-Yor process is exchangeable,
with the probability of z being unaffected by permutation of the indices of the zi .
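As a sanity check on Equation 3, the following sketch (ours; the function name and toy numbers are illustrative) computes PY(n | a, b) directly from the table counts:

```python
from math import prod  # Python 3.8+

def pitman_yor_prob(counts, a, b):
    """PY(n | a, b), Equation 3: probability of a seating arrangement
    with table counts n = (n_1, ..., n_m)."""
    n = sum(counts)
    numerator = prod(
        (a * k + b) * prod(j - a for j in range(1, n_k))
        for k, n_k in enumerate(counts)   # k runs over 0..m-1, matching a(k-1)+b in Eq. 3
    )
    return numerator / prod(i + b for i in range(n))

# Three customers at one table and one at a second table, with a = 0.5, b = 1:
# seating them sequentially per Equation 2 gives 0.25 * 0.5 * 0.375 = 0.046875.
print(pitman_yor_prob([3, 1], a=0.5, b=1.0))
```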

Equation 2 instantiates a kind of "rich get richer" dynamics, with customers being more likely to sit
at more popular tables. We can use the Pitman-Yor process to define distributions with this character
on any desired domain. Assume that every table in our restaurant has a value xj placed on it, with
those values being generated from an exchangeable distribution G, which we will refer to as the
generator. Then, we can sample a sequence of variables y = (y1 , . . . , yn ) by using the Pitman-Yor
process to produce z and setting yi = xzi . Intuitively, this corresponds to customers entering the
restaurant, and emitting the values of the tables they choose. The distribution defined on y by this
process will be exchangeable, and has two interesting special cases that depend on the parameters
of the Pitman-Yor process. When a = 1, every customer is assigned to a new table, and the yi are
drawn from G. When a = 0, the distribution on the yi is that induced by the Dirichlet process [4],
a stochastic process that is commonly used in nonparametric Bayesian statistics, with concentration
parameter b and base distribution G.
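The following sketch (ours; the toy generator is an illustrative assumption) draws y_1, ..., y_n by seating customers according to Equation 2 and labelling each new table with a draw from the generator G; with a = 0 it is a draw from a Dirichlet process with concentration parameter b and base distribution G:

```python
import random

def sample_pitman_yor(n, a, b, generator):
    """Seat n customers per Equation 2 and emit the value on each chosen table."""
    counts, values, samples = [], [], []
    for _ in range(n):
        m = len(counts)
        weights = [n_k - a for n_k in counts] + [m * a + b]   # existing vs. new table
        k = random.choices(range(m + 1), weights=weights)[0]
        if k == m:                      # new table, labelled with a fresh draw from G
            counts.append(1)
            values.append(generator())
        else:                           # existing table: the rich get richer
            counts[k] += 1
        samples.append(values[k])
    return samples

# A draw from a Dirichlet process (a = 0) with a toy generator G:
print(sample_pitman_yor(15, a=0.0, b=1.0,
                        generator=lambda: random.choice(["walk", "jump", "skip"])))
```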
We can also identify another scheme that generates the distribution outlined in the previous paragraph. Let H be a discrete distribution produced by generating a set of atoms x from G and weights
on those atoms from the two-parameter Poisson-Dirichlet distribution [2]. We could then generate a
sequence of samples y from H. If we integrate over values of H, the distribution on y is the same
as that obtained via the Pitman-Yor process [2, 3].

3 Adaptor grammars

In this section, we use the ideas introduced in the previous section to give a formal definition of
adaptor grammars. We first state this definition in full generality, allowing any choice of adaptor,
and then consider the case where the adaptor is based on the Pitman-Yor process in more detail.
3.1 A general definition of adaptor grammars

Adaptor grammars extend PCFGs by inserting an additional component called an adaptor into the PCFG recursion (Equation 1). An adaptor C is a function from a distribution G to a distribution over distributions with the same support as G. An adaptor grammar is a sextuple (N, W, R, S, θ, C) where (N, W, R, S, θ) is a PCFG and the adaptor vector C is a vector of (parameters specifying) adaptors indexed by N. That is, C_A maps a distribution over trees T_A to another distribution over T_A, for each A ∈ N. An adaptor grammar associates each symbol with two distributions G_A and H_A over T_A. If A is a terminal symbol then G_A and H_A are distributions that put all their mass on the unit tree labeled A, while G_A and H_A for nonterminal symbols are defined as follows:¹

$$G_A \;=\; \sum_{A \to B_1 \ldots B_n \in R_A} \theta_{A \to B_1 \ldots B_n}\, \mathrm{TREEDIST}_A(H_{B_1}, \ldots, H_{B_n}) \qquad (4)$$

$$H_A \;\sim\; C_A(G_A)$$

The intuition here is that GA instantiates the PCFG recursion, while the introduction of HA makes
it possible to modify the independence assumptions behind the resulting distribution through the
choice of the adaptor, CA . If the adaptor is the identity function, with HA = GA , the result is
just a PCFG. However, other distributions over trees can be defined by choosing other adaptors. In
practice, we integrate over HA , to define a single distribution on trees for any choice of adaptors C.
¹This definition allows an adaptor grammar to include self-recursive or mutually recursive CFG productions (e.g., X → X Y, or X → Y Z and Y → X W). Such recursion complicates inference, so we restrict ourselves to grammars where the adapted nonterminals are not recursive.

3.2 Pitman-Yor adaptor grammars

The definition given above allows the adaptors to be any appropriate process, but our focus in the
remainder of the paper will be on the case where the adaptor is based on the Pitman-Yor process.
Pitman-Yor processes can cache, i.e., increase the probability of, frequently occurring trees. The capacity to replace the independent selection of rewrite rules with an exchangeable stochastic process
enables adaptor grammars based on the Pitman-Yor process to define probability distributions over
trees that cannot be expressed using PCFGs.
A Pitman-Yor adaptor grammar (PYAG) is an adaptor grammar where the adaptors C are based on the Pitman-Yor process. A Pitman-Yor adaptor C_A(G_A) is the distribution obtained by generating a set of atoms from the distribution G_A and weights on those atoms from the two-parameter Poisson-Dirichlet distribution. A PYAG has an adaptor C_A with parameters a_A and b_A for each nonterminal A ∈ N. As noted above, if a_A = 1 then the Pitman-Yor process is the identity function, so A is expanded in the standard manner for a PCFG. Each adaptor C_A is also associated with two vectors, x_A and n_A, that are needed to compute the probability distribution over trees. x_A is the sequence of previously generated subtrees with root nodes labeled A; having been cached by the grammar, these now have higher probability than other subtrees. n_A lists the counts associated with the subtrees in x_A. The adaptor state can thus be summarized as C_A = (a_A, b_A, x_A, n_A).
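As a concrete picture of this state, here is a minimal sketch (ours; the class name is an illustrative assumption) of one adapted nonterminal's cache (a_A, b_A, x_A, n_A) and of how H_A either reuses a cached subtree or generates a fresh one from G_A:

```python
import random

class PitmanYorAdaptor:
    """The state C_A = (a_A, b_A, x_A, n_A) of one adapted nonterminal A."""

    def __init__(self, a, b):
        self.a, self.b = a, b
        self.trees = []    # x_A: previously generated subtrees with root label A
        self.counts = []   # n_A: how often each cached subtree has been used

    def sample(self, base_sampler):
        """Draw from H_A: reuse cache entry k with probability (n_k - a)/(n + b),
        or draw a new subtree from G_A (base_sampler) with probability
        (m*a + b)/(n + b) and add it to the cache."""
        m, n = len(self.trees), sum(self.counts)
        weights = [n_k - self.a for n_k in self.counts] + [m * self.a + self.b]
        k = random.choices(range(m + 1), weights=weights)[0]
        if k == m:
            self.trees.append(base_sampler())
            self.counts.append(1)
        else:
            self.counts[k] += 1
        return self.trees[k]
```

Setting a = 1 makes every draw a fresh call to the base sampler, recovering ordinary PCFG behaviour, while a = 0 gives a Dirichlet process with concentration parameter b.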
A Pitman-Yor adaptor grammar analysis u = (t, σ) is a pair consisting of a parse tree t ∈ T_S together with an index function σ. If q is a nonterminal node in t labeled A, then σ(q) gives the index of the entry in x_A for the subtree t′ of t rooted at q, i.e., such that x_{A,σ(q)} = t′. The sequence of analyses u = (u_1, ..., u_n) generated by an adaptor grammar contains sufficient information to compute the adaptor state C(u) after generating u: the elements of x_A are the distinctly indexed subtrees of u with root label A, and their frequencies n_A can be found by performing a top-down traversal of each analysis in turn, visiting the children of a node q only when the subanalysis rooted at q is encountered for the first time (i.e., when it is added to x_A).

4 Examples of Pitman-Yor adaptor grammars

Pitman-Yor adaptor grammars provide a framework in which it is easy to define compositional nonparametric Bayesian models. The use of adaptors based on the Pitman-Yor process allows us to
specify grammars that correspond to Dirichlet processes [4] and hierarchical Dirichlet processes
[5]. Once expressed in this framework, a general-purpose inference algorithm can be used to calculate the posterior distribution over analyses produced by a model. In this section, we illustrate how
existing nonparametric Bayesian models used for word segmentation [7] and morphological analysis [6] can be expressed as adaptor grammars, and describe the results of applying our inference
algorithm in these models. We postpone the presentation of the algorithm itself until Section 5.
4.1 Dirichlet processes and word segmentation

Adaptor grammars can be used to define Dirichlet processes with discrete base distributions. It is
straightforward to write down an adaptor grammar that defines a Dirichlet process over all strings:
Word → Chars
Chars → Char
Chars → Chars Char        (5)

The productions expanding Char to all possible characters are omitted to save space. The start symbol for this grammar is Word. The parameters a_Char and a_Chars are set to 1, so the adaptors for Char and Chars are the identity function and H_Chars = G_Chars is the distribution over strings produced by sampling each character independently (i.e., a "monkeys at typewriters" model). Finally, a_Word is set to 0, so the adaptor for Word is a Dirichlet process with concentration parameter b_Word.
This grammar generates all possible strings of characters and assigns them simple right-branching structures of no particular interest, but the Word adaptor changes their distribution to one that reflects the frequencies of previously generated words. Initially, the Word adaptor is empty (i.e., x_Word is empty), so the first word s_1 generated by the grammar is distributed according to G_Chars. However, the second word can be generated in two ways: either it is retrieved from the adaptor's cache (and hence is s_1) with probability 1/(1 + b_Word), or else with probability b_Word/(1 + b_Word) it is a new word generated by G_Chars. After n words have been emitted, Word puts mass n/(n + b_Word) on those words and reserves mass b_Word/(n + b_Word) for new words (i.e., words generated by Chars).
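The resulting predictive distribution is easy to write down: with a_Word = 0, the probability that the next Word is a particular string w combines its cache count with the base probability G_Chars(w). The sketch below is ours; the geometric "monkeys at typewriters" base with a fixed stop probability, the function names, and the toy counts are illustrative assumptions:

```python
def chars_prob(word, stop_prob=0.5, n_chars=26):
    """G_Chars(word) under a toy 'monkeys at typewriters' base distribution:
    each character uniform over n_chars, stopping with probability stop_prob."""
    length = len(word)
    return (1.0 / n_chars) ** length * (1 - stop_prob) ** (length - 1) * stop_prob

def word_predictive_prob(word, cache_counts, b_word):
    """P(next Word = word) for grammar (5) with a_Word = 0: reuse the cached
    word with probability n_w/(n + b_Word), or generate it afresh from
    G_Chars with probability b_Word/(n + b_Word)."""
    n = sum(cache_counts.values())
    return (cache_counts.get(word, 0) + b_word * chars_prob(word)) / (n + b_word)

# After emitting "doggy" three times and "wawa" twice:
print(word_predictive_prob("doggy", {"doggy": 3, "wawa": 2}, b_word=30.0))
```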
We can extend this grammar to a simple unigram word segmentation model by adding the following productions, changing the start symbol to Words, and setting a_Words = 1.

Words → Word
Words → Word Words

This grammar generates sequences of Word subtrees, so it implicitly segments strings of terminals
into a sequence of words, and in fact implements the word segmentation model of [7]. We applied the
grammar above with the algorithm described in Section 5 to a corpus of unsegmented child-directed
speech [8]. The input strings are sequences of phonemes such as WAtIzIt. A typical parse might
consist of Words dominating three Word subtrees, each in turn dominating the phoneme sequences
Wat, Iz and It respectively. Using the sampling procedure described in Section 5 with b_Word = 30, we obtained a segmentation which identified words in unsegmented input with 0.64 precision,
0.51 recall, and 0.56 f-score, which is consistent with the results presented for the unigram model
of [7] on the same data.
4.2 Hierarchical Dirichlet processes and morphological analysis

An adaptor grammar with more than one adapted nonterminal can implement a hierarchical Dirichlet
process. A hierarchical Dirichlet process that uses the Word process as a generator can be defined by adding the production Word1 → Word to (5) and making Word1 the start symbol. Informally, Word1 generates words either from its own cache x_Word1 or from the Word distribution. Word itself generates words either from x_Word or from the "monkeys at typewriters" model Chars.
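Under the simplifying assumption that both adaptors have a = 0 (so each is a Dirichlet process), the predictive probability of the next word backs off from the Word1 cache to the Word distribution and then to G_Chars. A hedged sketch (ours; the count dictionaries stand in for the adaptor states n_Word1 and n_Word, and the base distribution passed in is a crude toy):

```python
def hdp_word_prob(word, word1_counts, word_counts, b1, b, chars_prob):
    """P(next Word1 = word): the Word1 adaptor backs off to the Word
    distribution, which in turn backs off to G_Chars (both adaptors a = 0)."""
    n, n1 = sum(word_counts.values()), sum(word1_counts.values())
    p_word  = (word_counts.get(word, 0)  + b  * chars_prob(word)) / (n  + b)
    p_word1 = (word1_counts.get(word, 0) + b1 * p_word)           / (n1 + b1)
    return p_word1

# Toy numbers: Word1 has cached "doggy" twice; the Word cache (fed by the
# distinct Word1 entries) contains "doggy" and "wawa" once each.
# The lambda is an unnormalized toy stand-in for G_Chars.
print(hdp_word_prob("doggy", {"doggy": 2}, {"doggy": 1, "wawa": 1},
                    b1=1.0, b=30.0, chars_prob=lambda w: (1 / 26) ** len(w)))
```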
A slightly more elaborate grammar can implement the morphological analysis described in [6].
Words are analysed into stem and suffix substrings; e.g., the word jumping is analysed as a stem
jump and a suffix ing. As [6] notes, one of the difficulties in constructing a probabilistic account of such suffixation is that the relative frequencies of suffixes vary dramatically depending on the stem. That paper used a Pitman-Yor process to effectively dampen this frequency variation, and the adaptor grammar described here does exactly the same thing. The productions of the adaptor grammar are as follows, where Chars is the "monkeys at typewriters" model once again:
Word → Stem Suffix
Word → Stem
Stem → Chars
Suffix → Chars

We now give an informal description of how samples might be generated by this grammar. The
nonterminals Word, Stem and Suffix are associated with Pitman-Yor adaptors. Stems and suffixes
that occur in many words are associated with highly probable cache entries, and so have much higher
probability than under the Chars PCFG subgrammar.
Figure 1 depicts a possible state of the adaptors in this adaptor grammar after generating the three
words walking, jumping and walked. Such a state could be generated as follows. Before any strings
are generated all of the adaptors are empty. To generate the first word we must sample from HWord ,
as there are no entries in the Word adaptor. Sampling from HWord requires sampling from GStem
and perhaps also GSuffix , and eventually from the Chars distributions. Supposing that these return
walk and ing as Stem and Suffix strings respectively, the adaptor entries after generating the first
word walking consist of the first entries for Word, Stem and Suffix.
In order to generate another Word we first decide whether to select an existing word from the
adaptor, or whether to generate the word using GWord . Suppose we choose the latter. Then we must
sample from HStem and perhaps also from HSuffix . Suppose we choose to generate the new stem
jump from GStem (resulting in the second entry in the Stem adaptor) but choose to reuse the existing
Suffix adaptor entry, resulting in the word jumping. The third word walked is generated in a similar
fashion: this time the stem is the first entry in the Stem adaptor, but the suffix ed is generated from
GSuffix and becomes the second entry in the Suffix adaptor.
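To see the three adaptors interact as in this walkthrough, here is a hedged sketch (ours) that reuses the PitmanYorAdaptor class from the sketch in Section 3.2; the character-level base sampler and all parameter values are illustrative assumptions:

```python
import random, string

# Assumes the PitmanYorAdaptor class from the sketch in Section 3.2.
def random_chars():
    """A crude stand-in for G_Chars: a short random lowercase string."""
    return "".join(random.choice(string.ascii_lowercase)
                   for _ in range(random.randint(2, 5)))

stem_adaptor   = PitmanYorAdaptor(a=0.0, b=1.0)
suffix_adaptor = PitmanYorAdaptor(a=0.0, b=1.0)
word_adaptor   = PitmanYorAdaptor(a=0.0, b=1.0)

def new_word():
    """G_Word: build a fresh word from the Stem and (optionally) the Suffix
    adaptor, mirroring the productions Word -> Stem Suffix and Word -> Stem."""
    stem = stem_adaptor.sample(random_chars)
    if random.random() < 0.5:                     # Word -> Stem Suffix
        return stem + suffix_adaptor.sample(random_chars)
    return stem                                   # Word -> Stem

# Reusing a cached word leaves the Stem and Suffix adaptors untouched; only a
# newly generated word consults (and updates) them, as in the walkthrough above.
print([word_adaptor.sample(new_word) for _ in range(10)])
```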

[Figure 1: A depiction of a possible state of the Pitman-Yor adaptors in the adaptor grammar of Section 4.2 after generating walking, jumping and walked. The Word adaptor caches analyses of walking, jumping and walked; the Stem adaptor caches walk and jump; the Suffix adaptor caches ing and ed.]
The model described in [6] is more complex than the one just described because it uses a hidden
morphological class variable that determines which stem-suffix pair is selected. The morphological class variable is intended to capture morphological variation; e.g., the present continuous form
skipping is formed by suffixing ping instead of the ing form used in walking and jumping. This can
be expressed using an adaptor grammar with productions that instantiate the following schema:
Word → Word_c
Word_c → Stem_c Suffix_c
Word_c → Stem_c
Stem_c → Chars
Suffix_c → Chars

Here c ranges over the hidden morphological classes, and the productions expanding Chars and Char are as before. We set the adaptor parameter a_Word = 1 for the start nonterminal symbol Word, so Word itself is unadapted, and we adapt the Word_c, Stem_c and Suffix_c nonterminals for each hidden class c.
Following [6], we used this grammar with six hidden classes c to segment 170,015 orthographic
verb tokens from the Penn Wall Street Journal corpus, and set a = 0 and b = 500 for the adapted
nonterminals. Although we trained on all verbs in the corpus, we evaluated the segmentation produced by the inference procedure described below on just the verbs whose infinitival stems were a
prefix of the verb itself (i.e., we evaluated skipping but ignored wrote, since its stem write is not a
prefix). Of the 116,129 tokens we evaluated, 70% were correctly segmented, and of the 7,170 verb
types, 66% were correctly segmented. Many of the errors were in fact linguistically plausible: e.g.,
eased was analysed as a stem eas followed by a suffix ed, permitting the grammar to also generate
easing as eas plus ing.

5 Bayesian inference for Pitman-Yor adaptor grammars

The results presented in the previous section were obtained by using a Markov chain Monte Carlo
(MCMC) algorithm to sample from the posterior distribution over PYAG analyses u = (u1 , . . . , un )
given strings s = (s_1, ..., s_n), where s_i ∈ W* and u_i is the analysis of s_i. We assume we are given a CFG (N, W, R, S), vectors of Pitman-Yor adaptor parameters a and b, and a Dirichlet prior with hyperparameters α over the production probabilities θ, i.e.:

$$P(\theta \mid \alpha) \;=\; \prod_{A \in N} \frac{1}{B(\alpha_A)} \prod_{A \to \beta \in R_A} \theta_{A \to \beta}^{\alpha_{A \to \beta} - 1}, \qquad \text{where} \quad B(\alpha_A) \;=\; \frac{\prod_{A \to \beta \in R_A} \Gamma(\alpha_{A \to \beta})}{\Gamma\!\left(\sum_{A \to \beta \in R_A} \alpha_{A \to \beta}\right)}$$

with Γ(x) being the generalized factorial function, and α_A the subsequence of α indexed by R_A (i.e., corresponding to productions that expand A). The joint probability of u under this PYAG, integrating over the distributions H_A generated from the two-parameter Poisson-Dirichlet distribution associated with each adaptor, is

$$P(u \mid \alpha, a, b) \;=\; \prod_{A \in N} \frac{B(\alpha_A + f_A(x_A))}{B(\alpha_A)}\; \mathrm{PY}(n_A(u) \mid a, b) \qquad (6)$$

where f_{A→β}(x_A) is the number of times the root node of a tree in x_A is expanded by production A → β, and f_A(x_A) is the sequence of such counts (indexed by r ∈ R_A). Informally, the first term in (6) is the probability of the productions used to expand the topmost node of each entry in adaptor C_A (the rest of each tree is generated by other adaptors), while the second term (from Equation 3) is the probability of generating a Pitman-Yor adaptor with counts n_A.
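For a single adapted nonterminal A, its factor in Equation 6 can be computed in log space with the gamma function; a sketch (ours; function names and the toy numbers are illustrative, not from the paper):

```python
from math import lgamma, log

def log_beta(alphas):
    """log B(alpha) = sum_r log Gamma(alpha_r) - log Gamma(sum_r alpha_r)."""
    return sum(lgamma(a) for a in alphas) - lgamma(sum(alphas))

def log_py(counts, a, b):
    """log PY(n | a, b), Equation 3, for table counts n = (n_1, ..., n_m)."""
    lp  = sum(log(a * k + b) for k in range(len(counts)))
    lp += sum(log(j - a) for n_k in counts for j in range(1, n_k))
    return lp - sum(log(i + b) for i in range(sum(counts)))

def log_joint_term(alpha, root_counts, table_counts, a, b):
    """One nonterminal's factor in Equation 6: alpha and root_counts (f_A(x_A))
    are indexed by the productions in R_A; table_counts is n_A(u).
    Summing this over all A in N gives log P(u | alpha, a, b)."""
    lp = log_beta([al + f for al, f in zip(alpha, root_counts)]) - log_beta(alpha)
    return lp + log_py(table_counts, a, b)

# Toy numbers (illustrative): two productions expanding A and three cache entries
# whose roots were expanded twice by the first production and once by the second.
print(log_joint_term(alpha=[1.0, 1.0], root_counts=[2, 1],
                     table_counts=[4, 2, 1], a=0.5, b=1.0))
```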

The posterior distribution over analyses u given strings s is obtained by normalizing P(u | α, a, b) over all analyses u that have s as their yield. Unfortunately, computing this distribution is intractable. Instead, we draw samples from this distribution using a component-wise Metropolis-Hastings sampler, proposing changes to the analysis u_i for each string s_i in turn. The proposal distribution is constructed to approximate the conditional distribution over u_i given s_i and the analyses u_{-i} of all other strings, P(u_i | s_i, u_{-i}). Since there does not seem to be an efficient (dynamic programming) algorithm for directly sampling from P(u_i | s_i, u_{-i}),² we construct a PCFG G'(u_{-i}) on the fly whose
parse trees can be transformed into PYAG analyses, and use this as our proposal distribution.
5.1 The PCFG approximation G'(u_{-i})

A PYAG can be viewed as a special kind of PCFG which adapts its production probabilities depending on its history. The PCFG approximation G'(u_{-i}) = (N, W, R', S, θ') is a static snapshot of the adaptor grammar given the sentences s_{-i} (i.e., all of the sentences in s except s_i). Given an adaptor grammar H = (N, W, R, S, θ, C), let:

$$R' \;=\; R \;\cup\; \bigcup_{A \in N} \{A \to \mathrm{YIELD}(x) : x \in x_A\}$$

$$\theta'_{A \to \beta} \;=\; \frac{m_A a_A + b_A}{n_A + b_A} \cdot \frac{f_{A \to \beta}(x_A) + \alpha_{A \to \beta}}{m_A + \sum_{A \to \beta' \in R_A} \alpha_{A \to \beta'}} \;+\; \sum_{k \,:\, \mathrm{YIELD}(x_{Ak}) = \beta} \frac{n_{Ak} - a_A}{n_A + b_A}$$

where YIELD(x) is the terminal string or yield of the tree x and m_A is the length of x_A. R' contains all of the productions in R, together with productions representing the adaptor entries x_A for each A ∈ N. These additional productions rewrite directly to strings of terminal symbols, and their probability is the probability of the adaptor C_A generating the corresponding value x_{Ak}.
The two terms to the left of the summation specify the probability of selecting a production from the original productions R. The first term is the probability of adaptor C_A generating a new value, and the second term is the MAP estimate of the production's probability, estimated from the root expansions of the trees in x_A.
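Putting the pieces together for one adapted nonterminal, here is a sketch (ours; argument names and the toy state are illustrative) of the estimate just described:

```python
from collections import defaultdict

def approx_pcfg_probs(rules, alpha, root_counts, cache, a, b):
    """theta' for one adapted nonterminal A of the approximating PCFG G'(u_{-i}).

    rules       -- right-hand sides of the original productions A -> beta
    alpha       -- Dirichlet hyperparameters, indexed like `rules`
    root_counts -- f_{A->beta}(x_A): root expansions of cached trees, same index
    cache       -- (yield, count) pairs for the adaptor entries x_A with counts n_A
    """
    m = len(cache)                        # m_A: number of adaptor entries
    n = sum(count for _, count in cache)  # n_A: total count
    new_mass = (m * a + b) / (n + b)      # probability of generating a new tree
    theta = defaultdict(float)
    # Original productions: new-tree mass times a MAP estimate from the
    # root expansions of the cached trees.
    denom = m + sum(alpha)
    for beta, al, f in zip(rules, alpha, root_counts):
        theta[beta] += new_mass * (f + al) / denom
    # Extra productions A -> YIELD(x_Ak): reuse of cached subtrees.
    for yield_string, count in cache:
        theta[yield_string] += (count - a) / (n + b)
    return dict(theta)

# Toy state: both cached words were analysed as Stem Suffix (so f = [2, 0]).
print(approx_pcfg_probs(rules=[("Stem", "Suffix"), ("Stem",)],
                        alpha=[1.0, 1.0], root_counts=[2, 0],
                        cache=[("walking", 2), ("jumped", 1)],
                        a=0.5, b=1.0))
```

The returned probabilities sum to one, since the cached entries account for exactly the mass not reserved for new trees.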
It is straightforward to map parses of a string s produced by G' to corresponding adaptor analyses for the adaptor grammar H (a single production of R' may correspond to several adaptor entries, so this mapping may be non-deterministic). This means that we can use the PCFG G' with an efficient PCFG sampling procedure [9] to generate possible adaptor grammar analyses for u_i.
5.2 A Metropolis-Hastings algorithm

The previous section described how to sample adaptor analyses u for a string s from a PCFG approximation G' to an adaptor grammar H. We use this as our proposal distribution in a Metropolis-Hastings algorithm.²

²The independence assumptions of PCFGs play an important role in making dynamic programming possible. In PYAGs, the probability of a subtree adapts dynamically depending on the other subtrees in u, including those in u_{-i}.

If u_i is the current analysis of s_i and u'_i ≠ u_i is a proposal analysis sampled from P(U_i | s_i, G'(u_{-i})), we accept the proposal u'_i with probability A(u_i, u'_i), where:

$$A(u_i, u'_i) \;=\; \min\left\{1,\; \frac{P(u' \mid \alpha, a, b)\; P(u_i \mid s_i, G'(u_{-i}))}{P(u \mid \alpha, a, b)\; P(u'_i \mid s_i, G'(u_{-i}))}\right\}$$

where u' is the same as u except that u'_i replaces u_i. Except when the number of training strings s
is very small, we find that only a tiny fraction (less than 1%) of proposals are rejected, presumably
because the probability of an adaptor analysis does not change significantly within a single string.
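In log space the accept/reject step is only a few lines; a sketch (ours; the numeric example in the comment is purely illustrative):

```python
import random
from math import exp

def mh_accept(log_joint_current, log_joint_proposed,
              log_proposal_current, log_proposal_proposed):
    """Accept the proposed analysis u_i' with probability
    min(1, [P(u') q(u_i)] / [P(u) q(u_i')]), where P is the joint of
    Equation 6 and q is the proposal P(. | s_i, G'(u_{-i})), all given as logs."""
    log_ratio = (log_joint_proposed + log_proposal_current
                 - log_joint_current - log_proposal_proposed)
    return log_ratio >= 0 or random.random() < exp(log_ratio)

# e.g. mh_accept(-102.3, -101.9, -14.2, -13.8) returns True (log ratio is 0).
```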
Our inference procedure is as follows. Given a set of training strings s we choose an initial set of analyses for them at random. At each iteration we pick a string s_i from s at random, and sample a parse for s_i from the PCFG approximation G'(u_{-i}), updating u when the Metropolis-Hastings procedure accepts the proposed analysis. At convergence the u produced by this procedure are samples from the posterior distribution over analyses given s, and samples from the posterior distribution over adaptor states C(u) and production probabilities θ can be computed from them.

6 Conclusion

The strong independence assumptions of probabilistic context-free grammars tightly couple compositional structure with the probabilistic generative process that produces that structure. Adaptor
grammars relax that coupling by inserting an additional stochastic component into the generative
process. Pitman-Yor adaptor grammars use adaptors based on the Pitman-Yor process. This choice
makes it possible to express Dirichlet process and hierarchical Dirichlet process models over discrete domains as simple context-free grammars. We have proposed a general-purpose inference
algorithm for adaptor grammars, which can be used to sample from the posterior distribution over
analyses produced by any adaptor grammar. While our focus here has been on demonstrating that
this algorithm can be used to produce equivalent results to existing nonparametric Bayesian models
used for word segmentation and morphological analysis, the great promise of this framework lies in
its simplification of specifying and using such models, providing a basic toolbox that will facilitate
the construction of more sophisticated models.
Acknowledgments
This work was performed while all authors were at the Cognitive and Linguistic Sciences Department at Brown University, and was supported by the following grants: NIH R01-MH60922 and R01-DC000314, NSF 9870676, 0631518 and 0631667, the DARPA CALO project, and DARPA GALE contract HR0011-06-2-0001.

References
[1] J. Pitman. Exchangeable and partially exchangeable random partitions. Probability Theory and Related Fields, 102:145–158, 1995.
[2] J. Pitman and M. Yor. The two-parameter Poisson-Dirichlet distribution derived from a stable subordinator. Annals of Probability, 25:855–900, 1997.
[3] H. Ishwaran and L. F. James. Generalized weighted Chinese restaurant processes for species sampling mixture models. Statistica Sinica, 13:1211–1235, 2003.
[4] T. Ferguson. A Bayesian analysis of some nonparametric problems. The Annals of Statistics, 1:209–230, 1973.
[5] Y. W. Teh, M. Jordan, M. Beal, and D. Blei. Hierarchical Dirichlet processes. Journal of the American Statistical Association, to appear.
[6] S. Goldwater, T. L. Griffiths, and M. Johnson. Interpolating between types and tokens by estimating power-law generators. In Advances in Neural Information Processing Systems 18, 2006.
[7] S. Goldwater, T. L. Griffiths, and M. Johnson. Contextual dependencies in unsupervised word segmentation. In Proceedings of the 44th Annual Meeting of the Association for Computational Linguistics, 2006.
[8] M. Brent. An efficient, probabilistically sound algorithm for segmentation and word discovery. Machine Learning, 34:71–105, 1999.
[9] J. Goodman. Parsing inside-out. PhD thesis, Harvard University, 1998. Available from http://research.microsoft.com/joshuago/.
