ABSTRACT
Dynamic probabilistic networks (DPNs) are a compact representation of complex stochastic processes. In this paper we examine how to learn the structure of a DPN from data. We extend structure-scoring rules for standard probabilistic networks to the dynamic case, and show how to search for structure when some of the variables are hidden. Finally, we examine two applications where such a technique is useful: predicting and classifying dynamic behaviors, and learning causal orderings in biological processes. We provide empirical results that demonstrate the applicability of our methods in both domains.
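To make the idea of extending structure scores to the dynamic case concrete, the sketch below computes a BIC-style score for a single family (a child variable at time t with candidate parents at time t-1) in a DPN transition model, from fully observed sequences. This is an illustrative sketch only, not the paper's implementation; the function name, the dictionary-based data format, and the restriction to discrete variables of uniform arity are assumptions made for brevity. The total structure score decomposes as the sum of such family scores, which is what makes local search over parent sets tractable.

```python
import math
from collections import Counter

def bic_score_family(data, child, parents, arity=2):
    """BIC score of one family (child at time t, parents at time t-1)
    in the transition model of a DPN, from fully observed sequences.

    data: list of sequences; each sequence is a list of dicts mapping
          variable name -> value in {0, ..., arity-1}.
    """
    # Count transitions: (parent assignment at t-1, child value at t).
    counts = Counter()
    n = 0
    for seq in data:
        for prev, cur in zip(seq, seq[1:]):
            pa = tuple(prev[p] for p in parents)
            counts[(pa, cur[child])] += 1
            n += 1
    # Log-likelihood under the maximum-likelihood CPT for this family.
    ll = 0.0
    for pa in {key[0] for key in counts}:
        total = sum(counts[(pa, v)] for v in range(arity))
        for v in range(arity):
            c = counts[(pa, v)]
            if c:
                ll += c * math.log(c / total)
    # BIC penalty: (free parameters) * log(N) / 2.
    num_params = (arity - 1) * arity ** len(parents)
    return ll - 0.5 * num_params * math.log(max(n, 1))
```

A greedy structure-search loop would then add, delete, or reverse inter-slice arcs, rescoring only the families whose parent sets changed. For example, on sequences in which a binary variable X persists across time, scoring X with itself as a previous-slice parent yields a higher value than scoring it with no parents, so the search correctly adds the persistence arc.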