Approximation Methods for Efficient Learning of Bayesian Networks.
Title:
Approximation Methods for Efficient Learning of Bayesian Networks.
Author:
Riggelsen, C.
ISBN:
9781607502982
Physical Description:
1 online resource (148 pages)
Series:
Frontiers in Artificial Intelligence and Applications
Contents:
Title page -- Contents -- Foreword -- Introduction -- Preliminaries -- Random variables and conditional independence -- Graph theory -- Markov properties -- The Markov blanket -- Equivalence of DAGs -- Bayesian networks -- Bayesian network specification -- Bayesian parameter specification -- Learning Bayesian Networks from Data -- The basics -- Learning parameters -- The Maximum-Likelihood approach -- The Bayesian approach -- Learning models -- The penalised likelihood approach -- The Bayesian approach -- Learning via the marginal likelihood -- Determining the hyper parameter -- Marginal and penalised likelihood -- Search methodologies -- Model search space -- Traversal strategy -- Monte Carlo Methods and MCMC Simulation -- Monte Carlo methods -- Importance sampling -- Choice of the sampling distribution -- Markov chain Monte Carlo-MCMC -- Markov chains -- The invariant target distribution -- Reaching the invariant distribution -- Metropolis-Hastings sampling -- Gibbs sampling -- Mixing, burn-in and convergence of MCMC -- The importance of blocking -- Learning models via MCMC -- Sampling models -- Sampling edges -- Blocking edges -- Blocks and Markov blankets -- Sampling blocks -- Validity of the sampler -- The MB-MCMC model sampler -- Evaluation -- Conclusion -- Learning from Incomplete Data -- The concept of incomplete data -- Missing data mechanisms -- Learning from incomplete data -- Likelihood decomposition -- Complications for learning parameters -- Bayesian sequential updating -- Complications for learning models -- Principled iterative methods -- Expectation Maximisation-EM -- Structural EM-SEM -- Data Augmentation-DA -- DA and eliminating the P-step-DA-P -- DA-P and model learning-MDA-P -- Efficiency issues of MDA-P -- Properties of the sub-MCMC samplers -- Interdependence between samplers -- Imputation via importance sampling -- The general idea -- Importance sampling in the I-step-ISMDA-P -- Generating new population vs. re-weighing -- The marginal likelihood as predictive distribution -- The eMC4 sampler -- Evaluation-proof of concept -- Conclusion -- Ad-hoc and heuristic methods -- Available cases analysis -- Bound and Collapse-BC -- Markov Blanket Predictor-MBP -- The general idea -- Approximate predictive distributions -- Parameter estimation -- Prediction and missing parents -- Predictive quality -- Selecting predictive variables -- Implementation of MBP -- Parameter estimation -- Model learning -- Conclusion and discussion -- Conclusion -- References.
Abstract:
This publication develops and investigates efficient Monte Carlo simulation methods for realising a Bayesian approach to the approximate learning of Bayesian networks from both complete and incomplete data. For large amounts of incomplete data, where Monte Carlo methods become inefficient, approximations are introduced so that learning remains feasible, albeit non-Bayesian. The topics discussed are: basic concepts of probability, graph theory and conditional independence; learning Bayesian networks from data; Monte Carlo simulation techniques; and the concept of incomplete data. To provide a coherent treatment, and thereby help the reader gain a thorough understanding of learning Bayesian networks from (in)complete data, the publication combines the issues presented in the papers with previously unpublished work.
Local Note:
Electronic reproduction. Ann Arbor, Michigan : ProQuest Ebook Central, 2017. Available via World Wide Web. Access may be limited to ProQuest Ebook Central affiliated libraries.
Electronic Access: