| Date | Nov 27, 2019 |
|---|---|
| Speaker | Hanbaek Lyu (류한백) |
| Dept. | UCLA |
| Room | 27-220 |
| Time | 17:00-18:00 |

Online Matrix Factorization (OMF) is a fundamental tool for dictionary learning problems, giving an approximate representation of complex data sets in terms of a reduced number of extracted features. Convergence guarantees for most OMF algorithms in the literature assume independence between data matrices, and the case of a dependent data stream remains largely unexplored. We show that the well-known OMF algorithm for an i.i.d. data stream proposed by Mairal et al. (2010) in fact converges almost surely to the set of critical points of the expected loss function, even when the data matrices form a Markov chain satisfying a mild mixing condition. Furthermore, we extend the convergence result to the case where each step of the optimization problem in the algorithm can only be solved approximately. As an application, we demonstrate dictionary learning from a sequence of images generated by a Markov Chain Monte Carlo (MCMC) sampler. Lastly, by combining online non-negative matrix factorization with a recent MCMC algorithm for sampling motifs from networks, we propose a novel framework of *Network Dictionary Learning*, which extracts "network dictionary patches" encoding the main features of a given network in an online manner. We demonstrate this technique on real-world text data. This is joint work with Deanna Needell and Laura Balzano.
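To make the setting concrete, the following is a minimal sketch of the online matrix factorization scheme of Mairal et al. (2010) that the abstract refers to: each incoming data vector is sparse-coded against the current dictionary, sufficient statistics of past codes are accumulated, and the dictionary is refit by block coordinate descent. All function and parameter names below are illustrative assumptions, not taken from the talk, and the nonnegative projected-gradient sparse coder is a simple stand-in for the LARS-based solver of the original paper.

```python
import numpy as np

def online_mf(stream, n_features, n_components, lam=0.1, n_inner=50, seed=0):
    """Sketch of online matrix factorization (Mairal et al., 2010).

    `stream` yields data vectors x_t one at a time. The surrogate
    statistics A, B aggregate past codes, so the dictionary D can be
    refit cheaply at every step without storing the data.
    """
    rng = np.random.default_rng(seed)
    D = rng.random((n_features, n_components))
    D /= np.linalg.norm(D, axis=0)                 # unit-norm atoms
    A = np.zeros((n_components, n_components))
    B = np.zeros((n_features, n_components))
    for x in stream:
        # Sparse coding: projected gradient on
        #   0.5 * ||x - D a||^2 + lam * ||a||_1,  a >= 0
        # (a simple stand-in solver, not the one from the paper).
        a = np.zeros(n_components)
        step = 1.0 / (np.linalg.norm(D.T @ D, 2) + 1e-12)
        for _ in range(n_inner):
            grad = D.T @ (D @ a - x) + lam
            a = np.maximum(a - step * grad, 0.0)
        # Accumulate sufficient statistics of past data and codes.
        A += np.outer(a, a)
        B += np.outer(x, a)
        # Dictionary update: one pass of block coordinate descent,
        # with nonnegativity for the NMF variant.
        for j in range(n_components):
            if A[j, j] > 1e-12:
                u = D[:, j] + (B[:, j] - D @ A[:, j]) / A[j, j]
                u = np.maximum(u, 0.0)
                D[:, j] = u / max(np.linalg.norm(u), 1.0)
    return D
```

The talk's point is that this same iteration still converges almost surely to the critical points of the expected loss when the vectors produced by `stream` come from a suitably mixing Markov chain (for instance, an MCMC sampler) rather than an i.i.d. source.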

TEL 02-880-5857,6530,6531 / FAX 02-887-4694