Aravindan Vijayaraghavan, CMU

Tuesday, December 3, 2013 - 1:00pm to 2:00pm

Pierce Hall 320

Low-rank tensor decompositions, the higher-dimensional analog of matrix decompositions, are a powerful tool arising in statistics, signal processing, data mining, and machine learning. However, tensors pose significant algorithmic challenges, and tensor analogs of much of the matrix-algebra toolkit are unlikely to exist because of hardness results. For instance, efficient tensor decompositions in the overcomplete case (where the rank exceeds the dimension) are particularly challenging.
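As a minimal illustration of the objects in question (not part of the talk itself), the following sketch builds a rank-R order-3 tensor as a sum of outer products; the dimensions here are hypothetical, chosen so that R exceeds the dimension, i.e. the overcomplete case:

```python
import numpy as np

# A rank-R order-3 tensor is a sum of R outer products a_r (x) b_r (x) c_r.
# Illustrative sizes: d = 3, R = 5 > d, so this decomposition is
# "overcomplete" (rank exceeds dimension).
rng = np.random.default_rng(0)
d, R = 3, 5
A, B, C = (rng.standard_normal((d, R)) for _ in range(3))

# T[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r]
T = np.einsum('ir,jr,kr->ijk', A, B, C)
print(T.shape)  # (3, 3, 3)
```

Recovering the factors A, B, C from T alone is the tensor decomposition problem; in the overcomplete regime it is exactly the setting the hardness results above make difficult.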

In this talk, I will address this by describing two recent results about tensor decompositions, with applications to learning generative models:

1. I will present a robust version of a classic theorem of Kruskal, which shows uniqueness of tensor decompositions under mild rank conditions; i.e., the decomposition remains unique even if the entries of the tensor are perturbed by inverse polynomial error. This powerful uniqueness property of tensor decompositions gives a significant advantage over matrix decomposition methods for learning the parameters of many latent variable models.
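For reference, the classical (non-robust) statement being robustified can be sketched as follows, where $k_A$ denotes the Kruskal rank of the factor matrix $A$ (the largest $k$ such that every $k$ columns of $A$ are linearly independent):

```latex
T = \sum_{r=1}^{R} a_r \otimes b_r \otimes c_r
\quad\text{is unique (up to permutation and scaling) if}\quad
k_A + k_B + k_C \ge 2R + 2.
```

The robust version discussed in the talk replaces exact uniqueness with stability of the decomposition under inverse-polynomial perturbations of the tensor's entries.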

2. I will introduce a smoothed analysis model for studying tensor decompositions and give efficient algorithms for decomposing tensors, even in the highly overcomplete case (rank polynomial in the dimension). This yields new efficient algorithms for learning probabilistic models such as mixtures of Gaussians and multi-view models in the natural smoothed analysis setting. We believe this is an appealing way to analyze realistic instances of learning problems, since the framework allows us to overcome many of the usual limitations of tensor methods.
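To convey the flavor of smoothed analysis (an illustrative sketch, not the talk's algorithm): an adversarially degenerate instance is perturbed by a small amount of random noise, after which it becomes well-conditioned with high probability, so a worst-case-hard input rarely survives smoothing.

```python
import numpy as np

# Worst case: a rank-1 factor matrix, the kind of degenerate instance
# that defeats decomposition algorithms.
rng = np.random.default_rng(1)
d = 10
A = np.ones((d, d))

# Smoothed instance: add a small Gaussian perturbation of magnitude sigma.
sigma = 0.01
A_smoothed = A + sigma * rng.standard_normal((d, d))

print(np.linalg.matrix_rank(A))           # 1
print(np.linalg.matrix_rank(A_smoothed))  # full rank with high probability
```

The point of the smoothed model is that guarantees are proved for all inputs after such a perturbation, rather than only for randomly generated instances.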

Based on joint work with Aditya Bhaskara, Moses Charikar, and Ankur Moitra.