2011 Women and Mathematics Course Descriptions

Beginning Lecture Course - week 1

Title: Methods for sparse analysis of high-dimensional data, I

Lecturer: Rebecca Willett, Duke University

Teaching Assistant: Kalyani Krishnamurthy, Duke University

Course Description: This mini-course will cover fundamental aspects of estimating a function (e.g., a communication signal, medical image, surveillance video, or social network interaction model) from error-corrupted data. We will see how the "curse of dimensionality" limits our ability to control the estimation error in general settings. However, when the function of interest admits a sparse representation, we can exploit this sparsity to achieve significantly higher estimation accuracy. This course will describe various sparse representations of functions and data, methods for computing sparse estimates from noisy or error-riddled observations, and the role of sparsity in fundamental performance limits. In particular, we will examine recent work in compressed sensing, low-rank matrix completion, sparse covariance matrix estimation, and manifold learning.
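
To make this concrete, here is a minimal Python sketch of sparse recovery from a few random linear measurements via iterative soft thresholding; it is an illustration in the spirit of the course rather than course material, and the dimensions, regularization weight, and iteration count are assumptions chosen for the toy example.

    import numpy as np

    rng = np.random.default_rng(0)

    # A length-n signal with only k nonzero entries, observed through
    # m << n random Gaussian measurements y = A x.
    n, m, k = 200, 60, 5
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    y = A @ x_true

    # Iterative soft thresholding for the l1-regularized least-squares
    # problem: minimize 0.5*||Ax - y||^2 + lam*||x||_1.
    lam = 0.01
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the gradient
    x = np.zeros(n)
    for _ in range(2000):
        z = x - step * (A.T @ (A @ x - y))                        # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # shrinkage

    print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))

Although m is much smaller than n, the sparse signal is recovered accurately; this is the phenomenon that compressed sensing makes rigorous.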

Pre-requisites: An introductory course in linear algebra

References: A Wavelet Tour of Signal Processing: The Sparse Way, 3rd edition, by Stéphane Mallat

Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing, by Michael Elad


Advanced Lecture Course - week 1

Title: Sublinear-time algorithms

Lecturer: Sofya Raskhodnikova, Penn State University

Teaching Assistant: Shubhangi Saraf, MIT

Course Description: The course will cover the design and analysis of algorithms that are restricted to run in sublinear time. Such algorithms are typically randomized and produce only approximate answers. A characteristic feature of sublinear algorithms is that they do not have time to access the entire input; therefore, the input representation and the model for accessing the input play an important role. We will study different models appropriate for sublinear algorithms. The course will cover sublinear algorithms discovered in a variety of areas, including graph theory, algebra, geometry, image analysis, and discrete mathematics, and will introduce many of the techniques used to analyze them.
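
As a small illustration of the flavor of such algorithms (my own sketch, not taken from the course; the function name and constants are assumptions), the following Python code estimates the fraction of ones in a huge bit string by querying a number of random positions that depends only on the accuracy parameters, not on the input length.

    import math
    import random

    def estimate_fraction_of_ones(get_bit, n, eps=0.05, delta=0.01):
        """Estimate the fraction of ones in a length-n bit string to additive
        error eps with probability at least 1 - delta, using only
        O(log(1/delta) / eps^2) point queries -- independent of n."""
        num_samples = math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))  # Hoeffding bound
        hits = sum(get_bit(random.randrange(n)) for _ in range(num_samples))
        return hits / num_samples

    # Usage: a bit string of length one billion with exactly 30% ones,
    # accessed only through point queries.
    print(estimate_fraction_of_ones(lambda i: 1 if i % 10 < 3 else 0, 10**9))

Both hallmarks mentioned above appear here: the algorithm is randomized, and its answer is only approximate.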

Pre-requisites: Familiarity with asymptotic notation is required; a course on the analysis of algorithms would be very helpful.

References:

http://people.csail.mit.edu/ronitt/COURSE/S07/index.html

http://stellar.mit.edu/S/course/6/fa10/6.896/


Beginning Lecture Course - week 2

Title: Methods for sparse analysis of high-dimensional data, II

Lecturer: Rachel Ward, New York University

Teaching Assistant: Sarah Constantin, Yale University

Course Description: Massive data sets are all around us; often, however, high-dimensional data are well parametrized by only a few (known or unknown) variables. Embedding such data into lower dimensions is important for reducing storage cost and speeding up computation in several applications. We will discuss methods for dimensionality reduction using random linear projections, then turn to fast decoding algorithms that reconstruct the data from their lower-dimensional projections. Finally, we will focus on the use of random projections in numerical linear algebra for speeding up high-dimensional matrix computations.
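
As a hedged illustration of the first of these ideas (a toy sketch, not course material; the point counts and dimensions are assumptions), the Python code below projects high-dimensional points through a random Gaussian matrix, in the spirit of the Johnson-Lindenstrauss lemma, and checks that pairwise distances are approximately preserved.

    import numpy as np

    rng = np.random.default_rng(1)

    # 100 points in dimension d = 10000, projected down to dimension m = 500.
    num_points, d, m = 100, 10_000, 500
    X = rng.standard_normal((num_points, d))

    # Random Gaussian projection, scaled so that squared distances are
    # preserved in expectation.
    P = rng.standard_normal((d, m)) / np.sqrt(m)
    Y = X @ P

    # Compare a few pairwise distances before and after projection.
    for i, j in [(0, 1), (2, 3), (4, 5)]:
        ratio = np.linalg.norm(Y[i] - Y[j]) / np.linalg.norm(X[i] - X[j])
        print(f"pair ({i},{j}): distortion {ratio:.3f}")  # close to 1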

Pre-requisites: Linear algebra and probability (necessary) and a semester of real analysis (preferred)


Advanced Lecture Course - week 2

Title: Algorithms for sparse analysis

Lecturer: Anna Gilbert, University of Michigan

Teaching Assistant: Mary Wootters, University of Michigan

Course Description: The past 10 years have seen a confluence of research in sparse approximation across computer science, mathematics, and electrical engineering. Sparse approximation encompasses a large number of mathematical, algorithmic, and signal processing problems, all of which attempt to balance the size of a (linear) representation of data against the fidelity of that representation. I will discuss several of the basic algorithmic problems (Lecture 1), the hardness of these problems and rigorous analyses of their complexity (Lecture 2), and algorithms for their (approximate) solutions (Lectures 3 and 4). These algorithms fall into two main paradigms: convex optimization and greedy algorithms. Along the way, I will cover applications in signal processing as well as more theoretical aspects of computer science.
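
As a sketch of the greedy paradigm (my own illustrative code, not the lecturer's; the dimensions and the fixed sparsity-level stopping rule are assumptions), here is orthogonal matching pursuit, which builds a sparse representation one column at a time and re-fits by least squares at each step.

    import numpy as np

    def omp(A, y, k):
        """Orthogonal matching pursuit: greedily select k columns of A to
        approximate y, solving a least-squares problem on the chosen
        support at every step."""
        m, n = A.shape
        residual, support = y.copy(), []
        for _ in range(k):
            # Pick the column most correlated with the current residual.
            support.append(int(np.argmax(np.abs(A.T @ residual))))
            # Re-fit on the support and update the residual.
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
        x = np.zeros(n)
        x[support] = coef
        return x

    # Usage: recover a 5-sparse vector from 60 random measurements.
    rng = np.random.default_rng(2)
    m, n, k = 60, 200, 5
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    x_hat = omp(A, A @ x_true, k)
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))

The contrast with the convex-optimization paradigm is that the greedy method commits to one column at a time, whereas an l1-based formulation selects the support implicitly through the optimization.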