The key goal of Compressive Sensing (CS) is to reconstruct a signal from fewer measurements than the signal length, by exploiting the fact that the signal is "sparse" in some domain. It answers the following question: given a set of measurements y := Ax, where A is a fat (wide) matrix and x is the unknown sparse signal, under what assumptions on A can x be exactly reconstructed from y using efficient (polynomial-time) recovery algorithms? Extensions to noisy measurements and approximately sparse signals are also considered.
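As a toy illustration of this setup (the sizes, the Gaussian choice of A, and the use of SciPy's LP solver are my own assumptions, not part of the course material), the l1-minimization approach covered later in the course can be written as a linear program and solved directly:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 50, 25, 3                 # signal length, number of measurements, sparsity

# k-sparse ground-truth signal
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian "fat" matrix
y = A @ x_true                                  # noiseless measurements

# Basis pursuit: min ||x||_1 s.t. Ax = y.  Splitting x = u - v with u, v >= 0
# turns it into a linear program in 2n nonnegative variables.
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y, bounds=[(0, None)] * (2 * n))
x_hat = res.x[:n] - res.x[n:]

print(np.max(np.abs(x_hat - x_true)))   # near zero: exact recovery from m < n measurements
```

With m = 25 measurements of a length-50 signal, the LP recovers the 3-sparse x exactly here; with too few measurements or too large a support it fails, which is exactly the regime the theory characterizes.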

CS builds on a well-known fact, long exploited by lossy compression algorithms such as JPEG and JPEG2000: natural signals and images are approximately sparse (compressible) in some domain. JPEG assumes compressibility in the DCT domain; JPEG2000 assumes compressibility in the wavelet transform domain (valid for piecewise smooth signals/images). But traditional compression techniques first acquire all the data and then compress it. The question is: do we need to acquire all the data at all?
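To see compressibility concretely, here is a small sketch (the test signal and the number of retained coefficients are my own choices) showing that a piecewise-smooth signal is well approximated by a few DCT coefficients:

```python
import numpy as np
from scipy.fft import dct, idct

# A piecewise-smooth test signal: dense in time, compressible in the DCT domain
n = 256
t = np.linspace(0, 1, n)
x = np.sin(2 * np.pi * 3 * t) + 0.5 * (t > 0.5)

c = dct(x, norm='ortho')                 # orthonormal DCT coefficients
keep = 25                                 # keep only ~10% of the coefficients
c_top = np.zeros(n)
idx = np.argsort(np.abs(c))[-keep:]       # indices of the largest-magnitude ones
c_top[idx] = c[idx]
x_approx = idct(c_top, norm='ortho')

rel_err = np.linalg.norm(x - x_approx) / np.linalg.norm(x)
print(rel_err)    # small relative error from ~10% of the coefficients
```

This is the same principle JPEG relies on: most of the signal's energy lives in a few transform coefficients, so discarding the rest costs little.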

CS is particularly useful for MRI and CT applications, since both acquire measurements one at a time (or one radial line at a time) in the Fourier domain (CT measurements can be converted to the Fourier domain). Their scan time is therefore directly proportional to the number of measurements. If we can reconstruct accurately from fewer measurements, we reduce scan time!

A new type of camera, the single-pixel camera, has been built at Rice; it acquires measurements sequentially!

- Announcements
- I have posted the Class Schedule and Corresponding Reading List. Please help me fill in the dates and topics for your presentations.

- Class time and Location: 9:30-10:50 Tues-Thurs in 1012 Coover

- Instructor: Prof. Namrata Vaswani

- Office Hours: TBD

- Email: namrata AT iastate.edu, Phone: 515-294-4012
- Office: 3121 Coover Hall
- Class Schedule and Corresponding Reading List

- Topics
- General Reading List

- We will cover some key results in full detail, including proofs; for the rest, we will only discuss the algorithms and the main ideas behind their theoretical guarantees
- Grading:

- Option 1: (a) present one theoretical paper in full detail by yourself and (b) present a second paper along with a team member

- Option 2: (a) pick an application and do one project by yourself and (b) present a paper along with a team member
- Prerequisites/Corequisites: EE 523, EE 527; it also helps if you have taken EE 570
- Disability accommodation: If you have a documented disability and anticipate needing accommodations in this course, please make arrangements to meet with me soon. You will need to provide documentation of your disability to Disability Resources (DR) office, located on the main floor of the Student Services Building, Room 1076, 515-294-7220.
- Feedback: Your feedback is welcome (e.g. is class going too fast or too slow), please email me (namrata AT iastate.edu). Put "Feedback" in the subject line.

Topics (more suggestions are welcome)

- Introduction: a set of old introductory slides and some old scanned notes
- We will recap some linear algebra, probability and optimization, either as we go along or in the beginning
- Linear Algebra : Parts of Chapters 0, 1, 4, 5 and a few things from Chapter 2 of Matrix Analysis, Horn and Johnson
- Optimization: Subset of notes of Vandenberghe and Boyd
- Probability: Quick recap of EE322 notes

- Compressive Sensing Recovery: Convex Optimization Approaches

- Basic ideas: Restricted isometry property (RIP), Discrete uniform uncertainty principles (UUP), Exact recovery condition (ERC) and their implications

- Noiseless measurements, sparse signal: exact reconstruction by solving a linear program

- Noisy measurements, sparse signal: stable recovery by solving a few different convex programs

- Noiseless and noisy measurements, compressible signal
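For intuition about the restricted isometry property listed above, here is a brute-force computation of the order-2 RIP constant of a small random matrix (the sizes are my own toy choices; brute force is exponential in k and only viable for tiny problems):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
m, n, k = 30, 40, 2          # tiny sizes so brute force over supports is feasible

A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)           # unit-norm columns

# RIP constant of order k: the smallest delta such that, for every k-sparse x,
# (1 - delta)||x||^2 <= ||Ax||^2 <= (1 + delta)||x||^2.  Equivalently,
# delta_k = max over k-column submatrices A_S of ||A_S^T A_S - I||_2.
delta_k = 0.0
for S in combinations(range(n), k):
    G = A[:, list(S)].T @ A[:, list(S)]
    eigs = np.linalg.eigvalsh(G)          # ascending eigenvalues of the Gram matrix
    delta_k = max(delta_k, max(abs(eigs[0] - 1.0), abs(eigs[-1] - 1.0)))

print(delta_k)   # strictly below 1 here: no two columns are parallel
```

A small delta_k certifies that A approximately preserves the norm of every k-sparse vector, which is the property the recovery guarantees rest on.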
- Compressive Sensing Recovery: Greedy approaches (faster but need more measurements)
- Matching Pursuit, Orthogonal MP: theoretical guarantees only for noiseless measurements

- Compressive Sampling MP (CoSaMP)
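As a sketch of the greedy approach (my own minimal implementation, not the course's reference code), Orthogonal Matching Pursuit fits in a few lines:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily add the column most correlated
    with the current residual, then re-fit by least squares on the support."""
    n = A.shape[1]
    support, r = [], y.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ r))))       # best-matching column
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef                          # residual after projection
    x_hat = np.zeros(n)
    x_hat[support] = coef
    return x_hat

# Toy check: recover a 3-sparse signal from 40 random Gaussian measurements
rng = np.random.default_rng(2)
n, m, k = 100, 40, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = np.array([1.5, -2.0, 1.0])
y = A @ x_true
x_hat = omp(A, y, k)
print(np.max(np.abs(x_hat - x_true)))   # essentially zero once the support is found
```

Each iteration is just a matrix-vector product plus a small least-squares solve, which is why greedy methods are faster than convex programming, at the price of needing more measurements for the same guarantees.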
- Measurement Matrices that can be used for CS

- Key properties of random matrices

- Verifying RIP for measurement matrices of existing and new CS-based technologies

- Memory efficiency issues and how to address them

- Practical Issues for Large Problems

- Developing computational and memory efficient implementations
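One common way to get such implementations (a sketch under my own choice of sizes; real codes wrap the same idea in an operator interface such as scipy.sparse.linalg.LinearOperator) is to never store A at all: for partial-Fourier measurements, A and its adjoint can be applied via the FFT in O(n log n) time and O(n) memory.

```python
import numpy as np

n = 1024          # signal length
m = 256           # number of measurements
rng = np.random.default_rng(3)
rows = rng.choice(n, m, replace=False)      # which Fourier frequencies we sample

def A_forward(x):
    """y = A x: take the full FFT, keep only the m sampled frequencies."""
    return np.fft.fft(x)[rows] / np.sqrt(n)

def A_adjoint(y):
    """A^H y: zero-fill the unsampled frequencies, then inverse FFT."""
    z = np.zeros(n, dtype=complex)
    z[rows] = y
    return np.fft.ifft(z) * np.sqrt(n)

# Sanity check that the pair is consistent: <A x, y> == <x, A^H y>
x = rng.standard_normal(n)
y = rng.standard_normal(m) + 1j * rng.standard_normal(m)
lhs = np.vdot(A_forward(x), y)
rhs = np.vdot(x, A_adjoint(y))
print(abs(lhs - rhs))   # ~0 up to floating-point error
```

Most iterative recovery algorithms (greedy or convex) only ever touch A through these two products, so this pair of functions is all that is needed.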

- Applications
- Existing technology: MRI and tomography

- New CS-based technologies: single-pixel camera, new sensing protocols in sensor networks, analog image compression/transmission, and more

- introductory slides
- CS resource page
- I will teach from a subset of these papers; the others can be used by you for seminars

- CS Recovery: Convex Optimization approaches

- Noiseless, sparse signal, random Fourier measurements: Emmanuel Candès, Justin Romberg, and Terence Tao, Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. (IEEE Trans. on Information Theory, 52(2) pp. 489 - 509, February 2006)
- Noiseless, sparse signal: Emmanuel Candès and Terence Tao, Decoding by linear programming. (IEEE Trans. on Information Theory, 51(12), pp. 4203 - 4215, December 2005)
- Noiseless, compressible signal: Emmanuel Candès and Terence Tao, Near optimal signal recovery from random projections: Universal encoding strategies? (IEEE Trans. on Information Theory, 52(12), pp. 5406 - 5425, December 2006)
- Noisy, sparse or compressible signal cases: Emmanuel Candès and Terence Tao, The Dantzig Selector: Statistical estimation when p is much larger than n (To appear in Annals of Statistics)
- Noisy, sparse or compressible signal cases: Joel A. Tropp, Just relax: Convex programming methods for identifying sparse signals. (IEEE Trans. on Information Theory, 52(3), pp. 1030-1051, March 2006) and its correction

- CS Recovery: Greedy algorithms

- Noiseless, sparse signals, greedy recovery algorithm: Joel Tropp and Anna Gilbert, Signal recovery from random measurements via orthogonal matching pursuit. (IEEE Trans. on Information Theory, 53(12) pp. 4655-4666, December 2007)
- Noisy, sparse or compressible signals, greedy recovery algorithm: CoSaMP, Subspace Pursuit, ...

- Measurement matrices
- Emmanuel Candès, Justin Romberg, and Terence Tao, Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. (IEEE Trans. on Information Theory, 52(2) pp. 489 - 509, February 2006)
- Emmanuel Candès and Terence Tao, Decoding by linear programming. (IEEE Trans. on Information Theory, 51(12), pp. 4203 - 4215, December 2005)
- Thong T. Do, Trac D. Tran, and Lu Gan, Fast compressive sampling with structurally random matrices. (Preprint, 2007)
- more to be added

- Practical Issues: implementing algorithms without storing the large-sized measurement matrices

- Applications
- Single-pixel camera: Marco Duarte, Mark Davenport, Dharmpal Takhar, Jason Laska, Ting Sun, Kevin Kelly, and Richard Baraniuk, Single-pixel imaging via compressive sampling. (IEEE Signal Processing Magazine, 25(2), pp. 83 - 91, March 2008)
- Computed Tomography (CT): Emmanuel Candès, Justin Romberg, and Terence Tao, Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. (IEEE Trans. on Information Theory, 52(2) pp. 489 - 509, February 2006)

- MRI: Michael Lustig, David Donoho, and John M. Pauly, Sparse MRI: The application of compressed sensing for rapid MR imaging. (Magnetic Resonance in Medicine, 58(6) pp. 1182 - 1195, December 2007)
- MRI - 2: Hong Jung, Kyunghyun Sung, Krishna S. Nayak, Eung Yeop Kim, and Jong Chul Ye, k-t FOCUSS: A general compressed sensing framework for high resolution dynamic MRI. (To appear in Magnetic Resonance in Medicine, 2008)
- Sensor networks: W. Bajwa, J. Haupt, A. Sayeed and R. Nowak, Joint source-channel communication for distributed estimation in sensor networks. (IEEE Trans. on Information Theory, 53(10) pp. 3629-3653, October 2007)
- Sensor networks - 2: Waheed Bajwa, Jarvis Haupt, Akbar Sayeed, and Rob Nowak, Compressive wireless sensing. (Int. Conf. on Information Processing in Sensor Networks (IPSN), Nashville, Tennessee, April 2006)
- Distributed compression: Dror Baron, Michael Wakin, Marco Duarte, Shriram Sarvotham, and Richard Baraniuk, Distributed compressed sensing. (Preprint, 2005) [See also related technical report and conference publications: Allerton 2005, Asilomar 2005, NIPS 2005, IPSN 2006]
- Image compression: V. Stankovic, L. Stankovic, and S. Cheng, Compressive video sampling. (European Signal Processing Conf. (EUSIPCO), Lausanne, Switzerland, August 2008)
- RADAR: Richard Baraniuk and Philippe Steeghs, Compressive radar imaging. (IEEE Radar Conference, Waltham, Massachusetts, April 2007)
- Bayesian and model-based approaches
- More papers: see the Rice collection