
Compressed Sensing: Filling in the Blanks

01 Mar

I stumbled across this article today, Fill in the Blanks: Using Math to Turn Lo-Res Datasets Into Hi-Res Samples by Jordan Ellenberg, Wired Magazine, February 22, 2010, and found it so fascinating – in particular because of my recent research into fractals – that I had to take a quick tour around the internet to find some more information on compressed sensing.

Wikipedia describes compressed sensing as: “a technique for acquiring and reconstructing a signal utilizing the prior knowledge that it is sparse or compressible. The field has existed for at least four decades, but recently the field has exploded, in part due to several important results by David Donoho, Emmanuel Candès, Justin Romberg and Terence Tao.”

“The ideas behind compressive sensing came together in 2004 when Emmanuel J. Candès, a mathematician at Caltech, was working on a problem in magnetic resonance imaging. He discovered that a test image [a badly corrupted version of the Shepp-Logan phantom] could be reconstructed exactly even with data deemed insufficient by the Nyquist-Shannon criterion.”

According to the story Fill in the Blanks: Using Math to Turn Lo-Res Datasets Into Hi-Res Samples by Jordan Ellenberg, Wired Magazine, February 22, 2010, the mathematical technique called L1 minimization is now being looked at in a number of experimental applications, such as DARPA-funded research into acquisition of enemy communication signals:

DARPA ~ …mathematics thrust area have successfully applied a methodology of discovery of physics-based structure within a sensing problem, from which it was often possible to determine and algorithmically exploit efficient low-dimensional representations of those problems even though they are originally posed in high-dimensional settings. Computational complexity and statistical performance of fielded algorithms within the DSP component of sensor systems have both been substantially improved through this approach. The aim of ISP is much more ambitious: to develop and amplify this concept across all components of an entire sensor system and then across networks of sensor systems.

data storage:

Wired ~ …the technique will help us in the future as we struggle with how to treat the vast amounts of information we have in storage. The world produces untold petabytes of data every day — data that we’d like to see packed away securely, efficiently, and retrievably. At present, most of our audiovisual info is stored in sophisticated compression formats. If, or when, the format becomes obsolete, you’ve got a painful conversion project on your hands.

and further into the future, perhaps CS will even live in our digital cameras:

Wired ~ Candès believes, we’ll record just 20 percent of the pixels in certain images, like expensive-to-capture infrared shots of astronomical phenomena. Because we’re recording so much less data to begin with, there will be no need to compress. And instead of steadily improving compression algorithms, we’ll have steadily improving decompression algorithms that reconstruct the original image more and more faithfully from the stored data.

Today though, CS is already rewriting the way we capture medical information. A team at the University of Wisconsin, with participation from GE Healthcare, is combining CS with technologies called HYPR and VIPR to speed up certain kinds of magnetic resonance scans, in some cases by a factor of several thousand. GE Healthcare is also experimenting with a novel protocol that promises to use CS to vastly improve observations of the metabolic dynamics of cancer patients. Meanwhile, the CS-enabled MRI machines at Packard can record images up to three times as quickly as conventional scanners.

Wired ~ In the early spring of 2009, a team of doctors at the Lucile Packard Children’s Hospital at Stanford University lifted a 2-year-old into an MRI scanner. The boy, whom I’ll call Bryce, looked tiny and forlorn inside the cavernous metal device. The stuffed monkey dangling from the entrance to the scanner did little to cheer up the scene. Bryce couldn’t see it, in any case; he was under general anesthesia, with a tube snaking from his throat to a ventilator beside the scanner. Ten months earlier, Bryce had received a portion of a donor’s liver to replace his own failing organ. For a while, he did well. But his latest lab tests were alarming. Something was going wrong — there was a chance that one or both of the liver’s bile ducts were blocked.

Shreyas Vasanawala, a pediatric radiologist at Packard, didn’t know for sure what was wrong, and hoped the MRI would reveal the answer. Vasanawala needed a phenomenally hi-res scan, but if he was going to get it, his young patient would have to remain perfectly still. If Bryce took a single breath, the image would be blurred. That meant deepening the anesthesia enough to stop respiration. It would take a full two minutes for a standard MRI to capture the image, but if the anesthesiologists shut down Bryce’s breathing for that long, his glitchy liver would be the least of his problems.

However, Vasanawala and one of his colleagues, an electrical engineer named Michael Lustig, were going to use a new and much faster scanning method. Their MRI machine used an experimental algorithm called compressed sensing — a technique that may be the hottest topic in applied math today. In the future, it could transform the way that we look for distant galaxies. For now, it means that Vasanawala and Lustig needed only 40 seconds to gather enough data to produce a crystal-clear image of Bryce’s liver.

Method

The main idea behind compressed sensing is to exploit the fact that there is some structure and redundancy in most interesting signals—they are not pure noise. In particular, most signals are sparse: when represented in some domain, many of their coefficients are close to or equal to zero. (This is the same insight used in many forms of lossy compression.)
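As a rough illustration (my own Python sketch, not something from the article), a signal built from just a handful of DCT coefficients looks dense in the time domain, yet transforming it back exposes the few coefficients that actually matter:

    # Hypothetical sketch: a signal that is exactly k-sparse in the DCT domain
    # looks dense in the time domain, but its DCT exposes the few active coefficients.
    import numpy as np
    from scipy.fft import dct, idct

    n, k = 256, 5
    rng = np.random.default_rng(0)

    coeffs = np.zeros(n)
    coeffs[rng.choice(n, k, replace=False)] = rng.normal(size=k)  # k nonzero DCT coefficients
    x = idct(coeffs, norm='ortho')             # time-domain signal: hardly any zero samples

    recovered = dct(x, norm='ortho')           # transform back to the sparse domain
    print(np.sum(np.abs(x) < 1e-8))            # ~0  -> dense in time
    print(np.sum(np.abs(recovered) > 1e-8))    # 5   -> sparse in the DCT basis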

Compressed sensing typically starts by taking a limited (possibly random) number of samples in a basis different from the one in which the signal is known to be sparse. Because the number of samples is limited, converting the measurements back into the intended domain means solving an underdetermined matrix equation: a huge number of different candidate images could all produce the given samples, since the number of samples taken is smaller than the number of coefficients in the full image. Thus, some additional constraint must be introduced to select the “best” candidate.
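Here is a minimal sketch of that sampling step, under the simplifying (and entirely my own) assumptions that the signal is sparse directly in the coordinates it is written in and that each measurement is a random Gaussian combination of its entries:

    # Hypothetical sketch: m random measurements of an n-dimensional, k-sparse signal,
    # with m < n, so the system y = Phi @ x is underdetermined.
    import numpy as np

    rng = np.random.default_rng(0)
    n, m, k = 256, 64, 5

    x = np.zeros(n)
    x[rng.choice(n, k, replace=False)] = rng.normal(size=k)   # the sparse "ground truth"

    Phi = rng.normal(size=(m, n)) / np.sqrt(m)                # random measurement matrix
    y = Phi @ x                                               # the compressed measurements

    print(Phi.shape, y.shape)   # (64, 256) (64,): 64 equations, 256 unknowns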

The classical solution to such problems is to minimize the L2 norm—that is, to minimize the amount of energy in the system. This is mathematically simple (it involves only a matrix multiplication by the pseudo-inverse of the basis sampled in). However, it leads to poor results for most practical applications, because the unknown (unsampled) coefficients seldom have zero energy.
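Continuing the same toy setup, the minimum-energy candidate can be computed with the pseudo-inverse; it reproduces the measurements exactly but is nothing like sparse (again, an illustrative sketch only):

    # Hypothetical sketch: the minimum-L2-norm solution of y = Phi @ x via the pseudo-inverse.
    import numpy as np

    rng = np.random.default_rng(0)
    n, m, k = 256, 64, 5
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
    Phi = rng.normal(size=(m, n)) / np.sqrt(m)
    y = Phi @ x_true

    x_l2 = np.linalg.pinv(Phi) @ y           # minimum-energy candidate
    print(np.allclose(Phi @ x_l2, y))        # True: it explains the measurements...
    print(np.sum(np.abs(x_l2) > 1e-6))       # ...but nearly all 256 entries are nonzero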

A more attractive approach would be to minimize the L0 norm, or equivalently to maximize the number of zero coefficients in the new basis. However, this is NP-hard (it contains the subset-sum problem), and so is computationally infeasible for all but the tiniest data sets. Thus, following Tao et al., what is usually minimized is the L1 norm, the sum of the absolute values of the coefficients. Finding the candidate with the smallest L1 norm can be expressed relatively easily as a linear program, for which efficient solution methods already exist, and it yields results comparable to L0 minimization, often with many coefficients exactly zero.
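One standard way to cast the L1 problem as a linear program (my own sketch using SciPy's linprog; the article gives no code) is to split x into nonnegative positive and negative parts so that the sum of absolute values becomes a linear objective:

    # Hypothetical sketch: recover a sparse x from y = Phi @ x by L1 minimization,
    # written as the linear program  min sum(u + v)  s.t.  Phi @ (u - v) = y,  u, v >= 0.
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    n, m, k = 128, 48, 5
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
    Phi = rng.normal(size=(m, n)) / np.sqrt(m)
    y = Phi @ x_true

    c = np.ones(2 * n)                       # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([Phi, -Phi])            # equality constraint: Phi @ (u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method='highs')
    x_l1 = res.x[:n] - res.x[n:]

    print(np.max(np.abs(x_l1 - x_true)))     # tiny: the sparse signal is recovered

With these toy dimensions the L1 solution typically coincides with the sparse ground truth, which is exactly what the minimum-energy solution above fails to do.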

Information on this technique is growing rapidly, and only a few specialists understand how all of these pieces fit together within the big picture. While much of the current emphasis is rightly placed on performing faster reconstruction of the original signal from the compressed measurements, we are also beginning to see the emergence of other tools that complete this framework.

These tools include

  • the ability to search for bases or dictionaries in which sets of signals can be decomposed in a sparse manner, and
  • the ability to find and quantify specific measurement tools that are incoherent with those dictionaries.
In other words, once a signal is known to be sparse in a specific basis, one of the main challenges is to find a set of measurement tools (producing the compressed measurements) and an attendant nonlinear solver that reconstructs the original full signal. There are theoretical results giving the minimum number of measurements needed to recover the original signal for a specific pair of measurement matrix and nonlinear solver. In all cases, the number of compressed measurements is expected to be low relative to traditional Nyquist sampling constraints.
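One common way to quantify that incoherence is the mutual coherence between the measurement basis and the sparsity basis. The following small sketch (my own example, pairing spike measurements with a DCT sparsity basis) computes it:

    # Hypothetical sketch: mutual coherence mu = sqrt(n) * max |<phi_i, psi_j>| between
    # a spike (identity) measurement basis and an orthonormal DCT sparsity basis.
    import numpy as np
    from scipy.fft import idct

    n = 64
    Psi = idct(np.eye(n), axis=0, norm='ortho')   # columns are DCT atoms (sparsity basis)
    Phi = np.eye(n)                               # rows are spikes (measurement basis)

    mu = np.sqrt(n) * np.max(np.abs(Phi @ Psi))
    print(mu)   # ~1.41, near the minimum possible value of 1, i.e. highly incoherent

The smaller this coherence, the fewer compressed measurements the theory requires for exact recovery.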
Further Reading:

The Nuit Blanche blog: Daily news on the subject of compressive sensing.

Fill in the Blanks: Using Math to Turn Lo-Res Datasets Into Hi-Res Samples by Jordan Ellenberg, Wired Magazine, February 22, 2010

Jordan Ellenberg’s blog.

 
