
Multidimensional compressed sensing and their applications


Compressed sensing (CS) comprises a set of relatively new techniques that exploit the underlying structure of data sets, allowing their reconstruction from compressed versions or incomplete information. CS reconstruction algorithms are essentially nonlinear, demanding heavy computational overhead and large memory storage, especially in the case of multidimensional signals. Excellent review papers discussing CS state-of-the-art theory and algorithms already exist in the literature, but they mostly consider data sets in vector form. In this paper, we give an overview of existing techniques with a special focus on the treatment of multidimensional signals (tensors). We discuss recent trends that exploit the natural multidimensional structure of signals to achieve simple and efficient CS algorithms. The Kronecker structure of dictionaries is emphasized, and its equivalence to the Tucker tensor decomposition is exploited, allowing us to use tensor tools and models for CS. Several examples based on real-world multidimensional signals are presented, illustrating common problems in signal processing such as the recovery of signals from compressed measurements, for magnetic resonance imaging (MRI) and for hyper-spectral imaging, and the tensor completion problem (multidimensional inpainting). WIREs Data Mining Knowl Discov 2013, 3:355–380. doi: 10.1002/widm.1108

Conflict of interest: The authors have declared no conflicts of interest for this article.
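
The equivalence between Kronecker dictionaries and the Tucker decomposition mentioned above is easy to verify numerically. The following sketch (NumPy, with toy dimensions chosen here for illustration) checks that a 2D Tucker model Y = D1 X D2^T corresponds to the Kronecker dictionary D2 ⊗ D1 acting on the column-major vectorization of the core:

import numpy as np

rng = np.random.default_rng(0)
I1, I2, J1, J2 = 4, 5, 3, 2            # toy dimensions (assumption)
D1 = rng.standard_normal((I1, J1))     # mode-1 dictionary
D2 = rng.standard_normal((I2, J2))     # mode-2 dictionary
X = rng.standard_normal((J1, J2))      # coefficient (core) matrix

Y = D1 @ X @ D2.T                      # separable (Tucker) model in matrix form
lhs = Y.flatten(order="F")             # column-major vec(.) matches D2 (x) D1
rhs = np.kron(D2, D1) @ X.flatten(order="F")
assert np.allclose(lhs, rhs)           # vec(D1 X D2^T) = (D2 (x) D1) vec(X)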

Analysis of the application of the Kronecker dictionary learning algorithm to (8 × 8) patches taken from the 'Barbara' and 'Text' images. A total of Ns = 10,000 patches are selected randomly from each of these data sets. Kronecker dictionaries are obtained by applying Algorithm 5 with Gaussian initial matrices D1 and D2. The global error per pixel is denoted EG. A sparsity level of K = 6 was used.
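
Algorithm 5 itself is not reproduced on this page. As a hedged stand-in, the sketch below shows a MOD-style closed-form update of the mode-1 dictionary, which minimizes sum_s ||Y_s - D1 X_s D2^T||_F^2 for fixed sparse codes X_s; this is one plausible ingredient of such a learning loop (the random patch data, the code sparsity, and the sizes are illustrative assumptions):

import numpy as np

rng = np.random.default_rng(8)
Ns, p = 1000, 8                                     # 1000 patches of size 8 x 8 (toy)
Ys = rng.standard_normal((Ns, p, p))                # stand-in for image patches
Xs = rng.standard_normal((Ns, p, p)) * (rng.random((Ns, p, p)) < 0.1)  # sparse codes
D2 = np.linalg.qr(rng.standard_normal((p, p)))[0]   # current mode-2 dictionary

num = sum(Y @ D2 @ X.T for Y, X in zip(Ys, Xs))     # sum_s Y_s D2 X_s^T
den = sum(X @ D2.T @ D2 @ X.T for X in Xs)          # sum_s X_s D2^T D2 X_s^T
D1 = num @ np.linalg.pinv(den)                      # closed-form mode-1 update
D1 /= np.linalg.norm(D1, axis=0)                    # renormalize atoms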
(a) Memory required to store an explicit dictionary for 1D, 2D, and 3D signals (tensors) in the case Mn = In/4 (N = 1, 2, 3). (b) Comparison of computation times versus sampling ratio M3/I3 for vectorized and tensor compressed sensing (CS) algorithms, with (32 × 32 × 32) signals having a (4 × 4 × 4)-block-sparse representation in a Kronecker discrete cosine transform (DCT) dictionary. (c) Percentage of correctly recovered tensors over a total of 100 simulations, with (32 × 32 × 32) signals having a (4 × 4 × 4)-block-sparse representation in a Kronecker DCT dictionary.
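
The storage gap in panel (a) can be reproduced with simple arithmetic: an explicit operator for an N-dimensional signal has prod(Mn) × prod(In) entries, while the separable (Kronecker) form stores only the N mode matrices of size Mn × In. The sketch below uses Mn = In/4 as in the caption; the concrete In values are illustrative assumptions:

from math import prod

def explicit_vs_separable(I_dims, bytes_per_entry=8):
    M_dims = [i // 4 for i in I_dims]                       # Mn = In / 4
    explicit = prod(M_dims) * prod(I_dims)                  # one dense matrix
    separable = sum(m * i for m, i in zip(M_dims, I_dims))  # N mode matrices
    return explicit * bytes_per_entry, separable * bytes_per_entry

for dims in [(1024,), (1024, 1024), (256, 256, 256)]:       # 1D, 2D, 3D examples
    e, s = explicit_vs_separable(dims)
    print(dims, f"explicit: {e:.3e} bytes, separable: {s:.3e} bytes")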
Illustration of a two-dimensional (2D) signal having a Kronecker block-sparse representation and its recovery by the N-way block orthogonal matching pursuit (NBOMP) algorithm. In (a), a 32 × 32 patch image is modeled as a linear combination of only K = 9 atoms (out of 1024) of the separable DCT basis, i.e. Y = D1 X D2^T + E, with the nonzero coefficients concentrated in a 3 × 3 block. Nonzero coefficients and errors are generated using a Gaussian distribution with standard deviation σ = 1 and σ = 0.01, respectively. Compressive measurements are taken by multiplying each mode by a random (Gaussian) sensing matrix, i.e. Z = Φ1 Y Φ2^T with Φ1, Φ2 ∈ R^(16 × 32) (sampling ratio = 25%). In (b), the successive correlations of the residual with the Kronecker basis B2 ⊗ B1 are shown for each iteration of the NBOMP algorithm, and the detected indices are incorporated into the block support of the signal. The sparsity pattern is correctly recovered after exactly 3 iterations (one index per mode per iteration).
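
The paper's exact NBOMP pseudocode is not reproduced here, but the correlation-maximization idea the caption describes can be sketched as follows (a hedged, toy-scale stand-in; the variable names and the simple per-iteration least-squares refit are assumptions):

import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(1)
I, M, K = 32, 16, 3
D = dct(np.eye(I), norm="ortho")            # separable DCT basis for each mode
X = np.zeros((I, I))
rows = rng.choice(I, K, replace=False)
cols = rng.choice(I, K, replace=False)
X[np.ix_(rows, cols)] = rng.standard_normal((K, K))   # 3 x 3 block (9 atoms)
Y = D @ X @ D.T + 0.01 * rng.standard_normal((I, I))  # Y = D1 X D2^T + E
Phi1 = rng.standard_normal((M, I))
Phi2 = rng.standard_normal((M, I))
Z = Phi1 @ Y @ Phi2.T                       # Z = Phi1 Y Phi2^T, 25% sampling

A1, A2 = Phi1 @ D, Phi2 @ D                 # effective per-mode dictionaries
S1, S2, R = [], [], Z.copy()
for _ in range(K):                          # one (row, col) index pair per pass
    C = A1.T @ R @ A2                       # correlations with B2 (x) B1
    i, j = np.unravel_index(np.abs(C).argmax(), C.shape)
    S1.append(i); S2.append(j)
    A1s, A2s = A1[:, S1], A2[:, S2]         # restrict to current block support
    Xs = np.linalg.pinv(A1s) @ Z @ np.linalg.pinv(A2s).T   # LS refit on block
    R = Z - A1s @ Xs @ A2s.T                # update residual
print("detected mode-1 / mode-2 indices:", sorted(set(S1)), sorted(set(S2)))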
Illustration of a two-dimensional (2D) signal having a Kronecker sparse representation and its recovery by the Kron-OMP algorithm. In (a), a 32 × 32 patch image is modeled as a linear combination of only K = 9 atoms (out of 1024) of the separable discrete cosine transform (DCT) basis, i.e. Y = D1 X D2^T + E with ||X||0 = 9. Nonzero coefficients and errors are generated using a Gaussian distribution with standard deviation σ = 1 and σ = 0.01, respectively. Compressive measurements are taken by multiplying each mode by a random (Gaussian) sensing matrix, i.e. Z = Φ1 Y Φ2^T with Φ1, Φ2 ∈ R^(16 × 32) (sampling ratio = 25%). In (b), the successive correlations of the residual with the Kronecker basis B2 ⊗ B1 are shown for each iteration of the Kron-OMP algorithm. The sparsity pattern is correctly recovered after exactly K = 9 iterations.
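
Because this toy problem is small (256 measurements, 1024 atoms), the Kronecker matrix can be formed explicitly and plain OMP run on it; the sketch below does exactly that (sizes and names are illustrative, and this vectorized route is precisely what a Kronecker-aware method like Kron-OMP is designed to avoid at scale):

import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(2)
I, M, K = 32, 16, 9
D = dct(np.eye(I), norm="ortho")
A = np.kron(rng.standard_normal((M, I)) @ D,     # A2 = Phi2 D2
            rng.standard_normal((M, I)) @ D)     # A1 = Phi1 D1; A = A2 (x) A1
A /= np.linalg.norm(A, axis=0)                   # unit-norm atoms
x = np.zeros(I * I)
x[rng.choice(I * I, K, replace=False)] = rng.standard_normal(K)  # ||x||_0 = 9
z = A @ x + 0.01 * rng.standard_normal(M * M)

support, r = [], z.copy()
for _ in range(K):
    support.append(int(np.abs(A.T @ r).argmax()))    # most correlated atom
    coef, *_ = np.linalg.lstsq(A[:, support], z, rcond=None)
    r = z - A[:, support] @ coef                     # refit and update residual
print("recovered support:", sorted(support))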
(a) Illustration of the tensor completion algorithm: the original tensor is divided into a set of overlapping tensor patches Yt. For every tensor patch, the available entries (blue dots) are arranged in a vector z ∈ R^(Lt), which is shown to satisfy a set of linear equations with Khatri–Rao structure together with a sparsity constraint. (b) Results of applying the tensor completion algorithm with a Kronecker dictionary [orthogonal discrete cosine transform (DCT)] to 32 × 32 patches of the 'Barbara' data set (512 × 512). A PSNR of 27.66 dB is obtained, which is comparable to state-of-the-art methods; for example, Ref reported a PSNR of 27.4 dB for the same data set and sampling ratio, and Ref reported a PSNR of 27.65 dB under the same conditions.
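
The sampling structure in panel (a) can be made concrete in a few lines: the observed pixels of a patch select rows of the Kronecker dictionary, so completion reduces to vector-form sparse recovery on a row-subsampled system. A minimal sketch, with hypothetical toy sizes and a 50% observation mask:

import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(3)
I = 8
D = dct(np.eye(I), norm="ortho")                # orthogonal DCT for each mode
code = np.where(rng.random((I, I)) < 0.05, 1.0, 0.0)  # sparse coefficient patch
Y = D @ code @ D.T                              # the (complete) patch
mask = rng.random(I * I) < 0.5                  # 50% of pixels observed
z = Y.flatten(order="F")[mask]                  # available entries only
A_obs = np.kron(D, D)[mask, :]                  # rows kept by the mask; each kept
                                                # row is a Kronecker product of
                                                # dictionary rows (the Khatri-Rao-
                                                # type structure the caption cites)
# Any vector-form sparse solver (OMP, BP) now applies to z = A_obs @ x.
print(A_obs.shape, z.shape)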
Sparse Tucker approximation of multidimensional signals. In (a), the original tensor [a three-dimensional (3D) magnetic resonance imaging (MRI) brain image] is represented by the Tucker model Y = X ×1 D1 ×2 D2 ×3 D3, where the factors Dn ∈ R^(In × In) (n = 1, 2, 3) are orthogonal; thus, the optimal core tensor is given by X = Y ×1 D1^T ×2 D2^T ×3 D3^T. In (b), the tensor is reconstructed by keeping only the largest entries (in absolute value) of X. The resulting PSNR (dB) for different numbers of kept coefficients is shown for the case of Daubechies wavelet transform (WT) dictionaries. In (c), unstructured sparse core tensors and block core tensors are illustrated.
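
The project/threshold/reconstruct pipeline in this figure is straightforward to sketch with mode-n products. The code below uses random orthogonal factors and a random stand-in for the MRI volume, so the printed PSNR only demonstrates the mechanics, not the figure's numbers:

import numpy as np

def mode_n_product(T, M, n):
    # multiply tensor T by matrix M along mode n
    return np.moveaxis(np.tensordot(M, T, axes=(1, n)), 0, n)

rng = np.random.default_rng(4)
Y = rng.standard_normal((16, 16, 16))            # stand-in for the 3D volume
Ds = [np.linalg.qr(rng.standard_normal((16, 16)))[0] for _ in range(3)]

X = Y
for n, Dn in enumerate(Ds):                      # core: X = Y x1 D1^T x2 D2^T x3 D3^T
    X = mode_n_product(X, Dn.T, n)
keep = 200                                       # number of coefficients to keep
thr = np.sort(np.abs(X), axis=None)[-keep]
Xs = np.where(np.abs(X) >= thr, X, 0.0)          # hard-threshold the core
Yh = Xs
for n, Dn in enumerate(Ds):                      # reconstruct: Yh = Xs x1 D1 x2 D2 x3 D3
    Yh = mode_n_product(Yh, Dn, n)
mse = np.mean((Y - Yh) ** 2)
print(f"PSNR = {10 * np.log10(np.ptp(Y) ** 2 / mse):.1f} dB")  # peak-to-peak as peak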
One-dimensional (1D) compressed sensing (CS) example illustrating the application of the basis pursuit (BP) and matching pursuit (MP) sparse recovery methods. A 1D signal y ∈ R^I is modeled by combining only K = 5 atoms (out of 256) of the Daubechies wavelet transform (WT) basis, i.e. y = Dx + e with ||x||0 = 5. Nonzero coefficients and errors are generated using a Gaussian distribution with standard deviation σ = 1 and σ = 0.01, respectively. The compressive measurements are obtained by z = Φy, where the sensing matrix Φ ∈ R^(64 × 256) is a random (Gaussian) matrix, giving a sampling ratio of 25%. In (a), the sparsity patterns recovered by the SPGL1 algorithm at iterations k = 1, 2, …, 79 are shown. In (b), the correlations between the residual and the basis elements at iterations k = 1, 2, …, 5 of the orthogonal matching pursuit (OMP) algorithm are displayed.
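
SPGL1 is a dedicated BP solver; as a simple stand-in, BP with exact measurements can be posed as a linear program, min 1^T(u + v) subject to A(u - v) = z with u, v ≥ 0, and handed to SciPy's linprog. The sketch below mirrors the caption's sizes but substitutes an orthonormal DCT for the Daubechies WT basis (an assumption, to avoid a wavelet dependency):

import numpy as np
from scipy.fft import dct
from scipy.optimize import linprog

rng = np.random.default_rng(5)
I, M, K = 256, 64, 5
D = dct(np.eye(I), norm="ortho")        # stand-in basis (assumption, see above)
x = np.zeros(I)
x[rng.choice(I, K, replace=False)] = rng.standard_normal(K)  # ||x||_0 = 5
A = (rng.standard_normal((M, I)) / np.sqrt(M)) @ D           # Phi D, 25% ratio
z = A @ x                               # noiseless measurements for exact BP

c = np.ones(2 * I)                      # minimize sum(u) + sum(v) = ||x||_1
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=z, bounds=(0, None))
x_hat = res.x[:I] - res.x[I:]           # x = u - v
print("reconstruction error:", np.linalg.norm(x_hat - x))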
Results of applying the tensor completion algorithm with a Kronecker dictionary (orthogonal DCT) to 8 × 8 × 8 tensor patches of the three-dimensional (3D) brain data set (256 × 256 × 100).
Hyper-spectral compressed sensing (CS) imaging example. Block sparsity is assumed on a separable (Kronecker) orthogonal basis given by the Daubechies wavelet transform (WT). A random (Gaussian) separable two-dimensional (2D) operator is used for sensing every slice of the hyper-spectral cube (1024 × 1024 × 32). The reconstruction is obtained by applying the N-way block orthogonal matching pursuit (NBOMP) algorithm to the three-dimensional (3D) compressive signal Z.
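
Sensing every spectral slice with the same separable 2D operator, as this caption describes, amounts to two mode products applied to the whole cube. A small consistency check (toy sizes replacing 1024 × 1024 × 32):

import numpy as np

rng = np.random.default_rng(6)
I1, I2, I3, M = 32, 32, 8, 16
Y = rng.standard_normal((I1, I2, I3))            # stand-in hyper-spectral cube
Phi1 = rng.standard_normal((M, I1))
Phi2 = rng.standard_normal((M, I2))

Z = np.stack([Phi1 @ Y[:, :, k] @ Phi2.T for k in range(I3)], axis=2)
Z2 = np.einsum("ai,bj,ijk->abk", Phi1, Phi2, Y)  # same thing as two mode products
assert np.allclose(Z, Z2)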
(a) Illustration of Kronecker compressed sensing (CS) applied to a three-dimensional (3D) magnetic resonance image (MRI). (b) Reconstructions using the full 3D Kronecker basis pursuit (BP) algorithm (PSNR = 42.7 dB), the slice-by-slice 2D Kronecker-BP algorithm (PSNR = 40.8 dB), and the minimum energy method (PSNR = 37.5 dB), using compressive measurements with a sampling ratio of (M1M2M3)/(I1I2I3) = 70% and assuming sparse representations based on the separable 3D Daubechies wavelet transform (WT) basis.
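
Of the three baselines in panel (b), the minimum energy method is the simplest to sketch: because the pseudoinverse of a Kronecker product is the Kronecker product of the pseudoinverses, the minimum-l2-norm estimate is obtained by applying pinv(Φn) mode-wise, with no sparsity exploited (toy random data stands in for the MRI volume):

import numpy as np

def mode_n_product(T, M, n):
    # multiply tensor T by matrix M along mode n
    return np.moveaxis(np.tensordot(M, T, axes=(1, n)), 0, n)

rng = np.random.default_rng(7)
Y = rng.standard_normal((16, 16, 16))            # stand-in for the MRI volume
Phis = [rng.standard_normal((12, 16)) for _ in range(3)]   # per-mode sensing

Z = Y
for n, P in enumerate(Phis):
    Z = mode_n_product(Z, P, n)                  # compressive measurements
Yh = Z
for n, P in enumerate(Phis):
    Yh = mode_n_product(Yh, np.linalg.pinv(P), n)   # minimum-l2-norm estimate
mse = np.mean((Y - Yh) ** 2)
print(f"PSNR = {10 * np.log10(np.ptp(Y) ** 2 / mse):.1f} dB (no sparsity used)")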

Related Articles

Imaging: An Interdisciplinary View

Browse by Topic

Algorithmic Development > Spatial and Temporal Data Mining
Algorithmic Development > Structure Discovery
Application Areas > Science and Technology
