Abstract: Compressed Sensing is an emerging mathematical theory with broad applications in many areas of science and engineering. In Compressed Sensing one takes n < N samples of an N-dimensional vector x0 using an n×N sensing matrix A, obtaining under-sampled measurements y = A x0. If x0 is k-sparse and A has Gaussian i.i.d. entries, there is a precisely determined region in the (k/n, n/N) phase diagram over which l1-minimization recovers x0 exactly. Gaussian random sensing matrices are outside the mainstream of signal processing, so significant interest has recently been devoted to Compressed Sensing with deterministic matrices; however, little is known about the phase transitions of deterministic matrices. In this talk, I will give a broad overview of Compressed Sensing, introduce the concept of phase transition, and finally present some recent findings on the phase transitions of commonly used deterministic matrices.
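As a small illustration of the recovery problem described above (a sketch with hypothetical dimensions N, n, k chosen inside the success region of the phase diagram, not taken from the talk), l1-minimization subject to Ax = y can be cast as a linear program by splitting x into positive and negative parts:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, n, k = 50, 30, 3  # ambient dimension, number of measurements, sparsity

# k-sparse ground truth and Gaussian i.i.d. sensing matrix
x0 = np.zeros(N)
support = rng.choice(N, size=k, replace=False)
x0[support] = rng.standard_normal(k)
A = rng.standard_normal((n, N))
y = A @ x0  # under-sampled measurements

# min ||x||_1 s.t. Ax = y, as an LP with x = u - v, u, v >= 0:
# minimize sum(u) + sum(v) subject to A(u - v) = y
c = np.ones(2 * N)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * N))
x_hat = res.x[:N] - res.x[N:]

print(np.max(np.abs(x_hat - x0)))  # near machine precision: exact recovery
```

With (k/n, n/N) = (0.1, 0.6), this instance sits well inside the success region for Gaussian matrices, so the LP recovers x0 exactly.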
Abstract: The conventional view holds that girih (geometric star-and-polygon) patterns in medieval Islamic architecture were conceived by their designers as a network of zigzagging lines, where the lines were drafted directly with a straightedge and a compass. We show that by 1200 C.E. a conceptual breakthrough occurred in which girih patterns were reconceived as tessellations of a special set of equilateral polygons (girih tiles) decorated with lines. These tiles enabled the creation of increasingly complex periodic girih patterns, and by the 15th century, the tessellation approach was combined with self-similar transformations to construct nearly perfect quasi-crystalline Penrose patterns, five centuries before their discovery in the West.
Institute for Computational and Experimental Research in Mathematics
"Importance sampling for rare events in light-wave systems"
Abstract: This talk is concerned with the application of importance sampling for numerical simulations of large noise-induced optical pulse perturbations in soliton-based light-wave systems, e.g., optical communication systems and mode-locked lasers. Importance sampling allows one to concentrate Monte Carlo simulations around those noise realizations that are most likely to produce large pulse deformations, resulting in computational efficiency improvements of several orders of magnitude over the standard Monte Carlo scheme when simulating rare error/failure events. The main challenge in implementing importance sampling is selecting a suitable biasing distribution. This is done here using a low-dimensional reduction for the pulse dynamics. I will demonstrate the method by using it to calculate the probability density functions associated with pulse amplitude, frequency, timing, and phase fluctuations.
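The core idea of importance sampling for rare events can be seen in a toy scalar example (not the talk's soliton model): estimate the tail probability P(X > 4) for X ~ N(0, 1) by sampling from a Gaussian shifted toward the rare region and reweighting by the likelihood ratio:

```python
import numpy as np

rng = np.random.default_rng(1)
M = 100_000
shift = 4.0  # biasing: sample from N(shift, 1) instead of N(0, 1)

# Standard Monte Carlo: almost no samples land in the rare region,
# so the estimate is dominated by statistical noise.
x_mc = rng.standard_normal(M)
p_mc = np.mean(x_mc > shift)

# Importance sampling: draw from the biased density q ~ N(shift, 1),
# then reweight each sample by p(x)/q(x) = exp(-shift*x + shift^2/2).
x_is = rng.standard_normal(M) + shift
weights = np.exp(-shift * x_is + 0.5 * shift**2)
p_is = np.mean((x_is > shift) * weights)

print(p_is)  # close to the exact Gaussian tail value, about 3.17e-5
```

Roughly half of the biased samples now exceed the threshold, so the weighted estimator attains a relative error of well under a percent with the same sample budget, whereas plain Monte Carlo sees only a handful of hits. Choosing the biasing distribution in a high-dimensional system is the hard part, which is what the talk's low-dimensional pulse reduction addresses.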
"Integration, Development, and Performance of the 500 TFLOP Heterogeneous Cluster (CONDOR)"
Abstract: The Air Force Research Laboratory Information Directorate Advanced Computing Division (AFRL/RIT) High Performance Computing Affiliated Resource Center (HPC-ARC) hosts a very large scale interactive computing cluster consisting of 1800 nodes. CONDOR, the largest interactive Cell cluster in the world, consists of integrated heterogeneous processors: IBM Cell Broadband Engine multicore CPUs, NVIDIA GPGPUs, and powerful Intel x86 server nodes, connected by a 10 GbE star-hub network and a 20 Gb/s InfiniBand mesh, with a combined capability of 500 teraflops. Applications developed and running on CONDOR include computational intelligence, video synthetic aperture radar (SAR) backprojection, Space Situational Awareness (SSA), video target tracking, linear algebra, and others. This presentation will discuss the design and integration of the system. It will also show progress on performance optimization using the heterogeneous cluster and lessons learned on scalability using heterogeneous architectures.
"Efficient Numerical Methods for Water Waves in Unbounded Domains"
Abstract: Water waves are often studied on unbounded domains; for example, when modeling flow around a vessel on an open sea. To be able to compute such solutions, the domain needs to be truncated to finite size. This is done by introducing an artificial boundary at some arbitrary (ideally, not too large) distance. Boundary conditions are then needed on the artificial boundary that mimic the unbounded domain and ideally make the boundary invisible to outgoing waves. In this talk, we derive an absorbing boundary for the linearization at zero of the water wave equation, based on a one-way version of the equation, which we impose in a layer near the artificial boundary. We discuss the one-way model, its numerical approximation, and present numerical results.
"A fast algorithm for the Helmholtz integral operator in layered media"
Abstract: In this talk, a parallel fast algorithm to calculate the Helmholtz integral operator in layered media will be presented. First, the spectral Green's function is found using the transfer matrix method, and a Sommerfeld-type integral (Hankel transform) is used to recover the real-space Green's function. The Sommerfeld integral presents two numerical difficulties: (1) slow decay of the integrand, and (2) the existence of surface poles along the integration domain. These difficulties are overcome, respectively, with a window function and with generalized Gaussian quadrature points combined with a discrete wavelet transform. Then, the Helmholtz integral operator is evaluated with the discretized Sommerfeld integral using the wideband Fast Multipole Method (wFMM) and local-expansion treecodes in a fast and parallel manner. Numerical results will be presented to show the efficiency of the new solver.
"Positive-Definiteness of the Blended Force-Based Quasicontinuum Method"
Abstract: The development of consistent and stable atomistic-to-continuum coupling models for multi-dimensional crystalline solids remains a challenge. For example, proving stability of the force-based quasicontinuum (QCF) model remains an open problem. In 1D and 2D, we show that by blending atomistic and Cauchy-Born continuum forces (instead of a sharp transition as in the QCF method) one obtains positive-definite blended force-based quasicontinuum (B-QCF) models. We establish sharp conditions on the required blending width.
Abstract: The Fast Fourier Transform (FFT) is one of the most fundamental numerical algorithms. It computes the Discrete Fourier Transform (DFT) of an n-dimensional signal in O(n log n) time. The algorithm plays an important role in many areas. In many applications (e.g., audio, image or video compression), most of the Fourier coefficients of a signal are "small" or equal to zero, i.e., the output of the transform is (approximately) sparse. In this case, there are algorithms that enable computing the non-zero coefficients faster than the FFT. However, in practice, the exponents in the runtime of these algorithms and their complex structure have limited their applicability to only very sparse signals. In this talk, I will describe a new set of algorithms for sparse Fourier Transform. Their key feature is simplicity, which leads to efficient running time with low overhead, both in theory and in practice. In particular, we can achieve a runtime of O(k log n), where k is the number of non-zero Fourier coefficients of the signal. This improves over the runtime of the FFT for any k = o(n). Joint work with Piotr Indyk, Dina Katabi and Eric Price.
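The sparsity assumption underlying these algorithms is easy to visualize (using the standard NumPy FFT here, not the talk's O(k log n) algorithm): a signal that is a sum of k pure tones has a DFT supported on exactly k coefficients, which a sparse Fourier transform can locate without touching the other n - k.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 1024, 4  # signal length and spectral sparsity

# Build a signal whose spectrum is exactly k-sparse.
freqs = rng.choice(n, size=k, replace=False)
amps = rng.standard_normal(k) + 1j * rng.standard_normal(k)
t = np.arange(n)
x = sum(a * np.exp(2j * np.pi * f * t / n) for a, f in zip(amps, freqs))

# The full FFT recovers the k nonzero DFT coefficients (and n - k zeros).
X = np.fft.fft(x) / n  # normalized so X[f] = amplitude of tone f
support = np.flatnonzero(np.abs(X) > 1e-8)

print(sorted(support) == sorted(freqs))  # True: spectrum is k-sparse
```

The FFT spends O(n log n) work computing mostly zeros; for k = o(n), the O(k log n) algorithms in the talk exploit exactly this structure.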
"Stability of vortex and wave flows from bifurcation diagrams exploiting a variational argument"
Abstract: Steady fluid solutions play a special role in the dynamics of a flow: stable states may be realized in practice, while unstable ones may act as attractors. Unfortunately, determining stability is often a process far more laborious than finding steady states; indeed, even for simple vortex or wave flows, stability properties have often been the subject of debate. We consider here a stability idea originating with Lord Kelvin (1876), which involves using the second variation of the energy E to establish bounds on a perturbation. However, for numerically obtained flows, computing the second variation of E explicitly is often not feasible. To circumvent this issue, Saffman & Szeto (1980) proposed an argument linking changes in the second variation to turning points in a bifurcation diagram, for families of steady flows. Later work has shown that this argument is unreliable; the two key issues are associated with the absence of a formal turning-point theory, and with the inability to detect bifurcations (Dritschel 1995, and references therein). In this work, we build on ideas from bifurcation theory, and link turning points in a velocity-impulse diagram to changes in the second variation of the energy; in addition, this diagram delivers the direction of the change of the second variation, thereby providing information as to whether stability is gained or lost. To detect hidden solution branches, we introduce to these fluid problems concepts from imperfection theory. The resulting approach, involving "imperfect velocity-impulse" diagrams, leads us to new and surprising results for a wide range of fundamental vortex and wave flows; we mention here the calculation of the first steady vortices without any symmetry, and the uncovering of the complete solution structure for vortex pairs. In addition, we find precise agreement with available results from linear stability analysis.
"High-Order and High-Performance Algorithms for Petascale Transport Problems and Beyond"
Abstract: Petascale computing platforms having millions of processing units are coming on-line in 2012, and it is anticipated that billion-way parallelism will be a reality within the decade. The extreme levels of parallelism in these architectures influence many design choices in the development of next-generation algorithms and software for scientific computing. In two parts, this talk describes our approach to numerical simulation of transport problems on current-generation petascale platforms and key considerations as we move towards exascale. In the first part, we give a brief overview of our spectral element code, Nek5000, for simulation of fluid mechanics, thermal convection, magnetohydrodynamics, and electromagnetics (Misun Min's code, NekCEM). Kreiss and Oliger noted in the early 70s that high-order numerical discretizations would be particularly efficient for capturing scale interactions in large simulations, where the propagation of small-scale features over long times leads to accumulated dispersion error. We show that it is possible to develop spectral element methods (SEMs) having per-gridpoint costs equivalent to low-order schemes, while at the same time requiring an order of magnitude fewer points. Key aspects of the approach include fast matrix-free operator evaluation, projection-based iterative solvers coupled with multilevel preconditioners, spectrally accurate stabilizing filters, and fully dealiased treatment of the transport operators. Scalable code development and portability have been significantly eased through the identification of a few kernels that handle the unstructured communication required for the problems of interest. Key points are illustrated throughout with examples from heat transfer, MHD, ocean modeling, and vascular flow simulation. In the second part, we explore fundamental computational complexity considerations that will drive algorithmic design choices for PDE-based simulation beyond current petaflops architectures.
We present performance data from leading-edge platforms over the past two decades and couple this with communication and work models to predict the performance of domain decomposition methods and global spectral methods at exascale. We identify the key performance bottlenecks and expected performance limits at these scales.
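The accuracy advantage of high-order discretizations that motivates the spectral element approach can be seen in a tiny periodic example (illustrative only; Nek5000 uses SEM on unstructured meshes, not a global Fourier method): differentiating sin(4x) on the same 64-point grid with second-order finite differences versus spectral differentiation.

```python
import numpy as np

N = 64
x = 2 * np.pi * np.arange(N) / N
f = np.sin(4 * x)
exact = 4 * np.cos(4 * x)

# Second-order centered differences on the periodic grid
h = x[1] - x[0]
fd = (np.roll(f, -1) - np.roll(f, 1)) / (2 * h)

# Spectral differentiation: multiply by i*k in Fourier space
k = np.fft.fftfreq(N, d=1.0 / N)  # integer wavenumbers
spec = np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

err_fd = np.max(np.abs(fd - exact))      # O(h^2): about 1e-1 here
err_spec = np.max(np.abs(spec - exact))  # near machine precision
print(err_fd, err_spec)
```

The low-order scheme carries a percent-level dispersion error on this grid that accumulates over long propagation times, while the high-order derivative is exact to round-off; this is the trade-off behind using many fewer points per wavelength at comparable per-gridpoint cost.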