Invited Presentations
 
 
  • Antonin Chambolle (Ecole Polytechnique, France)
    Convex Representations for Imaging Problems

    This talk will address several results on convex representations for variational problems in imaging, such as image partitioning, Mumford-Shah segmentation, or matching problems. We will review recent results (obtained in collaboration with D. Cremers, T. Pock, and E. Strekalovskiy) and discuss some difficulties and open problems.


  • Michael Elad (Technion, Israel)
    Wavelet for Graphs and its Deployment to Image Processing

    What if we take all the overlapping patches from a given image and organize them to create the shortest path by using their mutual Euclidean distances? This suggests a reordering of the image pixels in a way that creates a maximal 1D regularity. What could we do with such a construction? In this talk we consider a wider perspective of the above, and introduce a wavelet transform for graph-structured data. The proposed transform is based on a 1D wavelet decomposition coupled with a pre-reordering of the input so as to best sparsify the given data. We adapt this transform to image processing tasks by considering the image as a graph, where every patch is a node and edges are weighted by the Euclidean distances between corresponding patches. We show several ways to use the above ideas in practice, leading to state-of-the-art image denoising, deblurring, inpainting, and face-image compression results. (This is joint work with Idan Ram and Israel Cohen.)
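
    The construction described above lends itself to a short sketch. The following Python fragment is a minimal illustration, not the authors' exact algorithm: it extracts all overlapping patches, orders them with a greedy nearest-neighbor heuristic (a cheap stand-in for the shortest-path ordering), and applies one Haar analysis step to the reordered center pixels. The patch size and the greedy heuristic are illustrative choices.

        import numpy as np

        def extract_patches(img, p=5):
            """Return all overlapping p-by-p patches as rows, plus their center coordinates."""
            H, W = img.shape
            patches, centers = [], []
            for i in range(H - p + 1):
                for j in range(W - p + 1):
                    patches.append(img[i:i + p, j:j + p].ravel())
                    centers.append((i + p // 2, j + p // 2))
            return np.array(patches), centers

        def greedy_order(patches):
            """Order patches by repeatedly jumping to the nearest unvisited one
            (a simple traveling-salesman heuristic for the shortest path)."""
            n = len(patches)
            visited = np.zeros(n, dtype=bool)
            order = [0]
            visited[0] = True
            for _ in range(n - 1):
                d = np.linalg.norm(patches - patches[order[-1]], axis=1)
                d[visited] = np.inf
                nxt = int(np.argmin(d))
                order.append(nxt)
                visited[nxt] = True
            return order

        rng = np.random.default_rng(0)
        img = rng.random((32, 32))
        patches, centers = extract_patches(img)
        order = greedy_order(patches)

        # Reordered 1D signal of center pixels, then one Haar analysis step on it.
        signal = np.array([img[centers[k]] for k in order])
        signal = signal[:2 * (len(signal) // 2)]              # make the length even
        approx = (signal[0::2] + signal[1::2]) / np.sqrt(2)   # smooth coefficients
        detail = (signal[0::2] - signal[1::2]) / np.sqrt(2)   # sparse if the ordering is smooth
        # Denoising then amounts to thresholding `detail` (and coarser levels)
        # and inverting the transform.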


  • Leo Grady (HeartFlow, USA)
    Personalized Blood Flow Simulation from an Image-Derived Model: Changing the Paradigm for Cardiovascular Diagnostics

    Coronary heart disease is the leading cause of mortality worldwide, accounting for 1/3 of all global deaths. Treatment of stable coronary heart disease is typically performed by medication/lifestyle changes for a lower disease burden or by PCI (stenting) for a greater disease burden. The choice between these treatments is best determined by an invasive diagnostic test that measures blood flow through a diseased area. Unfortunately, this invasive diagnostic test is expensive, dangerous, and usually finds a lower disease burden. We are working to change the diagnostics paradigm with a blood flow simulation on a personalized heart model that is derived from cardiac CT angiography images. This simulation-based diagnostic is much safer and more comfortable for the patient, as well as less expensive. Our diagnostic depends on hyperaccurate vessel tree image segmentation, physiological modeling, and accurate computational fluid dynamics. In this talk I will discuss the mathematics that drive this technology and the successful clinical trials that have proven the simulation's accuracy in patients.


  • Yi Ma (ShanghaiTech University, China)
    Pursuit of Low-dimensional Structures in High-dimensional Data

    In this talk, we will discuss a new class of models and techniques that can effectively model and extract rich low-dimensional structures in high-dimensional data such as images and videos, despite nonlinear transformation, gross corruption, or severely compressed measurements. This work leverages recent advances in convex optimization for recovering low-rank or sparse signals, advances that provide both strong theoretical guarantees and efficient, scalable algorithms for solving such high-dimensional combinatorial problems. These results and tools generalize to a large family of low-complexity structures whose associated regularizers are decomposable. We illustrate how these new mathematical models and tools could bring disruptive changes to many challenging tasks in computer vision, image processing, and pattern recognition. We will also illustrate some emerging applications of these tools to other data types such as web documents, image tags, microarray data, audio/music analysis, and graphical models. (This is joint work with John Wright of Columbia, Emmanuel Candès of Stanford, Zhouchen Lin of Peking University, and my students Zhengdong Zhang, Xiao Liang of Tsinghua University, Arvind Ganesh, Zihan Zhou, Kerui Min and Hossein Mobahi of UIUC.)
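
    One concrete instance of the low-rank-plus-sparse convex programs mentioned above is robust PCA (principal component pursuit): split a matrix M into a low-rank part L and a sparse part S by minimizing ||L||_* + lam*||S||_1 subject to L + S = M. The sketch below uses the standard inexact augmented-Lagrangian iteration; the defaults for lam and mu are common illustrative choices, not necessarily the speakers' algorithm.

        import numpy as np

        def shrink(X, tau):
            """Soft-thresholding: the proximal operator of the l1 norm."""
            return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

        def svt(X, tau):
            """Singular-value thresholding: the proximal operator of the nuclear norm."""
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            return U @ np.diag(shrink(s, tau)) @ Vt

        def rpca(M, lam=None, mu=None, iters=200):
            """min ||L||_* + lam * ||S||_1  subject to  L + S = M  (inexact ALM)."""
            m, n = M.shape
            lam = 1.0 / np.sqrt(max(m, n)) if lam is None else lam
            mu = 0.25 * m * n / np.abs(M).sum() if mu is None else mu
            L, S, Y = (np.zeros_like(M) for _ in range(3))
            for _ in range(iters):
                L = svt(M - S + Y / mu, 1.0 / mu)
                S = shrink(M - L + Y / mu, lam / mu)
                Y = Y + mu * (M - L - S)
            return L, S

        # Synthetic test: a rank-2 matrix plus 5% gross sparse corruption.
        rng = np.random.default_rng(1)
        L0 = rng.standard_normal((60, 2)) @ rng.standard_normal((2, 60))
        S0 = 10.0 * rng.standard_normal((60, 60)) * (rng.random((60, 60)) < 0.05)
        L, S = rpca(L0 + S0)
        print(np.linalg.norm(L - L0) / np.linalg.norm(L0))    # small relative error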


  • Carola-Bibiane Schönlieb (University of Cambridge, United Kingdom)
    Optimizing the Optimizers - What is the Right Image and Data Model?

    When assigned the task of reconstructing an image from given data, the first challenge one faces is the derivation of a truthful image and data model. Such a model can be determined by the a priori knowledge about the image, the data and their relation to each other. This knowledge comes either from our understanding of the type of images we want to reconstruct and of the physics behind the acquisition of the data, or from parametric models that we strive to learn from the data itself. In either case the question arises: how can we optimise our model choice? Starting from the first modelling strategy, this talk will lead us from the total variation as the most successful image regularisation model today to non-smooth second- and third-order regularisers, with data models for Gaussian and Poisson distributed data as well as impulse noise. Applications to image denoising, inpainting and surface reconstruction are given. After a critical discussion of these different image and data models, we will turn to the second modelling strategy and propose to combine it with the first one using a bilevel optimisation method. In particular, we will consider optimal parameter derivation for total variation denoising with multiple noise distributions and optimising total generalised variation regularisation for its application in photography.
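
    For concreteness, here is the baseline model the talk starts from: total variation (ROF) denoising, min_u 0.5*||u - f||^2 + lam*TV(u), in a minimal Python sketch using Chambolle's dual projection algorithm (2004). The weight lam, fixed by hand below, is exactly the kind of parameter a bilevel scheme would instead learn from data.

        import numpy as np

        def grad(u):
            """Forward differences with Neumann boundary conditions."""
            gx = np.zeros_like(u); gy = np.zeros_like(u)
            gx[:-1, :] = u[1:, :] - u[:-1, :]
            gy[:, :-1] = u[:, 1:] - u[:, :-1]
            return gx, gy

        def div(px, py):
            """Discrete divergence, the negative adjoint of `grad`."""
            dx = np.zeros_like(px); dy = np.zeros_like(py)
            dx[0, :] = px[0, :]; dx[1:-1, :] = px[1:-1, :] - px[:-2, :]; dx[-1, :] = -px[-2, :]
            dy[:, 0] = py[:, 0]; dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]; dy[:, -1] = -py[:, -2]
            return dx + dy

        def tv_denoise(f, lam=0.1, tau=0.125, iters=300):
            """ROF denoising via Chambolle's dual projection iteration."""
            px = np.zeros_like(f); py = np.zeros_like(f)
            for _ in range(iters):
                gx, gy = grad(div(px, py) - f / lam)
                norm = np.sqrt(gx ** 2 + gy ** 2)
                px = (px + tau * gx) / (1.0 + tau * norm)
                py = (py + tau * gy) / (1.0 + tau * norm)
            return f - lam * div(px, py)

        # Piecewise-constant test image with Gaussian noise (the Gaussian data model).
        rng = np.random.default_rng(2)
        clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
        noisy = clean + 0.1 * rng.standard_normal(clean.shape)
        print(np.abs(tv_denoise(noisy) - clean).mean())       # mean absolute error

    A bilevel approach would wrap this solver in an outer optimisation that picks lam to minimise the reconstruction error over ground-truth training pairs.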


  • Rebecca Willett (University of Wisconsin-Madison, USA)
    Emerging Methods in Photon-Limited Imaging

    Many scientific and engineering applications rely upon the accurate reconstruction of spatially, spectrally, and temporally distributed phenomena from photon-limited data. When the number of observed events is very small, accurately extracting knowledge from this data requires the development of both new computational methods and novel theoretical analysis frameworks. This task is particularly challenging since sensing is often indirect in nature, such as in compressed sensing or with tomographic projections in medical imaging, resulting in complicated inverse problems. Furthermore, limited system resources, such as data acquisition time and sensor array size, lead to complex tradeoffs between sensing and processing. All of these issues combine to make accurate image reconstruction a complicated task, involving a myriad of system-level and algorithmic tradeoffs.

    In this talk, I will describe novel algorithms and performance tradeoffs between reconstruction accuracy and system resources when the underlying intensity exhibits some low-dimensional structure. The theory supporting these methods facilitates characterization of fundamental performance limits. Examples include lower bounds on the best achievable error performance in photon-limited image reconstruction and upper bounds on the data acquisition time required to achieve a target reconstruction accuracy. The effectiveness of the theory and methods will be demonstrated for several important applications, including astronomy, night vision, and biological imaging.
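
    To make the photon-limited setting concrete, the sketch below reconstructs a sparse intensity from Poisson counts y ~ Poisson(Ax + b) by proximal gradient descent on the penalized negative log-likelihood. The sensing matrix, background b, penalty weight tau and step size are illustrative assumptions, not the speaker's method; practical solvers of this kind (e.g. SPIRAL-style methods) add backtracking line searches and more refined regularizers.

        import numpy as np

        def poisson_pg(y, A, b=1e-3, tau=0.02, step=0.05, iters=3000):
            """min_{x >= 0}  1'Ax - y'log(Ax + b) + tau * ||x||_1  by proximal gradient."""
            x = np.full(A.shape[1], y.mean())         # rows of A below sum to roughly 1
            for _ in range(iters):
                rate = A @ x + b                      # Poisson intensities, kept positive
                g = A.sum(axis=0) - A.T @ (y / rate)  # gradient of the negative log-likelihood
                x = np.maximum(x - step * g - step * tau, 0.0)  # prox of tau*l1 + nonnegativity
            return x

        # Synthetic photon-limited experiment: sparse scene, nonnegative sensing matrix.
        rng = np.random.default_rng(3)
        n, m, k = 200, 100, 8
        A = rng.random((m, n)) / m
        x0 = np.zeros(n)
        x0[rng.choice(n, size=k, replace=False)] = 50.0 * (1.0 + rng.random(k))
        y = rng.poisson(A @ x0 + 1e-3).astype(float)  # observed photon counts
        xhat = poisson_pg(y, A)
        print(np.linalg.norm(xhat - x0) / np.linalg.norm(x0))  # relative error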