Date and Times
Thursday, July 12 at 8:30 AM & 4:00 PM
In the past decade, deep learning methods have achieved unprecedented performance on a broad range of problems in fields ranging from computer vision to speech recognition. So far, however, research has focused mainly on developing deep learning methods for Euclidean-structured data, while many important applications must deal with non-Euclidean structured data, such as graphs and manifolds. Such geometric data are becoming increasingly important in computer graphics and 3D vision, sensor networks, drug design, biomedicine, high-energy physics, recommender systems, and web applications. The adoption of deep learning in these fields has lagged behind until recently, primarily because the non-Euclidean nature of the objects involved makes the very definition of the basic operations used in deep networks rather elusive.
The purpose of this minitutorial is to introduce the emerging field of geometric deep learning on graphs and manifolds, to survey existing solutions and applications for this class of problems, and to outline key difficulties and future research directions.
Michael Bronstein, Università della Svizzera italiana, Switzerland
This two-part minitutorial will take place on Thursday, July 12 at 8:30 AM and 4:00 PM.
Date and Time
Tuesday, July 10 at 8:30 AM
This talk introduces people who work with data to simulation-based methods for statistics: bootstrap standard errors and confidence intervals, and permutation tests. These "resampling" methods draw samples from the data and recompute the statistic of interest for each such sample in order to estimate random variation; they substitute computer simulation for mathematical derivation. They are easier to use because they do not require a new formula for every application, and they are typically more accurate because they do not rely on assumptions such as normal distributions. They also produce figures that are helpful in understanding and communicating results. While simulation-based methods are usually more computationally expensive than formulas, in some big-data applications they are far cheaper. Moreover, the simulation-based approaches let us do better statistics: we can use robust methods to handle the outliers that occur in real data, rather than being stuck with older methods that happen to have easy formulas.
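As a minimal sketch of the bootstrap idea described above (the data, function names, and parameters here are illustrative, not material from the talk): resample the observations with replacement many times, recompute the statistic on each resample, and use the spread of those recomputed values as the standard error, with the middle 95% of them serving as a quick-and-dirty confidence interval.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sample: any 1-D array of observations would do.
data = rng.normal(loc=10.0, scale=2.0, size=50)

def bootstrap_se(sample, stat=np.mean, n_boot=10_000, rng=rng):
    """Estimate the standard error of `stat` by resampling with replacement."""
    boot_stats = np.array([
        stat(rng.choice(sample, size=len(sample), replace=True))
        for _ in range(n_boot)
    ])
    # Standard deviation of the bootstrap distribution = bootstrap SE.
    return boot_stats.std(ddof=1), boot_stats

se, boot_stats = bootstrap_se(data)

# Quick-and-dirty percentile confidence interval: the middle 95%
# of the bootstrap distribution of the statistic.
ci = np.percentile(boot_stats, [2.5, 97.5])
print(f"bootstrap SE: {se:.3f}, 95% CI: [{ci[0]:.2f}, {ci[1]:.2f}]")
```

Note that no formula specific to the mean is used: swapping in a robust statistic such as `np.median` requires changing only the `stat` argument, which is the point made above about not needing a new derivation for every application.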
Participants will learn to use the bootstrap to calculate standard errors and quick-and-dirty confidence intervals, to use permutation tests to perform statistical tests, and to better understand the relevant statistical concepts.
Tim Hesterberg, Google, USA
This minitutorial will take place on Tuesday, July 10 at 8:30 AM.