2:30-3:15 | Tutorial: Jeff Phillips (University of Utah) | A Primer on the Geometry in Machine Learning |
3:20-3:40 | Alejandro Flores-Velazco (University of Maryland) | Condensation for the Approximate Nearest-Neighbor Rule |
3:40-4:00 | Wai Ming Tai (University of Utah) | Relative Error RKHS Embeddings for Gaussian Kernels |
4:30-5:20 | Invited: Thomas G. Dietterich (Oregon State University) | Approaches to Robust Artificial Intelligence: Can Geometry Help? |
5:20-5:40 | Marc Khoury (UC Berkeley) | On the Geometry of Adversarial Examples |
5:40-6:00 | Chao Chen (Stony Brook University) | A Topological Regularizer for Classifiers via Persistent Homology |
2:30-3:15 | Tutorial: Jeff Phillips (University of Utah) | A Primer on the Geometry in Machine Learning |
Machine Learning is a discipline filled with geometric algorithms, and its central task is usually classification. These varied approaches all take as input a set of n points in d dimensions, each with a label. The goal is to use this input data to build a function that accurately predicts labels on new data drawn from the same unknown distribution as the input data. The main difference among the many algorithms lies largely in the chosen class of functions considered.
This talk will take a quick tour through many approaches, from simple to complex and modern, and show the geometry inherent at each step. Pit stops will include connections to geometric data structures, duality, random projections, range spaces, and coresets.
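To make one of these pit stops concrete, here is a minimal Python sketch of a random projection in the Johnson-Lindenstrauss style; it is an illustration added for this page, not material from the talk, and the sizes n, d, and k are arbitrary choices. A random Gaussian matrix, properly scaled, maps the points to a much lower dimension while approximately preserving pairwise distances.

    import numpy as np

    rng = np.random.default_rng(0)
    n, d, k = 1000, 500, 50            # n points in d dimensions, target dimension k
    X = rng.normal(size=(n, d))        # stand-in for the input point set (labels omitted)

    # Random Gaussian projection, scaled so squared lengths are preserved in expectation.
    P = rng.normal(size=(d, k)) / np.sqrt(k)
    Y = X @ P                          # projected points, shape (n, k)

    # Compare one pairwise distance before and after projection.
    orig = np.linalg.norm(X[0] - X[1])
    proj = np.linalg.norm(Y[0] - Y[1])
    print(f"original distance {orig:.2f}, projected distance {proj:.2f}")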
Bio: Dr. Phillips is an Associate Professor in the School of Computing at the University of Utah. He received a BS in Computer Science and a BA in Math from Rice University in 2003, and a PhD in Computer Science from Duke University in 2009. He has been an NSF GRF, a CI Fellow, and a CAREER Award recipient. He serves as the director of the new Data Science program at the University of Utah. He is writing a new book on the Mathematical Foundations of Data Analysis. |
3:20-4:00 |
Greedy Is Good, But Needs Randomization (canceled due to a visa delay) |
Alejandro Flores-Velazco (University of Maryland) | Condensation for the Approximate Nearest-Neighbor Rule |
Wai Ming Tai (University of Utah) | Relative Error RKHS Embeddings for Gaussian Kernels |
4:30-5:20 | Invited: Thomas G. Dietterich (Oregon State University) | Approaches to Robust Artificial Intelligence: Can Geometry Help? |
Advances in machine learning are encouraging high-stakes applications of this emerging technology. However, machine learning can be very brittle. How can we convert it into a robust technology? This talk will review some of the approaches being pursued and then focus on methods for anomaly detection. I'll describe some opportunities to apply geometric techniques and make a plea for help.
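As a concrete, generic example of geometry applied to anomaly detection (an illustration added here, not a method claimed by the talk), one classical approach scores each query point by the distance to its k-th nearest neighbor among the nominal training points; unusually large scores suggest anomalies. A minimal Python sketch, with the helper name knn_anomaly_scores and all data invented for illustration:

    import numpy as np

    def knn_anomaly_scores(train, queries, k=5):
        # Euclidean distances from every query point to every training point.
        dists = np.linalg.norm(queries[:, None, :] - train[None, :, :], axis=2)
        # The distance to the k-th nearest training point is the anomaly score.
        return np.sort(dists, axis=1)[:, k - 1]

    rng = np.random.default_rng(1)
    train = rng.normal(size=(200, 2))                # nominal data clustered near the origin
    queries = np.vstack([rng.normal(size=(3, 2)),    # three typical points
                         [[6.0, 6.0]]])              # one far-away point
    print(knn_anomaly_scores(train, queries))        # the last score is by far the largest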
Bio: Dr. Dietterich (AB Oberlin College 1977; MS University of Illinois 1979; PhD Stanford University 1984) is Distinguished Professor Emeritus in the School of Electrical Engineering and Computer Science at Oregon State University. Dietterich is one of the pioneers of the field of Machine Learning and has authored more than 200 refereed publications and two books. His research is motivated by challenging real-world problems, with two areas of special focus: robust artificial intelligence and ecological sustainability. He is best known for his work on ensemble methods in machine learning, including the development of error-correcting output coding. Dietterich has also invented important reinforcement learning algorithms, including the MAXQ method for hierarchical reinforcement learning. Dietterich has devoted many years of service to the research community. He is a former President of the Association for the Advancement of Artificial Intelligence and the founding president of the International Machine Learning Society. Other major roles include Executive Editor of the journal Machine Learning, co-founder of the Journal of Machine Learning Research, and program chair of AAAI 1990 and NIPS 2000. He led the writing of the machine learning component of the NSF's 20-year Roadmap for AI Research. Dietterich is a Fellow of the ACM, AAAI, and AAAS. |
5:20-6:00 |
Marc Khoury (UC Berkeley) | On the Geometry of Adversarial Examples |
Chao Chen (Stony Brook University) | A Topological Regularizer for Classifiers via Persistent Homology |