Sunday, July 4, 2010

ICML 2010 Highlights

636: Sparse Gaussian Process Regression via L1 Penalization
Feng Yan (Purdue University); Yuan Qi (Purdue University)
They introduced a way to scale sparse GPs to large amounts of data by adding an L1 penalty to the influence of individual data points. The penalty effectively removes irrelevant data points via a convex optimization, avoiding the local-optima problems of standard sparse GP methods.
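To make the mechanism concrete, here is a minimal sketch of the general idea using plain L1-penalized kernel regression rather than the authors' GP formulation; the kernel width, penalty strength, and toy data are all my own choices, not from the paper:

```python
# Sketch: weight each training point's influence through the kernel matrix
# and apply an L1 penalty, so irrelevant points get driven to exactly zero.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))            # toy 1-D inputs (assumption)
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(200)

K = rbf_kernel(X, X, gamma=0.5)                  # GP-style RBF kernel matrix

# Solve min_a ||y - K a||^2 + lambda * ||a||_1 : the L1 term zeroes out most
# coefficients, keeping only a sparse "active set" of data points.
model = Lasso(alpha=0.01, fit_intercept=False, max_iter=10000)
model.fit(K, y)

active = np.flatnonzero(model.coef_)
print(f"kept {active.size} of {len(X)} points")

# Predict at new inputs using only the surviving points.
X_test = np.linspace(-3, 3, 50).reshape(-1, 1)
K_test = rbf_kernel(X_test, X[active], gamma=0.5)
y_pred = K_test @ model.coef_[active]
```

Because the whole problem is a Lasso in disguise, it stays convex, which is exactly why this style of sparsification sidesteps the local-optima issues of the usual sparse GP approximations.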

I liked all the papers in the application track:
901: Web-Scale Bayesian Click-Through Rate Prediction for Sponsored Search Advertising in Microsoft's Bing Search Engine
Thore Graepel (Microsoft Research); Joaquin Quiñonero Candela (Microsoft Research); Thomas Borchert (Microsoft Research); Ralf Herbrich (Microsoft Research)

902: Detecting Large-Scale System Problems by Mining Console Logs
Wei Xu (UC Berkeley); Ling Huang (Intel Labs Berkeley); Armando Fox (UC Berkeley); David Patterson (UC Berkeley); Michael I. Jordan (UC Berkeley)
I liked this since it is somewhat related to my project.

903: The Role of Machine Learning in Business Optimization
Chid Apte (IBM T. J. Watson Research Center)
IBM is using machine learning to make the collection of back taxes in New York State more efficient (which some people found scary).

374: Local Minima Embedding
Minyoung Kim (CMU); Fernando De la Torre (CMU)
The idea is to visualize a high-dimensional objective function in a lower-dimensional space while preserving its local optima. It's a really good idea, but it is not yet powerful enough to help with the hard problems we would want to solve with it (i.e., visualizing local optima in high-dimensional neural network optimization).
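For contrast, here is a sketch of the naive baseline this paper improves on: evaluating a high-dimensional objective on a random 2-D slice. Unlike the paper's embedding, this gives no guarantee that local optima in the slice correspond to local optima in the full space; the toy objective and all parameters below are my own stand-ins:

```python
# Naive visualization: restrict a d-dimensional objective to a random 2-D
# plane through a base point and plot the resulting surface.
import numpy as np
import matplotlib.pyplot as plt

def objective(w):
    """Toy nonconvex objective in d dimensions (illustrative only)."""
    return np.sum(np.sin(3 * w) + 0.1 * w ** 2)

d = 50
w0 = np.zeros(d)                                  # point to visualize around
rng = np.random.default_rng(0)
u, v = rng.standard_normal((2, d))                # two random slice directions
u /= np.linalg.norm(u)
v /= np.linalg.norm(v)

ts = np.linspace(-3, 3, 100)
Z = np.array([[objective(w0 + a * u + b * v) for a in ts] for b in ts])

plt.contourf(ts, ts, Z, levels=30)
plt.title("Random 2-D slice of a 50-D objective")
plt.colorbar()
plt.show()
```

A minimum that appears in this kind of slice may not be a minimum of the full objective, and real local optima can be missed entirely; constructing an embedding that provably preserves them is the paper's contribution.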

495: Hilbert Space Embeddings of Hidden Markov Models
Le Song (CMU); Byron Boots (Carnegie Mellon University); Sajid Siddiqi (Google); Geoffrey Gordon (Carnegie Mellon University); Alex Smola (Yahoo! Research)

551: Distance Dependent Chinese Restaurant Processes
David Blei (Princeton University); Peter Frazier (Cornell)
