Saturday, December 7 • 7:00pm - 11:59pm
Learning with Invariance via Linear Functionals on Reproducing Kernel Hilbert Space

Incorporating invariance information is important for many learning problems. To exploit invariances, most existing methods resort to approximations that either lead to expensive optimization problems, such as semi-definite programming, or rely on separation oracles to retain tractability. Some methods further limit the space of functions and settle for non-convex models. In this paper, we propose a framework for learning in reproducing kernel Hilbert spaces (RKHS) using local invariances that explicitly characterize the behavior of the target function around data instances. These invariances are compactly encoded as linear functionals whose values are penalized by some loss function. Based on a representer theorem that we establish, our formulation can be efficiently optimized via a convex program. For the representer theorem to hold, the linear functionals are required to be bounded in the RKHS, and we show that this is true for a variety of commonly used RKHSs and invariances. Experiments on learning with unlabeled data and transform invariances show that the proposed method yields results better than or comparable to the state of the art.
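To make the abstract's construction concrete, here is a minimal 1-D sketch (not the authors' code) under assumed choices: the local invariance "f should be insensitive to small translations around each instance x_i" is encoded as the bounded linear functional f ↦ f'(x_i), penalized by a squared loss. A representer theorem of the kind the abstract describes then places the minimizer in the span of the kernel sections and their derivatives, so the convex program reduces to one linear system. The Gaussian kernel, squared losses, and all names (fit, predict, gam, lam) are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def gauss(X, Z, s):
    """Gaussian kernel matrix k(x, z) = exp(-(x - z)^2 / (2 s^2)); also returns pairwise differences."""
    D = X[:, None] - Z[None, :]
    return np.exp(-D**2 / (2 * s**2)), D

def fit(X, y, s=0.5, gam=0.1, lam=1e-3):
    """Minimize sum_i (f(x_i) - y_i)^2 + gam * sum_i f'(x_i)^2 + lam * ||f||^2
    over the RKHS. The solution lies in span{k(x_i, .), d/dx k(x, .)|_{x=x_i}},
    so we solve for the 2n coefficients c = [alpha; beta] in closed form."""
    K, D = gauss(X, X, s)
    B = (D / s**2) * K                  # <k(x_i,.), h_j>, h_j = representer of f'(x_j)
    C = (1.0 / s**2 - D**2 / s**4) * K  # <h_i, h_j>
    E = np.hstack([K, B])               # row i: evaluation functional f(x_i)
    Dv = np.hstack([B.T, C])            # row i: derivative functional f'(x_i)
    M = np.block([[K, B], [B.T, C]])    # Gram matrix of representers: ||f||^2 = c' M c
    A = E.T @ E + gam * Dv.T @ Dv + lam * M
    c = np.linalg.solve(A + 1e-10 * np.eye(A.shape[0]), E.T @ y)
    return c, X, s

def predict(model, Z):
    """Evaluate f(z) = sum_j alpha_j k(x_j, z) + sum_j beta_j (z - x_j)/s^2 k(x_j, z)."""
    c, X, s = model
    K, D = gauss(Z, X, s)               # D[i, j] = z_i - x_j
    return np.hstack([K, (D / s**2) * K]) @ c

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, 30)
y = np.sin(X) + 0.1 * rng.standard_normal(30)
model = fit(X, y)
print(predict(model, np.array([0.0, 1.5])))  # roughly tracks sin(0), sin(1.5) for moderate gam
```

Setting gam=0 recovers plain kernel ridge regression; increasing it trades data fit for flatness around the instances, which is exactly the "penalized linear functional" mechanism the abstract describes, here in its simplest instance.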

Speakers
Yee Whye Teh

Yee Whye Teh is a reader at the Gatsby Computational Neuroscience Unit, UCL. He obtained his PhD from the University of Toronto, and did postdoctoral work at the University of California, Berkeley and the National University of Singapore. He is interested in developing probabilistic and...


Saturday December 7, 2013 7:00pm - 11:59pm PST
Harrah's Special Events Center, 2nd Floor
  Posters
  • Poster # Sat07