Tuesday, December 10 • 7:30am - 6:30pm
New Directions in Transfer and Multi-Task: Learning Across Domains and Tasks


The main objective of the workshop is to document and discuss the recent rise of new research questions on the general problem of learning across domains and tasks. This includes the main topics of transfer learning [1,2,3] and multi-task learning [4], together with related variants such as domain adaptation [5,6] and dataset bias [7]. In recent years there has been a surge of activity in these areas, much of it driven by practical applications such as object categorization. Different solutions have been studied for these topics, mostly separately and without a joint theoretical framework. On the other hand, most existing theoretical formulations model regimes that are rarely used in practice (e.g., adaptive methods that store all the source samples). The workshop will focus on closing this gap by providing an opportunity for theoreticians and practitioners to get together in one place, to share and debate current theories and empirical results. The goal is to promote a fruitful exchange of ideas and methods between the different communities, leading to a global advancement of the field.

Transfer Learning - Transfer Learning (TL) refers to the problem of retaining and applying the knowledge available from one or more source tasks in order to efficiently develop a hypothesis for a new target task. Each task may share the same label set (domain adaptation) or have a different one (across-category transfer). Most of the effort has been devoted to binary classification, while the most interesting practical transfer problems are intrinsically multi-class, and the number of classes can often grow over time. Hence, it is natural to ask:
- How can knowledge transfer across multi-class tasks be formalized, and what theoretical guarantees can be provided in this setting?
- Can inter-class transfer and incremental class learning be properly integrated?
- Can learning guarantees be provided when the adaptation relies only on pre-trained source hypotheses, without explicit access to the source samples, as is often the case in real-world scenarios? (See the illustrative sketch after the references below.)

Multi-task Learning - Learning over multiple related tasks can outperform learning each task in isolation. This is the principal assertion of multi-task learning (MTL) and implies that the learning process may benefit from common information shared across the tasks. In the simplest case, the transfer process is symmetric and all tasks are considered equally related and appropriate for joint training.
- What happens when this condition does not hold, e.g., how can negative transfer be avoided?
- Can RKHS embeddings be adequately integrated into the learning process to estimate and compare the distributions underlying the multiple tasks?
- How may embedding probability distributions help learning from data clouds?
- Can recent methods, such as deep learning or multiple kernel learning, bring us a step closer to the complete automation of multi-task learning?
- How can notions from reinforcement learning, such as source task selection, be connected to notions from convex multi-task learning, such as the task similarity matrix?

References
[1] I. Kuzborskij, F. Orabona. Stability and Hypothesis Transfer Learning. ICML 2013.
[2] T. Tommasi, F. Orabona, B. Caputo. Safety in Numbers: Learning Categories from Few Examples with Multi Model Knowledge Transfer. CVPR 2010.
[3] U. Rückert, M. Kloft. Transfer Learning with Adaptive Regularizers. ECML 2011.
[4] A. Maurer, M. Pontil, B. Romera-Paredes. Sparse Coding for Multitask and Transfer Learning. ICML 2013.
[5] S. Ben-David, J. Blitzer, K. Crammer, A. Kulesza, F. Pereira, J. Wortman Vaughan. A Theory of Learning from Different Domains. Machine Learning 2010.
[6] K. Saenko, B. Kulis, M. Fritz, T. Darrell. Adapting Visual Category Models to New Domains. ECCV 2010.
[7] A. Torralba, A. Efros. Unbiased Look at Dataset Bias. CVPR 2011.
https://sites.google.com/site/learningacross/
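
As an illustration of the hypothesis transfer setting raised above (adapting to a target task using only a pre-trained source hypothesis, without access to source samples, in the spirit of [1,2,3]), here is a minimal sketch of a ridge regressor whose regularizer is biased toward a source weight vector. The function and variable names (fit_biased_ridge, w_src, lam) are illustrative assumptions, not part of any workshop material or specific paper's method.

```python
import numpy as np

def fit_biased_ridge(X, y, w_src, lam=1.0):
    """Solve min_w ||X w - y||^2 + lam * ||w - w_src||^2 in closed form.

    The penalty pulls the target model toward the pre-trained source
    hypothesis w_src instead of toward zero, so no source data is needed.
    """
    d = X.shape[1]
    A = X.T @ X + lam * np.eye(d)
    b = X.T @ y + lam * w_src
    return np.linalg.solve(A, b)

# Toy example: a slightly-off source hypothesis guides the target model
# when only a handful of target examples are available.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0, 0.5])
w_src = w_true + 0.1 * rng.normal(size=3)        # pre-trained, related task
X_tgt = rng.normal(size=(10, 3))                 # few target samples
y_tgt = X_tgt @ w_true + 0.1 * rng.normal(size=10)

w_transfer = fit_biased_ridge(X_tgt, y_tgt, w_src, lam=5.0)
w_scratch = fit_biased_ridge(X_tgt, y_tgt, np.zeros(3), lam=5.0)  # plain ridge
print("error with transfer:", np.linalg.norm(w_transfer - w_true))
print("error from scratch :", np.linalg.norm(w_scratch - w_true))
```

With few target samples and a reasonably related source hypothesis, the biased estimator typically recovers the target weights more accurately than training from scratch; when the source is unrelated, the same bias can hurt, which is exactly the negative-transfer issue the workshop questions point to.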


Tuesday December 10, 2013 7:30am - 6:30pm PST
Harrah's Fallen+Marla
  Workshops
  • Program Schedule
The workshop will be held on December 10, 2013 in conjunction with the NIPS conference at Lake Tahoe, Nevada, United States. Location: Harrah's Fallen+Marla

Morning Session
7:30 Overview and Workshop Goals
7:40 Sinno Jialin Pan: An Overview of Transfer Learning
8:15 Massimiliano Pontil: Sparse Coding for Multi-task and Transfer Learning
8:50 Coffee break + Poster session
9:50 Shai Ben-David: Understanding Domain Adaptation Learning - the good and the not so good

Break (10:30-15:30)

Afternoon Session
15:30 Arthur Gretton: Nonparametric Bayesian inference using kernel distribution embeddings
16:05 Poster spotlight and Oral presentations
17:10 Coffee break + Poster session
17:30 Fei Sha: Learning kernels for visual domain adaptation
18:05 Panel discussion and Concluding remarks

End of Workshop (18:30)
