Adapting Visual Category Models to New Domains

2010 | Kate Saenko, Brian Kulis, Mario Fritz, and Trevor Darrell
Domain adaptation is an important emerging topic in computer vision. This paper presents one of the first studies of domain shift in the context of object recognition. The authors introduce a method that adapts object models acquired in a particular visual domain to new imaging conditions by learning a transformation that minimizes the effect of domain-induced changes in the feature distribution. The transformation is learned in a supervised manner and can be applied to categories for which there are no labeled examples in the new domain. While the paper focuses on object recognition tasks, the transform-based adaptation technique is general and could be applied to non-image data. Another contribution is a new multi-domain object database, freely available for download. The authors experimentally demonstrate that their method improves recognition on categories with few or no target-domain labels under moderate to large changes in imaging conditions.

Supervised classification methods, such as kernel-based and nearest-neighbor classifiers, have been shown to perform very well on standard object recognition tasks. However, many such methods expect the test images to come from the same distribution as the training images and often fail when presented with a novel visual domain. The problem of domain adaptation has received significant recent attention in the natural language processing community but has been largely overlooked in object recognition. This paper explores the issue of domain shift in that context and presents a novel method that adapts existing classifiers to new domains where labeled data is scarce.

The proposed domain adaptation technique is based on cross-domain transformations. The key idea is to learn a regularized non-linear transformation that maps points in the source domain closer to those in the target domain, using supervised data from both domains.
The input consists of labeled pairs of inter-domain examples that are known to be either similar or dissimilar. The output is the learned transformation, which can be applied to previously unseen test data points. A key advantage of the transform-based approach is that it can be applied to novel test samples from categories seen at training time, and it can also generalize to new categories that were not present at training time. The authors develop a general framework for learning regularized cross-domain transformations and present an algorithm, based on a specific regularizer, that results in a symmetric transform.
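The paper's actual optimization is a metric-learning formulation; as a minimal illustrative sketch of the same idea, the code below fits a linear map with plain gradient descent on a contrastive loss over similar/dissimilar cross-domain pairs. The shift matrix `A`, the pair lists, and all function names here are hypothetical stand-ins, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

def learn_transform(Xs, Xt, sim_pairs, dis_pairs,
                    lr=0.05, margin=1.0, lam=0.1, epochs=200):
    """Fit a linear map W taking source features toward target features.

    Similar cross-domain pairs are pulled together (squared distance),
    dissimilar pairs are pushed beyond `margin` (hinge term), and `lam`
    regularizes W toward the identity.
    """
    d = Xs.shape[1]
    W = np.eye(d)
    n = len(sim_pairs) + len(dis_pairs)
    for _ in range(epochs):
        grad = lam * (W - np.eye(d))
        for i, j in sim_pairs:
            diff = Xs[i] @ W.T - Xt[j]        # pull similar pairs together
            grad += 2 * np.outer(diff, Xs[i])
        for i, j in dis_pairs:
            diff = Xs[i] @ W.T - Xt[j]
            if diff @ diff < margin:          # push dissimilar pairs apart
                grad -= 2 * np.outer(diff, Xs[i])
        W -= lr * grad / n
    return W

# Toy data: the target domain applies a hypothetical linear distortion A.
A = np.array([[2.0, 0.5], [-0.5, 1.8]])
Xs = rng.normal(size=(20, 2))
Xt = Xs @ A.T + 0.05 * rng.normal(size=(20, 2))
sim_pairs = [(i, i) for i in range(20)]       # same object in both domains
dis_pairs = [(i, (i + 7) % 20) for i in range(20)]

W = learn_transform(Xs, Xt, sim_pairs, dis_pairs)
before = np.mean([np.sum((Xs[i] - Xt[j]) ** 2) for i, j in sim_pairs])
after = np.mean([np.sum((Xs[i] @ W.T - Xt[j]) ** 2) for i, j in sim_pairs])
print(f"mean similar-pair distance: {before:.3f} -> {after:.3f}")
```

Because `W` is learned from pair constraints rather than per-category classifiers, it can be applied to test points from categories that contributed no pairs at training time, mirroring the paper's claim that the transform generalizes to unseen categories.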