DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition

6 Oct 2013 | Jeff Donahue*, Yangqing Jia*, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, Trevor Darrell
The paper "DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition" evaluates whether features extracted from a deep convolutional network trained on a large-scale object recognition task can be repurposed for novel generic tasks. The authors investigate the semantic clustering of deep convolutional features across a range of tasks, including scene recognition, domain adaptation, and fine-grained recognition. They compare the efficacy of using activations from different network layers as a fixed feature and report significant improvements over state-of-the-art methods on several vision challenges.

The paper introduces DeCAF, an open-source implementation of these deep convolutional activation features, along with all associated network parameters, so that researchers can experiment with deep representations across a range of visual concept learning paradigms. The results demonstrate that DeCAF outperforms conventional visual representations on standard benchmark object recognition tasks and performs well in domain adaptation and fine-grained recognition. The authors also analyze the semantic salience of deep convolutional representations and find that convolutional features cluster semantic topics more effectively than conventional features.
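The core recipe the paper describes — treating activations from a fixed, pretrained network layer as an off-the-shelf feature vector for downstream tasks — can be sketched in a few lines. The snippet below is an illustrative stand-in, not the paper's implementation: it uses fixed random filters in place of ImageNet-trained weights, and the names `conv_relu` and `decaf_like_feature` are hypothetical. It only shows the pattern of freezing a convolutional stage and pooling its activations into a fixed-length feature.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained convolutional layer: the filters are fixed random
# weights here (hypothetical); in DeCAF they come from a network trained on
# ImageNet and are never updated for the target task.
filters = rng.standard_normal((8, 3, 3, 3))  # 8 filters over 3 channels, 3x3

def conv_relu(img, filters):
    """Valid 2-D convolution of a CHW image with a filter bank, then ReLU."""
    _, H, W = img.shape
    F, _, kh, kw = filters.shape
    out = np.zeros((F, H - kh + 1, W - kw + 1))
    for f in range(F):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[f, i, j] = np.sum(img[:, i:i + kh, j:j + kw] * filters[f])
    return np.maximum(out, 0.0)  # ReLU nonlinearity

def decaf_like_feature(img):
    """Pool the frozen layer's activations into a fixed-length feature vector."""
    act = conv_relu(img, filters)
    return act.mean(axis=(1, 2))  # global average pool: one value per filter

img = rng.standard_normal((3, 16, 16))  # toy 16x16 RGB-like input
feat = decaf_like_feature(img)
print(feat.shape)  # (8,)
```

The resulting fixed-length vector would then be fed to a simple classifier (the paper uses linear SVMs and logistic regression) trained on the target task, with the convolutional weights left untouched.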