A feedforward architecture accounts for rapid categorization

April 10, 2007 | Thomas Serre*, Aude Oliva*, and Tomaso Poggio*†‡
The authors present a computational model that explains the rapid categorization of objects in primates. The model is a feedforward architecture that extends Hubel and Wiesel's simple-to-complex cell hierarchy while respecting anatomical and physiological constraints. An unsupervised learning stage builds a generic dictionary of shape-tuned units, which task-specific circuits in prefrontal cortex then read out for categorization. The model's performance is compared with that of human observers on an animal vs. non-animal categorization task, showing high accuracy and similar response patterns. Its robustness to image rotation and to masking conditions further supports its validity. The findings suggest that a feedforward architecture with a task-independent, unsupervised learning stage can explain rapid and accurate object recognition in primates.
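To make the simple-to-complex hierarchy concrete, here is a minimal sketch of an HMAX-style feedforward pipeline of the kind the summary describes: Gabor-filter "simple cell" (S1) units, local max pooling for "complex cell" (C1) tolerance, an unsupervised dictionary of shape-tuned prototypes (S2), global-max (C2) features, and a task-specific readout. Filter sizes, prototype counts, and function names are illustrative assumptions, not the authors' published configuration.

```python
# Sketch of a simple-to-complex feedforward hierarchy (HMAX-style).
# Parameters and the patch-sampling dictionary are illustrative assumptions.
import numpy as np
from scipy.signal import convolve2d
from scipy.ndimage import maximum_filter

def gabor(size, theta, lam=5.0, sigma=3.0, gamma=0.5):
    """Gabor filter modeling an S1 simple cell at orientation theta."""
    xs = np.arange(size) - size // 2
    x, y = np.meshgrid(xs, xs)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)
    return g - g.mean()                       # zero-mean filter

def s1_c1(image, thetas, pool=8):
    """S1: Gabor filtering; C1: local max pooling (complex-cell position tolerance)."""
    maps = []
    for th in thetas:
        s1 = np.abs(convolve2d(image, gabor(11, th), mode='same'))
        c1 = maximum_filter(s1, size=pool)[::pool, ::pool]
        maps.append(c1)
    return np.stack(maps)                     # shape: (orientations, H', W')

def learn_dictionary(c1_maps, n_prototypes=50, patch=4, rng=None):
    """Unsupervised stage: sample C1 patches as shape-tuned S2 prototypes.
    In practice patches would come from a set of unlabeled training images."""
    if rng is None:
        rng = np.random.default_rng(0)
    o, h, w = c1_maps.shape
    protos = []
    for _ in range(n_prototypes):
        i, j = rng.integers(0, h - patch), rng.integers(0, w - patch)
        protos.append(c1_maps[:, i:i+patch, j:j+patch].copy())
    return protos

def c2_features(c1_maps, prototypes, patch=4):
    """S2: Gaussian tuning to each prototype; C2: global max over positions."""
    o, h, w = c1_maps.shape
    feats = []
    for p in prototypes:
        best = -np.inf
        for i in range(h - patch):
            for j in range(w - patch):
                d = np.sum((c1_maps[:, i:i+patch, j:j+patch] - p) ** 2)
                best = max(best, np.exp(-d / (2 * patch**2)))
        feats.append(best)
    return np.array(feats)                    # position-tolerant feature vector

# Usage: C2 vectors from labeled images would feed a simple task-specific
# classifier (e.g. a linear model), standing in for the prefrontal readout.
img = np.random.rand(64, 64)                  # placeholder image
c1 = s1_c1(img, thetas=np.linspace(0, np.pi, 4, endpoint=False))
dictionary = learn_dictionary(c1)             # built without category labels
features = c2_features(c1, dictionary)
print(features.shape)                         # (50,) feature vector
```

Only the final readout sees category labels; the dictionary itself is learned without supervision, mirroring the task-independent feature stage the summary attributes to the model.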